MCP Server: Limits, Usage Tracking, Tool Controls
Team Usage Tracking
You can now see exactly how many MCP servers your team has installed and how close you are to your limits. We added a new "Usage" tab to the team management page that breaks down your total servers, stdio servers, and HTTP servers with progress bars that turn red when you hit a limit.
We also added a compact usage indicator to the Dashboard and MCP Servers pages. It shows something like "Total MCP Servers 3/5 | stdio MCP Servers 1/1" so you can check your usage at a glance without navigating to team settings.
Personal Configuration During Installation
Team admins can now configure their personal MCP server settings right in the installation wizard. Before, you had to complete the installation, then go to a separate page to set up your personal variables. Now it's one step.
Skip it if you want. Configure later. But for most admins, this cuts out several clicks and makes setup feel smoother.

Total MCP Server Limit
Teams now have a total MCP server limit that applies to all transport types. Previously, only stdio servers had a limit. HTTP/SSE servers? Unlimited. That's fixed now - there's a single cap (default is 5) that covers everything.
Administrators can configure both limits when managing teams. Hit your limit? Users see a clear error message telling them to contact their admin.
Featured Servers & Full Catalog
We added two new pages to help you find MCP servers faster. The Featured page (/mcp-server/featured) shows our hand-picked servers organized by category with a sidebar for quick navigation. The Catalog pages (/mcp-server/catalog/:categoryId) let you browse every server in our directory by category.
The install wizard now has "Browse Featured" and "View All Servers" buttons below the search bar. Every catalog category has its own URL, so you can share links to specific server collections with your team.
Disable Individual Tools
Team admins can now disable specific tools from any MCP server installation. If a server comes with a tool you'd rather your team not use - like delete_repository - you can turn it off without removing the entire server.
Disabled tools won't appear in tool discovery. If someone tries to use one anyway, they get a message explaining that it's disabled and suggesting they look for alternatives. Changes sync to satellites within 2 seconds.
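For the curious, the enforcement is conceptually simple. A rough sketch of what this can look like inside a satellite (the names and types here are illustrative, not DeployStack's actual internals):

```typescript
// Hypothetical sketch: drop disabled tools from discovery and guard execution.
interface McpTool {
  name: string;
  description: string;
}

function filterDisabledTools(tools: McpTool[], disabled: Set<string>): McpTool[] {
  // Disabled tools never show up in discovery results.
  return tools.filter((tool) => !disabled.has(tool.name));
}

function guardExecution(toolName: string, disabled: Set<string>): void {
  // Direct execution attempts get a helpful rejection instead of a silent failure.
  if (disabled.has(toolName)) {
    throw new Error(
      `Tool "${toolName}" has been disabled by a team admin. ` +
        `Try discovering an alternative tool instead.`
    );
  }
}
```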
PostgreSQL Migration: Technical Changelog
The Problem: Architectural Mismatch
Satellite Infrastructure Write Patterns
DeployStack's satellite architecture creates fundamentally different database load patterns than typical web applications:
Continuous Write Operations:
- Satellite Heartbeats: Every active satellite writes heartbeat data every 30 seconds
- Activity Tracking: Real-time logging of MCP tool executions across all satellites
- Process Lifecycle Events: Start/stop/crash events for stdio MCP servers
- Team Metrics: Per-team, per-satellite usage tracking
- Background Jobs: Job queue state changes, results storage, cleanup operations
- Audit Logs: Security and compliance event logging
Scaling Characteristics:
100 satellites × 2 writes/min (heartbeat) = 200 writes/min baseline
+ MCP tool execution logging: 10 tools/min × 100 satellites = 1,000 writes/min
+ Process lifecycle events, metrics, job queue updates
= 50-200 writes/second minimum at moderate scale
This is not a read-heavy web application. Both reads and writes are high-frequency operations.
SQLite/Turso Architectural Constraint
Single-Writer Serialization:
- SQLite's architecture serializes all write operations
- One write blocks all others, regardless of threading
- SQLITE_BUSY errors occur under concurrent load
- 5-second transaction timeout to prevent permanent blocking
- ~150k rows/second maximum throughput with zero improvement from additional threads
Turso's MVCC Implementation (Concurrent Writes), as of 2025-11-28:
- Technology preview status
- Does not support CREATE INDEX operations
- Substantial memory overhead (stores complete row copies, not deltas)
- No asynchronous I/O (limits scalability)
- No production timeline announced
The Decision: PostgreSQL-Only Architecture
Technical Rationale
1. Production-Ready Concurrent Writes
PostgreSQL's MVCC implementation has been battle-tested in production for decades:
- True multi-writer parallelism without serialization
- Multiple transactions write simultaneously without blocking
- No SQLITE_BUSY errors or artificial timeouts
- Proven performance with thousands of writes per second
2. Architectural Fit for Distributed Systems
The satellite infrastructure maps well to PostgreSQL's design:
- Connection pooling works efficiently with distributed satellites (PgBouncer, built-in pooling)
- Proven performance in microservices architectures
- Better handling of high-concurrency scenarios than single-writer databases
- Mature operational patterns for distributed deployments
3. Operational Maturity
PostgreSQL provides complete production tooling:
- Reliable monitoring (pgAdmin, DataGrip, pganalyze, pg_stat_statements)
- Point-in-time recovery capabilities
- Streaming replication for high availability
- Mature backup and recovery tools (`pg_dump`, WAL-E, `pgBackRest`)
- Extensive production experience across industry
Migration Architecture
Schema Migration Strategy
The migration leveraged Drizzle ORM's database abstraction to minimize changes:
Before (Multi-Database):
// Conditional driver selection (note: `text` had to be aliased to avoid a
// duplicate import across the two dialects)
import { sqliteTable, text as sqliteText, integer } from 'drizzle-orm/sqlite-core';
import { pgTable, text, timestamp } from 'drizzle-orm/pg-core';

// Runtime driver switching based on configuration
const db = selectedType === 'sqlite'
  ? drizzle(sqliteClient, { schema })
  : drizzle(postgresClient, { schema });
After (PostgreSQL-Only):
// Single driver implementation
import { drizzle } from 'drizzle-orm/node-postgres';
import { Pool } from 'pg';
const pool = new Pool({
  host: config.host,
  port: config.port,
  database: config.database,
  user: config.user,
  password: config.password,
  ssl: config.ssl ? { rejectUnauthorized: false } : false
});
const db = drizzle(pool, { schema });
Type System Changes
SQLite/Turso Compromises:
- Timestamps stored as integers (Unix epoch)
- Booleans stored as integers (0/1)
- No native JSONB support
- Limited type safety
PostgreSQL Native Types:
- timestamp with timezone for proper datetime handling
- Native boolean type
- jsonb for efficient JSON storage with indexing
- Arrays and custom types
- Full-text search capabilities
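For illustration, a Drizzle table definition using these native types might look like this (the table itself is hypothetical, not part of DeployStack's schema):

```typescript
import { pgTable, text, boolean, timestamp, jsonb } from 'drizzle-orm/pg-core';

// Illustrative table using PostgreSQL-native types instead of SQLite workarounds.
export const satelliteEvents = pgTable('satellite_events', {
  id: text('id').primaryKey(),
  isActive: boolean('is_active').notNull().default(true),      // native boolean, not 0/1
  occurredAt: timestamp('occurred_at', { withTimezone: true }) // timestamptz, not epoch integer
    .notNull()
    .defaultNow(),
  payload: jsonb('payload'),                                   // indexable JSON storage
});
```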
Schema Files
Removed:
- src/db/schema.sqlite.ts - SQLite-specific schema
- drizzle/migrations_sqlite/ - SQLite migration history
- Multi-database conditional logic throughout codebase
Retained:
- src/db/schema.ts - PostgreSQL-only schema (renamed from schema.postgres.ts)
- drizzle/migrations/ - PostgreSQL migration files
- src/db/schema-tables/ - Modular table definitions
Migration Directory Structure
services/backend/
├── drizzle/
│   └── migrations/            # PostgreSQL migrations only
│       ├── 0000_perfect_rogue.sql
│       ├── 0001_wild_selene.sql
│       └── meta/
└── src/db/
    ├── schema.ts              # PostgreSQL schema (single source of truth)
    ├── schema-tables/         # Modular table definitions
    │   ├── auth.ts
    │   ├── teams.ts
    │   ├── mcp-catalog.ts
    │   └── ...
    ├── config.ts              # PostgreSQL configuration only
    └── index.ts               # Database initialization
Technical Implementation Changes
1. Database Configuration
Removed Multi-Database Selection:
// Before: Complex type switching
type DatabaseType = 'sqlite' | 'turso' | 'postgresql';

interface DatabaseConfig {
  type: DatabaseType;
  // ... conditional fields based on type
}
Simplified PostgreSQL Configuration:
// After: PostgreSQL-only configuration
interface DatabaseConfig {
  type: 'postgresql';
  host: string;
  port: number;
  database: string;
  user: string;
  password: string;
  ssl: boolean;
}
2. Query Result Handling
Removed Conditional Logic:
// Before: Handle different result formats
const deleted = (result.changes || result.rowsAffected || 0) > 0;
PostgreSQL-Specific Pattern:
// After: Use PostgreSQL's rowCount
const deleted = (result.rowCount || 0) > 0;
3. Session Table Schema Fix
Fixed a critical bug introduced during multi-database support where Drizzle ORM property names didn't match usage:
Schema Definition (Correct):
import { pgTable, text, bigint } from 'drizzle-orm/pg-core';

export const authSession = pgTable('authSession', {
  id: text('id').primaryKey(),
  userId: text('user_id').notNull(),                            // TypeScript property: userId
  expiresAt: bigint('expires_at', { mode: 'number' }).notNull() // TypeScript property: expiresAt
});
Code Usage (Was Incorrect):
// Before (WRONG - used snake_case):
await db.insert(authSession).values({
  id: sessionId,
  user_id: userId,      // ❌ Wrong! Should be userId
  expires_at: expiresAt // ❌ Wrong! Should be expiresAt
});

// After (CORRECT - use camelCase):
await db.insert(authSession).values({
  id: sessionId,
  userId: userId,       // ✅ Matches TypeScript property
  expiresAt: expiresAt  // ✅ Matches TypeScript property
});
This affected:
- `registerEmail.ts` - User registration
- `loginEmail.ts` - Email login
- `github.ts` - GitHub OAuth (2 locations)
4. Plugin System Updates
Plugin Table Access:
Plugin tables are created dynamically in the database but not exported from the schema. Updated plugins to use raw SQL:
// Before: Tried to access from schema (failed)
const schema = getSchema();
const table = schema[`${pluginId}_${tableName}`]; // ❌ TypeScript error 7053

// After: Use raw SQL queries
import { sql } from 'drizzle-orm';

const tableName = `${pluginId}_example_entities`;
const result = await db.execute(
  sql.raw(`SELECT COUNT(*) as count FROM "${tableName}"`)
);
Plugin Schema Type Inference:
The mock column builder auto-detects types based on column names:
- Column name contains "id" → INTEGER type
- Column name contains "at" or "date" → TIMESTAMP type
- Otherwise → TEXT type
Fixed plugin seed data to match inferred types:
-- Before (WRONG - text ID for INTEGER column):
VALUES ('example1', 'Example Entity', ...)

-- After (CORRECT - integer ID):
VALUES (1, 'Example Entity', ...)
5. API Spec Generation
Added flags to skip database/plugin initialization during OpenAPI spec generation:
// Skip database initialization for API spec generation
process.env.SKIP_DATABASE_INIT = 'true';
process.env.SKIP_PLUGIN_INIT = 'true';
This allows spec generation without database connectivity and excludes plugin routes.
Performance Characteristics
Write Performance Comparison
SQLite/Turso (Before):
- All writes serialized through a single writer
- SQLITE_BUSY errors under concurrent load
- Hard 5-second transaction timeout
- No gain from additional CPU cores or threads

PostgreSQL (After):
- Thousands of writes/second baseline
- Scales with CPU cores (parallel writes)
- No artificial timeout limits
- No blocking errors under normal load
Satellite Workload Handling
Scenario: 100 satellites at moderate activity
- Estimated load: 50-200 writes/second
- SQLite/Turso: At the edge of recommended limits
- PostgreSQL: Well within comfortable operating range
Complexity Reduction
Removed Code
Multi-Database Abstraction Layer:
- Conditional driver selection logic
- Result format normalization
- Type system compatibility layers
- SQLite-specific workarounds
SQLite Migration Files:
- 18 migration files removed from drizzle/migrations_sqlite/
- Migration metadata and journal files removed
- SQLite schema definition removed
Configuration Complexity:
- Removed database type selection logic
- Removed conditional environment variable handling
- Simplified database setup flow
Simplified Maintenance
Single Migration Path:
- One migration directory (drizzle/migrations/)
- One schema file (src/db/schema.ts)
- One set of type definitions
- One testing strategy
Reduced Testing Surface:
- No multi-database test matrix
- No driver compatibility testing
- No type conversion testing
- Focus on PostgreSQL-specific optimizations
Operational Impact
Development Workflow
Before:
- Choose database type (SQLite/Turso/PostgreSQL)
- Configure appropriate environment variables
- Handle type differences in code
- Test across multiple database backends
After:
- Configure PostgreSQL environment variables
- Run migrations
- Use consistent PostgreSQL patterns
- Test single database backend
Deployment Changes
Environment Variables:
# Before: Type selection required
DATABASE_TYPE=turso
TURSO_DATABASE_URL=libsql://...
TURSO_AUTH_TOKEN=...
# After: PostgreSQL only
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DATABASE=deploystack
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
POSTGRES_SSL=false
Docker Compose:
# Before: Optional SQLite, required for Turso/PostgreSQL
services:
  backend:
    environment:
      DATABASE_TYPE: turso
      TURSO_DATABASE_URL: ${TURSO_DATABASE_URL}

# After: PostgreSQL service always required
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: deploystack
      POSTGRES_USER: deploystack
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
  backend:
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      POSTGRES_DATABASE: deploystack
Monitoring and Observability
PostgreSQL Advantages:
- pg_stat_statements for query performance analysis
- Native metrics export to Prometheus/Grafana
- Complete tooling (pgAdmin, DataGrip, pganalyze)
- Better visibility into concurrent operations
Removed:
- SQLite-specific monitoring workarounds
- Multi-database metric aggregation
- Database type conditional monitoring
Documentation Updates
Updated Files
Backend Documentation:
- services/backend/README.md - Updated database section
- services/backend/.env.example - PostgreSQL-only variables
- services/backend/drizzle.config.ts - PostgreSQL dialect only
Project Documentation:
- README.md - Marked PostgreSQL migration as complete in Phase 1
- Removed SQLite references from roadmap
- Updated background job queue description (PostgreSQL-based)
Technical Guides:
- /documentation/development/backend/database/index.mdx - PostgreSQL overview
- /documentation/development/backend/database/postgresql.mdx - Detailed PostgreSQL technical guide
Lessons Learned
Architectural Decisions
Database Selection Must Match Workload:
- "Write-heavy for web standards" ` write-heavy for distributed infrastructure
- Satellite architecture creates fundamentally different load patterns
- Database choice should be validated against actual workload
Conclusion
The migration to PostgreSQL-only architecture addresses a fundamental architectural mismatch between SQLite's single-writer design and DeployStack's distributed, write-heavy satellite infrastructure.
Key Outcomes:
- Eliminated architectural bottleneck: No more write serialization
- Reduced complexity: Single database backend, simpler codebase
- Better operational tooling: Mature PostgreSQL ecosystem
- Fixed critical bugs: Resolved authSession property name mismatches
OAuth Support for External MCP Servers
DeployStack now supports MCP servers that require OAuth authentication. This means you can connect to services like Box, Linear, and GitHub Copilot directly through our platform. Before this update, these OAuth-protected servers were simply unavailable in DeployStack - now they work out of the box.
When you install an OAuth-requiring MCP server, DeployStack handles the entire authentication flow for you. Click install, authorize in the popup window, and you're done. Your tokens are encrypted with AES-256-GCM and stored securely. The platform automatically refreshes expired tokens in the background, so you never have to re-authenticate unless you revoke access. For teams, each member maintains their own OAuth connection - your Box account stays yours.
On the technical side, we implemented the full OAuth 2.1 specification with PKCE (S256 method), resource indicators per RFC 8707, and on-the-fly endpoint discovery using RFC 9728 and RFC 8414. When an admin adds a new MCP server to the catalog, DeployStack automatically detects whether it requires OAuth by checking for 401 responses with WWW-Authenticate headers. No manual configuration needed - just paste the URL and we figure out the rest.
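A simplified sketch of that detection step, assuming a plain HTTP probe (the production implementation handles more edge cases):

```typescript
// Sketch: probe an MCP endpoint to see whether it is OAuth-protected.
// Assumption: a 401 response carrying a WWW-Authenticate header signals OAuth.
async function requiresOAuth(serverUrl: string): Promise<boolean> {
  const response = await fetch(serverUrl, { method: 'POST' });
  return response.status === 401 && response.headers.has('www-authenticate');
}
```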
The Satellite infrastructure handles token injection transparently. For HTTP/SSE transports, tokens go in the Authorization header. For stdio-based MCP servers, tokens are injected as environment variables. This works identically whether you're using our global satellites or running a team satellite in your own infrastructure.
Stats Dashboard & AI Setup
Token Usage Statistics Dashboard
We added a statistics dashboard that shows exactly how much DeployStack saves you in token usage.
Visit the Statistics page to see your team's token savings compared to a traditional MCP setup where all tools are exposed directly. DeployStack's hierarchical routing uses just 2 meta-tools instead.
What you'll see:
Four summary cards showing:
- Your MCP installations count
- Total available tools
- Percentage saved (in green)
- Actual token usage with DeployStack
Visual comparison:
A side-by-side bar chart that makes the savings obvious:
- Left bar: Traditional setup - grows with every MCP server you add
- Right bar: DeployStack's constant usage - stays flat at 1,372 tokens
Detailed breakdown:
An expandable table showing each MCP server installation with:
- Tool count per installation
- Total tokens consumed
- Average tokens per tool
- Click any row to see individual tool token counts
Why this matters:
As you add more MCP servers, the traditional method scales linearly (more tools = more tokens). DeployStack stays constant because you're using 2 meta-tools that discover and execute anything on demand. The dashboard shows you this difference in real-time.
With 5 servers and 25 tools, you might see 16% savings. With 20 servers and 150+ tools, savings can reach 99.5%. The dashboard updates as your team adds more MCP servers, so you can watch the savings grow.
AI Instruction Files
We added ready-to-copy instruction files that help your AI coding assistant understand how to use DeployStack's MCP integration.
What's new:
A new Client Configuration page with two sections:
- Connection Setup: How to connect your IDE to DeployStack satellite
- AI Instructions: Project files for your AI coding assistant
Supported AI assistants:
- Claude Desktop - `CLAUDE.md` for project instructions
- VS Code - `copilot-instructions.md` for GitHub Copilot
- Claude Code - `CLAUDE.md` for CLI integration
- Cursor - `.cursorrules` configuration
How to use it:
- Visit the Client Configuration page
- Select your AI coding assistant from the sidebar
- Switch to the "AI Instructions" category
- Copy the instruction content
- Add it to your project root (`CLAUDE.md`) or IDE-specific location
Why this helps:
AI coding assistants work better when they understand your tools. These instruction files teach your AI:
- How to discover available MCP tools using short keywords
- How to execute MCP tools via DeployStack's hierarchical router
- When to check for MCP tools versus implementing functionality manually
- Best practices for the 2-tool pattern (instead of managing 150+ individual tools)
This means less time explaining how MCP works in every conversation, and more time building.
Satellite Version Management
We added automatic version management to the satellite service. The version now updates from package.json during releases instead of being hardcoded.
What changed:
The satellite now shows the correct version from package.json in:
- MCP client/server initialization
- Debug endpoint responses
- Server statistics
This makes satellite releases cleaner - no manual version updates needed in code. You'll see accurate version numbers in logs and debug information, which helps when reporting issues or checking which features are available.
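A minimal sketch of the pattern (paths are illustrative):

```typescript
import { readFileSync } from 'node:fs';
import { join } from 'node:path';

// Read the version once at startup so the MCP handshake, debug endpoint and
// server statistics all report the number the release pipeline stamped.
const pkg = JSON.parse(
  readFileSync(join(__dirname, '..', 'package.json'), 'utf-8')
);

export const SATELLITE_VERSION: string = pkg.version;
```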
MCP Tool Metadata Collection & Display
What Changed
For Users
Before this update:
- You had no way to see what tools were available in your installed MCP servers. This made it hard to know what you could actually do with each installation.
- Token consumption was invisible - you couldn't tell how much context window each MCP server was eating up.
- Understanding the value of DeployStack's hierarchical router required taking our word for it.
After this update:
- Each MCP installation now shows a complete list of available tools with descriptions, so you know exactly what you're working with.
- Token consumption gets calculated and displayed for every tool.
- You can see the total token savings from using DeployStack's hierarchical router with real numbers.
- We show you a side-by-side comparison: traditional MCP vs DeployStack's method.
New Capabilities
1. Automatic Tool Discovery
- When you install an MCP server, DeployStack automatically discovers all available tools. Tool metadata like names, descriptions, and input schemas get collected and stored, while token consumption is calculated for each tool.
2. Tool Visibility
- Browse complete tool lists for each MCP installation, with detailed descriptions and input schemas so you understand what each tool does before using it.
3. Token Usage Analytics
- See total tokens consumed by all your MCP installations. Compare the old way (all tools loaded) vs DeployStack's hierarchical router (just 2 meta-tools), and visualize token savings percentage across your team.
4. Team-Wide Insights
- Get an aggregated view across all team installations showing total tool count and token savings from using DeployStack.
Technical Implementation
Backend Changes
- New Database Table: `mcpToolMetadata` stores tool information per installation
- New API Endpoints:
- `GET /api/teams/:teamId/mcp/installations/:installationId/tools` - fetch tools for a specific installation
- `GET /api/teams/:teamId/mcp-tools/summary` - get aggregated token savings summary
- New Event Handler: `mcp-tools-discovered-handler.ts` processes tool metadata from satellites
- New Permission: `mcp.tools.view` controls access to tool metadata
Satellite Changes
- Event Emission: Satellites now send `mcp.tools.discovered` events to backend after tool discovery
- Token Calculation: Uses existing `token-counter.ts` utility to calculate tokens per tool
- Automatic Sync: Tool metadata syncs automatically when MCP servers start
- Discovery Managers Updated: Both stdio and remote tool discovery managers emit events
Frontend Changes
- New Service: `mcpToolsService.ts` handles API calls for tool metadata
- New Store: `mcpToolsStore.ts` manages tool metadata state with Pinia
- API Integration: Ready for UI components to display tool lists and token savings
Data Flow
1. User installs MCP server via frontend
2. Backend sends command to satellite
3. Satellite spawns/connects to MCP server
4. Satellite discovers tools and calculates token counts
5. Satellite emits `mcp.tools.discovered` event to backend
6. Backend stores tool metadata in database
7. Frontend fetches tool data via API
8. Users see tool lists and token savings in dashboard
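The event payload might look roughly like this (field names are an assumption based on the flow above, not the exact wire format):

```typescript
// Hypothetical shape of the event a satellite emits in step 5.
interface McpToolsDiscoveredEvent {
  type: 'mcp.tools.discovered';
  installationId: string;
  teamId: string;
  tools: Array<{
    name: string;
    description: string;
    inputSchema: Record<string, unknown>; // JSON Schema for the tool's arguments
    tokenCount: number;                   // context-window cost of this definition
  }>;
}
```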
Benefits
For Solo Developers
You can now see exactly what tools you have access to and understand token consumption at a glance. This helps you make informed decisions about which MCP servers to turn on.
For Teams
Get shared visibility into available tools and track token usage across all team installations. You can demonstrate ROI from using DeployStack's hierarchical router with real numbers.
For Enterprise
Audit what tools are available to users, monitor context window consumption, and improve MCP server selection based on actual usage data.
Example Use Case
Before:
A team has 5 MCP servers installed, but they have no idea how many tools are available or what token consumption looks like. It's hard to explain why DeployStack is better than traditional MCP without concrete numbers.
After:
The team sees: 5 installations, 87 total tools. The old way would consume 12,450 tokens. DeployStack's hierarchical router uses just 950 tokens. That's a 92.4% savings - clear proof that DeployStack prevents context window bloat.
What's Next
This system sets us up for:
- Token Analytics Dashboard (coming soon) - visual charts showing context window usage
- Smart Recommendations (planned) - suggest which servers to disable based on usage
- Usage Reports (planned) - track which tools your team uses most
Breaking Changes
None. This is a purely additive feature.
Migration Required
None. Tool discovery happens automatically for all existing and new installations.
Security Considerations
- Tool metadata requires `mcp.tools.view` permission
- Team isolation enforced at API level
- No sensitive data exposed (only tool schemas and descriptions)
Known Limitations
- Historical data not available for installations created before this update (will populate on next server restart)
- Token calculations use gpt-tokenizer (provider-agnostic approximation)
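As a rough illustration of how such a count can be computed with gpt-tokenizer (a sketch, not the satellite's actual `token-counter.ts`):

```typescript
import { encode } from 'gpt-tokenizer';

// Approximate the context-window cost of one tool definition by tokenizing
// the JSON an MCP client would receive for it.
function approximateToolTokens(tool: {
  name: string;
  description: string;
  inputSchema: object;
}): number {
  return encode(JSON.stringify(tool)).length;
}
```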
Team Limits & Admin Tools
Three platform improvements focused on resource management, security, and admin tools.
Team-Level Non-HTTP MCP Server Limits
We added team-level control for non-HTTP (stdio) MCP server installations. This gives platform administrators fine-grained control over resource consumption while keeping HTTP and SSE servers unlimited.
What Changed
New Global Setting:
- Setting: `global.default_non_http_mcp_limit`
- Default Value: 1
- Purpose: Sets the default limit for new teams when they're created
- Location: Backend global settings panel
Database Changes:
- Added `non_http_mcp_limit` column to the `teams` table
- Each team now stores its own limit value
- Default teams and manually created teams both get this limit applied automatically
Installation Enforcement:
- The platform now checks the limit before allowing stdio MCP server installations
- HTTP and SSE MCP servers remain unlimited (they don't consume server resources)
- Clear error messages when teams hit their limit
Why This Matters
Resource Management:
Non-HTTP (stdio) MCP servers run as actual processes on your infrastructure. They consume CPU, memory, and I/O resources:
- Each stdio server requires runtime installation (Node.js, Python, etc.)
- Without limits, teams could overwhelm your satellite infrastructure
HTTP and SSE servers are just URL proxies:
- No local processes needed
- No resource consumption
- Can scale infinitely
Cost Control:
For platform operators running DeployStack:
- Prevent unexpected infrastructure costs from runaway process spawning
- Allocate resources fairly across teams
- Plan capacity based on actual stdio server usage
Fair Usage:
- Ensures all teams get reliable performance
- Prevents one team from monopolizing server resources
How It Works
For New Teams:
When a team is created (either default team during registration or manually):
- System reads `global.default_non_http_mcp_limit` setting
- Stores this value in the team's `non_http_mcp_limit` field
- Team is now subject to this limit
During MCP Installation:
When a team member tries to install an MCP server:
1. System checks if it's a stdio server (`transport_type === 'stdio'`)
2. If yes:
- Fetches the team's `non_http_mcp_limit` value
- Counts existing stdio installations for that team
- Blocks installation if limit would be exceeded
3. If no (HTTP/SSE):
- Installation proceeds without limit check
Error Message:
When a team hits their limit:
Team has reached the maximum limit of 1 non-HTTP (stdio) MCP server.
HTTP and SSE servers are not affected by this limit.
Contact your administrator to increase the limit.
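A sketch of that check (the helper names are hypothetical):

```typescript
interface TeamLimits {
  nonHttpMcpLimit: number; // mirrors the teams.non_http_mcp_limit column
}

// Hypothetical helpers: however the backend loads the team row and counts
// existing stdio installations for it.
declare function getTeamLimits(teamId: string): Promise<TeamLimits>;
declare function countStdioInstallations(teamId: string): Promise<number>;

async function assertStdioLimit(teamId: string, transportType: string): Promise<void> {
  if (transportType !== 'stdio') return; // HTTP/SSE: no limit check

  const [limits, installed] = await Promise.all([
    getTeamLimits(teamId),
    countStdioInstallations(teamId),
  ]);

  if (installed >= limits.nonHttpMcpLimit) {
    throw new Error(
      `Team has reached the maximum limit of ${limits.nonHttpMcpLimit} ` +
        `non-HTTP (stdio) MCP servers.`
    );
  }
}
```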
Admin Controls
Set Platform Default:
Change the default for all new teams:
- Go to Backend Global Settings
- Find `global.default_non_http_mcp_limit`
- Set your desired default (e.g., 1 for free tier, 10 for paid)
Adjust Individual Teams:
Currently via database (admin UI coming soon):
UPDATE teams
SET non_http_mcp_limit = 5
WHERE id = 'team_id_here';
Use Cases:
- Freemium Model: Free teams = 1, Pro teams = 5, Enterprise = unlimited (999)
- Resource Tiers: Small teams = 3, Large teams = 10
- Trial Limits: New teams = 1, increase after verification
Technical Details
Transport Types:
- stdio (limited): Node.js, Python, Go MCP servers running as local processes
- http (unlimited): Remote HTTP endpoints like `https://mcp.brightdata.com/mcp`
- sse (unlimited): Server-Sent Events endpoints
Backward Compatibility:
Existing teams automatically get the default limit of 1 via the database migration. Teams with more than 1 stdio server already installed are grandfathered in - the limit only applies to new installations.
Satellite Log Secret Masking
We've added automatic secret masking to DeployStack satellites to protect your API keys, tokens, and credentials from exposure in log files and monitoring systems. When satellites connect to MCP servers using authentication credentials, those sensitive values are now automatically masked in all log output while keeping enough context for debugging.
How It Works
When connecting to an MCP server with a URL like `https://api.example.com?token=sk_abc123xyz789&region=us-east-1`, our satellites now log it as `https://api.example.com?token=sk_*****&region=us-east-1` - showing the first 3 characters of secret values followed by asterisks.
The same protection applies to:
- HTTP headers like Authorization tokens (e.g., `Authorization=Bea*****`)
- Environment variables marked as secrets
- Query parameters containing sensitive data
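A simplified version of the masking rule (first three characters, then asterisks) might look like this; the satellite's actual implementation is schema-driven and covers headers and environment variables too:

```typescript
// Keep the first 3 characters of a secret for debugging context.
function maskSecret(value: string): string {
  return value.length > 3 ? `${value.slice(0, 3)}*****` : '*****';
}

// Mask known-sensitive query parameters in a URL before it reaches the logs.
function maskUrl(rawUrl: string, secretParams: Set<string>): string {
  const url = new URL(rawUrl);
  for (const key of [...url.searchParams.keys()]) {
    if (secretParams.has(key)) {
      url.searchParams.set(key, maskSecret(url.searchParams.get(key)!));
    }
  }
  return url.toString();
}

// maskUrl('https://api.example.com?token=sk_abc123xyz789&region=us-east-1', new Set(['token']))
//   → 'https://api.example.com/?token=sk_*****&region=us-east-1'
```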
Why This Matters
You can now safely share satellite logs with your team or support staff for troubleshooting without worrying about credential leaks, while still being able to identify which credentials were used.
The system automatically detects which fields contain secrets based on your MCP server configuration schemas, so there's nothing you need to configure - it just works.
Teams Management for Global Admins
We added a Teams Management interface for global admins. If you're a platform administrator, you can now view and manage all teams directly from the admin dashboard.
What You Can Do
View All Teams:
Go to Admin Area → Teams to see a complete list of all teams on your DeployStack instance. The table shows:
- Team name
- Slug (unique identifier)
- Team type (Default or Custom)
- Creation date
- Quick actions
You can search across team names, slugs, and descriptions to quickly find what you're looking for.
View Team Details:
Click "View" on any team to see complete information:
- Basic team information (name, slug, description)
- Team type (default vs custom)
- Non-HTTP MCP server limit
- Team creation and update timestamps
- Complete member list with roles (Owner, Admin, Member)
Edit Team Settings:
Click "Edit Team" to update:
- Team Name - Change the display name
- Description - Add or update team description
- Non-HTTP MCP Server Limit - Control how many stdio MCP servers the team can install
Changes save instantly and you'll see a confirmation message when successful.
Why This Matters
Before this update, managing teams required direct database access. Now you can handle common team management tasks through the dashboard - no technical knowledge needed.
This makes it easier to:
- Audit team configurations across your organization
- Adjust MCP server limits as teams grow
- Update team information without developer intervention
- See team membership at a glance
You'll need global admin permissions to access this feature.
Hierarchical Router: 99.5% Context Window Reduction
We've implemented a hierarchical router pattern that solves the MCP context window consumption problem. Instead of exposing 100+ tools that consume up to 80% of your context window before any work begins, DeployStack Satellite now exposes just 2 meta-tools that provide access to all your MCP servers with 99.5% less token usage.
The Problem We Solved
Context Window Consumption Crisis
When MCP clients connect to multiple servers, each tool's definition (name, description, parameters, schemas) gets loaded into the context window. This creates a severe problem:
Real-world example:
- 15 MCP servers × 10 tools each = 150 total tools
- Each tool definition ≈ 500 tokens
- Total consumption: 75,000 tokens (37.5% of a 200k context window)
Before any actual work begins, more than a third of your available context is gone just describing what tools exist.
Industry impact:
- Claude Code: 82,000 tokens consumed by MCP tools (41% of context)
- Cursor: Hard limit of 40 tools maximum
- General consensus: Performance degrades significantly after 20-40 tools
- Critical failures reported at 80+ tools
Why This Matters
More MCP servers = Better AI capabilities, but also = Less room for actual work. Users had to choose between:
1. Limited tooling (stay under 40 tools, miss valuable integrations)
2. Degraded performance (add more tools, sacrifice context space)
This isn't sustainable as the MCP ecosystem grows.
How We Fixed It: 2-Tool Hierarchical Router
How It Works
Instead of exposing all tools directly to MCP clients, DeployStack Satellite now exposes only 2 meta-tools:
1. `discover_mcp_tools` - Search for available tools
// Find tools using natural language
discover_mcp_tools({
  query: "github create issue",
  limit: 10
})
// Returns: [{ tool_path: "github:create_issue", description: "..." }]
2. `execute_mcp_tool` - Execute a discovered tool
// Execute using the tool_path from discovery
execute_mcp_tool({
  tool_path: "github:create_issue",
  arguments: { repo: "deploystackio/deploystack", title: "Bug report" }
})
Behind the Scenes
While clients only see 2 tools, the satellite still:
- Manages 20+ actual MCP servers (HTTP and stdio)
- Caches 100+ real tools internally
- Routes execution requests to the correct server
- Handles both HTTP/SSE remote servers and stdio subprocess servers
The magic: Clients discover tools dynamically only when needed, not upfront.
Token Reduction Results
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Tools Exposed | 150 | 2 | 98.7% reduction |
| Tokens Consumed | 75,000 | 350 | **99.5% reduction** |
| Context Available | 62.5% | 99.8% | +37.3% more space |
Example: If you previously had 82,000 tokens consumed by MCP tools, you now have 81,650 tokens freed for actual work.
Enhanced Tool Discovery
Full-Text Search Powered by Fuse.js
The `discover_mcp_tools` meta-tool uses Fuse.js for intelligent fuzzy search across all your MCP servers:
Features:
- Natural language queries - Search with phrases like "scrape website markdown"
- Fuzzy matching - Handles typos and synonym variations (e.g., "website" matches "webpage")
- Fast performance - 2-5ms search time across 100+ tools
- Relevance scoring - Results ranked by match quality
- Weighted search - Prioritizes tool names (40%), descriptions (35%), server names (25%)
Example workflow:
// User asks: "Do you have tools for GitHub?"
discover_mcp_tools({ query: "github" })

// Returns:
// - github:create_issue
// - github:update_issue
// - github:list_repos
// - github:search_code

// Execute the one you need:
execute_mcp_tool({
  tool_path: "github:create_issue",
  arguments: {...}
})
Search Quality Improvements
We've tuned the search engine for optimal user experience:
Configuration:
- Threshold: 0.5 - Balanced fuzzy matching (allows natural synonym variations)
- Min match length: 2 - Filters noise while catching abbreviations
- Extended search: enabled - Supports advanced query operators if needed
Result: Users find tools on first try, even when phrasing queries differently than tool descriptions.
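In Fuse.js terms, that tuning corresponds roughly to the following options object (a sketch; the satellite's actual configuration may differ in detail):

```typescript
import Fuse from 'fuse.js';

interface IndexedTool {
  toolName: string;
  description: string;
  serverName: string;
}

// Weighted fuzzy index over the satellite's cached tool metadata.
const fuse = new Fuse<IndexedTool>([], {
  threshold: 0.5,          // balanced fuzzy matching
  minMatchCharLength: 2,   // filter noise, keep abbreviations
  useExtendedSearch: true, // advanced query operators when needed
  includeScore: true,      // expose relevance for ranking
  keys: [
    { name: 'toolName', weight: 0.4 },
    { name: 'description', weight: 0.35 },
    { name: 'serverName', weight: 0.25 },
  ],
});

// Results come back ranked by score; cap them like discover_mcp_tools does.
const topMatches = fuse.search('github create issue').slice(0, 10);
```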
What This Means For You
Unlimited MCP Server Growth
You can now add as many MCP servers as you need without worrying about context window consumption:
- ✅ 10 servers? No problem
- ✅ 50 servers? Still only 350 tokens
- ✅ 100 servers? Context usage unchanged
The hierarchical router scales infinitely because clients always see just 2 meta-tools.
Better AI Performance
With 99.5% of that token overhead eliminated:
- AI assistants can hold longer conversations
- More complex tasks fit in a single session
- Better code generation with full project context
- Reduced need to restart conversations due to context limits
No Breaking Changes
Everything still works:
- All existing MCP servers continue to work
- No configuration changes required
- Internal routing handles stdio and HTTP servers automatically
- Tool discovery happens transparently
From the user's perspective: Tools just work, but now there's room to actually use them.
Technical Implementation
Technical Design
┌──────────────────────────────────────────────────┐
│ MCP Client (Claude Desktop / VS Code)            │
│                                                  │
│ Sees: 2 meta-tools (350 tokens)                  │
│ - discover_mcp_tools                             │
│ - execute_mcp_tool                               │
└──────────────────────────────────────────────────┘
                         ↓
┌──────────────────────────────────────────────────┐
│ DeployStack Satellite (Hierarchical Router)      │
│                                                  │
│ Behind the scenes:                               │
│ - Manages 20+ actual MCP servers                 │
│ - Caches 100+ real tools                         │
│ - Full-text search with Fuse.js                  │
│ - Routes to stdio subprocesses or HTTP endpoints │
└──────────────────────────────────────────────────┘
Key Features
Single Source of Truth:
- `UnifiedToolDiscoveryManager` maintains the only tool cache
- Search service queries this directly (no duplication)
- Always fresh data - automatic server add/remove reflected immediately
Format Conversion:
- External format: `"serverName:toolName"` (user-facing, clean API)
- Internal format: `"serverName-toolName"` (routing, backward compatible)
- Automatic conversion in execution layer
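A sketch of that conversion (assuming, as the formats above imply, that the first separator delimits the server name):

```typescript
// External "serverName:toolName" → internal "serverName-toolName".
function toInternal(toolPath: string): string {
  const [server, ...rest] = toolPath.split(':');
  return `${server}-${rest.join(':')}`;
}

// Internal "serverName-toolName" → external "serverName:toolName".
// Assumes the server name itself contains no hyphen (first '-' is the separator).
function toExternal(internalName: string): string {
  const separator = internalName.indexOf('-');
  return `${internalName.slice(0, separator)}:${internalName.slice(separator + 1)}`;
}
```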
Transport Agnostic:
- HTTP/SSE servers: Routes via MCP SDK Client
- stdio servers: Routes via ProcessManager
- Same interface for both - client never knows the difference
Credits
This implementation is powered by Fuse.js, the excellent lightweight fuzzy-search library (itself dependency-free) that makes intelligent tool discovery possible.
The hierarchical router pattern is based on research and best practices from the MCP community, validated by multiple open-source implementations showing 95-99% token reduction across different architectures.
November 2025 Release: Source Tracking, Metrics & Memory Optimization
MCP Server Source Tracking
Every MCP server in your catalog now shows where it came from. A simple badge appears on server details: blue for "Official Registry" or gray for "Manual."
This transparency helps you make better security decisions. Official Registry servers are community-vetted and follow MCP standards, so you can deploy them confidently. Manual servers? Those are custom tools your team built or proprietary integrations specific to your organization. The difference matters when choosing servers.
The tagging happens automatically without any effort on your part. Admins sync servers from the official registry, and they get marked "Official Registry" with no additional configuration required. Create one manually? It's tagged "Manual." Once set, the source can't be changed - you always have an accurate record of where each server originated and who added it to your catalog.
Time-Series Activity Metrics
DeployStack now tracks your MCP client activity over time. No more just seeing total request counts - the system records activity in 15-minute snapshots, showing you exactly when and how your tools get used throughout the day.
Want to know how active your MCP clients were between 10am and 1pm? The system answers that instantly. Which tools did your team use most in the last 3 hours? Easy. The system collects this data automatically every time you use an MCP server, storing it efficiently so performance stays fast.
The metrics system cleans up after itself by removing data older than 3 days automatically. You get clear visibility into recent activity without system slowdowns or storage bloat. Future versions will add more tracking capabilities - server installations, tool execution patterns, and other insights - all built on this same foundation.
Intelligent Process Management
Satellites now save memory by watching for idle stdio-based MCP servers and terminating processes after 3 minutes of inactivity, which frees up about 15MB per server. This prevents your satellite from running out of memory when managing many servers. Need that tool again? It wakes up in 1-2 seconds - you won't even notice the pause.
This changes everything when you run dozens of servers. Previously, 100 MCP servers consumed about 1.5GB of memory continuously - even if you only actively used 10 of them. Now those same 100 servers might use just 150MB when most are dormant. That's a 90% reduction. Your frequently-used tools stay instantly available while rarely-used ones sleep until needed.
The system makes smart decisions about when to sleep processes based on actual usage patterns and workload characteristics. Newly spawned servers get time to initialize. Servers handling active requests never get terminated mid-operation. You can adjust the idle timeout through environment variables - go aggressive on memory savings or keep processes active longer depending on your specific needs and resource constraints. The default 3-minute timeout balances memory efficiency with instant tool availability for most teams.
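A sketch of the idle-reaping loop under those rules (the environment variable name here is hypothetical):

```typescript
// Default 3-minute idle timeout, overridable via environment (name assumed).
const IDLE_TIMEOUT_MS = Number(process.env.MCP_IDLE_TIMEOUT_MS ?? 3 * 60 * 1000);

interface ManagedProcess {
  spawnedAt: number;      // newly spawned servers get time to initialize
  lastActivityAt: number; // updated on every request routed to the process
  activeRequests: number; // never terminate a process mid-operation
  kill(): void;
}

function reapIdleProcesses(processes: Map<string, ManagedProcess>): void {
  const now = Date.now();
  for (const [name, proc] of processes) {
    const idleSince = Math.max(proc.lastActivityAt, proc.spawnedAt);
    if (now - idleSince > IDLE_TIMEOUT_MS && proc.activeRequests === 0) {
      proc.kill(); // respawns on the next request (1-2 second wake-up)
      processes.delete(name);
    }
  }
}
```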
Admin MCP Server Filtering: Find What You Need, Fast
Managing a large MCP server catalog used to mean scrolling through pages of servers to find what you needed. We've added a filter system that sits at the top of the catalog page. Type a search term to find servers by name or description, then refine results using dropdown filters for status (active, deprecated, maintenance), programming language, runtime environment, featured status, and auto-install settings. All filters work together—select "Python" and "active" to see only production-ready Python servers. The filter dropdowns populate automatically from your database, so when new languages or runtimes appear in your catalog, they show up in the filters without any configuration.

The changes apply only to Global Administrator accounts managing the global MCP server catalog. Team administrators and regular users see the catalog without these filters since they're browsing, not managing hundreds of entries. We built this after seeing administrators struggle to locate specific servers in catalogs with 50+ entries. The search uses a 300ms debounce to avoid hammering your database while you type, and pagination controls remain at the bottom so you can browse filtered results across multiple pages. Clear all filters with one click when you're done.
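The debounce itself is a few lines of TypeScript; a sketch (the endpoint is hypothetical):

```typescript
// Generic debounce: fire only once typing pauses for `waitMs`.
function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs = 300) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage: query the catalog only after a 300ms typing pause.
const onSearchInput = debounce((term: string) => {
  void fetch(`/api/admin/mcp/catalog?search=${encodeURIComponent(term)}`);
}, 300);
```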
Personal MCP Client Activity Dashboard
What You'll See
The new dashboard displays your active MCP clients in one place. When you log in, you'll see a list showing which clients you've used recently, which satellite they're connected to, when they were last active, and how many requests they've made. This is your personal view—you only see your own clients, not your teammates' activity.
Each entry shows practical information: "VS Code on Production Satellite - Active 2 minutes ago - 145 requests, 32 tool calls." If you haven't used a client in the last 30 minutes, it drops off the list automatically. You can adjust that time window if you want to see activity from the last hour or the last day instead.
How It Works
We built this using our existing satellite infrastructure. Every 30 seconds, each satellite reports which users made requests and from which clients. The backend processes these events and updates your dashboard. The activity tracking adds less than 1ms to each request, so you won't notice any performance impact.
The satellite identifies your client by checking your OAuth credentials first (which is how VS Code and Cursor typically connect), then falls back to parsing the User-Agent header if needed. Session tracking is optional—if your client sends an Mcp-Session-Id header, we'll store it for debugging purposes, but it doesn't affect what you see in the dashboard.
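Conceptually, the identification order looks like this (a sketch with illustrative types, not the satellite's actual code):

```typescript
interface IncomingRequest {
  headers: Record<string, string | undefined>;
  oauthClientName?: string; // resolved from the validated OAuth token, if any
}

function identifyClient(req: IncomingRequest): string {
  // 1. Prefer OAuth credentials (how VS Code and Cursor typically connect).
  if (req.oauthClientName) return req.oauthClientName;
  // 2. Fall back to parsing the User-Agent header.
  return req.headers['user-agent'] ?? 'unknown-client';
}

// Optional: keep the Mcp-Session-Id header around for debugging if present.
function sessionIdOf(req: IncomingRequest): string | undefined {
  return req.headers['mcp-session-id'];
}
```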
Why This Matters
This solves a few practical problems. First, you can quickly check if you accidentally left a client running somewhere and burning through API calls. Second, when you're debugging connection issues, you can immediately see which clients are actually reaching the satellite versus which ones are having problems. Third, if you're managing multiple teams, you can see which satellite each of your clients is connected to without having to check your config files.
The backend infrastructure is complete and processing activity right now. We're finishing the frontend UI (Phase 4), which will add the actual dashboard widget with real-time polling, client icons, and a clean timeline view. Once that's live, you'll see your active clients the moment you log in.
Platform Improvements: Search, Performance & Syntax Highlighting
Job Queue Search and Filtering
Finding specific background jobs just got easier. We added a search interface that lets administrators filter jobs by ID, type, status, and time range. Instead of scrolling through hundreds of jobs to find the one that failed, you can now search for "failed email jobs from the last hour" in seconds. The search interface shows loading states when querying, dynamically loads job types from your actual queue data, and includes quick time range presets (last minute, last hour, last 24 hours, etc.). When you're troubleshooting production issues at 2am, this saves real time.
Background Email Processing
Registration now completes instantly instead of waiting 2-5 seconds for email servers. We moved all email sending operations into background jobs, which means users click "Sign Up" and immediately move forward while verification emails queue and send automatically. If the email fails temporarily due to network issues or rate limits, the system retries automatically without bothering the user. This change makes the platform feel faster and more reliable, especially during high-traffic periods when SMTP servers slow down.
Syntax Highlighting
Code blocks throughout the platform now highlight syntax automatically. When you're looking at configuration examples, MCP server code, or debug logs, the syntax highlighting makes it easier to spot variable names, function calls, and structure at a glance. This works for JavaScript, TypeScript, Python, JSON, YAML, and other common languages developers use with DeployStack. It's a small change that makes reading technical content significantly easier.
Official MCP Registry Integration Complete
Automatic Server Discovery
Previously, administrators had to manually add every MCP server to the catalog - a time-consuming process that meant you couldn't easily discover new tools as they became available. This update introduces automatic synchronization with the official MCP Registry, bringing hundreds of pre-configured MCP servers directly into your DeployStack catalog with a single click.
How it works now: Global administrators can trigger a registry sync from the MCP Server Catalog admin interface. Behind the scenes, our background job queue system processes each server sequentially with rate limiting to respect GitHub API limits. The sync fetches server metadata from the official registry, enriches it with GitHub information like star counts and README content, and intelligently maps configurations to our three-tier system. Each synced server is marked with its official registry source and version, so you always know where it came from.
What this means for your workflow: Instead of spending hours researching and manually configuring MCP servers, you can browse the entire ecosystem immediately. Search for servers by category, filter by language or runtime, and deploy them to your teams with all the security and collaboration features DeployStack provides. Official registry servers work seamlessly with our three-tier configuration system - required credentials go to the team level with encryption, while optional settings remain customizable at the user level.
The technical implementation: We built this using our existing job queue infrastructure to handle large-scale synchronization safely. The system processes servers one at a time with configurable delays between requests, preventing API rate limit violations while maintaining progress tracking. GitHub metadata enhancement runs automatically for repository-based servers, pulling in stars, topics, licenses, and organization information. If GitHub rate limits are hit, jobs automatically retry with exponential backoff.
What this enables: DeployStack now serves as the deployment layer for the entire MCP ecosystem. Every server published to the official registry becomes instantly available through DeployStack's managed satellite infrastructure, complete with team management, credential security, and organizational visibility.
stdio MCP Server Support
Before This Update
- ✅ HTTP/SSE MCP servers worked (like Context7)
- ❌ npm package-based MCP servers didn't work
- ❌ MCP servers requiring npx commands didn't work
After This Update
- ✅ HTTP/SSE MCP servers work (unchanged)
- ✅ Node.js MCP servers work (installed via npx, e.g. the memory server)
- ✅ MCP servers from npm registry work
- ❌ Python MCP servers still don't work (coming later)
What You Can Do Now
Install npm-Based MCP Servers
You can now install MCP servers from the npm registry directly through DeployStack:
Examples of servers that now work:
- @modelcontextprotocol/server-filesystem - File system access
- @modelcontextprotocol/server-sqlite - SQLite database queries
- @modelcontextprotocol/server-postgres - PostgreSQL database access
- Any MCP server published to npm with stdio transport
How It Works
- Install a Server: Add an npm-based MCP server from the catalog
- Automatic Setup: Satellite downloads and spawns the Node.js process
- Instant Access: Tools appear in your IDE within 30 seconds
- Team Isolation: Each team gets their own isolated process
What Happens Behind the Scenes
When you install an npm-based MCP server:
- Satellite receives the installation command from the backend
- Spawns a Node.js subprocess using npx (allows package downloads)
- Performs MCP handshake with 30-second timeout
- Discovers available tools from the running process
- Makes tools available through your IDE's MCP client
Technical Details
Auto-Restart with Limits
If an MCP server process crashes:
- Automatic restart up to 3 times within 5 minutes
- Exponential backoff: 1s → 5s → 15s between retries
- Permanent failure after 3 crashes (visible in dashboard)
- Immediate restart if process ran successfully for 60+ seconds
This prevents infinite restart loops from misconfigured servers while recovering from temporary failures.
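A sketch of that policy as a small state machine (illustrative, not the satellite's exact code):

```typescript
// 3 restarts within a 5-minute window, 1s → 5s → 15s backoff; a process that
// ran 60+ seconds resets the counter and restarts immediately.
const BACKOFF_MS = [1_000, 5_000, 15_000];
const WINDOW_MS = 5 * 60 * 1_000;
const HEALTHY_UPTIME_MS = 60_000;

class RestartPolicy {
  private crashes: number[] = []; // timestamps of recent crashes

  /** Returns the restart delay in ms, or 'permanent-failure'. */
  onCrash(uptimeMs: number): number | 'permanent-failure' {
    if (uptimeMs >= HEALTHY_UPTIME_MS) {
      this.crashes = []; // ran successfully: forget history...
      return 0;          // ...and restart immediately
    }
    const now = Date.now();
    this.crashes = this.crashes.filter((t) => now - t < WINDOW_MS);
    this.crashes.push(now);
    if (this.crashes.length > BACKOFF_MS.length) return 'permanent-failure';
    return BACKOFF_MS[this.crashes.length - 1];
  }
}
```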
Resource Limits (Production Only)
In production environments, stdio processes run with resource limits:
- Memory: 100MB per process
- CPU Time: 60 seconds total CPU time
- Processes: Maximum 50 child processes
- Isolation: Complete namespace isolation per team
Development environments run without limits for easier debugging.
Tool Discovery
Tools from stdio processes are:
- Automatically discovered after process starts
- Namespaced to prevent conflicts (e.g., filesystem-read_file)
- Cached for performance
- Team-isolated using OAuth tokens
Current Limitations
Python MCP Servers Not Supported
What doesn't work yet:
- MCP servers requiring Python/pip installation
- Servers using uvx or pip commands
- Python-based MCP packages
Why: This release focused on Node.js (which covers ~80% of MCP servers). Python support is planned for a future release.
Known Issues
- First installation takes longer: npx downloads packages on first run (30s timeout accounts for this)
- No version pinning yet: Servers always install latest version from npm
Acknowledgments
This release builds on the architecture from DeployStack Gateway (our deprecated local CLI), adapting process management for multi-tenant cloud deployment. The implementation maintains backward compatibility while adding significant new capabilities.
Secure Satellite Registration Now Live
Previously, any satellite could register with our backend without authorization - essentially allowing unauthorized access to your organization's AI tools. This update introduces a secure token-based pairing system that puts you in complete control of which satellites can connect to your DeployStack instance.
How it works now: Administrators generate unique registration tokens through the DeployStack dashboard, similar to how you'd create API keys for other services. These tokens are then used during satellite deployment to securely pair the satellite with your backend. Once paired, satellites receive permanent API keys and store them locally, so they only need the registration token during the initial setup. Think of it like pairing a new device with your WiFi network - you enter the password once, and it remembers the connection.
What this means for existing satellites: Your current satellites continue working exactly as before with no interruption. However, any new satellites you deploy will require a registration token from your admin dashboard. This gives you visibility into every satellite in your infrastructure and prevents unauthorized satellites from accessing your AI tools. You can generate tokens for global satellites (managed by DeployStack) or team-specific satellites (deployed in your own infrastructure), and you can revoke unused tokens if needed.
The security upgrade: We've implemented cryptographically signed JWT tokens that expire automatically and can only be used once. Every satellite pairing attempt is logged for audit purposes, and we've added comprehensive error messages to guide operators through any issues. This change represents our commitment to enterprise-grade security while maintaining the simplicity that makes DeployStack easy to use - you still just add a URL to VS Code, but now that URL is backed by properly authenticated infrastructure.
DeployStack Architecture Transition: From Gateway CLI to Satellite Infrastructure
Gateway CLI Architecture (Deprecated)
What We Built
The Gateway CLI was a sophisticated local proxy solution with advanced technical features:
Core Components:
- SSE (Server-Sent Events) + stdio dual transport architecture
- Persistent MCP server process management with automatic restart
- Team-aware configuration synchronization from cloud.deploystack.io
- Cryptographic session management with secure credential injection
- Advanced CLI interface with team switching capabilities
Technical Implementation:
- Node.js/TypeScript application with global npm installation
- HTTP server on localhost:9095 with SSE endpoints
- JSON-RPC 2.0 communication over stdio with MCP subprocesses
- Automatic tool discovery and caching system
- Background process monitoring with health checks
Team Integration:
- OAuth2 authentication with cloud.deploystack.io
- Team-specific MCP server configurations
- Role-based access control (global_admin, global_user, team_admin, team_user)
- Centralized credential management with runtime injection
Gateway Limitations Discovered
Installation Complexity: Local installation required multiple steps: npm global install, authentication, VS Code configuration, and local process management. Each step introduced potential failure points.
Process Management Issues: Managing persistent background processes created system resource conflicts, port management complexity, and cross-platform compatibility problems.
Corporate Network Challenges: Outbound connections to cloud.deploystack.io for configuration sync occasionally failed behind restrictive corporate firewalls.
Development Overhead: Maintaining cross-platform compatibility, handling process lifecycle management, and debugging local networking issues consumed significant development resources.
Satellite Architecture (Current)
Design Principles
Edge Worker Pattern: Satellites operate as intelligent edge workers that handle MCP server execution while maintaining centralized configuration management through cloud.deploystack.io.
Dual Deployment Model:
- Global Satellites: DeployStack-operated infrastructure serving all teams
- Team Satellites: Customer-deployed within corporate networks for internal resource access
Standard Protocol Integration: Satellites expose standard MCP interfaces via HTTPS endpoints, eliminating custom client requirements.
Technical Architecture
Core Components:
- HTTP Proxy Router: Team-aware request routing with protocol translation
- Process Manager: MCP server subprocess lifecycle with resource isolation
- stdio Communication Manager: JSON-RPC communication with MCP processes
- Backend Communicator: Integration with cloud.deploystack.io control plane
- OAuth 2.1 Resource Server: Authentication and authorization for MCP clients
Resource Management:
- Linux namespaces and cgroups v2 for complete team isolation
- Resource jailing: 0.1 CPU cores, 100MB RAM per team
- 5-minute idle timeout for inactive MCP server processes
- Process-level user isolation with dedicated system accounts
Implementation Stack:
- Fastify HTTP server with @fastify/http-proxy for request routing
- stdio JSON-RPC communication with MCP server subprocesses
- Docker containerization with team-specific resource limits
- OAuth 2.1 compliance for enterprise authentication
Client Integration
MCP Client Configuration:
{
  "mcpServers": {
    "deploystack": {
      "url": "https://satellite.deploystack.io/mcp",
      "transport": "http"
    }
  }
}

Authentication Flow:
- User authenticates with cloud.deploystack.io
- Client receives OAuth2 access token
- Token validates against satellite OAuth 2.1 resource server
- Satellite routes requests based on team context
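A sketch of the token validation step, assuming a Fastify onRequest hook and a hypothetical introspection endpoint on the control plane (the URL and claim names are assumptions):

```typescript
import Fastify from "fastify";

const app = Fastify();

// Validate the bearer token before routing any MCP request.
app.addHook("onRequest", async (req, reply) => {
  const auth = req.headers.authorization ?? "";
  if (!auth.startsWith("Bearer ")) {
    return reply.code(401).send({ error: "missing_token" });
  }
  // Assumed introspection endpoint; not a documented URL.
  const res = await fetch("https://cloud.deploystack.io/oauth/introspect", {
    method: "POST",
    headers: { "content-type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ token: auth.slice("Bearer ".length) }),
  });
  const claims = (await res.json()) as { active: boolean; team_id?: string };
  if (!claims.active) {
    return reply.code(401).send({ error: "invalid_token" });
  }
  // Team context (assumed claim name) drives downstream routing.
  req.headers["x-team-id"] = claims.team_id;
});
```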
Migration Details
Deprecated Components
Gateway CLI Application:
- Command-line interface (deploystack login, deploystack start, etc.)
- Local HTTP server with SSE endpoints
- npm package distribution and versioning
- Cross-platform installation procedures
Supporting Infrastructure:
- Local process management and monitoring
- localhost configuration requirements
- Platform-specific installation documentation
Current Infrastructure
Global Satellite Network:
- Multi-region deployment for low-latency access
- Auto-scaling infrastructure managed by DeployStack team
- High availability with 99.9% uptime target
- Standard HTTPS endpoints accessible from any network
Team Satellite Support:
- Docker-based deployment for customer networks
- Outbound-only communication pattern (firewall-friendly)
- Complete team isolation with dedicated resources
- Integration with internal corporate infrastructure
Technical Benefits
Simplified Client Experience
Before (Gateway CLI):
- Multi-step installation: npm install, login, configuration
- Local process management and port conflicts
- Platform-specific troubleshooting and support
After (Satellite):
- Single URL configuration in MCP client
- No local software installation or management
- Standard HTTPS communication patterns
Enhanced Security Model
Team Isolation:
- Complete process isolation using Linux namespaces
- Dedicated system users per team with file system boundaries
- Network isolation preventing cross-team communication
- Resource quotas preventing denial-of-service scenarios
Enterprise Authentication:
- OAuth 2.1 compliance for enterprise SSO integration
- Token-based authentication eliminating credential storage
- Centralized access control through cloud.deploystack.io
- Comprehensive audit logging for compliance requirements
Operational Improvements
Infrastructure Management:
- Centralized monitoring and alerting
- Automated scaling based on demand
- Zero-downtime deployments and updates
- Professional monitoring and incident response
Developer Experience:
- Instant access without installation delays
- Consistent behavior across all environments
- No local system dependencies or conflicts
- Transparent operation with debugging support
Breaking Changes
Gateway CLI Removal
All Gateway CLI functionality has been removed. Users must migrate to satellite-based access:
- Uninstall global gateway package: npm uninstall -g @deploystack/gateway
- Update MCP client configuration to use satellite URLs
- Authenticate through cloud.deploystack.io web interface
Future Development
Planned Enhancements
Enhanced Team Satellites:
- Advanced resource management and scaling
- Integration with enterprise identity providers
- Custom MCP server deployment and management
Global Satellite Expansion:
- Additional regions for improved latency
- Enhanced monitoring and observability
- Advanced caching and performance optimization
Open Source Development
The complete satellite architecture remains open source with active community development. Contributions welcome for both global and team satellite implementations.
Conclusion
The transition from Gateway CLI to Satellite architecture represents a fundamental improvement in DeployStack's approach to MCP management. By eliminating local installation complexity while maintaining enterprise-grade security and team isolation, the satellite model provides a superior foundation for both individual developers and enterprise teams.
The technical architecture supports the full spectrum of deployment scenarios while significantly reducing operational complexity for end users. This architectural foundation positions DeployStack for continued growth and enterprise adoption in the expanding MCP ecosystem.
30x Faster Performance and Better Team Visibility
30x Faster Command Execution
The biggest change: Gateway commands now complete in ~0.1 seconds instead of 3+ seconds.
We implemented an intelligent device caching system that eliminates the expensive device fingerprinting operations that were slowing down every command. Now fingerprinting happens once during login, then gets cached securely for 30 days.
Before:
```bash
$ deploystack refresh
⠹ Detecting device information... # 2.8 seconds
⠹ Getting current configuration... # 0.3 seconds
✔ Configuration updated
```
After:
```bash
$ deploystack refresh
⠹ Getting current configuration... # 0.1 seconds
✔ Configuration updated
```
The cache uses your OS's secure keychain (macOS Keychain, Windows Credential Manager, Linux Secret Service), with AES-256-GCM encryption as a fallback. The cache automatically refreshes every 30 days and validates hardware signatures for security.
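Conceptually, the caching layer looks something like this sketch using the keytar keychain bindings; the service and account names are hypothetical:

```typescript
import keytar from "keytar"; // OS keychain bindings (assumed dependency)

const SERVICE = "deploystack-gateway"; // hypothetical service name
const TTL_MS = 30 * 24 * 60 * 60 * 1000; // 30-day refresh window

// Cache the expensive fingerprint so only the first call pays the cost.
async function getFingerprint(compute: () => Promise<string>): Promise<string> {
  const raw = await keytar.getPassword(SERVICE, "device-fingerprint");
  if (raw) {
    const cached = JSON.parse(raw) as { value: string; createdAt: number };
    if (Date.now() - cached.createdAt < TTL_MS) return cached.value; // ~0.1s path
  }
  const value = await compute(); // expensive ~3s fingerprinting, run once at login
  await keytar.setPassword(
    SERVICE,
    "device-fingerprint",
    JSON.stringify({ value, createdAt: Date.now() })
  );
  return value;
}
```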
Impact: Commands like deploystack refresh and deploystack mcp feel instant. No more waiting around for basic operations.
Automatic Device Activity Tracking
New capability: Teams can now see when devices last accessed DeployStack.
We added background device activity tracking that updates whenever a device calls the platform. This helps teams monitor usage patterns and identify inactive devices for security and license management.
What gets tracked:
- Last activity timestamps for each device
- IP addresses for security auditing
- Usage patterns across team devices
Enterprise benefits:
- Stale device detection - Find devices that haven't been used in months
- Security auditing - Track access patterns and identify anomalies
- License compliance - Monitor active device counts accurately
- Team visibility - See which developers are actively using MCP tools
The tracking happens in the background using hardware fingerprints and won't slow down API responses. All data helps teams maintain better security hygiene and understand their MCP adoption.
Streamlined Configuration Management
Code quality improvement: Eliminated 250+ lines of duplicated configuration change detection code.
We built a reusable ConfigurationChangeService that both the refresh and mcp commands now use. This means consistent behavior, easier maintenance, and better user feedback.
Better user experience during backend operations:
Before:
```bash
🤖 MCP Configuration Status
⚠️ No MCP servers configured for this team
[3-second silence with no feedback]
```
After:
```bash
🤖 MCP Configuration Status
⚠️ No MCP servers configured for this team
⠹ Connecting to backend to check for configuration updates...
⠹ Downloading latest configuration from cloud...
⠹ Comparing configurations...
✔ Configuration check completed
```
The service provides centralized logic for detecting configuration changes, analyzing what changed (added/removed/modified servers), and handling restart prompts when needed.
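A simplified sketch of what that change-detection logic might look like; the types and function name are illustrative, not the actual ConfigurationChangeService API:

```typescript
type ServerConfig = { command: string; args: string[]; env: Record<string, string> };
type ConfigMap = Record<string, ServerConfig>;

// Classify each server as added, removed, or modified between
// the cached configuration and the one downloaded from the cloud.
function diffConfigurations(previous: ConfigMap, next: ConfigMap) {
  const added = Object.keys(next).filter((name) => !(name in previous));
  const removed = Object.keys(previous).filter((name) => !(name in next));
  const modified = Object.keys(next).filter(
    (name) =>
      name in previous &&
      JSON.stringify(previous[name]) !== JSON.stringify(next[name])
  );
  return {
    added,
    removed,
    modified,
    hasChanges: added.length + removed.length + modified.length > 0, // drives restart prompts
  };
}
```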
Technical Architecture
These improvements follow our core design principles:
Performance-first design - Cache expensive operations, provide immediate feedback, optimize for the common case of repeated command usage.
Enterprise-ready security - Use OS-level secure storage, encrypt fallbacks, validate integrity, and provide audit trails for compliance teams.
Developer experience focus - Make commands feel instant, provide clear progress feedback, and eliminate frustrating wait times that break flow state.
Real-World Impact
For a typical developer using DeployStack throughout the day:
- Before: Each deploystack command took 3+ seconds, interrupting workflow
- After: Commands complete instantly, maintaining development flow
- Team visibility: Managers can see which developers are actively using MCP tools
- Maintenance: Easier to identify and clean up unused devices
For enterprise teams managing dozens of developer devices:
- Activity monitoring: Clear visibility into MCP adoption and usage patterns
- Security compliance: Audit trails and stale device detection
August 26, 2025 Update: Major Security and System Improvements
What We Changed
1. MCP Configuration Fields Now Encrypt Automatically
We added a type: "secret" field type to MCP server schemas. When global admins mark a field as secret:
- Field values are encrypted with AES-256-GCM before database storage
- API responses return ***** instead of the actual value
- Runtime still gets the decrypted value for MCP server execution
Before: API keys stored as plain text in the database, visible in API responses
After: API keys encrypted in the database, masked in API responses, decrypted only at runtime
Example:
```json
// Schema definition
{
  "apiKey": {
    "type": "secret",
    "description": "Your API key"
  }
}
```
When you configure apiKey: "sk-1234567890", it gets encrypted and you see ***** everywhere except when the MCP server actually runs.
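For reference, a minimal sketch of the AES-256-GCM round trip using Node's crypto module; the storage format (IV + auth tag + ciphertext) and key handling are assumptions:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// key must be 32 bytes for aes-256-gcm; sourcing it is out of scope here.
function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const enc = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store IV + auth tag + ciphertext together so decryption is self-contained.
  return Buffer.concat([iv, cipher.getAuthTag(), enc]).toString("base64");
}

function decryptSecret(stored: string, key: Buffer): string {
  const buf = Buffer.from(stored, "base64");
  const decipher = createDecipheriv("aes-256-gcm", key, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28));
  return Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]).toString("utf8");
}

const maskSecret = () => "*****"; // what API responses return for secret fields
```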
2. Fixed MCP Configuration Data Structure
We standardized how MCP configurations are stored internally across all three tiers (template/team/user):
- All configuration data now uses the same internal format
- Better handling of the args/env merging process
- More consistent behavior when assembling final runtime configurations
Before: Inconsistent data structures caused edge cases in configuration assembly
After: Consistent data handling, more reliable configuration merging
Technical Impact
Secret Type Implementation:
- Affects: All MCP server configurations with sensitive fields
- Breaking: No - existing configs work the same
- Security: High - eliminates credential exposure in APIs/logs
Data Structure Consistency:
- Affects: Internal configuration processing
- Breaking: No - user experience unchanged
- Reliability: Improved configuration assembly and error handling
What You Need to Do
Nothing. Both changes are backward compatible and happen automatically.
Smart MCP Configuration System + Critical Device Management Fixes
The Big Change: Three-Tier MCP Configuration
We introduced a smart configuration system that automatically handles different types of settings, making team collaboration much easier and eliminating repetitive setup work.
The Problem We Solved
Previously, when your team installed an MCP server, everyone had to configure the same settings individually:
- Repetitive Setup: Every team member manually entered the same API keys and shared folders
- Configuration Drift: Team members often had slightly different settings, causing compatibility issues
- Manual Credential Sharing: Sensitive API keys had to be passed around manually
- Update Headaches: When shared settings changed, everyone had to update individually
The Solution: Three Configuration Tiers
We now automatically organize settings into three levels:
1. Template Settings - Core technical settings that never change (server commands, installation methods)
2. Team Settings - Shared across your team:
- Shared API keys (OpenAI, GitHub, database credentials)
- Common resources (project folders, team documentation paths)
- Standardized policies that ensure consistent team experience
3. Personal Settings - Individual to each person:
- Local file paths (Desktop, Documents, project workspace)
- Personal preferences (debug settings, memory locations)
- Device-specific configurations (work laptop vs. home computer)
Real-World Example
Your team uses an MCP server for GitHub integration:
- Template: Server installation and basic setup (automatic)
- Team: Company GitHub token, shared repositories, coding standards
- Personal: Your local code directory, personal GitHub username, editor preferences
When you join the team, you instantly get company repository access with the right permissions, but code still goes to your preferred local folder.
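Conceptually, the merge works like this sketch, assuming later tiers override earlier ones; the types and names are illustrative:

```typescript
type Tier = { args?: string[]; env?: Record<string, string> };

// Assumed precedence: template < team < personal.
function mergeTiers(template: Tier, team: Tier, personal: Tier): Tier {
  return {
    // args replace wholesale at the most specific tier that defines them.
    args: personal.args ?? team.args ?? template.args ?? [],
    // env merges key-by-key, so personal values win over team and template.
    env: { ...template.env, ...team.env, ...personal.env },
  };
}
```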
Critical Fixes We Made
Fixed Hardware Fingerprinting Bug
The Problem: Our hardware fingerprinting algorithm was generating different IDs for the same machine, creating duplicate device records and breaking device identification.
Root Causes:
- Network MAC addresses processed in random order
- Non-deterministic JSON serialization
- Timestamp-based fallback logic made fingerprints completely random
The Fix: Made fingerprint generation completely deterministic:
- Sort MAC addresses before processing
- Use consistent JSON serialization with sorted keys
- Remove timestamps from fallback logic
- Add stable system properties for reliable identification
Result: Same machine now always generates the same hardware ID, eliminating duplicate devices.
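An illustrative reconstruction of the deterministic approach; the exact inputs to the real fingerprint are not shown here:

```typescript
import { createHash } from "node:crypto";
import { networkInterfaces } from "node:os";

// Sorted inputs + sorted-key serialization + no timestamps
// means the same machine always hashes to the same ID.
function deterministicFingerprint(stableProps: Record<string, string>): string {
  const macs = Object.values(networkInterfaces())
    .flatMap((ifaces) => (ifaces ?? []).map((i) => i.mac))
    .filter((mac) => mac && mac !== "00:00:00:00:00:00")
    .sort(); // fix #1: stable MAC ordering
  const payload = { macs, ...stableProps };
  const json = JSON.stringify(payload, Object.keys(payload).sort()); // fix #2: sorted keys
  return createHash("sha256").update(json).digest("hex"); // fix #3: nothing time-based
}
```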
Performance Improvements
The Problem: Gateway was making multiple API calls to build MCP configurations, causing slow loading times.
The Fix: Created a single backend endpoint that merges all three configuration tiers:
- GET `/api/gateway/me/mcp-configurations?hardware_id={hardwareId}`
- Returns fully merged, ready-to-use configurations
- Automatic device lookup using hardware fingerprints
- Configuration status indication
Performance Gain: Reduced from 3+ API calls to 1 call per configuration refresh.
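A hypothetical gateway-side call against that endpoint; the base URL and auth header shape are assumptions:

```typescript
// One request replaces the previous 3+ calls per refresh.
async function fetchMergedConfigurations(hardwareId: string, token: string) {
  const url =
    "https://cloud.deploystack.io/api/gateway/me/mcp-configurations" +
    `?hardware_id=${encodeURIComponent(hardwareId)}`;
  const res = await fetch(url, { headers: { authorization: `Bearer ${token}` } });
  if (!res.ok) throw new Error(`configuration fetch failed: ${res.status}`);
  return res.json(); // fully merged, ready-to-use configurations
}
```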
What This Means for You
For Team Administrators
- Set Once, Use Everywhere: Configure team settings once, they apply to all team members automatically
- Centralized Management: Update shared API keys in one place instead of asking everyone to update individually
- Better Security: Sensitive credentials managed centrally with encryption
- Consistent Experience: All team members get the same base configuration
For Team Members
- Faster Setup: Join a team and get pre-configured MCP servers instantly
- Personal Control: Customize settings that matter to you without affecting the team
- Multiple Devices: Different personal configs for work laptop, home computer, etc.
- Automatic Updates: Team changes flow to you automatically
For Everyone
- No More Config Errors: Eliminates configuration mismatches and missing settings
- Better Collaboration: Everyone works with the same shared resources
- Simpler Maintenance: Updates happen at the right level automatically
- Reliable Device Management: No more duplicate device records or login issues
DeployStack Release Summary - August 20, 2025
New Features
Auto-Install MCP Servers for New Users
Global administrators can now mark MCP servers for automatic installation when new users create DeployStack accounts. New users get their default team pre-configured with curated tools, eliminating manual setup and providing immediate value.
Key benefits:
- Better onboarding experience for new users
- Reduced time to first value
- Administrative control over recommended tools
- Robust error handling that doesn't block registration
Featured MCP Servers Filtering
Added filtering capabilities to help users discover high-quality, curated MCP servers more easily. Featured servers appear first in listings and can be filtered separately from community-contributed servers.
New API capabilities:
- Filter by featured status in list and search endpoints
- Combine featured filtering with existing filters (language, status)
- Featured servers prioritized in default listings
Global Event Bus System (Phase 2 Complete)
Implemented comprehensive event-driven architecture with plugin integration. Plugins can now listen to and react to core application events while maintaining isolation and security.
Technical achievements:
- Type-safe event system with 23 defined event types
- Fire-and-forget processing for performance
- Plugin event listener integration
- Comprehensive error isolation
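A minimal sketch of the idea; the event names and payloads below are placeholders, not the actual 23 defined types:

```typescript
// Map event names to payload shapes so listeners are checked at compile time.
type Events = {
  "user.registered": { userId: string };
  "mcp.server.installed": { teamId: string; serverId: string };
};

class EventBus {
  private listeners = new Map<string, Array<(p: unknown) => Promise<void>>>();

  on<K extends keyof Events>(event: K, fn: (payload: Events[K]) => Promise<void>) {
    const list = this.listeners.get(event) ?? [];
    list.push(fn as (p: unknown) => Promise<void>);
    this.listeners.set(event, list);
  }

  emit<K extends keyof Events>(event: K, payload: Events[K]): void {
    for (const fn of this.listeners.get(event) ?? []) {
      // Fire-and-forget: listener failures are swallowed, never block the caller.
      fn(payload).catch(() => {});
    }
  }
}
```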
Technical Details
Database Changes
- Added auto_install_new_default_team boolean field to mcpServers table
API Enhancements
- Global MCP server endpoints now accept auto_install_new_default_team parameter
- Server list/search endpoints support featured filtering parameter
- Event bus integrated into Fastify server as singleton
Plugin System
- Extended Plugin interface with optional eventListeners property
- Enhanced Plugin Manager with event listener registration and management
- Updated example plugin with comprehensive event handlers
Impact
For New Users:
- Immediate access to useful MCP servers upon registration
- No manual configuration required to get started
For All Users:
- Easier discovery of high-quality, curated MCP servers
- Better organization of server catalog
For Developers:
- Event-driven architecture foundation for future features
- Plugin ecosystem can now react to core application events
- Improved system extensibility and maintainability
August 15th, 2025 Release
Gateway Client Configuration API
We built a new API endpoint that provides pre-formatted configuration files for MCP clients to connect to the local DeployStack Gateway. This eliminates manual configuration steps and reduces setup errors across all major MCP clients. The endpoint supports Claude Desktop, Cline, VS Code, Cursor, and Windsurf with both cookie and OAuth2 authentication.
Configurable Team Limits
Team creation and member limits are now configurable through global settings instead of being hardcoded to 3. Administrators can adjust these limits based on their organization's needs - from 1-2 teams for simple deployments to 10+ teams for enterprise customers. The default remains 3 teams and 3 members for backward compatibility.
Dynamic Client Configuration
Updated the Gateway Client Configuration modal to dynamically load supported MCP clients from the backend API instead of using hardcoded options. This means the frontend automatically displays all 5 supported clients (Claude Desktop, Cursor, Cline, VS Code, and Windsurf) and will pick up new clients as we add them.
Improved Registration Flow
We streamlined the registration experience by replacing alert boxes with toast notifications and removing the artificial 2-second delay. Users now get immediate feedback and faster navigation to login after completing registration.
Updated Email Templates
We redesigned all email templates with a modern, professional design that works consistently across email clients. The new templates include our official logo, enhanced footer with community links, and better typography using system fonts for faster loading.
User Preferences System
We implemented a complete User Preferences System that lets users customize their DeployStack experience with themes, notifications, and feature toggles. The system is designed for privacy (users can only access their own preferences) and makes adding new preferences simple without requiring database migrations.
Welcome Email Revamp
We completely redesigned the welcome email to provide immediate, actionable guidance for new users. The new email includes step-by-step setup commands, MCP-specific guidance, and real support resources. We also added admin control over welcome email delivery through a new global setting.
Platform Updates - August 10, 2025
Email Testing for Administrators
We added a new email testing endpoint for global administrators. Admins can now send test emails through POST `/api/admin/email/test` to verify SMTP configuration is working correctly. The feature includes proper permission checks and a simple test email template. This makes it easier for administrators to troubleshoot email delivery issues during platform setup.
Button Loading States
We enhanced the Button component to show loading spinners during async operations. The component now accepts loading and loadingText props, automatically disabling the button and showing a spinner while operations are in progress. We updated forms across the platform - authentication, MCP server creation, and category management - to use these loading states. This prevents double-clicks and gives users clear feedback when something is processing.
Content Layout Component
We created a reusable ContentWrapper component to standardize the layout of detail pages. It provides a consistent gray background with white content cards, similar to what we use on team management and MCP server pages. This reduces code duplication and ensures pages look consistent across the platform.
DeployStack Gateway - Zero-Config Local Proxy for MCP Servers
The Gateway implements a sophisticated Control Plane/Data Plane architecture that connects your development environment to team MCP servers through cloud.deploystack.io. When you start the gateway, it automatically spawns all configured MCP servers as persistent background processes, making them instantly available at `localhost:9095/sse` for VS Code or any SSE-compatible client.
Supporting both stdio transport for CLI tools and SSE transport for VS Code compatibility, the Gateway handles all the complexity of process management, credential injection, and session management behind the scenes. Credentials are securely downloaded from the cloud control plane and injected directly into process environments without ever exposing them to developers, while cryptographically secure session IDs ensure safe persistent connections.
The Gateway's team-aware caching system enables instant startup by preserving tool configurations locally, automatically switching contexts when you change teams, and discovering new tools as they're added to your team's catalog. Whether your MCP servers are written in Node.js, Python, Go, Rust, or any other language, the Gateway handles them all through appropriate runtime commands, exposing individual tools with proper namespacing for direct use in your development workflow.
Extensible Plugin System with Secure Route Isolation
The plugin system provides a complete framework for extending DeployStack without compromising security or stability. Each plugin operates in its own namespace with API routes automatically isolated under /api/plugin/<plugin-id>/, preventing route hijacking and ensuring plugins cannot interfere with core authentication or user management endpoints.
Plugins can define their own database tables through a secure two-phase initialization process - core migrations run first in a trusted environment, followed by plugin tables created dynamically in a sandboxed phase. This architecture ensures plugins cannot modify core database structure while still providing full database functionality including relationships, seeding, and migrations.
Beyond database and API extensions, plugins can contribute their own global settings managed through DeployStack's admin interface, integrate with other plugins through the Plugin Manager API, and implement lifecycle hooks for initialization and cleanup. Whether you're adding support for a new cloud provider, implementing custom business logic, or extending DeployStack's capabilities, the plugin system provides a secure, structured foundation for development.
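A sketch of how that namespacing could be enforced with Fastify's encapsulation; the Plugin shape here is an assumption based on the description:

```typescript
import Fastify, { type FastifyInstance } from "fastify";

// Assumed minimal plugin contract for illustration.
interface Plugin {
  id: string;
  registerRoutes(scope: FastifyInstance): void;
}

async function loadPlugin(app: FastifyInstance, plugin: Plugin) {
  // Every plugin route lands under /api/plugin/<plugin-id>/,
  // so plugins cannot shadow core authentication or user endpoints.
  await app.register(async (scope) => plugin.registerRoutes(scope), {
    prefix: `/api/plugin/${plugin.id}`,
  });
}
```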
Complete Email System with SMTP and Beautiful Templates
The email system automatically connects to your existing SMTP settings configured in Global Settings, supporting providers like Gmail, SendGrid, or any standard SMTP server. We've included three professionally designed templates out of the box - welcome emails for new users, password reset instructions, and general notifications - all built with responsive HTML that looks great on any device.
For developers, we provide type-safe helper methods that make sending emails as simple as calling EmailService.sendWelcomeEmail() with validated parameters. The system includes automatic template caching for performance, connection pooling for high-volume sending, and comprehensive error handling that won't break your application flow if email delivery fails.
Whether you're sending a single welcome email or batch notifications to your entire team, the email system handles it reliably with full support for attachments, CC/BCC recipients, and custom sender addresses.
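A hypothetical call shape; only the EmailService.sendWelcomeEmail() entry point is documented, so the import path and parameters are assumptions:

```typescript
import { EmailService } from "./services/email"; // assumed import path

// Parameter names are illustrative; the real signature may differ.
await EmailService.sendWelcomeEmail({
  to: "new.user@example.com",
  userName: "New User",
});
```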
GitHub Repository Auto-Import for MCP Servers
When you paste a GitHub repository URL into DeployStack, we now automatically fetch the repository metadata, detect the programming language and runtime requirements, and pre-populate all MCP server configuration fields. The system intelligently maps languages to runtimes (TypeScript → Node.js, Python → Python, etc.) and extracts package information from files like package.json or pyproject.toml to understand dependencies and installation requirements.
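A minimal sketch of that mapping; entries beyond the documented TypeScript and Python pairs are assumptions:

```typescript
// Language reported by GitHub -> runtime used for the MCP server.
const RUNTIME_BY_LANGUAGE: Record<string, string> = {
  TypeScript: "node",
  JavaScript: "node", // assumed
  Python: "python",
  Go: "go",           // assumed
};

function detectRuntime(githubLanguage: string): string | undefined {
  return RUNTIME_BY_LANGUAGE[githubLanguage];
}
```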
For immediate use, the feature works out-of-the-box with public repositories using GitHub's public API. Teams needing access to private repositories or higher rate limits can optionally configure a GitHub App for enhanced capabilities with up to 5,000 API requests per hour.
This eliminates the tedious manual entry of repository details, ensures consistency across your MCP server configurations, and reduces setup time from minutes to seconds.
GitHub Integration for MCP Servers
DeployStack now seamlessly connects with GitHub to transform how you manage MCP servers. When you add a GitHub repository URL to an MCP server, the platform automatically extracts everything it needs - description, programming language, license, topics, and even detects MCP-specific configurations from package files. Version management becomes effortless as the system automatically discovers semantic version tags and GitHub releases, creating a complete version history with changelogs pulled directly from your repository.
The integration goes beyond simple repository scanning. Configure GitHub OAuth in your global settings to enable single sign-on for your users, letting them authenticate with their existing GitHub accounts instead of managing separate passwords. The system intelligently handles both public and private repositories, respecting GitHub's access controls while maintaining DeployStack's team boundaries. Repository synchronization can be triggered manually with a single click, pulling the latest metadata and configurations while preserving your local customizations. Security remains paramount with encrypted token storage, minimal permission requests, and comprehensive audit logging of all GitHub operations.
Centralized Global Settings Management
Managing your DeployStack installation just became significantly simpler with the new Global Settings interface. Administrators now have a single control panel where all system-wide configuration lives, organized into logical groups that make sense - SMTP Mail Settings for email delivery, GitHub OAuth Configuration for authentication, and System Configuration for core behavior. No more hunting through configuration files or environment variables.
The interface handles the complexity behind the scenes while presenting clear, understandable options. Setting up email delivery is straightforward - enter your SMTP server details, configure sender information, and emails start flowing for user registrations and password resets. GitHub OAuth integration takes minutes instead of hours - create your OAuth app, enter the credentials, and users can sign in with their GitHub accounts. System configuration options let you control the frontend URL, toggle email functionality, manage login methods, and show or hide API documentation based on your security requirements.
Security is built into the design with all sensitive information automatically encrypted, administrator-only access control, and clear organization that prevents accidental misconfiguration. Whether you're running a personal instance with Gmail SMTP, a team deployment with GitHub authentication, or a production system with dedicated email services, Global Settings adapts to your needs. The grouped tab interface ensures related settings stay together, making it easy to configure entire features at once rather than hunting for scattered options.
Team-Scoped MCP Server Installations from Global Catalog
The missing link between browsing and using MCP servers is now complete. Teams can install any server from the global catalog directly into their workspace, creating team-owned instances configured with their own credentials and settings. This three-layer system - global catalog, team access, and team installations - provides the perfect balance of community sharing and team privacy.
Every installation belongs exclusively to your team workspace. When you install a GitHub MCP server, for instance, you configure it with your team's GitHub token, customize the settings for your workflow, and give it a meaningful name that makes sense in your context. Other teams might install the same server, but their installation is completely separate - different credentials, different configuration, different data. This isolation ensures your API keys, tokens, and sensitive configurations never leak across team boundaries.
The system integrates seamlessly with your existing team resources. Installations live alongside your cloud credentials and environment variables in a unified workspace where team administrators maintain full control. Security is built-in at every level - credentials are encrypted at rest, access follows team permissions, and audit trails track all changes. Whether you're a solo developer installing productivity tools or a team deploying complex integrations, every MCP server from the global catalog is now just a few clicks away from being operational in your secure team environment.
Global MCP Server Catalog - Community-Wide Server Discovery
The MCP Server Catalog transforms how you discover and deploy Model Context Protocol servers. This comprehensive marketplace brings together community-contributed servers and official integrations in one searchable, filterable catalog that's accessible to all authenticated users. Global servers are now available platform-wide, letting you browse pre-configured solutions for everything from OpenAI integrations to GitHub tools, all organized by category and ready for instant deployment.
The catalog operates on two visibility levels to balance collaboration with privacy. Global servers, managed by administrators, provide the community foundation - these are the battle-tested, widely-useful servers everyone can access. Team servers remain completely private to your team, giving you space for custom integrations and proprietary tools without exposing them to the broader platform. Global Administrators maintain oversight capabilities for support purposes, but team boundaries remain secure.
Every server in the catalog includes comprehensive metadata - from technical specifications like language and runtime requirements to capabilities breakdown showing available tools, resources, and prompts. Version management tracks every release with detailed changelogs, automatic GitHub synchronization pulls updates directly from repositories, and the deployment integration means you're one click away from launching any server. Whether you're browsing by category, filtering by language, or searching for specific functionality, the catalog makes finding the right MCP server as simple as finding an app in an app store.
MCP Server Categories for Better Organization
Finding the right MCP server just got significantly easier with our new category system. Categories act as organizational labels that group similar servers together, transforming a potentially overwhelming catalog into a well-organized library of tools. Whether you're looking for Development Tools like Git integrations, Data Sources for database connections, or AI & ML services for machine learning workflows, categories help you navigate directly to what you need.
The system works seamlessly across both global community servers and your team's private servers, using a unified category structure managed by Global Administrators. When adding servers to the catalog, simply assign them to the appropriate category - the same familiar pattern you'd use with folders or tags in any modern application. Users can then filter the entire catalog by category, dramatically reducing the time spent searching for specific functionality.
This seemingly simple addition has a profound impact on discoverability. Instead of scrolling through an ever-growing list of servers, you can now jump straight to Communication tools when you need Slack integration, or filter for Productivity servers when setting up task management. Categories make the MCP ecosystem more accessible for new users while helping experienced developers quickly locate specialized tools.
Role-Based Access Control with Team Management Permissions
DeployStack now features comprehensive role-based access control that governs everything from system administration to team member management. The system supports four key roles: Global Administrators who manage the entire installation, Global Users who create and manage their own teams, Team Administrators with full control over team resources and membership, and Team Users who participate in team activities.
Team Administrators gain powerful member management capabilities - add up to 3 members per team, promote users to admin status, remove team members, and even transfer ownership when needed. Your default personal team remains protected and private, while additional teams support full collaboration. Global Administrators maintain oversight with read-only access to all teams for support and administrative purposes, though they cannot view actual credential values for security.
The MCP Catalog permissions align with your role perfectly. Global Admins manage community-wide servers and categories, Team Admins create and manage their team's private servers, while users browse and deploy from available servers based on their access level. Every action from creating teams to managing deployments follows the principle of least privilege - users get exactly the permissions they need, nothing more. This creates a secure, organized environment where solo developers and collaborative teams work efficiently without compromising security or stepping on each other's toes.
Team Workspaces for Organized MCP Server Management
Teams transform how you organize MCP server deployments in DeployStack. Each team acts as a complete, isolated workspace containing your MCP servers, cloud provider credentials, and environment variables - keeping different projects and environments cleanly separated. When you sign up, we automatically create your personal default team that's ready for immediate use.
You can create up to 3 teams total, each supporting up to 3 members with role-based access control. Team Administrators have full control over resources and member management, while Team Users get basic viewing access. Your default team remains private as your personal workspace, while additional teams enable collaboration with colleagues on shared projects.
Every team maintains complete resource isolation with separate credentials, independent servers, and scoped environment variables. Team management is straightforward - edit team details, manage members, transfer ownership, or delete teams when no longer needed (your default team is protected from deletion). Whether you're a solo developer organizing different projects or a small team collaborating on deployments, Teams provide the structure and security you need for efficient MCP server management.