DeployStack Updates - January 18 2026
MCP Servers Now Recover Automatically
Remote MCP servers (like Context7 or Brightdata) used to get stuck at "Connecting" after network hiccups. You had to uninstall and reinstall to fix it.
Not anymore. We added a two-layer recovery system:
- When a tool call fails, the satellite reconnects immediately (most issues resolve in seconds)
- If that doesn't work, the backend health check (runs every 3 minutes) detects when the server is back and triggers reconnection
The change itself is invisible; what you'll notice is that you no longer have to reinstall an MCP server after a temporary network issue.
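For the curious, the two layers look roughly like this in TypeScript (an illustrative sketch; the connection type and helper are stand-ins, not our actual satellite internals):

interface McpConnection {
  execute(tool: string, args: unknown): Promise<unknown>;
  reconnect(): Promise<void>;
  ping(): Promise<boolean>;
}

declare function stuckServers(): McpConnection[];

// Layer 1: reconnect immediately when a tool call fails
async function callTool(server: McpConnection, tool: string, args: unknown) {
  try {
    return await server.execute(tool, args);
  } catch {
    await server.reconnect();          // most transient issues resolve here, in seconds
    return server.execute(tool, args); // retry on the fresh connection
  }
}

// Layer 2: the backend health check picks up anything layer 1 missed
setInterval(async () => {
  for (const server of stuckServers()) { // installations still stuck at "Connecting"
    if (await server.ping()) await server.reconnect(); // server is back: reconnect
  }
}, 3 * 60 * 1000); // runs every 3 minutes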
See What's Happening with Real-Time Logs
Team administrators can now view MCP server logs directly in the dashboard. No SSH access needed, no command-line tools.
When you open an MCP server installation, you'll see a new "Logs" tab that shows:
- Live log streaming as your MCP server generates them
- Filters by severity (Info, Warning, Error, Debug)
- Timestamps with timezone details on hover
This makes debugging faster. Instead of waiting for support or digging through server files, you can see what's happening right now.
Team Management Gets Smarter
The Teams Management table now shows two new columns:
- MCP Servers: How many servers each team has installed (click the number to jump to their MCP servers page)
- Members: How many members are on the team (click to see the full list)
We also removed the technical "Slug" column to make room. Every clickable element highlights on hover, and you can jump to team details with one click instead of clicking "View" and hunting through tabs.
Behind the scenes, we optimized database queries to fetch all this information in a single request per page, so the interface stays fast even as your platform grows.
Monitor Team MCP Instances at a Glance
Team admins can now click on any MCP server installation to expand and see all team members' instances. Each instance shows:
- Real-time status (online, provisioning, error, offline)
- User information (username and email)
- Status messages for troubleshooting
- Last health check timestamps
Regular team members see only their own instance when they expand an installation. The feature uses lazy loading - instances only load when you expand the row - so pages load faster.
New Controls for GitHub-Based MCP Installations
DeployStack can now install MCP servers directly from GitHub repositories (similar to how Vercel deploys from GitHub). We added three new team-level settings to control this:
- Allow GitHub MCP Servers - Toggle whether teams can install from GitHub
- Allow Private GitHub Repositories - Control access to private repos (requires GitHub authentication)
- GitHub MCP Server Limit - Set how many GitHub-based servers each team can run
Platform administrators can set global defaults for new teams, then override settings for individual teams as needed. Teams can view their GitHub MCP settings at /teams/manage/{teamId}/usage.
This gives you fine-grained control over GitHub installations while unlocking new distribution options:
- Developers can publish MCP servers without waiting for catalog approval
- Teams can build and deploy private MCP servers for internal use
- Community-built MCP servers can be installed directly from repos
The feature is disabled by default (opt-in), so it won't affect existing teams.
What's Next
These updates set the foundation for upcoming features like automatic builds from GitHub source, version management, and webhook integration for auto-updates when repos change.
Admin Panel Improvements - January 2025
User & Team Management at Scale
Pagination Everywhere
Both the Users and Teams admin pages now load 20 items at a time instead of everything at once, which makes the interface faster when you're managing a growing platform with hundreds of users or dozens of teams.
What you get:
- Page through results with next/previous buttons
- Adjust how many items you see per page (10, 20, 50, or 100)
- See your current position and total count
- No more waiting for huge lists to load
Search That Actually Works
Users: Search by username or email. Toggle between search types with a button, press Enter to search.
Teams: Search by team name. Results come back fast even with thousands of teams because the search runs on the server.
Both pages show loading spinners while searching, so you know something is happening.
Better Filtering for Users
New dropdown filter menu on the Users page lets you:
- Filter by authentication type (Email or GitHub)
- Filter by role (Global Admin or Global User)
- Combine filters to narrow down results
- See active filters as pill badges
- Clear individual filters or all at once
This replaces the old tab navigation, which was confusing.
Role Management + Email Notifications
Global Admins can now change user roles directly from the user detail page.
The workflow:
- Click on any user
- Find the Role card
- Click "Change Role"
- Select new role from dropdown
- Confirm
What happens automatically:
- The user gets an email about their role change
- All other Global Admins get notified (if it's a Global Admin promotion or demotion)
- You see instant confirmation on the page
Why this matters:
- Faster onboarding for new admins
- Everyone stays informed about who has elevated access
- Creates a paper trail for compliance and audits
- No manual Slack messages needed
Note: Email notifications only fire for Global Admin role changes. Regular role changes don't spam your inbox.
Team Management Improvements
Redesigned Team Detail Page
Team detail pages now use clean URLs and tab-based navigation:
Before: /admin/teams/4vj7igb2fcwzmko?section=limits
After: /admin/teams/4vj7igb2fcwzmko/limits
Desktop: Left sidebar with General, Limits, Members, and MCP Servers sections
Mobile: Button-based navigation bar at the top
Switching between tabs is instant (no page flash) because we cache team data.
New: MCP Server Installations View
Global Admins can now see all MCP servers a team has installed, all in one place.
What you'll see:
- Server name (click to view full details in catalog)
- Status with color-coded badges (online, offline, error, requires reauth, etc.)
- Installation date
- Last used date (or "Never")
Why this helps:
- Faster support: Check if a server is offline when a team reports an issue
- Usage monitoring: See which servers teams actually use
- Capacity planning: Spot teams approaching their installation limits
Cleaner API Structure (Technical Change)
We moved all admin user routes under /api/admin/users/ to match the pattern we already use for teams (/api/admin/teams/).
Customer impact: None. The admin dashboard automatically updated.
For API consumers: If you directly call our API, update your admin user endpoint URLs. Regular user endpoints (/api/users/me, /api/users/:id) are unchanged.
Performance & UX Polish
- Skeleton loaders on all admin pages (no more blank screens while loading)
- Smooth transitions between pages and search results
- Breadcrumb navigation on detail pages for easier back navigation
- Faster API responses for large datasets
Who Benefits Most
Small teams (1-20 users): The interface feels snappier with better loading states and instant navigation.
Growing teams (20-100 users): Pagination prevents slowdowns. Search helps you find people quickly.
Large deployments (100+ users): Managing the platform is now practical. Find any user or team in seconds.
Technical Details for Support Teams
API Changes
Users:
- GET /api/users → GET /api/admin/users (admin-only operations moved)
- Response format now includes pagination metadata
- New search endpoint: GET /api/admin/users/search
Teams:
- GET /api/admin/teams now supports pagination (limit/offset)
- New search endpoint: GET /api/admin/teams/search?name=query
- New endpoint: GET /api/admin/teams/:id/mcp/installations
Breaking Changes
⚠️ Admin Teams and Users API response structure changed. Old responses returned flat arrays. New responses include pagination metadata.
If you have custom integrations, you'll need to update your code to handle the new format:
// Old format
const users = response.data; // Array
// New format
const users = response.data.users; // Array
const pagination = response.data.pagination; // Metadata
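If you're updating a custom integration, a paginated walk over the new format could look like this (a sketch; the exact user fields and pagination metadata names are assumptions based on the shape above):

interface PagedUsersResponse {
  data: {
    users: Array<{ id: string; username: string; email: string }>; // assumed user fields
    pagination: { limit: number; offset: number; total: number };  // assumed metadata fields
  };
}

async function fetchAllAdminUsers(baseUrl: string) {
  const all: PagedUsersResponse['data']['users'] = [];
  const limit = 100;
  for (let offset = 0; ; offset += limit) {
    const res = await fetch(`${baseUrl}/api/admin/users?limit=${limit}&offset=${offset}`);
    const { data }: PagedUsersResponse = await res.json();
    all.push(...data.users);
    if (all.length >= data.pagination.total || data.users.length === 0) break;
  }
  return all;
}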
Email Configuration
Role change notifications require SMTP configuration in Global Settings. If SMTP isn't set up:
- Role changes still work perfectly
- Emails are queued but not sent
- No errors shown to users
OAuth Re-auth & Team Controls
OAuth Re-Authentication: Recover from Expired Tokens in Seconds
The Problem: When OAuth tokens expired—due to server downtime, token revocation, or failed automatic refreshes—users were stuck. Their MCP servers showed "Auth Required" errors, and the only option was to delete the installation and start over. This meant losing all their configuration, re-entering environment variables, and waiting 5+ minutes to get back online.
What We Built: A one-click re-authentication flow that updates expired tokens without reinstalling anything. When an OAuth token expires, users see a "Re-authenticate" button on their installation details page. Click it, authorize through the OAuth provider's popup, and you're back online in about 30 seconds. All configuration stays intact.
How It Works:
- User navigates to an installation with expired OAuth credentials
- Status shows "Reauth Required" with a clear explanation
- Click "Re-authenticate" → OAuth popup opens (Notion, Box, Linear, etc.)
- User authorizes DeployStack access
- Popup closes automatically with success confirmation
- Installation status updates to "Connected"
- Tools become available immediately
Who Can Use It: All team members, both admins and regular users. Since OAuth tokens are personal credentials, everyone manages their own authorizations.
Technical Implementation:
- Backend: New POST /api/teams/:teamId/mcp/installations/:installationId/reauth endpoint
- Frontend: OAuth popup flow with real-time status tracking via Server-Sent Events
- Database: Added installation_id field to oauthPendingFlows table for linking re-auth flows to existing installations
- Permissions: Requires mcp.installations.view (standard for both admin and user roles)
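A rough sketch of what triggering re-auth from a client looks like (the endpoint is the one listed above; the response field name and SSE stream path are assumptions):

async function startReauth(teamId: string, installationId: string): Promise<void> {
  const res = await fetch(
    `/api/teams/${teamId}/mcp/installations/${installationId}/reauth`,
    { method: 'POST' }
  );
  const { authorizationUrl } = await res.json(); // response field name is an assumption
  window.open(authorizationUrl, 'oauth', 'width=600,height=700'); // provider popup

  // Watch for the status flip via Server-Sent Events (stream path is an assumption)
  const events = new EventSource(
    `/api/teams/${teamId}/mcp/installations/${installationId}/status`
  );
  events.onmessage = (e) => {
    if (JSON.parse(e.data).status === 'online') events.close(); // back online, stop listening
  };
}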
User Benefits:
- 30-second recovery instead of 5+ minute reinstall process
- Self-service - no waiting for admin help or support tickets
- Configuration preserved - environment variables, custom settings, and headers all stay intact
- Less downtime - faster recovery means your team keeps working
For Support Teams: This should reduce OAuth-related support tickets. When users report token expiration issues, point them to the installation details page and the "Re-authenticate" button. They'll have the issue resolved before you finish typing a response.
Metrics Worth Tracking:
- Re-auth success rate (% of attempts that succeed)
- Average time from "requires_reauth" to "online" status
- Support ticket reduction for OAuth issues
- User satisfaction scores for OAuth functionality
Related Features: This works alongside our existing automatic token refresh (background job every 5 minutes), OAuth discovery system, and AES-256-GCM token encryption. Each user's tokens remain isolated and private per team.
Per-Team Member Limits: Configure Team Sizes Independently
The Problem: All teams shared the same member limit—defaulting to 3 members. If you wanted one team to have 5 members and another to have 10, you had to change the global setting for everyone. This created friction for organizations with different team size requirements.
What We Built: Team member limits are now configurable per team. Administrators can set different limits for each team through the admin panel, giving you control over team sizes based on your organization's structure.
How It Works:
For Administrators:
- Go to Admin → Teams → Select a team
- Go to the "Limits" tab
- Adjust the "Member Limit" field (minimum 1 member)
- Click Save
The limit applies immediately. If a team already exceeds a new lower limit, existing members stay but no new members can be added until the count drops.
For Team Owners: The members interface shows "X of Y members" where Y is the team-specific limit. The "Add Member" button disables when you hit the limit.
Default Teams: Personal teams remain single-user only. Member count information doesn't display for these teams since they can't have additional members anyway.
Technical Implementation:
- Backend: Global setting renamed from team.member_limit to team.default_member_limit
- Database: New member_limit column in teams table (default: 3 members)
- Validation: Enforces minimum of 1 member (at least the owner)
- Admin API: Updated to accept member_limit parameter
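In sketch form, the enforcement rules above come down to two small checks (illustrative, not the actual service code):

// Adding a member: allowed only while the count is below the team's own limit
function canAddMember(team: { member_limit: number }, memberCount: number): boolean {
  return memberCount < team.member_limit;
}

// Saving a new limit: at least 1 (the owner). Lowering the limit below the
// current member count keeps existing members; it only blocks new additions.
function validateMemberLimit(newLimit: number): void {
  if (newLimit < 1) {
    throw new Error('Member limit must be at least 1 (the team owner).');
  }
}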
Frontend Changes:
- Admin panel includes "Member Limit" input field in Limits tab
- Members list shows dynamic limits from team data
- UI text updated to reflect per-team configuration
- Default team UI simplified (no unnecessary member count display)
Migration: Existing teams automatically received a member limit of 3 (the previous global default). New teams use the global default setting value. No action required for existing deployments.
Breaking Changes: None. This is fully backward compatible. All existing teams keep working exactly as before with their 3-member limit.
Configuration: Administrators can still set the global default through Admin → Settings → Team Settings → "Default Team Member Limit". This sets the initial member limit for newly created teams, but existing teams keep their current limits.
Remote MCP Server Permissions: Control Access to External MCP Servers
The Problem: All teams could only install MCP servers from the DeployStack catalog. Some organizations needed access to custom or private MCP servers hosted on their own infrastructure, but there was no way to enable this selectively without opening it up for everyone.
What We Built: A new per-team permission called allow_remote_mcp that lets global admins control which teams can install MCP servers from remote sources outside the catalog.
How It Works:
For Global Admins:
- Go to Admin Panel → Teams → Select a team
- Go to the General tab
- Find the "Remote MCP Servers" card
- Check the box to allow remote installations
- Click "Save changes"
For Team Members: Team members see their permission status on the team management page. If enabled, they can install MCP servers from any URL. If disabled, they're limited to the DeployStack catalog.
Default Behavior: New teams have this permission disabled by default. Existing teams were updated to disabled during deployment. Global admins can change this setting anytime for individual teams.
Security Consideration: Enabling this permission means team members can install MCP servers from any URL. Remote servers run with the team's permissions, so only enable this for trusted teams with legitimate needs.
Use Cases:
- Enterprise teams running custom internal MCP servers
- Development teams testing private MCP implementations before catalog submission
- Partners integrating proprietary MCP tools
- Beta testers accessing unreleased MCP servers for early feedback
Technical Implementation:
- Backend: Added team.allow_remote_mcp global setting (default: false)
- Database: New allow_remote_mcp column in teams table (migration: 0014_useful_sersi.sql)
- Admin API: Updated to support the new permission field
- Frontend: Admin panel checkbox in General tab with save-on-click behavior (no auto-save)
- Team UI: Read-only display on team management page for transparency
Why This Matters: This gives administrators granular control over security and functionality. You can enable advanced capabilities for teams that need them while keeping stricter controls for others. It's particularly valuable for organizations with mixed trust levels across teams.
Tools Table Search: Find MCP Tools Faster
The Problem: When MCP servers expose dozens or hundreds of tools, finding a specific tool meant scrolling through the entire list. No search, no filters, just manual hunting.
What We Built: A search input field on the tools table that lets users filter tools by name or description in real-time.
How It Works:
- Type in the search box positioned to the left of the Enable/Disable bulk action buttons
- Results update instantly as you type
- Searches both tool names and descriptions
- Case-insensitive matching
- Clear the search input to show all tools again
- Selection counter updates to reflect filtered results
Technical Implementation:
- Component: Updated ToolsTab.vue
- Pattern: Reactive computed property filtering (follows existing users/teams table pattern for consistency)
- Translation: Added key mcpInstallations.details.tools.table.search
- Performance: Client-side filtering, no server requests
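The pattern is roughly this (a sketch of the reactive computed filter, not the actual ToolsTab.vue source):

import { computed, ref } from 'vue';

interface Tool { name: string; description: string }

const searchQuery = ref('');
const tools = ref<Tool[]>([]);

// Case-insensitive match against tool names and descriptions,
// recomputed as the user types; no server round-trips
const filteredTools = computed(() => {
  const q = searchQuery.value.trim().toLowerCase();
  if (!q) return tools.value; // clearing the search shows all tools again
  return tools.value.filter(
    (t) =>
      t.name.toLowerCase().includes(q) ||
      t.description.toLowerCase().includes(q)
  );
});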
Why This Matters: When you're working with MCP servers that have extensive tool catalogs, this makes discovery and management much faster. It's a small change that saves time every single day for power users.
DeployStack Updates - Real-Time Status, Satellite Selection & More
Real-Time MCP Server Status Updates
We fixed a frustrating issue where MCP servers would appear "online" even when they'd gone offline. Previously, you had to wait 3-4 minutes for status updates. Now, status changes happen immediately when you run tools.
What changed:
Instant offline detection - When a tool execution fails because the server is offline, the status updates to "offline" right away. No more waiting for health checks.
Automatic recovery - When a server comes back online, the system detects this automatically the next time you use a tool. The server reconnects, refreshes its tools, and updates to "online" in seconds.
Retry logic - Tool executions now retry 3 times (with 0.5s, 1s, and 2s delays) before marking a server as offline. This handles temporary network hiccups without false alarms.
Tools preserved - If a server goes offline, its tools remain visible so you can see what was available. Previously, tools could disappear during failed reconnection attempts.
Recovery attempts enabled - You can now retry tools on offline servers. If the server has recovered, the tool will work and automatically update the status to "online."
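Sketched in TypeScript, the retry behavior looks like this (delays taken from the description above; markServerOffline stands in for the real status update):

const RETRY_DELAYS_MS = [500, 1000, 2000]; // 0.5s, 1s, 2s

async function executeWithRetry<T>(run: () => Promise<T>): Promise<T> {
  let lastError: unknown;
  for (const delay of [0, ...RETRY_DELAYS_MS]) {
    if (delay > 0) await new Promise((r) => setTimeout(r, delay));
    try {
      return await run(); // a success here also flips a recovered server back to "online"
    } catch (err) {
      lastError = err; // transient failure: wait and try again
    }
  }
  markServerOffline(); // all retries failed: status goes "offline" immediately, tools stay listed
  throw lastError;
}

declare function markServerOffline(): void; // stands in for the real status update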
Why this matters:
Before this update, a server could go offline during a call but the status would stay "online" for 3-4 minutes. You'd keep trying tools with confusing errors and no way to tell what was happening. Now, when a server goes offline, the status updates to "offline" within 4 seconds. You know immediately what happened, and when the server comes back, one tool execution detects it and reconnects automatically.
What you'll see:
The status badge in your dashboard updates within seconds instead of minutes. You'll see "offline" when a server can't be reached, "error" for other failures, and "requires re-auth" when OAuth tokens expire. Tools stay in your list even when servers are temporarily offline.
Satellite Selection for MCP Client Configuration
We added satellite selection dropdowns to both the dashboard configuration modal and the client configuration page. You can now choose which satellite to connect your MCP clients to, instead of being locked to a single hardcoded satellite URL.

Before this change:
- All users were automatically connected to https://satellite.deploystack.io
- No way to choose a different satellite
- Team satellites couldn't be used for MCP client configuration
- Configuration was one-size-fits-all
After this change:
- Users select which satellite they want to use
- Team satellites show up alongside global satellites
- Configuration automatically updates with the selected satellite's URL
- Each team can use their own satellite infrastructure
Where you'll find it:
Dashboard modal - Added satellite dropdown above the MCP client selector. Client selection (left) and satellite selection (right) in a 50/50 layout. If you only have one satellite, we auto-select it for you. If no active satellites are available, you'll see a clear message to contact your administrator.
Client configuration page - Satellite dropdown appears at the top of the sidebar. Switching satellites automatically reloads the configuration. Same auto-selection behavior as the dashboard.
Benefits:
Choose the satellite that makes sense for your use case. Use your team's satellite instead of shared global infrastructure. Connect to satellites closer to your infrastructure for better performance. Know exactly which satellite you're connecting to with clear feedback.
In-Memory Cache for Global Settings
Global settings are now cached in memory instead of being fetched from the database on every request. The cache loads automatically when the server starts and refreshes whenever an admin updates a setting.
Why this matters:
Before this change, every API request that checked a global setting (like whether to show the API version or enable Swagger docs) triggered a database query. For high-traffic endpoints like the health check (/) or documentation page (/documentation), this created unnecessary database load. With dozens of settings and potentially thousands of requests per minute, those database calls add up.
Now, all setting reads happen instantly from memory. The database is only touched once at startup and again when an admin saves a change. This reduces latency on every request and eliminates a significant source of database traffic, making the platform faster and more efficient as usage scales.
How it works:
Settings load into memory on server startup. All setting reads happen from the in-memory cache. The cache automatically invalidates and reloads when settings are modified via the admin panel. If the cache isn't ready for some reason, the system falls back to database queries (graceful degradation).
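Conceptually, the cache behaves like this (a simplified sketch with stand-in database helpers):

const cache = new Map<string, string>();
let ready = false;

async function loadSettingsCache(): Promise<void> {
  for (const row of await db.selectAllGlobalSettings()) cache.set(row.key, row.value);
  ready = true; // one database read at startup
}

async function getSetting(key: string): Promise<string | undefined> {
  if (ready) return cache.get(key);   // instant in-memory read
  return db.selectGlobalSetting(key); // graceful fallback if the cache isn't ready
}

async function onSettingSaved(key: string, value: string): Promise<void> {
  await db.updateGlobalSetting(key, value);
  cache.set(key, value); // refresh the cache whenever an admin saves a change
}

declare const db: {
  selectAllGlobalSettings(): Promise<Array<{ key: string; value: string }>>;
  selectGlobalSetting(key: string): Promise<string | undefined>;
  updateGlobalSetting(key: string, value: string): Promise<void>;
};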
Self-Service Account Deletion
Users can now delete their own accounts directly from their profile settings. This gives users full control over their data and meets compliance requirements for data privacy regulations.
How it works:
We added a "Delete Account" section to the user profile page. The UI shows a clear list of what gets deleted (account, default team, MCP installations, team memberships, preferences, and sessions), a warning that the action is irreversible, and a confirmation modal requiring users to type "sudo delete account" to proceed.
Safety checks:
Before allowing deletion, the system checks if the user owns any non-default teams. If they do, deletion is blocked and the user sees a warning message listing the teams they need to delete or transfer first. This prevents orphaned teams and ensures data integrity.
What gets deleted:
When you delete your account, we remove: your user account (email, username, profile information), your default team and all associated data, all MCP servers installed in the default team, your memberships in any teams you belong to but don't own, all saved settings and configurations, all login sessions across devices, and background commands related to your team.
Cleanup process:
The deletion happens in a specific order: validate you don't own non-default teams, remove team memberships from other teams, delete all MCP installations from the default team, create satellite commands to stop any running MCP processes, send confirmation email, delete satellite commands, delete the default team, delete all sessions, delete the user account, and log you out automatically.
Email confirmation:
After deletion completes, we send a confirmation email to your email address. The email lists everything that was deleted and confirms the account is permanently removed.
What this enables:
Users can exercise their "right to be forgotten" under GDPR and similar regulations. Clear control over personal data builds confidence in the platform. Prevents abandoned accounts from cluttering the system. No need to contact support to delete an account.
All of these changes are live now. If you run into any issues or have questions, let us know.
DeployStack Changelog - December 10 2024 Updates
Table Redesign
We updated the MCP Server Catalog table with a cleaner look and better navigation. The pagination controls now follow a modern layout with the rows-per-page selector on the left, a page indicator showing "Page X of Y" in the center, and navigation buttons (first, previous, next, last) on the right. On mobile, the design adapts by hiding less essential controls while keeping core navigation accessible.
The three-dot menu on each row now includes more actions: View to open server details, Edit to jump directly to the edit form, Enable/Disable to toggle server availability, and Delete to remove the server. We also cleaned up the typography with consistent text weights, better spacing between table elements, and more prominent status badges.
These updates make managing large server catalogs faster. Admins can now edit servers without navigating away from the list, and the new pagination makes it easier to browse through hundreds of entries.
Wildcard Search for Tool Discovery
The discover_mcp_tools meta-tool now supports `*` as a wildcard query. When you use `query: "*"`, you get a list of all available tools instead of an empty result. Previously, if you weren't sure what tools were available or wanted to browse everything, searching with `*` returned zero results. That's because Fuse.js (our fuzzy search library) treats `*` as a literal character, not a wildcard.
The wildcard query returns up to 20 tools to keep responses manageable. When more tools are available, the response includes a notice with the total count and suggestions for finding additional tools with specific keywords like "github", "database", or "markdown".
On the technical side, wildcard detection happens before search execution and uses a listAll() method instead of Fuse.js search. It still respects disabled tool filtering and has no performance impact on regular searches.
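In outline, the wildcard path looks like this (a sketch; listAll() is the method named above, everything else is illustrative):

function searchTools(query: string, index: ToolIndex) {
  if (query.trim() === '*') { // wildcard detected before any search runs
    const all = index.listAll(); // listAll() already excludes disabled tools
    const tools = all.slice(0, 20); // responses are capped at 20 tools
    return {
      tools,
      notice: all.length > 20
        ? `Showing 20 of ${all.length} tools. Try keywords like "github", "database", or "markdown".`
        : undefined,
    };
  }
  return { tools: index.fuseSearch(query), notice: undefined }; // regular fuzzy search is untouched
}

interface Tool { name: string; description: string }
interface ToolIndex { listAll(): Tool[]; fuseSearch(q: string): Tool[] }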
Improved Search & Filters
We rebuilt the search and filtering experience in the MCP Server Catalog admin page. The search input now sits on the left with a filters dropdown button and bulk delete button on the right, creating a cleaner, more compact layout that doesn't take up as much vertical space.
When you click a filter from the dropdown (Status, Language, Runtime, Featured, Auto Install), it appears as a compact chip below the search bar. Each chip shows the filter label, a dropdown to select a value, and an X to remove it. Filters apply immediately when you select a value—no need to click a separate search button. Filter selections persist when the table refreshes.
The source tabs (All / Official Registry / Manual) now properly filter search results. Previously, switching tabs while searching would show the same results regardless of which tab was selected. We also added skeleton loading states so you see animated placeholders instead of the whole page disappearing when the table is loading.
Mass Delete for MCP Servers
Admins can now select multiple MCP servers and delete them in one action. The MCP Server Catalog table includes checkboxes for selecting servers: row checkboxes to select individual servers, a header checkbox to select or deselect all servers on the current page, and a selection counter showing "X of Y row(s) selected" at the bottom of the table.
A "Delete" button appears above the table (disabled until servers are selected). When you click it, a confirmation dialog shows how many servers will be deleted and warns about background job processing. The deletion creates background jobs for each server. Teams with installations will be notified, and the servers are removed as jobs complete.
Managing large server catalogs is faster when you can select and act on multiple items at once. Instead of deleting servers one by one, admins can select a batch and remove them together.
Enable/Disable MCP Servers
Admins can now disable MCP servers directly from the catalog. From the MCP Server Catalog, global admins can toggle server availability using the three-dot menu on any server row. Two actions are available: Disable prevents new installations of the server, and Enable restores it to active status.
When an admin disables a server, new users cannot install it, but existing installations continue to work normally. The server appears grayed out in the catalog with a "Disabled" badge. This gives admins control over which MCP servers are available for installation without affecting teams that already use them.
Use cases include temporarily blocking a problematic server while investigating issues, phasing out deprecated servers by preventing new adoptions, and controlling rollout of new servers by keeping them disabled until ready.
Improved Background Job Visibility
We improved how scheduled background tasks are tracked in DeployStack. Previously, if a scheduled task failed before it could be recorded, administrators had no way to see what went wrong—the failure was only visible in server logs and the admin panel showed nothing. Now, every scheduled task is immediately recorded in the system the moment it's triggered.
Every scheduled task appears in the Jobs admin panel (/admin/jobs) immediately when triggered. If something fails, you'll see it as a failed job with error details. This gives you a complete history of all scheduled task executions for a full audit trail.
This improvement applies to all recurring background tasks: OAuth token refresh (every 5 minutes), satellite heartbeat cleanup (every 3 minutes), activity metrics cleanup (every 30 minutes), and old job cleanup (every 6 hours). For developers creating custom cron jobs, the new interface is simpler—just define the job type and payload, and the system handles job creation automatically, ensuring the job record exists before any work begins.
MCP Server: Limits, Usage Tracking, Tool Controls
Team Usage Tracking
You can now see exactly how many MCP servers your team has installed and how close you are to your limits. We added a new "Usage" tab to the team management page that breaks down your total servers, stdio servers, and HTTP servers with progress bars that turn red when you hit a limit.
We also added a compact usage indicator to the Dashboard and MCP Servers pages. It shows something like "Total MCP Servers 3/5 | stdio MCP Servers 1/1" so you can check your usage at a glance without navigating to team settings.
Personal Configuration During Installation
Team admins can now configure their personal MCP server settings right in the installation wizard. Before, you had to complete the installation, then go to a separate page to set up your personal variables. Now it's one step.
Skip it if you want. Configure later. But for most admins, this cuts out several clicks and makes setup feel smoother.

Total MCP Server Limit
Teams now have a total MCP server limit that applies to all transport types. Previously, only stdio servers had a limit. HTTP/SSE servers? Unlimited. That's fixed now - there's a single cap (default is 5) that covers everything.
Administrators can configure both limits when managing teams. Hit your limit? Users see a clear error message telling them to contact their admin.
Featured Servers & Full Catalog
We added two new pages to help you find MCP servers faster. The Featured page (/mcp-server/featured) shows our hand-picked servers organized by category with a sidebar for quick navigation. The Catalog pages (/mcp-server/catalog/:categoryId) let you browse every server in our directory by category.
The install wizard now has "Browse Featured" and "View All Servers" buttons below the search bar. Every catalog category has its own URL, so you can share links to specific server collections with your team.
Disable Individual Tools
Team admins can now disable specific tools from any MCP server installation. If a server comes with a tool you'd rather your team not use - like delete_repository - you can turn it off without removing the entire server.
Disabled tools won't appear in tool discovery. If someone tries to use one anyway, they get a message explaining that it's disabled and suggesting they look for alternatives. Changes sync to satellites within 2 seconds.
PostgreSQL Migration: Technical Changelog
The Problem: Architectural Mismatch
Satellite Infrastructure Write Patterns
DeployStack's satellite architecture creates fundamentally different database load patterns than typical web applications:
Continuous Write Operations:
- Satellite Heartbeats: Every active satellite writes heartbeat data every 30 seconds
- Activity Tracking: Real-time logging of MCP tool executions across all satellites
- Process Lifecycle Events: Start/stop/crash events for stdio MCP servers
- Team Metrics: Per-team, per-satellite usage tracking
- Background Jobs: Job queue state changes, results storage, cleanup operations
- Audit Logs: Security and compliance event logging
Scaling Characteristics:
100 satellites × 2 writes/min (heartbeat) = 200 writes/min baseline
+ MCP tool execution logging: 10 tools/min × 100 satellites = 1,000 writes/min
+ Process lifecycle events, metrics, job queue updates
= 50-200 writes/second minimum at moderate scale
This is not a read-heavy web application. Both reads and writes are high-frequency operations.
SQLite/Turso Architectural Constraint
Single-Writer Serialization:
- SQLite's architecture serializes all write operations
- One write blocks all others, regardless of threading
- SQLITE_BUSY errors occur under concurrent load
- 5-second transaction timeout to prevent permanent blocking
- ~150k rows/second maximum throughput with zero improvement from additional threads
Turso's MVCC Implementation (Concurrent Writes), as of 2025-11-28:
- Technology preview status
- Does not support CREATE INDEX operations
- Substantial memory overhead (stores complete row copies, not deltas)
- No asynchronous I/O (limits scalability)
- No production timeline announced
The Decision: PostgreSQL-Only Architecture
Technical Rationale
1. Production-Ready Concurrent Writes
PostgreSQL's MVCC implementation has been battle-tested in production for decades:
- True multi-writer parallelism without serialization
- Multiple transactions write simultaneously without blocking
- No SQLITE_BUSY errors or artificial timeouts
- Proven performance with thousands of writes per second
2. Architectural Fit for Distributed Systems
The satellite infrastructure maps well to PostgreSQL's design:
- Connection pooling works efficiently with distributed satellites (PgBouncer, built-in pooling)
- Proven performance in microservices architectures
- Better handling of high-concurrency scenarios than single-writer databases
- Mature operational patterns for distributed deployments
3. Operational Maturity
PostgreSQL provides complete production tooling:
- Reliable monitoring (pgAdmin, DataGrip, pganalyze, pg_stat_statements)
- Point-in-time recovery capabilities
- Streaming replication for high availability
- Mature backup and recovery tools (`pg_dump`, WAL-E, `pgBackRest`)
- Extensive production experience across industry
Migration Architecture
Schema Migration Strategy
The migration leveraged Drizzle ORM's database abstraction to minimize changes:
Before (Multi-Database):
// Conditional driver selection
import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';
import { pgTable, text, timestamp } from 'drizzle-orm/pg-core';
// Runtime driver switching based on configuration
const db = selectedType === 'sqlite'
  ? drizzle(sqliteClient, { schema })
  : drizzle(postgresClient, { schema });
After (PostgreSQL-Only):
// Single driver implementation
import { drizzle } from 'drizzle-orm/node-postgres';
import { Pool } from 'pg';
const pool = new Pool({
  host: config.host,
  port: config.port,
  database: config.database,
  user: config.user,
  password: config.password,
  ssl: config.ssl ? { rejectUnauthorized: false } : false
});
const db = drizzle(pool, { schema });
Type System Changes
SQLite/Turso Compromises:
- Timestamps stored as integers (Unix epoch)
- Booleans stored as integers (0/1)
- No native JSONB support
- Limited type safety
PostgreSQL Native Types:
- timestamp with timezone for proper datetime handling
- Native boolean type
- jsonb for efficient JSON storage with indexing
- Arrays and custom types
- Full-text search capabilities
Schema Files
Removed:
- src/db/schema.sqlite.ts - SQLite-specific schema
- drizzle/migrations_sqlite/ - SQLite migration history
- Multi-database conditional logic throughout codebase
Retained:
- src/db/schema.ts - PostgreSQL-only schema (renamed from schema.postgres.ts)
- drizzle/migrations/ - PostgreSQL migration files
- src/db/schema-tables/ - Modular table definitions
Migration Directory Structure
services/backend/
├── drizzle/
│   └── migrations/              # PostgreSQL migrations only
│       ├── 0000_perfect_rogue.sql
│       ├── 0001_wild_selene.sql
│       └── meta/
└── src/db/
    ├── schema.ts                # PostgreSQL schema (single source of truth)
    ├── schema-tables/           # Modular table definitions
    │   ├── auth.ts
    │   ├── teams.ts
    │   ├── mcp-catalog.ts
    │   └── ...
    ├── config.ts                # PostgreSQL configuration only
    └── index.ts                 # Database initialization
Technical Implementation Changes
1. Database Configuration
Removed Multi-Database Selection:
// Before: Complex type switching
type DatabaseType = 'sqlite' | 'turso' | 'postgresql';
interface DatabaseConfig {
  type: DatabaseType;
  // ... conditional fields based on type
}
Simplified PostgreSQL Configuration:
// After: PostgreSQL-only configuration
interface DatabaseConfig {
  type: 'postgresql';
  host: string;
  port: number;
  database: string;
  user: string;
  password: string;
  ssl: boolean;
}
2. Query Result Handling
Removed Conditional Logic:
// Before: Handle different result formats
const deleted = (result.changes || result.rowsAffected || 0) > 0;
PostgreSQL-Specific Pattern:
// After: Use PostgreSQL's rowCount
const deleted = (result.rowCount || 0) > 0;
3. Session Table Schema Fix
Fixed a critical bug introduced during multi-database support where Drizzle ORM property names didn't match usage:
Schema Definition (Correct):
import { pgTable, text, bigint } from 'drizzle-orm/pg-core';

export const authSession = pgTable('authSession', {
  id: text('id').primaryKey(),
  userId: text('user_id').notNull(),                            // TypeScript property: userId
  expiresAt: bigint('expires_at', { mode: 'number' }).notNull() // TypeScript property: expiresAt
});
Code Usage (Was Incorrect):
// Before (WRONG - used snake_case):
await db.insert(authSession).values({
  id: sessionId,
  user_id: userId,      // Wrong! Should be userId
  expires_at: expiresAt // Wrong! Should be expiresAt
});

// After (CORRECT - use camelCase):
await db.insert(authSession).values({
  id: sessionId,
  userId: userId,       // Matches TypeScript property
  expiresAt: expiresAt  // Matches TypeScript property
});
This affected:
- `registerEmail.ts` - User registration
- `loginEmail.ts` - Email login
- `github.ts` - GitHub OAuth (2 locations)
4. Plugin System Updates
Plugin Table Access:
Plugin tables are created dynamically in the database but not exported from the schema. Updated plugins to use raw SQL:
// Before: Tried to access from schema (failed)
const schema = getSchema();
const table = schema[`${pluginId}_${tableName}`]; // TypeScript error 7053

// After: Use raw SQL queries
const tableName = `${pluginId}_example_entities`;
const result = await db.execute(
  sql.raw(`SELECT COUNT(*) as count FROM "${tableName}"`)
);
Plugin Schema Type Inference:
The mock column builder auto-detects types based on column names:
- Column name contains "id" → INTEGER type
- Column name contains "at" or "date" → TIMESTAMP type
- Default → TEXT type
Fixed plugin seed data to match inferred types:
// Before (WRONG - text ID for INTEGER column):
VALUES ('example1', 'Example Entity', ...)
// After (CORRECT - integer ID):
VALUES (1, 'Example Entity', ...)
5. API Spec Generation
Added flags to skip database/plugin initialization during OpenAPI spec generation:
// Skip database initialization for API spec generation
process.env.SKIP_DATABASE_INIT = 'true';
process.env.SKIP_PLUGIN_INIT = 'true';
This allows spec generation without database connectivity and excludes plugin routes.
Performance Characteristics
Write Performance Comparison
PostgreSQL (After):
- Thousands of writes/second baseline
- Scales with CPU cores (parallel writes)
- No artificial timeout limits
- No blocking errors under normal load
Satellite Workload Handling
Scenario: 100 satellites at moderate activity
- Estimated load: 50-200 writes/second
- SQLite/Turso: At the edge of recommended limits
- PostgreSQL: Well within comfortable operating range
Complexity Reduction
Removed Code
Multi-Database Abstraction Layer:
- Conditional driver selection logic
- Result format normalization
- Type system compatibility layers
- SQLite-specific workarounds
SQLite Migration Files:
- 18 migration files removed from drizzle/migrations_sqlite/
- Migration metadata and journal files removed
- SQLite schema definition removed
Configuration Complexity:
- Removed database type selection logic
- Removed conditional environment variable handling
- Simplified database setup flow
Simplified Maintenance
Single Migration Path:
- One migration directory (drizzle/migrations/)
- One schema file (src/db/schema.ts)
- One set of type definitions
- One testing strategy
Reduced Testing Surface:
- No multi-database test matrix
- No driver compatibility testing
- No type conversion testing
- Focus on PostgreSQL-specific optimizations
Operational Impact
Development Workflow
Before:
- Choose database type (SQLite/Turso/PostgreSQL)
- Configure appropriate environment variables
- Handle type differences in code
- Test across multiple database backends
After:
- Configure PostgreSQL environment variables
- Run migrations
- Use consistent PostgreSQL patterns
- Test single database backend
Deployment Changes
Environment Variables:
# Before: Type selection required
DATABASE_TYPE=turso
TURSO_DATABASE_URL=libsql://...
TURSO_AUTH_TOKEN=...
# After: PostgreSQL only
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DATABASE=deploystack
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
POSTGRES_SSL=false
Docker Compose:
# Before: Optional SQLite, required for Turso/PostgreSQL
services:
  backend:
    environment:
      DATABASE_TYPE: turso
      TURSO_DATABASE_URL: ${TURSO_DATABASE_URL}

# After: PostgreSQL service always required
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: deploystack
      POSTGRES_USER: deploystack
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
  backend:
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      POSTGRES_DATABASE: deploystack
Monitoring and Observability
PostgreSQL Advantages:
- pg_stat_statements for query performance analysis
- Native metrics export to Prometheus/Grafana
- Complete tooling (pgAdmin, DataGrip, pganalyze)
- Better visibility into concurrent operations
Removed:
- SQLite-specific monitoring workarounds
- Multi-database metric aggregation
- Database type conditional monitoring
Documentation Updates
Updated Files
Backend Documentation:
- services/backend/README.md - Updated database section
- services/backend/.env.example - PostgreSQL-only variables
- services/backend/drizzle.config.ts - PostgreSQL dialect only
Project Documentation:
- README.md - Marked PostgreSQL migration as complete in Phase 1
- Removed SQLite references from roadmap
- Updated background job queue description (PostgreSQL-based)
Technical Guides:
- /documentation/development/backend/database/index.mdx - PostgreSQL overview
- /documentation/development/backend/database/postgresql.mdx - Detailed PostgreSQL technical guide
Lessons Learned
Architectural Decisions
Database Selection Must Match Workload:
- "Write-heavy for web standards" ` write-heavy for distributed infrastructure
- Satellite architecture creates fundamentally different load patterns
- Database choice should be validated against actual workload
Conclusion
The migration to PostgreSQL-only architecture addresses a fundamental architectural mismatch between SQLite's single-writer design and DeployStack's distributed, write-heavy satellite infrastructure.
Key Outcomes:
- Eliminated architectural bottleneck: No more write serialization
- Reduced complexity: Single database backend, simpler codebase
- Better operational tooling: Mature PostgreSQL ecosystem
- Fixed critical bugs: Resolved authSession property name mismatches
OAuth Support for External MCP Servers
DeployStack now supports MCP servers that require OAuth authentication. This means you can connect to services like Box, Linear, and GitHub Copilot directly through our platform. Before this update, these OAuth-protected servers were simply unavailable in DeployStack - now they work out of the box.
When you install an OAuth-requiring MCP server, DeployStack handles the entire authentication flow for you. Click install, authorize in the popup window, and you're done. Your tokens are encrypted with AES-256-GCM and stored securely. The platform automatically refreshes expired tokens in the background, so you never have to re-authenticate unless you revoke access. For teams, each member maintains their own OAuth connection - your Box account stays yours.
On the technical side, we implemented the full OAuth 2.1 specification with PKCE (S256 method), resource indicators per RFC 8707, and on-the-fly endpoint discovery using RFC 9728 and RFC 8414. When an admin adds a new MCP server to the catalog, DeployStack automatically detects whether it requires OAuth by checking for 401 responses with WWW-Authenticate headers. No manual configuration needed - just paste the URL and we figure out the rest.
The Satellite infrastructure handles token injection transparently. For HTTP/SSE transports, tokens go in the Authorization header. For stdio-based MCP servers, tokens are injected as environment variables. This works identically whether you're using our global satellites or running a team satellite in your own infrastructure.
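In sketch form, the injection differs only by transport (illustrative; the environment variable name is an assumption):

type Transport = 'http' | 'sse' | 'stdio';

function injectToken(transport: Transport, accessToken: string) {
  if (transport === 'stdio') {
    // Local process: hand the token to the MCP server as an environment variable
    // (the variable name here is an assumption)
    return { env: { ...process.env, MCP_ACCESS_TOKEN: accessToken } };
  }
  // Remote HTTP/SSE endpoint: send the token in the Authorization header
  return { headers: { Authorization: `Bearer ${accessToken}` } };
}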
Stats Dashboard & AI Setup
Token Usage Statistics Dashboard
We added a statistics dashboard that shows exactly how much DeployStack saves you in token usage.
Visit the Statistics page to see your team's token savings compared to a traditional MCP setup where all tools are exposed directly. DeployStack's hierarchical routing uses just 2 meta-tools instead.
What you'll see:
Four summary cards showing:
- Your MCP installations count
- Total available tools
- Percentage saved (in green)
- Actual token usage with DeployStack
Visual comparison:
A side-by-side bar chart that makes the savings obvious:
- Left bar: Traditional setup - grows with every MCP server you add
- Right bar: DeployStack's constant usage - stays flat at 1,372 tokens
Detailed breakdown:
An expandable table showing each MCP server installation with:
- Tool count per installation
- Total tokens consumed
- Average tokens per tool
- Click any row to see individual tool token counts
Why this matters:
As you add more MCP servers, the traditional method scales linearly (more tools = more tokens). DeployStack stays constant because you're using 2 meta-tools that discover and execute anything on demand. The dashboard shows you this difference in real-time.
With 5 servers and 25 tools, you might see 16% savings. With 20 servers and 150+ tools, savings can reach 99.5%. The dashboard updates as your team adds more MCP servers, so you can watch the savings grow.
AI Instruction Files
We added ready-to-copy instruction files that help your AI coding assistant understand how to use DeployStack's MCP integration.
What's new:
A new Client Configuration page with two sections:
- Connection Setup: How to connect your IDE to DeployStack satellite
- AI Instructions: Project files for your AI coding assistant
Supported AI assistants:
- Claude Desktop - `CLAUDE.md` for project instructions
- VS Code - `copilot-instructions.md` for GitHub Copilot
- Claude Code - `CLAUDE.md` for CLI integration
- Cursor - `.cursorrules` configuration
How to use it:
- Visit the Client Configuration page
- Select your AI coding assistant from the sidebar
- Switch to the "AI Instructions" category
- Copy the instruction content
- Add it to your project root (`CLAUDE.md`) or IDE-specific location
Why this helps:
AI coding assistants work better when they understand your tools. These instruction files teach your AI:
- How to discover available MCP tools using short keywords
- How to execute MCP tools via DeployStack's hierarchical router
- When to check for MCP tools versus implementing functionality manually
- Best practices for the 2-tool pattern (instead of managing 150+ individual tools)
This means less time explaining how MCP works in every conversation, and more time building.
Satellite Version Management
We added automatic version management to the satellite service. The version now updates from package.json during releases instead of being hardcoded.
What changed:
The satellite now shows the correct version from package.json in:
- MCP client/server initialization
- Debug endpoint responses
- Server statistics
This makes satellite releases cleaner - no manual version updates needed in code. You'll see accurate version numbers in logs and debug information, which helps when reporting issues or checking which features are available.
MCP Tool Metadata Collection & Display
What Changed
For Users
Before this update:
- You had no way to see what tools were available in your installed MCP servers. This made it hard to know what you could actually do with each installation.
- Token consumption was invisible - you couldn't tell how much context window each MCP server was eating up.
- Understanding the value of DeployStack's hierarchical router required taking our word for it.
After this update:
- Each MCP installation now shows a complete list of available tools with descriptions, so you know exactly what you're working with.
- Token consumption gets calculated and displayed for every tool.
- You can see the total token savings from using DeployStack's hierarchical router with real numbers.
- We show you a side-by-side comparison: traditional MCP vs DeployStack's method.
New Capabilities
1. Automatic Tool Discovery
- When you install an MCP server, DeployStack automatically discovers all available tools. Tool metadata like names, descriptions, and input schemas get collected and stored, while token consumption is calculated for each tool.
2. Tool Visibility
- Browse complete tool lists for each MCP installation, with detailed descriptions and input schemas so you understand what each tool does before using it.
3. Token Usage Analytics
- See total tokens consumed by all your MCP installations. Compare the old way (all tools loaded) vs DeployStack's hierarchical router (just 2 meta-tools), and visualize token savings percentage across your team.
4. Team-Wide Insights
- Get an aggregated view across all team installations showing total tool count and token savings from using DeployStack.
Technical Implementation
Backend Changes
- New Database Table: `mcpToolMetadata` stores tool information per installation
- New API Endpoints:
- `GET /api/teams/:teamId/mcp/installations/:installationId/tools` - fetch tools for a specific installation
- `GET /api/teams/:teamId/mcp-tools/summary` - get aggregated token savings summary
- New Event Handler: `mcp-tools-discovered-handler.ts` processes tool metadata from satellites
- New Permission: `mcp.tools.view` controls access to tool metadata
Satellite Changes
- Event Emission: Satellites now send `mcp.tools.discovered` events to backend after tool discovery
- Token Calculation: Uses existing `token-counter.ts` utility to calculate tokens per tool
- Automatic Sync: Tool metadata syncs automatically when MCP servers start
- Discovery Managers Updated: Both stdio and remote tool discovery managers emit events
Frontend Changes
- New Service: `mcpToolsService.ts` handles API calls for tool metadata
- New Store: `mcpToolsStore.ts` manages tool metadata state with Pinia
- API Integration: Ready for UI components to display tool lists and token savings
Data Flow
1. User installs MCP server via frontend
2. Backend sends command to satellite
3. Satellite spawns/connects to MCP server
4. Satellite discovers tools and calculates token counts
5. Satellite emits `mcp.tools.discovered` event to backend
6. Backend stores tool metadata in database
7. Frontend fetches tool data via API
8. Users see tool lists and token savings in dashboard
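The event payload presumably carries per-tool metadata along these lines (field names inferred from this post, not the actual schema):

interface McpToolsDiscoveredEvent {
  installationId: string;
  tools: Array<{
    name: string;
    description: string;
    inputSchema: Record<string, unknown>; // JSON schema for the tool's arguments
    tokenCount: number; // computed on the satellite by token-counter.ts
  }>;
}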
Benefits
For Solo Developers
You can now see exactly what tools you have access to and understand token consumption at a glance. This helps you make informed decisions about which MCP servers to turn on.
For Teams
Get shared visibility into available tools and track token usage across all team installations. You can demonstrate ROI from using DeployStack's hierarchical router with real numbers.
For Enterprise
Audit what tools are available to users, monitor context window consumption, and improve MCP server selection based on actual usage data.
Example Use Case
Before:
A team has 5 MCP servers installed, but they have no idea how many tools are available or what token consumption looks like. It's hard to explain why DeployStack is better than traditional MCP without concrete numbers.
After:
The team sees: 5 installations, 87 total tools. The old way would consume 12,450 tokens. DeployStack's hierarchical router uses just 950 tokens. That's 96.19% savings - clear proof that DeployStack prevents context window bloat.
What's Next
This system sets us up for:
- Token Analytics Dashboard (coming soon) - visual charts showing context window usage
- Smart Recommendations (planned) - suggest which servers to disable based on usage
- Usage Reports (planned) - track which tools your team uses most
Breaking Changes
None. This is a purely additive feature.
Migration Required
None. Tool discovery happens automatically for all existing and new installations.
Security Considerations
- Tool metadata requires `mcp.tools.view` permission
- Team isolation enforced at API level
- No sensitive data exposed (only tool schemas and descriptions)
Known Limitations
- Historical data not available for installations created before this update (will populate on next server restart)
- Token calculations use gpt-tokenizer (provider-agnostic approximation)
Team Limits & Admin Tools
Three platform improvements focused on resource management, security, and admin tools.
Team-Level Non-HTTP MCP Server Limits
We added team-level control for non-HTTP (stdio) MCP server installations. This gives platform administrators fine-grained control over resource consumption while keeping HTTP and SSE servers unlimited.
What Changed
New Global Setting:
- Setting: `global.default_non_http_mcp_limit`
- Default Value: 1
- Purpose: Sets the default limit for new teams when they're created
- Location: Backend global settings panel
Database Changes:
- Added `non_http_mcp_limit` column to the `teams` table
- Each team now stores its own limit value
- Default teams and manually created teams both get this limit applied automatically
Installation Enforcement:
- The platform now checks the limit before allowing stdio MCP server installations
- HTTP and SSE MCP servers remain unlimited (they don't consume server resources)
- Clear error messages when teams hit their limit
Why This Matters
Resource Management:
Non-HTTP (stdio) MCP servers run as actual processes on your infrastructure. They consume CPU, memory, and I/O resources:
- Each stdio server requires runtime installation (Node.js, Python, etc.)
- Without limits, teams could overwhelm your satellite infrastructure
HTTP and SSE servers are just URL proxies:
- No local processes needed
- No resource consumption
- Can scale infinitely
Cost Control:
For platform operators running DeployStack:
- Prevent unexpected infrastructure costs from runaway process spawning
- Allocate resources fairly across teams
- Plan capacity based on actual stdio server usage
Fair Usage:
- Ensures all teams get reliable performance
- Prevents one team from monopolizing server resources
How It Works
For New Teams:
When a team is created (either default team during registration or manually):
- System reads `global.default_non_http_mcp_limit` setting
- Stores this value in the team's `non_http_mcp_limit` field
- Team is now subject to this limit
During MCP Installation:
When a team member tries to install an MCP server:
1. System checks if it's a stdio server (`transport_type === 'stdio'`)
2. If yes:
- Fetches the team's `non_http_mcp_limit` value
- Counts existing stdio installations for that team
- Blocks installation if limit would be exceeded
3. If no (HTTP/SSE):
- Installation proceeds without limit check
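In code, that check comes down to something like this (a sketch with illustrative names):

async function assertStdioLimit(team: Team, server: McpServer): Promise<void> {
  if (server.transport_type !== 'stdio') return; // HTTP/SSE: no limit check

  const installed = await countStdioInstallations(team.id);
  if (installed >= team.non_http_mcp_limit) {
    throw new Error(
      `Team has reached the maximum limit of ${team.non_http_mcp_limit} ` +
        'non-HTTP (stdio) MCP server(s). HTTP and SSE servers are not affected.'
    );
  }
}

interface Team { id: string; non_http_mcp_limit: number }
interface McpServer { transport_type: 'stdio' | 'http' | 'sse' }
declare function countStdioInstallations(teamId: string): Promise<number>;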
Error Message:
When a team hits their limit:
Team has reached the maximum limit of 1 non-HTTP (stdio) MCP server.
HTTP and SSE servers are not affected by this limit.
Contact your administrator to increase the limit.
Admin Controls
Set Platform Default:
Change the default for all new teams:
- Go to Backend Global Settings
- Find `global.default_non_http_mcp_limit`
- Set your desired default (e.g., 1 for free tier, 10 for paid)
Adjust Individual Teams:
Currently via database (admin UI coming soon):
-- Raise the stdio limit for a single team
UPDATE teams
SET non_http_mcp_limit = 5
WHERE id = 'team_id_here';
Use Cases:
- Freemium Model: Free teams = 1, Pro teams = 5, Enterprise = unlimited (999)
- Resource Tiers: Small teams = 3, Large teams = 10
- Trial Limits: New teams = 1, increase after verification
Technical Details
Transport Types:
- stdio (limited): Node.js, Python, Go MCP servers running as local processes
- http (unlimited): Remote HTTP endpoints like `https://mcp.brightdata.com/mcp`
- sse (unlimited): Server-Sent Events endpoints
Backward Compatibility:
Existing teams automatically get the default limit of 1 via the database migration. Teams with more than 1 stdio server already installed are grandfathered in - the limit only applies to new installations.
Satellite Log Secret Masking
We've added automatic secret masking to DeployStack satellites to protect your API keys, tokens, and credentials from exposure in log files and monitoring systems. When satellites connect to MCP servers using authentication credentials, those sensitive values are now automatically masked in all log output while keeping enough context for debugging.
How It Works
When connecting to an MCP server with a URL like `https://api.example.com?token=sk_abc123xyz789&region=us-east-1`, our satellites now log it as `https://api.example.com?token=sk_*****&region=us-east-1` - showing the first 3 characters of secret values followed by asterisks.
The same protection applies to:
- HTTP headers like Authorization tokens (e.g., `Authorization=Bea*****`)
- Environment variables marked as secrets
- Query parameters containing sensitive data
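The masking rule itself is simple; a minimal sketch (illustrative, not the satellite's actual code):
// Keep the first 3 characters, replace the rest with asterisks.
function maskSecret(value: string): string {
  return value.slice(0, 3) + '*****';
}
maskSecret('sk_abc123xyz789');    // => 'sk_*****'
maskSecret('Bearer eyJhbGci...'); // => 'Bea*****'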
Why This Matters
You can now safely share satellite logs with your team or support staff for troubleshooting without worrying about credential leaks, while still being able to identify which credentials were used.
The system automatically detects which fields contain secrets based on your MCP server configuration schemas, so there's nothing you need to configure - it just works.
Teams Management for Global Admins
We added a Teams Management interface for global admins. If you're a platform administrator, you can now view and manage all teams directly from the admin dashboard.
What You Can Do
View All Teams:
Go to Admin Area → Teams to see a complete list of all teams on your DeployStack instance. The table shows:
- Team name
- Slug (unique identifier)
- Team type (Default or Custom)
- Creation date
- Quick actions
You can search across team names, slugs, and descriptions to quickly find what you're looking for.
View Team Details:
Click "View" on any team to see complete information:
- Basic team information (name, slug, description)
- Team type (default vs custom)
- Non-HTTP MCP server limit
- Team creation and update timestamps
- Complete member list with roles (Owner, Admin, Member)
Edit Team Settings:
Click "Edit Team" to update:
- Team Name - Change the display name
- Description - Add or update team description
- Non-HTTP MCP Server Limit - Control how many stdio MCP servers the team can install
Changes save instantly and you'll see a confirmation message when successful.
Why This Matters
Before this update, managing teams required direct database access. Now you can handle common team management tasks through the dashboard - no technical knowledge needed.
This makes it easier to:
- Audit team configurations across your organization
- Adjust MCP server limits as teams grow
- Update team information without developer intervention
- See team membership at a glance
You'll need global admin permissions to access this feature.
Hierarchical Router: 99.5% Context Window Reduction
We've implemented a hierarchical router pattern that solves the MCP context window consumption problem. Instead of exposing 100+ tools that consume up to 80% of your context window before any work begins, DeployStack Satellite now exposes just 2 meta-tools that provide access to all your MCP servers with 99.5% less token usage.
The Problem We Solved
Context Window Consumption Crisis
When MCP clients connect to multiple servers, each tool's definition (name, description, parameters, schemas) gets loaded into the context window. This creates a severe problem:
Real-world example:
- 15 MCP servers × 10 tools each = 150 total tools
- Each tool definition ≈ 500 tokens
- Total consumption: 75,000 tokens (37.5% of a 200k context window)
Before any actual work begins, more than a third of your available context is gone just describing what tools exist.
Industry impact:
- Claude Code: 82,000 tokens consumed by MCP tools (41% of context)
- Cursor: Hard limit of 40 tools maximum
- General consensus: Performance degrades significantly after 20-40 tools
- Critical failures reported at 80+ tools
Why This Matters
More MCP servers = Better AI capabilities, but also = Less room for actual work. Users had to choose between:
1. Limited tooling (stay under 40 tools, miss valuable integrations)
2. Degraded performance (add more tools, sacrifice context space)
This isn't sustainable as the MCP ecosystem grows.
How We Fixed It: 2-Tool Hierarchical Router
How It Works
Instead of exposing all tools directly to MCP clients, DeployStack Satellite now exposes only 2 meta-tools:
1. `discover_mcp_tools` - Search for available tools
// Find tools using natural language
discover_mcp_tools({
  query: "github create issue",
  limit: 10
})
// Returns: [{ tool_path: "github:create_issue", description: "..." }]
2. `execute_mcp_tool` - Execute a discovered tool
// Execute using the tool_path from discovery
execute_mcp_tool({
  tool_path: "github:create_issue",
  arguments: { repo: "deploystackio/deploystack", title: "Bug report" }
})
Behind the Scenes
While clients only see 2 tools, the satellite still:
- Manages 20+ actual MCP servers (HTTP and stdio)
- Caches 100+ real tools internally
- Routes execution requests to the correct server
- Handles both HTTP/SSE remote servers and stdio subprocess servers
The magic: Clients discover tools dynamically only when needed, not upfront.
Token Reduction Results
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Tools Exposed | 150 | 2 | 98.7% reduction |
| Tokens Consumed | 75,000 | 350 | **99.5% reduction** |
| Context Available | 62.5% | 99.8% | +37.3% more space |
Example: If you previously had 82,000 tokens consumed by MCP tools, you now have 81,650 tokens freed for actual work.
Enhanced Tool Discovery
Full-Text Search Powered by Fuse.js
The `discover_mcp_tools` meta-tool uses Fuse.js for intelligent fuzzy search across all your MCP servers:
Features:
- Natural language queries - Search with phrases like "scrape website markdown"
- Fuzzy matching - Handles typos and synonym variations (e.g., "website" matches "webpage")
- Fast performance - 2-5ms search time across 100+ tools
- Relevance scoring - Results ranked by match quality
- Weighted search - Prioritizes tool names (40%), descriptions (35%), server names (25%)
Example workflow:
// User asks: "Do you have tools for GitHub?"
discover_mcp_tools({ query: "github" })
// Returns:
// - github:create_issue
// - github:update_issue
// - github:list_repos
// - github:search_code
// Execute the one you need:
execute_mcp_tool({
  tool_path: "github:create_issue",
  arguments: {...}
})
Search Quality Improvements
We've tuned the search engine for optimal user experience:
Configuration:
- Threshold: 0.5 - Balanced fuzzy matching (allows natural synonym variations)
- Min match length: 2 - Filters noise while catching abbreviations
- Extended search: enabled - Supports advanced query operators if needed
Result: Users find tools on first try, even when phrasing queries differently than tool descriptions.
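Under stated assumptions about field names (`name`, `description`, `serverName` are illustrative), that tuning maps to a Fuse.js index roughly like this sketch:
import Fuse from 'fuse.js';
const allTools = [
  { name: 'create_issue', description: 'Create a GitHub issue', serverName: 'github' },
  // ...one entry per cached tool
];
const fuse = new Fuse(allTools, {
  keys: [
    { name: 'name', weight: 0.4 },         // tool names weighted highest
    { name: 'description', weight: 0.35 },
    { name: 'serverName', weight: 0.25 },
  ],
  threshold: 0.5,          // balanced fuzzy matching
  minMatchCharLength: 2,   // filter noise, keep abbreviations
  useExtendedSearch: true, // advanced query operators when needed
  includeScore: true,      // relevance scoring for ranked results
});
const results = fuse.search('github create issue', { limit: 10 });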
What This Means For You
Unlimited MCP Server Growth
You can now add as many MCP servers as you need without worrying about context window consumption:
- ✅ 10 servers? No problem
- ✅ 50 servers? Still only 350 tokens
- ✅ 100 servers? Context usage unchanged
The hierarchical router scales infinitely because clients always see just 2 meta-tools.
Better AI Performance
With up to 37% more of your context window available:
- AI assistants can hold longer conversations
- More complex tasks fit in a single session
- Better code generation with full project context
- Reduced need to restart conversations due to context limits
No Breaking Changes
Everything still works:
- All existing MCP servers continue to work
- No configuration changes required
- Internal routing handles stdio and HTTP servers automatically
- Tool discovery happens transparently
From the user's perspective: Tools just work, but now there's room to actually use them.
Technical Implementation
Technical Design
┌─────────────────────────────────────────────────────────────┐
│ MCP Client (Claude Desktop / VS Code) │
│ │
│ Sees: 2 meta-tools (350 tokens) │
│ - discover_mcp_tools │
│ - execute_mcp_tool │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ DeployStack Satellite (Hierarchical Router) │
│ │
│ Behind the scenes: │
│ - Manages 20+ actual MCP servers │
│ - Caches 100+ real tools │
│ - Full-text search with Fuse.js │
│ - Routes to stdio subprocesses or HTTP endpoints │
└─────────────────────────────────────────────────────────────┘
Key Features
Single Source of Truth:
- `UnifiedToolDiscoveryManager` maintains the only tool cache
- Search service queries this directly (no duplication)
- Always fresh data - automatic server add/remove reflected immediately
Format Conversion:
- External format: `"serverName:toolName"` (user-facing, clean API)
- Internal format: `"serverName-toolName"` (routing, backward compatible)
- Automatic conversion in execution layer
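As a toy sketch, that conversion splits on the first colon (assuming server names never contain one):
// "github:create_issue" (external) -> "github-create_issue" (internal)
function toInternalToolId(toolPath: string): string {
  const i = toolPath.indexOf(':');
  return toolPath.slice(0, i) + '-' + toolPath.slice(i + 1);
}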
Transport Agnostic:
- HTTP/SSE servers: Routes via MCP SDK Client
- stdio servers: Routes via ProcessManager
- Same interface for both - client never knows the difference
Credits
This implementation is powered by Fuse.js, the excellent lightweight fuzzy-search library (itself dependency-free) that makes intelligent tool discovery possible.
The hierarchical router pattern is based on research and best practices from the MCP community, validated by multiple open-source implementations showing 95-99% token reduction across different architectures.
November 2025 Release: Source Tracking, Metrics & Memory Optimization
MCP Server Source Tracking
Every MCP server in your catalog now shows where it came from. A simple badge appears on server details: blue for "Official Registry" or gray for "Manual."
This transparency helps you make better security decisions. Official Registry servers are community-vetted and follow MCP standards, so you can deploy them confidently. Manual servers? Those are custom tools your team built or proprietary integrations specific to your organization. The difference matters when choosing servers.
The tagging happens automatically without any effort on your part. Admins sync servers from the official registry, and they get marked "Official Registry" with no additional configuration required. Create one manually? It's tagged "Manual." Once set, the source can't be changed - you always have an accurate record of where each server originated and who added it to your catalog.
Time-Series Activity Metrics
DeployStack now tracks your MCP client activity over time. No more just seeing total request counts - the system records activity in 15-minute snapshots, showing you exactly when and how your tools get used throughout the day.
Want to know how active your MCP clients were between 10am and 1pm? The system answers that instantly. Which tools did your team use most in the last 3 hours? Easy. The system collects this data automatically every time you use an MCP server, storing it efficiently so performance stays fast.
The metrics system cleans up after itself by removing data older than 3 days automatically. You get clear visibility into recent activity without system slowdowns or storage bloat. Future versions will add more tracking capabilities - server installations, tool execution patterns, and other insights - all built on this same foundation.
Intelligent Process Management
Satellites now save memory by watching for idle stdio-based MCP servers and terminating processes after 3 minutes of inactivity, which frees up about 15MB per server. This prevents your satellite from running out of memory when managing many servers. Need that tool again? It wakes up in 1-2 seconds - you won't even notice the pause.
This changes everything when you run dozens of servers. Previously, 100 MCP servers consumed about 1.5GB of memory continuously - even if you only actively used 10 of them. Now those same 100 servers might use just 150MB when most are dormant. That's a 90% reduction. Your frequently-used tools stay instantly available while rarely-used ones sleep until needed.
The system makes smart decisions about when to sleep processes based on actual usage patterns and workload characteristics. Newly spawned servers get time to initialize. Servers handling active requests never get terminated mid-operation. You can adjust the idle timeout through environment variables - go aggressive on memory savings or keep processes active longer depending on your specific needs and resource constraints. The default 3-minute timeout balances memory efficiency with instant tool availability for most teams.
Admin MCP Server Filtering: Find What You Need, Fast
Managing a large MCP server catalog used to mean scrolling through pages of servers to find what you needed. We've added a filter system that sits at the top of the catalog page. Type a search term to find servers by name or description, then refine results using dropdown filters for status (active, deprecated, maintenance), programming language, runtime environment, featured status, and auto-install settings. All filters work together—select "Python" and "active" to see only production-ready Python servers. The filter dropdowns populate automatically from your database, so when new languages or runtimes appear in your catalog, they show up in the filters without any configuration.

The changes apply only to Global Administrator accounts managing the global MCP server catalog. Team administrators and regular users see the catalog without these filters since they're browsing, not managing hundreds of entries. We built this after seeing administrators struggle to locate specific servers in catalogs with 50+ entries. The search uses a 300ms debounce to avoid hammering your database while you type, and pagination controls remain at the bottom so you can browse filtered results across multiple pages. Clear all filters with one click when you're done.
Personal MCP Client Activity Dashboard
What You'll See
The new dashboard displays your active MCP clients in one place. When you log in, you'll see a list showing which clients you've used recently, which satellite they're connected to, when they were last active, and how many requests they've made. This is your personal view—you only see your own clients, not your teammates' activity.
Each entry shows practical information: "VS Code on Production Satellite - Active 2 minutes ago - 145 requests, 32 tool calls." If you haven't used a client in the last 30 minutes, it drops off the list automatically. You can adjust that time window if you want to see activity from the last hour or the last day instead.
How It Works
We built this using our existing satellite infrastructure. Every 30 seconds, each satellite reports which users made requests and from which clients. The backend processes these events and updates your dashboard. The activity tracking adds less than 1ms to each request, so you won't notice any performance impact.
The satellite identifies your client by checking your OAuth credentials first (which is how VS Code and Cursor typically connect), then falls back to parsing the User-Agent header if needed. Session tracking is optional—if your client sends an Mcp-Session-Id header, we'll store it for debugging purposes, but it doesn't affect what you see in the dashboard.
Why This Matters
This solves a few practical problems. First, you can quickly check if you accidentally left a client running somewhere and burning through API calls. Second, when you're debugging connection issues, you can immediately see which clients are actually reaching the satellite versus which ones are having problems. Third, if you're managing multiple teams, you can see which satellite each of your clients is connected to without having to check your config files.
The backend infrastructure is complete and processing activity right now. We're finishing the frontend UI (Phase 4), which will add the actual dashboard widget with real-time polling, client icons, and a clean timeline view. Once that's live, you'll see your active clients the moment you log in.
Platform Improvements: Search, Performance & Syntax Highlighting
Job Queue Search and Filtering
Finding specific background jobs just got easier. We added a search interface that lets administrators filter jobs by ID, type, status, and time range. Instead of scrolling through hundreds of jobs to find the one that failed, you can now search for "failed email jobs from the last hour" in seconds. The search interface shows loading states when querying, dynamically loads job types from your actual queue data, and includes quick time range presets (last minute, last hour, last 24 hours, etc.). When you're troubleshooting production issues at 2am, this saves real time.
Background Email Processing
Registration now completes instantly instead of waiting 2-5 seconds for email servers. We moved all email sending operations into background jobs, which means users click "Sign Up" and immediately move forward while verification emails queue and send automatically. If the email fails temporarily due to network issues or rate limits, the system retries automatically without bothering the user. This change makes the platform feel faster and more reliable, especially during high-traffic periods when SMTP servers slow down.
Syntax Highlighting
Code blocks throughout the platform now highlight syntax automatically. When you're looking at configuration examples, MCP server code, or debug logs, the syntax highlighting makes it easier to spot variable names, function calls, and structure at a glance. This works for JavaScript, TypeScript, Python, JSON, YAML, and other common languages developers use with DeployStack. It's a small change that makes reading technical content significantly easier.
Official MCP Registry Integration Complete
Automatic Server Discovery
Previously, administrators had to manually add every MCP server to the catalog - a time-consuming process that meant you couldn't easily discover new tools as they became available. This update introduces automatic synchronization with the official MCP Registry, bringing hundreds of pre-configured MCP servers directly into your DeployStack catalog with a single click.
How it works now: Global administrators can trigger a registry sync from the MCP Server Catalog admin interface. Behind the scenes, our background job queue system processes each server sequentially with rate limiting to respect GitHub API limits. The sync fetches server metadata from the official registry, enriches it with GitHub information like star counts and README content, and intelligently maps configurations to our three-tier system. Each synced server is marked with its official registry source and version, so you always know where it came from.
What this means for your workflow: Instead of spending hours researching and manually configuring MCP servers, you can browse the entire ecosystem immediately. Search for servers by category, filter by language or runtime, and deploy them to your teams with all the security and collaboration features DeployStack provides. Official registry servers work seamlessly with our three-tier configuration system - required credentials go to the team level with encryption, while optional settings remain customizable at the user level.
The technical implementation: We built this using our existing job queue infrastructure to handle large-scale synchronization safely. The system processes servers one at a time with configurable delays between requests, preventing API rate limit violations while maintaining progress tracking. GitHub metadata enhancement runs automatically for repository-based servers, pulling in stars, topics, licenses, and organization information. If GitHub rate limits are hit, jobs automatically retry with exponential backoff.
What this enables: DeployStack now serves as the deployment layer for the entire MCP ecosystem. Every server published to the official registry becomes instantly available through DeployStack's managed satellite infrastructure, complete with team management, credential security, and organizational visibility.
stdio MCP Server Support
Before This Update
- ✅ HTTP/SSE MCP servers worked (like Context7)
- ❌ npm package-based MCP servers didn't work
- ❌ MCP servers requiring npx commands didn't work
After This Update
- ✅ HTTP/SSE MCP servers work (unchanged)
- ✅ Node.js MCP servers work (installed via npx, e.g. the memory server)
- ✅ MCP servers from npm registry work
- ❌ Python MCP servers still don't work (coming later)
What You Can Do Now
Install npm-Based MCP Servers
You can now install MCP servers from the npm registry directly through DeployStack:
Examples of servers that now work:
- @modelcontextprotocol/server-filesystem - File system access
- @modelcontextprotocol/server-sqlite - SQLite database queries
- @modelcontextprotocol/server-postgres - PostgreSQL database access
- Any MCP server published to npm with stdio transport
How It Works
- Install a Server: Add an npm-based MCP server from the catalog
- Automatic Setup: Satellite downloads and spawns the Node.js process
- Instant Access: Tools appear in your IDE within 30 seconds
- Team Isolation: Each team gets their own isolated process
What Happens Behind the Scenes
When you install an npm-based MCP server:
- Satellite receives the installation command from the backend
- Spawns a Node.js subprocess using npx (allows package downloads)
- Performs MCP handshake with 30-second timeout
- Discovers available tools from the running process
- Makes tools available through your IDE's MCP client
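Conceptually, the spawn step looks something like this sketch (the real satellite adds isolation, timeouts, and restart policy):
import { spawn } from 'node:child_process';
// Launch an npm-based MCP server over stdio via npx.
// '-y' lets npx download the package without prompting on first run.
const child = spawn('npx', ['-y', '@modelcontextprotocol/server-filesystem', '/tmp']);
// JSON-RPC 2.0 flows over stdin/stdout; the MCP initialize handshake
// must complete within the 30-second timeout mentioned above.
child.stdin.write(JSON.stringify({
  jsonrpc: '2.0',
  id: 1,
  method: 'initialize',
  params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'satellite', version: '1.0.0' } },
}) + '\n');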
Technical Details
Auto-Restart with Limits
If an MCP server process crashes:
- Automatic restart up to 3 times within 5 minutes
- Exponential backoff: 1s → 5s → 15s between retries
- Permanent failure after 3 crashes (visible in dashboard)
- Immediate restart (no backoff) if the process had run successfully for 60+ seconds before crashing
This prevents infinite restart loops from misconfigured servers while recovering from temporary failures.
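In sketch form, the policy reduces to a small decision function (illustrative, not the satellite's actual code):
// Returns the delay before the next restart, or null for permanent failure.
const BACKOFF_MS = [1_000, 5_000, 15_000]; // 1s -> 5s -> 15s
function nextRestartDelay(crashCount: number, uptimeMs: number): number | null {
  if (uptimeMs >= 60_000) return 0;  // ran fine for 60s+: restart immediately, counter resets
  if (crashCount >= 3) return null;  // 3 quick crashes within 5 minutes: permanent failure
  return BACKOFF_MS[crashCount];
}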
Resource Limits (Production Only)
In production environments, stdio processes run with resource limits:
- Memory: 100MB per process
- CPU Time: 60 seconds total CPU time
- Processes: Maximum 50 child processes
- Isolation: Complete namespace isolation per team
Development environments run without limits for easier debugging.
Tool Discovery
Tools from stdio processes are:
- Automatically discovered after process starts
- Namespaced to prevent conflicts (e.g., filesystem-read_file)
- Cached for performance
- Team-isolated using OAuth tokens
Current Limitations
Python MCP Servers Not Supported
What doesn't work yet:
- MCP servers requiring Python/pip installation
- Servers using uvx or pip commands
- Python-based MCP packages
Why: This release focused on Node.js (which covers ~80% of MCP servers). Python support is planned for a future release.
Known Issues
- First installation takes longer: npx downloads packages on first run (30s timeout accounts for this)
- No version pinning yet: Servers always install latest version from npm
Acknowledgments
This release builds on the architecture from DeployStack Gateway (our deprecated local CLI), adapting process management for multi-tenant cloud deployment. The implementation maintains backward compatibility while adding significant new capabilities.
Secure Satellite Registration Now Live
Previously, any satellite could register with our backend without authorization - essentially allowing unauthorized access to your organization's AI tools. This update introduces a secure token-based pairing system that puts you in complete control of which satellites can connect to your DeployStack instance.
How it works now: Administrators generate unique registration tokens through the DeployStack dashboard, similar to how you'd create API keys for other services. These tokens are then used during satellite deployment to securely pair the satellite with your backend. Once paired, satellites receive permanent API keys and store them locally, so they only need the registration token during the initial setup. Think of it like pairing a new device with your WiFi network - you enter the password once, and it remembers the connection.
What this means for existing satellites: Your current satellites continue working exactly as before with no interruption. However, any new satellites you deploy will require a registration token from your admin dashboard. This gives you visibility into every satellite in your infrastructure and prevents unauthorized satellites from accessing your AI tools. You can generate tokens for global satellites (managed by DeployStack) or team-specific satellites (deployed in your own infrastructure), and you can revoke unused tokens if needed.
The security upgrade: We've implemented cryptographically signed JWT tokens that expire automatically and can only be used once. Every satellite pairing attempt is logged for audit purposes, and we've added comprehensive error messages to guide operators through any issues. This change represents our commitment to enterprise-grade security while maintaining the simplicity that makes DeployStack easy to use - you still just add a URL to VS Code, but now that URL is backed by properly authenticated infrastructure.
DeployStack Architecture Transition: From Gateway CLI to Satellite Infrastructure
Gateway CLI Architecture (Deprecated)
What We Built
The Gateway CLI was a sophisticated local proxy solution with advanced technical features:
Core Components:
- SSE (Server-Sent Events) + stdio dual transport architecture
- Persistent MCP server process management with automatic restart
- Team-aware configuration synchronization from cloud.deploystack.io
- Cryptographic session management with secure credential injection
- Advanced CLI interface with team switching capabilities
Technical Implementation:
- Node.js/TypeScript application with global npm installation
- HTTP server on localhost:9095 with SSE endpoints
- JSON-RPC 2.0 communication over stdio with MCP subprocesses
- Automatic tool discovery and caching system
- Background process monitoring with health checks
Team Integration:
- OAuth2 authentication with cloud.deploystack.io
- Team-specific MCP server configurations
- Role-based access control (global_admin, global_user, team_admin, team_user)
- Centralized credential management with runtime injection
Gateway Limitations Discovered
Installation Complexity: Local installation required multiple steps: npm global install, authentication, VS Code configuration, and local process management. Each step introduced potential failure points.
Process Management Issues: Managing persistent background processes created system resource conflicts, port management complexity, and cross-platform compatibility problems.
Corporate Network Challenges: Outbound connections to cloud.deploystack.io for configuration sync occasionally failed behind restrictive corporate firewalls.
Development Overhead: Maintaining cross-platform compatibility, handling process lifecycle management, and debugging local networking issues consumed significant development resources.
Satellite Architecture (Current)
Design Principles
Edge Worker Pattern: Satellites operate as intelligent edge workers that handle MCP server execution while maintaining centralized configuration management through cloud.deploystack.io.
Dual Deployment Model:
- Global Satellites: DeployStack-operated infrastructure serving all teams
- Team Satellites: Customer-deployed within corporate networks for internal resource access
Standard Protocol Integration: Satellites expose standard MCP interfaces via HTTPS endpoints, eliminating custom client requirements.
Technical Architecture
Core Components:
- HTTP Proxy Router: Team-aware request routing with protocol translation
- Process Manager: MCP server subprocess lifecycle with resource isolation
- stdio Communication Manager: JSON-RPC communication with MCP processes
- Backend Communicator: Integration with cloud.deploystack.io control plane
- OAuth 2.1 Resource Server: Authentication and authorization for MCP clients
Resource Management:
- Linux namespaces and cgroups v2 for complete team isolation
- Resource jailing: 0.1 CPU cores, 100MB RAM per team
- 5-minute idle timeout for inactive MCP server processes
- Process-level user isolation with dedicated system accounts
Implementation Stack:
- Fastify HTTP server with @fastify/http-proxy for request routing
- stdio JSON-RPC communication with MCP server subprocesses
- Docker containerization with team-specific resource limits
- OAuth 2.1 compliance for enterprise authentication
Client Integration
MCP Client Configuration:
{
  "mcpServers": {
    "deploystack": {
      "url": "https://satellite.deploystack.io/mcp",
      "transport": "http"
    }
  }
}
Authentication Flow:
- User authenticates with cloud.deploystack.io
- Client receives OAuth2 access token
- Token validates against satellite OAuth 2.1 resource server
- Satellite routes requests based on team context
Migration Details
Deprecated Components
Gateway CLI Application:
- Command-line interface (deploystack login, deploystack start, etc.)
- Local HTTP server with SSE endpoints
- npm package distribution and versioning
- Cross-platform installation procedures
Supporting Infrastructure:
- Local process management and monitoring
- localhost configuration requirements
- Platform-specific installation documentation
Current Infrastructure
Global Satellite Network:
- Multi-region deployment for low-latency access
- Auto-scaling infrastructure managed by DeployStack team
- High availability with 99.9% uptime target
- Standard HTTPS endpoints accessible from any network
Team Satellite Support:
- Docker-based deployment for customer networks
- Outbound-only communication pattern (firewall-friendly)
- Complete team isolation with dedicated resources
- Integration with internal corporate infrastructure
Technical Benefits
Simplified Client Experience
Before (Gateway CLI):
- Multi-step installation: npm install, login, configuration
- Local process management and port conflicts
- Platform-specific troubleshooting and support
After (Satellite):
- Single URL configuration in MCP client
- No local software installation or management
- Standard HTTPS communication patterns
Enhanced Security Model
Team Isolation:
- Complete process isolation using Linux namespaces
- Dedicated system users per team with file system boundaries
- Network isolation preventing cross-team communication
- Resource quotas preventing denial-of-service scenarios
Enterprise Authentication:
- OAuth 2.1 compliance for enterprise SSO integration
- Token-based authentication eliminating credential storage
- Centralized access control through cloud.deploystack.io
- Comprehensive audit logging for compliance requirements
Operational Improvements
Infrastructure Management:
- Centralized monitoring and alerting
- Automated scaling based on demand
- Zero-downtime deployments and updates
- Professional monitoring and incident response
Developer Experience:
- Instant access without installation delays
- Consistent behavior across all environments
- No local system dependencies or conflicts
- Transparent operation with debugging support
Breaking Changes
Gateway CLI Removal
All Gateway CLI functionality has been removed. Users must migrate to satellite-based access:
- Uninstall global gateway package: npm uninstall -g @deploystack/gateway
- Update MCP client configuration to use satellite URLs
- Authenticate through cloud.deploystack.io web interface
Future Development
Planned Enhancements
Enhanced Team Satellites:
- Advanced resource management and scaling
- Integration with enterprise identity providers
- Custom MCP server deployment and management
Global Satellite Expansion:
- Additional regions for improved latency
- Enhanced monitoring and observability
- Advanced caching and performance optimization
Open Source Development
The complete satellite architecture remains open source with active community development. Contributions welcome for both global and team satellite implementations.
Conclusion
The transition from Gateway CLI to Satellite architecture represents a fundamental improvement in DeployStack's approach to MCP management. By eliminating local installation complexity while maintaining enterprise-grade security and team isolation, the satellite model provides a superior foundation for both individual developers and enterprise teams.
The technical architecture supports the full spectrum of deployment scenarios while significantly reducing operational complexity for end users. This architectural foundation positions DeployStack for continued growth and enterprise adoption in the expanding MCP ecosystem.
30x Faster Performance and Better Team Visibility
30x Faster Command Execution
The biggest change: Gateway commands now complete in ~0.1 seconds instead of 3+ seconds.
We implemented an intelligent device caching system that eliminates the expensive device fingerprinting operations that were slowing down every command. Now fingerprinting happens once during login, then gets cached securely for 30 days.
Before:
$ deploystack refresh
⠹ Detecting device information... # 2.8 seconds
⠹ Getting current configuration... # 0.3 seconds
✔ Configuration updated
After:
$ deploystack refresh
⠹ Getting current configuration... # 0.1 seconds
✔ Configuration updated
The cache uses your OS's secure keychain (macOS Keychain, Windows Credential Manager, Linux Secret Service) with AES-256-GCM encryption as a fallback. Cache automatically refreshes every 30 days and validates hardware signatures for security.
Impact: Commands like deploystack refresh and deploystack mcp feel instant. No more waiting around for basic operations.
Automatic Device Activity Tracking
New capability: Teams can now see when devices last accessed DeployStack.
We added background device activity tracking that updates whenever a device calls the platform. This helps teams monitor usage patterns and identify inactive devices for security and license management.
What gets tracked:
- Last activity timestamps for each device
- IP addresses for security auditing
- Usage patterns across team devices
Enterprise benefits:
- Stale device detection - Find devices that haven't been used in months
- Security auditing - Track access patterns and identify anomalies
- License compliance - Monitor active device counts accurately
- Team visibility - See which developers are actively using MCP tools
The tracking happens in the background using hardware fingerprints and won't slow down API responses. All data helps teams maintain better security hygiene and understand their MCP adoption.
Streamlined Configuration Management
Code quality improvement: Eliminated 250+ lines of duplicated configuration change detection code.
We built a reusable ConfigurationChangeService that both the refresh and mcp commands now use. This means consistent behavior, easier maintenance, and better user feedback.
Better user experience during backend operations:
Before:
🤖 MCP Configuration Status
⚠️ No MCP servers configured for this team
[3-second silence with no feedback]
After:
🤖 MCP Configuration Status
⚠️ No MCP servers configured for this team
⠹ Connecting to backend to check for configuration updates...
⠹ Downloading latest configuration from cloud...
⠹ Comparing configurations...
✔ Configuration check completed
The service provides centralized logic for detecting configuration changes, analyzing what changed (added/removed/modified servers), and handling restart prompts when needed.
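A rough sketch of that change analysis (the shape and names are illustrative, not the service's actual interface):
interface ConfigDiff {
  added: string[];
  removed: string[];
  modified: string[];
}
function diffServers(before: Record<string, unknown>, after: Record<string, unknown>): ConfigDiff {
  const beforeKeys = new Set(Object.keys(before));
  const afterKeys = new Set(Object.keys(after));
  return {
    added: [...afterKeys].filter((k) => !beforeKeys.has(k)),
    removed: [...beforeKeys].filter((k) => !afterKeys.has(k)),
    modified: [...afterKeys].filter(
      (k) => beforeKeys.has(k) && JSON.stringify(before[k]) !== JSON.stringify(after[k]),
    ),
  };
}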
Technical Architecture
These improvements follow our core design principles:
Performance-first design - Cache expensive operations, provide immediate feedback, optimize for the common case of repeated command usage.
Enterprise-ready security - Use OS-level secure storage, encrypt fallbacks, validate integrity, and provide audit trails for compliance teams.
Developer experience focus - Make commands feel instant, provide clear progress feedback, and eliminate frustrating wait times that break flow state.
Real-World Impact
For a typical developer using DeployStack throughout the day:
- Before: Each deploystack command took 3+ seconds, interrupting workflow
- After: Commands complete instantly, maintaining development flow
- Team visibility: Managers can see which developers are actively using MCP tools
- Maintenance: Easier to identify and clean up unused devices
For enterprise teams managing dozens of developer devices:
- Activity monitoring: Clear visibility into MCP adoption and usage patterns
- Security compliance: Audit trails and stale device detection
August 26, 2025 Update: Major Security and System Improvements
What We Changed
1. MCP Configuration Fields Now Encrypt Automatically
We added a type: "secret" field type to MCP server schemas. When global admins mark a field as secret:
- Field values are encrypted with AES-256-GCM before database storage
- API responses return ***** instead of the actual value
- Runtime still gets the decrypted value for MCP server execution
Before: API keys stored as plain text in the database, visible in API responses
After: API keys encrypted in database, masked in API responses, decrypted only for runtime
Example:
// Schema definition
{
  "apiKey": {
    "type": "secret",
    "description": "Your API key"
  }
}
When you configure apiKey: "sk-1234567890", it gets encrypted and you see ***** everywhere except when the MCP server actually runs.
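For illustration, AES-256-GCM encryption in Node looks roughly like this (a sketch, not DeployStack's actual storage code):
import { createCipheriv, randomBytes } from 'node:crypto';
// Encrypt a secret field value before it reaches the database.
// The key must be a 32-byte Buffer for aes-256-gcm.
function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Persist iv + auth tag with the ciphertext so runtime decryption can verify integrity.
  return [iv, tag, ciphertext].map((b) => b.toString('base64')).join('.');
}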
2. Fixed MCP Configuration Data Structure
We standardized how MCP configurations are stored internally across all three tiers (template/team/user):
- All configuration data now uses the same internal format
- Better handling of the args/env merging process
- More consistent behavior when assembling final runtime configurations
Before: Inconsistent data structures caused edge cases in configuration assembly
After: Consistent data handling, more reliable configuration merging
Technical Impact
Secret Type Implementation:
- Affects: All MCP server configurations with sensitive fields
- Breaking: No - existing configs work the same
- Security: High - eliminates credential exposure in APIs/logs
Data Structure Consistency:
- Affects: Internal configuration processing
- Breaking: No - user experience unchanged
- Reliability: Improved configuration assembly and error handling
What You Need to Do
Nothing. Both changes are backward compatible and happen automatically.
Smart MCP Configuration System + Critical Device Management Fixes
The Big Change: Three-Tier MCP Configuration
We introduced a smart configuration system that automatically handles different types of settings, making team collaboration much easier and eliminating repetitive setup work.
The Problem We Solved
Previously, when your team installed an MCP server, everyone had to configure the same settings individually:
- Repetitive Setup: Every team member manually entered the same API keys and shared folders
- Configuration Drift: Team members often had slightly different settings, causing compatibility issues
- Manual Credential Sharing: Sensitive API keys had to be passed around manually
- Update Headaches: When shared settings changed, everyone had to update individually
The Solution: Three Configuration Tiers
We now automatically organize settings into three levels:
1. Template Settings - Core technical settings that never change (server commands, installation methods)
2. Team Settings - Shared across your team:
- Shared API keys (OpenAI, GitHub, database credentials)
- Common resources (project folders, team documentation paths)
- Standardized policies that ensure consistent team experience
3. Personal Settings - Individual to each person:
- Local file paths (Desktop, Documents, project workspace)
- Personal preferences (debug settings, memory locations)
- Device-specific configurations (work laptop vs. home computer)
Real-World Example
Your team uses an MCP server for GitHub integration:
- Template: Server installation and basic setup (automatic)
- Team: Company GitHub token, shared repositories, coding standards
- Personal: Your local code directory, personal GitHub username, editor preferences
When you join the team, you instantly get company repository access with the right permissions, but code still goes to your preferred local folder.
Critical Fixes We Made
Fixed Hardware Fingerprinting Bug
The Problem: Our hardware fingerprinting algorithm was generating different IDs for the same machine, creating duplicate device records and breaking device identification.
Root Causes:
- Network MAC addresses processed in random order
- Non-deterministic JSON serialization
- Timestamp-based fallback logic made fingerprints completely random
The Fix: Made fingerprint generation completely deterministic:
- Sort MAC addresses before processing
- Use consistent JSON serialization with sorted keys
- Remove timestamps from fallback logic
- Add stable system properties for reliable identification
Result: Same machine now always generates the same hardware ID, eliminating duplicate devices.
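In sketch form (assuming Node's os and crypto modules; the actual inputs DeployStack uses may differ):
import { createHash } from 'node:crypto';
import { networkInterfaces } from 'node:os';
// Deterministic fingerprint: sorted MACs plus stable system properties.
function hardwareFingerprint(): string {
  const macs = Object.values(networkInterfaces())
    .flatMap((ifaces) => (ifaces ?? []).map((i) => i.mac))
    .filter((mac) => mac !== '00:00:00:00:00:00')
    .sort(); // fixed order, so interface enumeration order can't change the ID
  // Keys written in sorted order so serialization is stable across runs.
  const material = JSON.stringify({ arch: process.arch, macs, platform: process.platform });
  return createHash('sha256').update(material).digest('hex');
}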
Performance Improvements
The Problem: Gateway was making multiple API calls to build MCP configurations, causing slow loading times.
The Fix: Created a single backend endpoint that merges all three configuration tiers:
- GET `/api/gateway/me/mcp-configurations?hardware_id={hardwareId}`
- Returns fully merged, ready-to-use configurations
- Automatic device lookup using hardware fingerprints
- Configuration status indication
Performance Gain: Reduced from 3+ API calls to 1 call per configuration refresh.
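A sketch of the single call the Gateway now makes (the token and fingerprint values are placeholders):
// One request returns the fully merged template/team/user configuration.
const hardwareId = 'abc123';               // placeholder: cached hardware fingerprint
const accessToken = process.env.DS_TOKEN!; // placeholder: OAuth token
const res = await fetch(
  `https://cloud.deploystack.io/api/gateway/me/mcp-configurations?hardware_id=${hardwareId}`,
  { headers: { Authorization: `Bearer ${accessToken}` } },
);
const configurations = await res.json();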
What This Means for You
For Team Administrators
- Set Once, Use Everywhere: Configure team settings once, they apply to all team members automatically
- Centralized Management: Update shared API keys in one place instead of asking everyone to update individually
- Better Security: Sensitive credentials managed centrally with encryption
- Consistent Experience: All team members get the same base configuration
For Team Members
- Faster Setup: Join a team and get pre-configured MCP servers instantly
- Personal Control: Customize settings that matter to you without affecting the team
- Multiple Devices: Different personal configs for work laptop, home computer, etc.
- Automatic Updates: Team changes flow to you automatically
For Everyone
- No More Config Errors: Eliminates configuration mismatches and missing settings
- Better Collaboration: Everyone works with the same shared resources
- Simpler Maintenance: Updates happen at the right level automatically
- Reliable Device Management: No more duplicate device records or login issues
DeployStack Release Summary - August 20, 2025
New Features
Auto-Install MCP Servers for New Users
Global administrators can now mark MCP servers for automatic installation when new users create DeployStack accounts. New users get their default team pre-configured with curated tools, eliminating manual setup and providing immediate value.
Key benefits:
- Better onboarding experience for new users
- Reduced time to first value
- Administrative control over recommended tools
- Robust error handling that doesn't block registration
Featured MCP Servers Filtering
Added filtering capabilities to help users discover high-quality, curated MCP servers more easily. Featured servers appear first in listings and can be filtered separately from community-contributed servers.
New API capabilities:
- Filter by featured status in list and search endpoints
- Combine featured filtering with existing filters (language, status)
- Featured servers prioritized in default listings
Global Event Bus System (Phase 2 Complete)
Implemented comprehensive event-driven architecture with plugin integration. Plugins can now listen to and react to core application events while maintaining isolation and security.
Technical achievements:
- Type-safe event system with 23 defined event types
- Fire-and-forget processing for performance
- Plugin event listener integration
- Comprehensive error isolation
Technical Details
Database Changes
- Added auto_install_new_default_team boolean field to mcpServers table
API Enhancements
- Global MCP server endpoints now accept auto_install_new_default_team parameter
- Server list/search endpoints support featured filtering parameter
- Event bus integrated into Fastify server as singleton
Plugin System
- Extended Plugin interface with optional eventListeners property
- Enhanced Plugin Manager with event listener registration and management
- Updated example plugin with comprehensive event handlers
Impact
For New Users:
- Immediate access to useful MCP servers upon registration
- No manual configuration required to get started
For All Users:
- Easier discovery of high-quality, curated MCP servers
- Better organization of server catalog
For Developers:
- Event-driven architecture foundation for future features
- Plugin ecosystem can now react to core application events
- Improved system extensibility and maintainability
August 15th, 2025 Release
Gateway Client Configuration API
We built a new API endpoint that provides pre-formatted configuration files for MCP clients to connect to the local DeployStack Gateway. This eliminates manual configuration steps and reduces setup errors across all major MCP clients. The endpoint supports Claude Desktop, Cline, VS Code, Cursor, and Windsurf with both cookie and OAuth2 authentication.
Configurable Team Limits
Team creation and member limits are now configurable through global settings instead of being hardcoded to 3. Administrators can adjust these limits based on their organization's needs - from 1-2 teams for simple deployments to 10+ teams for enterprise customers. The default remains 3 teams and 3 members for backward compatibility.
Dynamic Client Configuration
Updated the Gateway Client Configuration modal to dynamically load supported MCP clients from the backend API instead of using hardcoded options. This means the frontend automatically displays all 5 supported clients (Claude Desktop, Cursor, Cline, VS Code, and Windsurf) and will automatically support new clients as we add them.
Improved Registration Flow
We streamlined the registration experience by replacing alert boxes with toast notifications and removing the artificial 2-second delay. Users now get immediate feedback and faster navigation to login after completing registration.
Updated Email Templates
We redesigned all email templates with a modern, professional design that works consistently across email clients. The new templates include our official logo, enhanced footer with community links, and better typography using system fonts for faster loading.
User Preferences System
We implemented a complete User Preferences System that lets users customize their DeployStack experience with themes, notifications, and feature toggles. The system is designed for privacy (users can only access their own preferences) and makes adding new preferences simple without requiring database migrations.
Welcome Email Revamp
We completely redesigned the welcome email to provide immediate, actionable guidance for new users. The new email includes step-by-step setup commands, MCP-specific guidance, and real support resources. We also added admin control over welcome email delivery through a new global setting.
Platform Updates - August 10, 2025
Email Testing for Administrators
We added a new email testing endpoint for global administrators. Admins can now send test emails through POST `/api/admin/email/test` to verify SMTP configuration is working correctly. The feature includes proper permission checks and a simple test email template. This makes it easier for administrators to troubleshoot email delivery issues during platform setup.
Button Loading States
We enhanced the Button component to show loading spinners during async operations. The component now accepts loading and loadingText props, automatically disabling the button and showing a spinner while operations are in progress. We updated forms across the platform - authentication, MCP server creation, and category management - to use these loading states. This prevents double-clicks and gives users clear feedback when something is processing.
Content Layout Component
We created a reusable ContentWrapper component to standardize the layout of detail pages. It provides a consistent gray background with white content cards, similar to what we use on team management and MCP server pages. This reduces code duplication and ensures pages look consistent across the platform.
DeployStack Gateway - Zero-Config Local Proxy for MCP Servers
The Gateway implements a sophisticated Control Plane/Data Plane architecture that connects your development environment to team MCP servers through cloud.deploystack.io. When you start the gateway, it automatically spawns all configured MCP servers as persistent background processes, making them instantly available at `localhost:9095/sse` for VS Code or any SSE-compatible client.
Supporting both stdio transport for CLI tools and SSE transport for VS Code compatibility, the Gateway handles all the complexity of process management, credential injection, and session management behind the scenes. Credentials are securely downloaded from the cloud control plane and injected directly into process environments without ever exposing them to developers, while cryptographically secure session IDs ensure safe persistent connections.
The Gateway's team-aware caching system enables instant startup by preserving tool configurations locally, automatically switching contexts when you change teams, and discovering new tools as they're added to your team's catalog. Whether your MCP servers are written in Node.js, Python, Go, Rust, or any other language, the Gateway handles them all through appropriate runtime commands, exposing individual tools with proper namespacing for direct use in your development workflow.
Extensible Plugin System with Secure Route Isolation
The plugin system provides a complete framework for extending DeployStack without compromising security or stability. Each plugin operates in its own namespace with API routes automatically isolated under /api/plugin/<plugin-id>/, preventing route hijacking and ensuring plugins cannot interfere with core authentication or user management endpoints.
Plugins can define their own database tables through a secure two-phase initialization process - core migrations run first in a trusted environment, followed by plugin tables created dynamically in a sandboxed phase. This architecture ensures plugins cannot modify core database structure while still providing full database functionality including relationships, seeding, and migrations.
Beyond database and API extensions, plugins can contribute their own global settings managed through DeployStack's admin interface, integrate with other plugins through the Plugin Manager API, and implement lifecycle hooks for initialization and cleanup. Whether you're adding support for a new cloud provider, implementing custom business logic, or extending DeployStack's capabilities, the plugin system provides a secure, structured foundation for development.
Complete Email System with SMTP and Beautiful Templates
The email system automatically connects to your existing SMTP settings configured in Global Settings, supporting providers like Gmail, SendGrid, or any standard SMTP server. We've included three professionally designed templates out of the box - welcome emails for new users, password reset instructions, and general notifications - all built with responsive HTML that looks great on any device.
For developers, we provide type-safe helper methods that make sending emails as simple as calling EmailService.sendWelcomeEmail() with validated parameters. The system includes automatic template caching for performance, connection pooling for high-volume sending, and comprehensive error handling that won't break your application flow if email delivery fails.
Whether you're sending a single welcome email or batch notifications to your entire team, the email system handles it reliably with full support for attachments, CC/BCC recipients, and custom sender addresses.
GitHub Repository Auto-Import for MCP Servers
When you paste a GitHub repository URL into DeployStack, we now automatically fetch the repository metadata, detect the programming language and runtime requirements, and pre-populate all MCP server configuration fields. The system intelligently maps languages to runtimes (TypeScript → Node.js, Python → Python, etc.) and extracts package information from files like package.json or pyproject.toml to understand dependencies and installation requirements.
For immediate use, the feature works out-of-the-box with public repositories using GitHub's public API. Teams needing access to private repositories or higher rate limits can optionally configure a GitHub App for enhanced capabilities with up to 5,000 API requests per hour.
This eliminates the tedious manual entry of repository details, ensures consistency across your MCP server configurations, and reduces setup time from minutes to seconds.
GitHub Integration for MCP Servers
DeployStack now seamlessly connects with GitHub to transform how you manage MCP servers. When you add a GitHub repository URL to an MCP server, the platform automatically extracts everything it needs - description, programming language, license, topics, and even detects MCP-specific configurations from package files. Version management becomes effortless as the system automatically discovers semantic version tags and GitHub releases, creating a complete version history with changelogs pulled directly from your repository.
The integration goes beyond simple repository scanning. Configure GitHub OAuth in your global settings to enable single sign-on for your users, letting them authenticate with their existing GitHub accounts instead of managing separate passwords. The system intelligently handles both public and private repositories, respecting GitHub's access controls while maintaining DeployStack's team boundaries. Repository synchronization can be triggered manually with a single click, pulling the latest metadata and configurations while preserving your local customizations. Security remains paramount with encrypted token storage, minimal permission requests, and comprehensive audit logging of all GitHub operations.
Centralized Global Settings Management
Managing your DeployStack installation just became significantly simpler with the new Global Settings interface. Administrators now have a single control panel where all system-wide configuration lives, organized into logical groups - SMTP Mail Settings for email delivery, GitHub OAuth Configuration for authentication, and System Configuration for core behavior. No more hunting through configuration files or environment variables.
The interface handles the complexity behind the scenes while presenting clear, understandable options. Setting up email delivery is straightforward - enter your SMTP server details, configure sender information, and emails start flowing for user registrations and password resets. GitHub OAuth integration takes minutes instead of hours - create your OAuth app, enter the credentials, and users can sign in with their GitHub accounts. System configuration options let you control the frontend URL, toggle email functionality, manage login methods, and show or hide API documentation based on your security requirements.
Security is built into the design with all sensitive information automatically encrypted, administrator-only access control, and clear organization that prevents accidental misconfiguration. Whether you're running a personal instance with Gmail SMTP, a team deployment with GitHub authentication, or a production system with dedicated email services, Global Settings adapts to your needs. The grouped tab interface ensures related settings stay together, making it easy to configure entire features at once rather than hunting for scattered options.
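As a rough picture of how the groups fit together, the settings could be modeled like this - field names here are illustrative, not the exact schema:

```typescript
// Illustrative shape of the grouped global settings; names are assumptions.
interface GlobalSettings {
  smtp: {
    host: string;
    port: number;
    username: string;
    password: string;      // sensitive values are encrypted at rest
    senderAddress: string;
  };
  githubOAuth: {
    clientId: string;
    clientSecret: string;  // sensitive values are encrypted at rest
  };
  system: {
    frontendUrl: string;
    emailEnabled: boolean;
    showApiDocs: boolean;
  };
}
```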
Team-Scoped MCP Server Installations from Global Catalog
The missing link between browsing and using MCP servers is now in place. Teams can install any server from the global catalog directly into their workspace, creating team-owned instances configured with their own credentials and settings. This three-layer system - global catalog, team access, and team installations - balances community sharing with team privacy.
Every installation belongs exclusively to your team workspace. When you install a GitHub MCP server, for instance, you configure it with your team's GitHub token, customize the settings for your workflow, and give it a meaningful name that makes sense in your context. Other teams might install the same server, but their installation is completely separate - different credentials, different configuration, different data. This isolation ensures your API keys, tokens, and sensitive configurations never leak across team boundaries.
The system integrates seamlessly with your existing team resources. Installations live alongside your cloud credentials and environment variables in a unified workspace where team administrators maintain full control. Security is built-in at every level - credentials are encrypted at rest, access follows team permissions, and audit trails track all changes. Whether you're a solo developer installing productivity tools or a team deploying complex integrations, every MCP server from the global catalog is now just a few clicks away from being operational in your secure team environment.
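The reason installations can't leak across teams is that every record is keyed by team and carries its own encrypted configuration. A sketch, with field names that are illustrative rather than the actual schema:

```typescript
interface TeamInstallation {
  id: string;
  teamId: string;           // every query is scoped by this key
  catalogServerId: string;  // points back to the global catalog entry
  displayName: string;      // e.g. a name meaningful in your team's context
  encryptedConfig: string;  // team credentials, encrypted at rest
}

function listInstallations(all: TeamInstallation[], teamId: string) {
  // Scoping happens at the query layer, so one team's credentials
  // never appear in another team's results
  return all.filter((i) => i.teamId === teamId);
}
```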
Global MCP Server Catalog - Community-Wide Server Discovery
The MCP Server Catalog transforms how you discover and deploy Model Context Protocol servers. This comprehensive marketplace brings together community-contributed servers and official integrations in one searchable, filterable catalog that's accessible to all authenticated users. Global servers are now available platform-wide, letting you browse pre-configured solutions for everything from OpenAI integrations to GitHub tools, all organized by category and ready for instant deployment.
The catalog operates on two visibility levels to balance collaboration with privacy. Global servers, managed by administrators, provide the community foundation - these are the battle-tested, widely-useful servers everyone can access. Team servers remain completely private to your team, giving you space for custom integrations and proprietary tools without exposing them to the broader platform. Global Administrators maintain oversight capabilities for support purposes, but team boundaries remain secure.
Every server in the catalog includes comprehensive metadata - from technical specifications like language and runtime requirements to capabilities breakdown showing available tools, resources, and prompts. Version management tracks every release with detailed changelogs, automatic GitHub synchronization pulls updates directly from repositories, and the deployment integration means you're one click away from launching any server. Whether you're browsing by category, filtering by language, or searching for specific functionality, the catalog makes finding the right MCP server as simple as finding an app in an app store.
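The browse-and-filter experience boils down to something like the sketch below. The real catalog filters server-side, and these field names are assumptions:

```typescript
interface CatalogServer {
  name: string;
  category: string;
  language: string;
  description: string;
}

function filterCatalog(
  servers: CatalogServer[],
  opts: { category?: string; language?: string; search?: string },
) {
  const q = opts.search?.toLowerCase();
  return servers.filter(
    (s) =>
      (!opts.category || s.category === opts.category) &&
      (!opts.language || s.language === opts.language) &&
      // Free-text search matches name or description
      (!q ||
        s.name.toLowerCase().includes(q) ||
        s.description.toLowerCase().includes(q)),
  );
}
```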
MCP Server Categories for Better Organization
Finding the right MCP server just got significantly easier with our new category system. Categories act as organizational labels that group similar servers together, transforming a potentially overwhelming catalog into a well-organized library of tools. Whether you're looking for Development Tools like Git integrations, Data Sources for database connections, or AI & ML services for machine learning workflows, categories help you navigate directly to what you need.
The system works seamlessly across both global community servers and your team's private servers, using a unified category structure managed by Global Administrators. When adding servers to the catalog, simply assign them to the appropriate category - the same familiar pattern you'd use with folders or tags in any modern application. Users can then filter the entire catalog by category, dramatically reducing the time spent searching for specific functionality.
This seemingly simple addition has a profound impact on discoverability. Instead of scrolling through an ever-growing list of servers, you can now jump straight to Communication tools when you need Slack integration, or filter for Productivity servers when setting up task management. Categories make the MCP ecosystem more accessible for new users while helping experienced developers quickly locate specialized tools.
Role-Based Access Control with Team Management Permissions
DeployStack now features comprehensive role-based access control that governs everything from system administration to team member management. The system supports four key roles: Global Administrators who manage the entire installation, Global Users who create and manage their own teams, Team Administrators with full control over team resources and membership, and Team Users who participate in team activities.
Team Administrators gain powerful member management capabilities - add up to 3 members per team, promote users to admin status, remove team members, and even transfer ownership when needed. Your default personal team remains protected and private, while additional teams support full collaboration. Global Administrators maintain oversight with read-only access to all teams for support and administrative purposes, though they cannot view actual credential values for security.
The MCP Catalog permissions align with your role perfectly. Global Admins manage community-wide servers and categories, Team Admins create and manage their team's private servers, while users browse and deploy from available servers based on their access level. Every action from creating teams to managing deployments follows the principle of least privilege - users get exactly the permissions they need, nothing more. This creates a secure, organized environment where solo developers and collaborative teams work efficiently without compromising security or stepping on each other's toes.
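In practice, least privilege means every action is gated by an explicit role-to-permission lookup. A minimal sketch - the permission names are illustrative, not DeployStack's actual identifiers:

```typescript
type Role = 'global_admin' | 'global_user' | 'team_admin' | 'team_user';

const PERMISSIONS: Record<Role, string[]> = {
  global_admin: ['catalog.manage', 'categories.manage', 'teams.read_all'],
  global_user: ['teams.create'],
  team_admin: ['team.servers.manage', 'team.members.manage'],
  team_user: ['team.servers.view'],
};

function can(role: Role, permission: string): boolean {
  return PERMISSIONS[role].includes(permission);
}

// Example: a Team User can browse but not manage team servers
can('team_user', 'team.servers.view');   // true
can('team_user', 'team.servers.manage'); // false
```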
Team Workspaces for Organized MCP Server Management
Teams transform how you organize MCP server deployments in DeployStack. Each team acts as a complete, isolated workspace containing your MCP servers, cloud provider credentials, and environment variables - keeping different projects and environments cleanly separated. When you sign up, we automatically create your personal default team that's ready for immediate use.
You can create up to 3 teams total, each supporting up to 3 members with role-based access control. Team Administrators have full control over resources and member management, while Team Users get basic viewing access. Your default team remains private as your personal workspace, while additional teams enable collaboration with colleagues on shared projects.
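The limits themselves are simple guards; a sketch of the checks implied above, with function names that are illustrative:

```typescript
const MAX_TEAMS_PER_USER = 3;
const MAX_MEMBERS_PER_TEAM = 3;

function canCreateTeam(existingTeamCount: number): boolean {
  return existingTeamCount < MAX_TEAMS_PER_USER;
}

function canAddMember(currentMemberCount: number): boolean {
  return currentMemberCount < MAX_MEMBERS_PER_TEAM;
}
```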
Every team maintains complete resource isolation with separate credentials, independent servers, and scoped environment variables. Team management is straightforward - edit team details, manage members, transfer ownership, or delete teams when no longer needed (your default team is protected from deletion). Whether you're a solo developer organizing different projects or a small team collaborating on deployments, Teams provide the structure and security you need for efficient MCP server management.