Docker recently introduced an enhanced version of the Docker MCP (Model Context Protocol) Catalog, designed to streamline both discovery and submission. The goal is to provide a secure, efficient way to deploy and scale agent-based applications while minimizing the risks around host access and secret management. Developers can now submit servers through two distinct pathways: servers built by Docker, which ship with a full security suite including signatures, Software Bill of Materials (SBOMs), attestations, and continuous scanning; or servers built and maintained by the community using their own Docker images.
This article outlines five best practices for designing, testing, and packaging MCP servers, drawn from the experience of building and helping build more than 100 MCP servers for the Docker MCP Catalog. Following these practices simplifies the submission process, puts your server in front of more than 20 million Docker developers, and delivers genuine utility to both agents and the developers who use them.
Manage Your Agent’s Tool Budget Intentionally
In the context of MCP servers, "Tool Budget" refers to the number of tools an agent can effectively manage. Similar to any budget, efficient management is crucial for a satisfactory user experience. As an MCP server creator, it’s important to recognize that an excessive offering of tools can complicate and increase the cost of using your server, potentially deterring users. Although some AI agents now allow users to selectively enable tools to keep the experience streamlined, a more effective strategy is to design your toolset around clear use cases, avoiding the assignment of every API endpoint to a separate tool.
For instance, when developing an MCP server to interface with your API, it might be tempting to create a tool for each endpoint. While this approach may facilitate a quick start, it often results in an overloaded toolset that hampers user adoption.
Instead of assigning one tool per endpoint, consider using MCP server prompts. Similar to macros, prompts let you create a single command that combines multiple tools or endpoint calls behind the scenes. A user can request an action like "fetch my user’s invoices," and the agent handles the underlying complexity, calling multiple tools without exposing the overhead to the user.
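As a minimal sketch of this pattern, the high-level tool below wraps two lower-level endpoint calls so the agent sees a single entry instead of one tool per endpoint. The endpoint names and functions here are hypothetical stand-ins, not a real API:

```python
# Hypothetical sketch: one high-level "macro" tool wraps several
# endpoint calls, so the agent sees a single get_user_invoices entry
# instead of one tool per endpoint. Both helpers are stand-ins.

def lookup_user(email: str) -> dict:
    # Stand-in for something like GET /users?email=...
    return {"id": 42, "email": email}

def list_invoices(user_id: int) -> list[dict]:
    # Stand-in for something like GET /users/{id}/invoices
    return [{"invoice_id": "INV-001", "amount": 99.0}]

def get_user_invoices(email: str) -> list[dict]:
    """Fetch all invoices for the user with the given email.

    The agent calls this one tool; the two underlying endpoint
    calls stay hidden behind it.
    """
    user = lookup_user(email)
    return list_invoices(user["id"])
```

Exposed this way, a request like "fetch my user’s invoices" maps to one tool call rather than a chain of endpoint-shaped tools the agent has to orchestrate itself.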
The End User of the Tool is the Agent/LLM
A critical aspect often overlooked is that the agent or Large Language Model (LLM), rather than the end user, is the actual tool user. Although the user enables the tool, the agent is responsible for calling it. Understanding this distinction is crucial when developing an MCP server, as you are building for the agent that acts on the user’s behalf.
Error handling presents a common challenge for developers. If your tool returns error messages intended for humans, the user experience you envision may not be realized. The agent, rather than the user, calls your tool, and it may not relay error messages back to the user.
Agents are designed to complete tasks, and if a task fails, they often attempt an alternative approach. Therefore, your error handling should guide the agent on subsequent actions, rather than merely highlighting the issue. Instead of stating, "You don’t have access to this system," provide guidance like, "To access this system, the MCP server requires a valid API_TOKEN; the current API_TOKEN is not valid."
This approach informs the agent that third-party system access is not possible due to misconfiguration, rather than outright denial. The distinction is important: the lack of access stems from the user’s failure to properly configure the MCP server, rather than a strict permission issue.
Document for Humans and Agents!
Documentation plays an equally significant role in the development process. When crafting documentation for your MCP server, remember that you are addressing two audiences: the end users and the AI agent. As with error handling, it’s essential to understand the needs of both.
Your documentation should clearly address each audience. End users seek to understand why they should use your MCP server, the problems it solves, and how it fits into their workflow. Conversely, agents depend on well-crafted tool names and descriptions to determine if your server is suitable for a particular task.
Keep in mind that although the agent is the one actually using the MCP server, the end user decides which tools the agent can access. Your documentation must support both audiences.
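To make the agent-facing half concrete, here is a hedged sketch of a tool entry roughly as an agent might see it when listing tools (the tool name and schema are hypothetical; the general name/description/inputSchema shape follows the MCP tool listing):

```python
# A vague entry tells the agent nothing about when the tool applies:
vague_tool = {"name": "run", "description": "Runs the thing."}

# A descriptive name and description let the agent decide whether
# this tool fits the task at hand. All values here are illustrative.
clear_tool = {
    "name": "get_user_invoices",
    "description": (
        "Fetch all invoices for a customer, identified by email. "
        "Use this when the user asks about billing history or payments."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {"email": {"type": "string"}},
        "required": ["email"],
    },
}
```

The description is the agent's documentation; the README is the human's. Both deserve the same care.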
Don’t Just Test Functionality, Test User Interactions
Validating your documentation can be achieved through thorough testing of your MCP server. The simplest way to interact with your server during development is the MCP Inspector tool, which you can launch by running npx @modelcontextprotocol/inspector in your terminal.
While it is common to test whether your MCP server functions correctly, the inspector also provides insights into the end user’s perspective. It offers a clearer understanding of how users interact with your server and whether your documentation supports that experience.
There are three essential steps to testing a server:
- Connecting to the MCP Server: This step ensures your server captures all necessary configurations for proper operation.
- List Tools: This step reveals what AI agents view when initializing your MCP server.
- Tool Calling: Ensure that the tool behaves as expected, allowing you to validate failure modes.
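The three steps above correspond to JSON-RPC methods in the MCP protocol. The sketch below only builds the request payloads a client (such as the inspector) sends over stdio; it does not run a server, and the tool name, arguments, and protocol version string are illustrative:

```python
import json

def rpc(method: str, params: dict, id_: int) -> str:
    """Build a JSON-RPC 2.0 request line, as an MCP client would send."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": id_, "method": method, "params": params}
    )

# Step 1: connect / initialize the session (version string illustrative).
connect = rpc("initialize", {"protocolVersion": "2025-03-26",
                             "capabilities": {},
                             "clientInfo": {"name": "inspector", "version": "0"}}, 1)

# Step 2: list tools — what the agent sees when your server initializes.
list_tools = rpc("tools/list", {}, 2)

# Step 3: call a tool with arguments (tool name is hypothetical).
call_tool = rpc("tools/call",
                {"name": "get_user_invoices",
                 "arguments": {"email": "ada@example.com"}}, 3)
```

Walking through these three requests in the inspector is the quickest way to see your server exactly as a client does.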
An important design consideration is the MCP Server lifecycle, which involves questions such as: What is necessary for the MCP Client to connect to the MCP Server? How should tools be listed and discovered? What’s the process for invoking a specific tool?
For example, when writing an MCP server for a database, a typical API might establish the database connection when the server starts. However, for an MCP server, it’s advisable to make each tool call self-contained by creating a connection for every tool call instead of at server start. This approach allows users to connect and list tools even if the server is not correctly configured.
Although this may initially seem counterintuitive, it improves usability and reliability: the only time your MCP server actually needs a connection to a database or third-party system is when a tool is invoked. The MCP Inspector is invaluable for visualizing this process and understanding how both users and agents will interact with your server.
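A minimal sketch of the self-contained pattern, using the standard-library sqlite3 module as a stand-in for a real database driver (the path and tool body are illustrative):

```python
# Sketch: open the connection inside the tool call instead of at
# server start, so connecting and listing tools still work even when
# the database is unreachable. sqlite3 stands in for a real driver.

import sqlite3

DB_PATH = ":memory:"  # illustrative; a real server would read configuration

def query_row_count(table: str) -> str:
    """Tool body: connect, query, and close within a single call."""
    try:
        with sqlite3.connect(DB_PATH) as conn:
            conn.execute(f"CREATE TABLE IF NOT EXISTS {table} (id INTEGER)")
            (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
            return f"{table} has {count} rows"
    except sqlite3.Error as exc:
        # Agent-facing guidance, not a human-facing stack trace.
        return f"Could not reach the database ({exc}); check the DB_PATH configuration."
```

Because the connection lives entirely inside the call, a misconfigured database surfaces as an actionable tool-call error rather than a server that refuses to start.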
If you are using the Docker MCP Toolkit, several methods are available to test whether your MCP server behaves as intended. Running docker mcp tools call my-tool calls your tool using the configuration defined in Docker Desktop. To test what MCP clients see, run docker mcp gateway run --verbose --dry-run, which simulates a call from an MCP client to your MCP server, assuming it’s enabled in the Docker MCP Catalog.
Packaging Your MCP Servers with Containers
After writing and testing your MCP server, the next step is packaging. Packaging an MCP server involves more than just creating the artifact; it also requires consideration of how the artifact will be used. While there may be some bias, packaging your MCP server as a Docker Image is highly recommended.
MCP servers can be developed in various languages such as Python, TypeScript, or Java. Packaging as a Docker image ensures your server’s portability, as Docker images allow users to run your MCP server regardless of their system’s configuration. Docker containers eliminate the need to manage dependencies on other machines. If a user can run Docker, they can run your MCP server.
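As a hedged sketch, a minimal Dockerfile for a Python MCP server might look like the following; the file names (server.py, requirements.txt) and base image are placeholders for your own project:

```dockerfile
# Illustrative sketch, not a definitive template.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer caches between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY server.py .

# The server process runs directly as the entrypoint, speaking over
# stdio as the MCP Catalog currently expects for submissions.
ENTRYPOINT ["python", "server.py"]
```

Keeping the image small and the entrypoint a single process makes the container behave predictably when an MCP client launches it.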
Numerous resources are available on how to write a good Dockerfile. If you are uncertain about yours, you can use Gordon, the Docker AI agent, to optimize it by typing docker ai improve my Dockerfile.
How to Submit Your MCP Server
Once you have a Dockerfile in your repository, you are invited to submit your MCP server to the Docker Official Registry. At present, all submitted MCP servers must utilize the stdio transport mechanism, so ensure your server supports this when operating as a container. We eagerly anticipate your submission!
Conclusion
The revamped Docker MCP Catalog provides an efficient avenue for securely discovering and scaling MCP servers. Whether submitting a Docker-built server with comprehensive security features or maintaining your own as a community contributor, adhering to these five best practices—managing tool budgets, designing for agents, documenting for users and LLMs, thorough testing, and container packaging—will enable you to develop MCP servers that are dependable, user-friendly, and ready for real-world agentic workloads.
Are you ready to share your MCP server with the Docker community? Submit it to the Docker MCP Catalog and showcase it to millions of developers!
Learn More
For more detailed insights and resources, visit the Docker website.