Model Runner: AI-Driven Mock APIs Testing via Docker


In today’s digital age, ensuring that applications perform seamlessly and deliver high-quality user experiences is more critical than ever. One innovative approach to achieving this is by leveraging non-deterministic large language models (LLMs) to generate dynamic and rich test data. This data is instrumental in validating application behavior and maintaining consistent quality. In this guide, we will delve into using Docker’s Model Runner in conjunction with Microcks to create dynamic mock APIs for testing applications. This integration offers developers a powerful solution to enhance their testing environments.

Microcks is an open-source tool certified by the Cloud Native Computing Foundation (CNCF) that simplifies the process of creating mock services for development and testing. It allows developers to simulate APIs by providing predefined mock responses or generating them directly from an OpenAPI schema. By directing applications to interact with these mock services, developers can test efficiently without the risk of affecting live systems, ensuring a secure and robust testing environment.
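To illustrate what Microcks consumes, here is a minimal, hypothetical OpenAPI fragment. The `example` under the response is exactly the kind of static sample Microcks serves as a mock (and that the AI Copilot later enriches):

```yaml
# Hypothetical OpenAPI fragment; Microcks turns the response example into a mock
openapi: 3.0.3
info:
  title: Pastry API
  version: "1.0"
paths:
  /pastry/{name}:
    get:
      operationId: getPastry
      parameters:
        - name: name
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: A pastry
          content:
            application/json:
              example:
                name: Eclair
                price: 2.5
```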

Docker Model Runner is a tool that allows LLMs to run locally within Docker Desktop. It provides an API compatible with OpenAI, enabling the integration of advanced AI capabilities into projects using local hardware resources. This is particularly beneficial for developers who want to incorporate sophisticated AI functionalities without relying on external cloud services, thus maintaining control over their data and operations.

By integrating Microcks with Docker Model Runner, developers can enrich their mock APIs with AI-generated responses, resulting in realistic and varied test data. This approach moves beyond the limitations of static examples, offering more flexibility and accuracy in simulating real-world scenarios.

Setting up Docker Model Runner

To begin with, ensure Docker Model Runner is enabled. This can be done by following the guidelines in Docker’s previous blog on configuring Goose for a local AI assistant setup. The next step involves selecting and pulling a desired LLM model from Docker Hub. For instance, you can execute the following command to pull a model:

```bash
docker model pull ai/qwen3:8B-Q4_0
```
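To confirm the pull succeeded, you can list the models available locally. A guarded sketch (assuming the Model Runner CLI plugin is installed; the check lets the snippet degrade gracefully when the `docker` CLI is absent):

```shell
# List locally pulled models; fall back to a message if docker is unavailable
if command -v docker >/dev/null 2>&1; then
  models=$(docker model list 2>&1)
else
  models="docker CLI not available"
fi
echo "$models"
```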

Configuring Microcks with Docker Model Runner

The initial step in this process is to clone the Microcks repository:

```bash
git clone https://github.com/microcks/microcks --depth 1
```

Navigate to the Docker Compose setup directory:

```bash
cd microcks/install/docker-compose
```

Next, edit the config/application.properties file to enable the AI Copilot feature and point it at Docker Model Runner:

```properties
ai-copilot.enabled=true
ai-copilot.implementation=openai
ai-copilot.openai.api-key=irrelevant
ai-copilot.openai.api-url=http://model-runner.docker.internal:80/engines/llama.cpp/
ai-copilot.openai.timeout=600
ai-copilot.openai.maxTokens=10000
ai-copilot.openai.model=ai/qwen3:8B-Q4_0
```

The configuration utilizes model-runner.docker.internal:80 as the base URL for the OpenAI-compatible API. This setup allows for direct communication between containers and the Model Runner, avoiding unnecessary networking through host machine ports.
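Note that model-runner.docker.internal resolves only from inside containers, not from the host. A quick one-off connectivity check from a throwaway container (the `/engines/llama.cpp/v1/models` path is an assumption based on the OpenAI-compatible layout; adjust to your setup):

```shell
# Hit the Model Runner's model listing from a disposable curl container.
# Guarded so the snippet degrades gracefully without Docker or Model Runner.
if command -v docker >/dev/null 2>&1; then
  check=$(docker run --rm curlimages/curl -sf --max-time 5 \
    http://model-runner.docker.internal/engines/llama.cpp/v1/models 2>/dev/null) \
    || check="Model Runner not reachable from the container"
else
  check="docker CLI not available"
fi
echo "$check"
```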

To activate the copilot feature, add the following line to the Microcks config/features.properties file:

```properties
features.feature.ai-copilot.enabled=true
```

Running Microcks

Start Microcks using Docker Compose in development mode with the following command:

```bash
docker-compose -f docker-compose-devmode.yml up
```

Once the system is up and running, access the Microcks UI at http://localhost:8080.

For testing purposes, install the example API by navigating through these options on the Microcks page: Microcks Hub → MicrocksIO Samples APIs → pastry-api-openapi v.2.0.0 → Install → Direct import → Go.

Using AI Copilot Samples

Within the Microcks UI, navigate to the service page of the imported API and select an operation you wish to enhance. Open the “AI Copilot Samples” dialog, which prompts Microcks to query the LLM configured via Docker Model Runner.

You may notice increased GPU activity as the model processes your request. After processing, the AI-generated mock responses are displayed, ready to be reviewed or added directly to your mocked operations.

This process can be tested using a simple curl command. For example:

```bash
curl -X PATCH 'http://localhost:8080/rest/API+Pastry+-+2.0/2.0.0/pastry/Chocolate+Cake' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"status":"out_of_stock"}'
```

The mock replies with:

```json
{
  "name" : "Chocolate Cake",
  "description" : "Rich chocolate cake with vanilla frosting",
  "size" : "L",
  "price" : 12.99,
  "status" : "out_of_stock"
}
```

This command returns a realistic, AI-generated response that enhances the quality and reliability of your test data.

In practical applications, such as a shopping cart system that relies on an inventory service, realistic yet randomized mock data lets the same test suite exercise more application behaviors. For better reproducibility, declare the Docker Model Runner dependency and the chosen model explicitly in your compose.yml:

```yaml
models:
  qwen3:
    model: ai/qwen3:8B-Q4_0
    context_size: 8096
```

Starting the compose setup will pull the model and wait for it to become available, similar to how it handles containers.
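Compose can also wire the model into a dependent service. A sketch with a hypothetical `app` service (Compose exposes the model's endpoint and name to the container as environment variables; check your Compose version's documentation for the exact variable names it injects):

```yaml
# Sketch: attaching the model to a hypothetical service in compose.yml
services:
  app:
    image: my-app:latest   # hypothetical application image
    models:
      - qwen3

models:
  qwen3:
    model: ai/qwen3:8B-Q4_0
    context_size: 8096
```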

Conclusion

Docker Model Runner serves as a valuable local resource for running LLMs, offering compatibility with OpenAI APIs and allowing for seamless integration into existing workflows. Tools like Microcks can take advantage of Model Runner to generate dynamic sample responses for mocked APIs, providing richer and more realistic synthetic data for integration testing.

If you are involved in local AI workflows or running LLMs locally, engage with the community in the Docker Forum. Exploring more local AI integrations with Docker could open up new possibilities for your projects.

Learn More

For further insights and detailed instructions, visit the Docker Model Runner product page or join discussions on the Docker Forum. These resources can provide additional guidance and community support for integrating AI capabilities into your development processes.

For more information, refer to the original article.

Neil S
Neil is a highly qualified Technical Writer with an M.Sc(IT) degree and an impressive range of IT and Support certifications including MCSE, CCNA, ACA(Adobe Certified Associates), and PG Dip (IT). With over 10 years of hands-on experience as an IT support engineer across Windows, Mac, iOS, and Linux Server platforms, Neil possesses the expertise to create comprehensive and user-friendly documentation that simplifies complex technical concepts for a wide audience.