Spring AI Playground is a self-hosted web UI that simplifies AI experimentation and testing. It provides Java developers with an intuitive interface for working with large language models (LLMs), vector databases, prompt engineering, and Model Context Protocol (MCP) integrations.
Built on Spring AI, it supports leading model providers and includes comprehensive tools for testing retrieval-augmented generation (RAG) workflows and MCP integrations. The goal is to make AI more accessible to developers, helping them quickly prototype Spring AI-based applications with enhanced contextual awareness and external tool capabilities.
First, clone the Spring AI Playground project from GitHub:
git clone https://github.com/JM-Lab/spring-ai-playground.git
cd spring-ai-playground
./mvnw spring-boot:build-image -Pproduction -DskipTests=true \
-Dspring-boot.build-image.imageName=jmlab/spring-ai-playground:latest
docker run -d -p 8282:8282 --name spring-ai-playground \
-e SPRING_AI_OLLAMA_BASE_URL=http://host.docker.internal:11434 \
-v spring-ai-playground:/home \
--restart unless-stopped \
jmlab/spring-ai-playground:latest
Notes:
- Data Persistence: Application data is stored in the spring-ai-playground Docker volume, ensuring data persists even if the container is removed.
- Ollama Connection: The environment variable SPRING_AI_OLLAMA_BASE_URL is set to http://host.docker.internal:11434. Adjust the URL if Ollama runs on a different host or port.
- Automatic Restart: The --restart unless-stopped option ensures the container restarts automatically unless it is manually stopped with docker stop.
- For Linux Users: The host.docker.internal DNS name may not be available on all Linux distributions. If you encounter connection issues, you may need to use --network="host" in your docker run command, or replace host.docker.internal with your host machine's IP address on the Docker bridge network (e.g., 172.17.0.1).
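If you prefer Docker Compose, the same container can be described declaratively. The sketch below simply mirrors the docker run command above; the service, volume, and environment names are carried over from that command:

```yaml
# compose.yaml (sketch mirroring the docker run flags above)
services:
  spring-ai-playground:
    image: jmlab/spring-ai-playground:latest
    container_name: spring-ai-playground
    ports:
      - "8282:8282"
    environment:
      SPRING_AI_OLLAMA_BASE_URL: http://host.docker.internal:11434
    volumes:
      - spring-ai-playground:/home   # same named volume as above, so data persists
    restart: unless-stopped

volumes:
  spring-ai-playground:
```

Start it with docker compose up -d and tear it down with docker compose down (the named volume survives unless you remove it explicitly).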
⚠️ MCP STDIO Transport Limitation
While Docker is recommended for most scenarios, it is not suitable for testing MCP STDIO transport, which requires direct process-to-process communication that containerized environments cannot provide reliably. If you plan to test MCP STDIO transport, use the Running Locally (Optional) setup instead.
docker stop spring-ai-playground
docker rm spring-ai-playground
docker rmi jmlab/spring-ai-playground:latest
docker volume rm spring-ai-playground
./mvnw clean install -Pproduction -DskipTests=true
./mvnw spring-boot:run
Then open http://localhost:8282 in your browser.
Note: Complete either the Docker or Local installation steps above before proceeding with PWA installation.
Spring AI Playground comes with Progressive Web App (PWA) capabilities, allowing you to install it as a standalone application on your device for a native app-like experience.
With the application running at http://localhost:8282, use your browser's install option (for example, the install icon in the address bar) to add it to your device.
Spring AI Playground uses Ollama by default for local LLM and embedding models. No API keys are required, which makes it easy to get started.
To enable Ollama, ensure it is installed and running on your system. Refer to the Spring AI Ollama Chat Prerequisites for setup details.
Spring AI Playground supports all major AI model providers, including Anthropic, OpenAI, Microsoft, Amazon, Google, and Ollama. For more details on the available implementations, visit the Spring AI Chat Models Reference Documentation.
When running Spring AI Playground with the ollama profile, you can configure the default chat and embedding models, as well as the list of models available in the playground UI, by updating your configuration file (application.yaml).
Notes:
- pull-model-strategy: when_missing ensures that the configured models are automatically pulled from Ollama if they are not already available locally.
- playground.chat.models controls which models appear in the model selection dropdown in the web UI.
- Changing chat.options.model or embedding.options.model here updates the defaults used by the application.
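As a concrete illustration, an application.yaml for the ollama profile might look like the following. The model names (llama3.2, nomic-embed-text, qwen3) are placeholders for illustration, and the exact property nesting is an assumption based on Spring AI's Ollama auto-configuration; adjust to match your Spring AI version and the models you actually use:

```yaml
# application.yaml (illustrative sketch; model names are placeholders)
spring:
  ai:
    ollama:
      init:
        pull-model-strategy: when_missing   # pull configured models if not present locally
      chat:
        options:
          model: llama3.2                   # default chat model
      embedding:
        options:
          model: nomic-embed-text           # default embedding model

playground:
  chat:
    models:                                 # models offered in the UI dropdown
      - llama3.2
      - qwen3
```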
Pre-pull recommended Ollama models: to avoid delays when first using a model, pre-pull it with Ollama before starting Spring AI Playground.
Switching to OpenAI is a primary example of how you can use a different AI model with Spring AI Playground. To explore other models supported by Spring AI, learn more in the Spring AI Documentation.
To switch to OpenAI, follow these steps:
Modify the pom.xml file, replacing the Ollama starter dependency:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
</dependency>
with the OpenAI starter:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
</dependency>
Update application.yaml:
spring:
profiles:
default: openai
ai:
openai:
api-key: your-openai-api-key
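To avoid committing the key to version control, you can rely on standard Spring property-placeholder resolution and reference an environment variable instead. This is a common Spring pattern rather than anything specific to this project; the variable name OPENAI_API_KEY is a convention you can change:

```yaml
spring:
  profiles:
    default: openai
  ai:
    openai:
      # Resolved from the OPENAI_API_KEY environment variable at startup
      api-key: ${OPENAI_API_KEY}
```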
You can connect Spring AI to OpenAI-compatible servers such as llama.cpp, TabbyAPI, or LM Studio by adding the following configuration to application.yml:
spring:
ai:
openai:
# Set your actual API key here.
# If your server does not require authentication, use a placeholder string (e.g., "not-used").
api-key: "not-used"
# Base URL including scheme, host, and port only. Do NOT append /v1, as Spring AI will automatically add the path.
# Ensure the port matches your server’s configuration, as defaults may vary.
base-url: "http://localhost:8080"
chat:
options:
# Specify the model ID exposed by your server (e.g., "mistral" for LM Studio). Check your server’s documentation or /models endpoint for available models.
model: "your-model-name"
# Optional: Override the completions endpoint if your server uses a custom path (default: "/v1/chat/completions").
# completions-path: "/custom/chat/completions"
Configuration Details:
- api-key: Required by Spring AI. Use your real API key, or a placeholder string if authentication is not required.
- base-url: The server URL up to host and port, without trailing version/path segments.
- model: Must match the model name exposed by your server (e.g., mistral, gpt2).
- completions-path: Override only if your server uses a non-standard endpoint path for chat completions (default: /v1/chat/completions).
Server-Specific Examples:
llama.cpp server:
spring:
ai:
openai:
api-key: "not-used"
base-url: "http://localhost:8080"
chat:
options:
model: "your-model-name"
TabbyAPI:
spring:
ai:
openai:
api-key: "not-used" # Replace with an actual key if authentication is enabled in TabbyAPI settings
base-url: "http://localhost:5000"
chat:
options:
model: "your-exllama-model"
LM Studio:
spring:
ai:
openai:
api-key: "not-used"
base-url: "http://localhost:1234"
chat:
options:
model: "your-loaded-model"
Note:
Ensure your server fully adheres to OpenAI's API specification for best compatibility. Streaming works automatically if your server supports Server-Sent Events, so verify your server's capabilities. Check your server's documentation or its /models endpoint to find the correct model name.
Spring AI Playground now includes a comprehensive MCP (Model Context Protocol) Playground that provides a visual interface for managing connections to external tools through AI models. This feature leverages Spring AI’s Model Context Protocol implementation to offer client-side capabilities.
Note: STREAMABLE HTTP, officially introduced in the MCP v2025-03-26 specification (March 26, 2025), is a single-endpoint HTTP transport that replaces the former HTTP+SSE setup. Clients send JSON-RPC via POST to /mcp, while responses may optionally use an SSE-style stream, with session-ID tracking and resumable connections.
This MCP Playground provides developers with a powerful visual tool for prototyping, testing, and debugging Model Context Protocol integrations, making it easier to build sophisticated AI applications with contextual awareness.
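For reference, client-side MCP connections in Spring AI are typically configured through spring.ai.mcp.client.* properties. The sketch below is an assumption based on Spring AI's MCP client starter, using a hypothetical connection name and server URL; consult the Spring AI MCP client documentation for the exact property names in your version:

```yaml
# Hypothetical sketch of an MCP client connection over SSE
spring:
  ai:
    mcp:
      client:
        sse:
          connections:
            my-mcp-server:                 # hypothetical connection name
              url: http://localhost:3001   # hypothetical MCP server endpoint
```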
Spring AI Playground now provides seamless integration with MCP (Model Context Protocol) tools directly within the chat interface, enabling you to enhance AI conversations with external tools. Here’s how you can leverage this powerful feature:
⚠️ Important for Ollama Users
When using Ollama as your AI provider, ensure you’re using a tool-enabled model that supports external function calling. Not all Ollama models support MCP tool integration.
Pull a tool-enabled model before use:
ollama pull <model-name>
Tip
Models like OpenAI GPT-OSS, Qwen 3, and DeepSeek-R1 offer advanced reasoning capabilities with visible thought processes, making them particularly effective for complex MCP tool workflows.
This integration enables developers to quickly prototype and test tool-enhanced AI interactions, bringing the power of external systems and capabilities directly into your Spring AI conversations through the Model Context Protocol.
Spring AI Playground offers a comprehensive vector database playground with advanced retrieval capabilities powered by Spring AI’s VectorStore API integration.
Supported vector database providers include Apache Cassandra, Azure Cosmos DB, Azure Vector Search, Chroma, Elasticsearch, GemFire, MariaDB, Milvus, MongoDB Atlas, Neo4j, OpenSearch, Oracle, PostgreSQL/PGVector, Pinecone, Qdrant, Redis, SAP Hana, Typesense, and Weaviate.
Metadata filtering lets you apply filter expressions (e.g., author == 'John' && year >= 2023) to narrow search scopes and refine query results. These features, combined with Spring AI's flexibility, provide a comprehensive playground for vector database testing and advanced integration into your applications.
Spring AI Playground now offers a fully integrated RAG (Retrieval-Augmented Generation) feature, allowing you to enhance AI responses with knowledge from your own documents. Here’s how you can make the most of this capability:
This seamless integration enables developers to quickly prototype and optimize knowledge-enhanced AI interactions within a single, intuitive interface, bringing the power of Retrieval-Augmented Generation to your Spring AI applications.
Here are some features we are planning to develop for future releases of Spring AI Playground:
Build production-ready AI Agents by combining Model Context Protocol (MCP) for external tool integration, Retrieval-Augmented Generation (RAG) for knowledge retrieval, and Chat for natural interaction — all inside a single, unified workflow.
Inspired by the Effective Agents patterns from Spring AI, developers can:
This streamlined approach enables going from concept to tested, cloud-ready deployment with minimal friction.
Introducing tools to track and monitor AI performance, usage, and errors for better management and debugging.
Implementing login and security features to control access to the Spring AI Playground.
Supporting embedding, image, audio, and moderation models from Spring AI.
These features will help make Spring AI Playground even better for testing and building AI projects.