Chat with AI agents

The ToolHive playground is a chat workspace for AI agents - built-in or your own - that runs inside the desktop app. Each thread runs against an agent that sets the system prompt and a default toolset. You can then give that agent specific MCP tools and skills, and chat with it directly. Testing MCP servers is one workflow the playground supports, alongside building skills, managing your ToolHive setup through chat, and any other chat-driven task you want an agent for.

Key capabilities

Built-in and custom agents

Switch between the built-in ToolHive Assistant and Skill Engineer agents, or create your own with a custom name, description, and system prompt. See Choose an agent for a thread.

Per-thread context

Each thread keeps its own agent, model, MCP tool selection, and skill selection, so switching threads doesn't reshuffle your setup. See Manage chat threads.

Conversational ToolHive management

The default ToolHive Assistant uses a built-in MCP server (toolhive mcp) to manage your other MCP servers through chat: list servers, check status, start or stop them, and view logs without leaving the conversation.

Detailed interaction logs

See tool calls, parameters, execution status, response data, and timing inline in the chat so you can verify exactly what an agent is doing.

Message actions

Copy any message, edit and rewind a previous user message, or queue a follow-up while a response is still streaming. See Work with chat messages.

Per-message cost

For paid providers, an estimated USD cost appears next to the token totals on each assistant message. For details, see See per-message cost.

Attachments

Send images and PDFs alongside your prompt. See Attach files to a message.

Getting started

To start using the playground:

  1. Access the playground: Click the Playground tab in the ToolHive UI navigation bar.

  2. Configure provider settings: Click Provider Settings to set up access to AI model providers:

    • OpenAI: Enter your OpenAI API key to use GPT models
    • Anthropic: Enter your Anthropic API key for Claude models
    • Google: Enter your Google AI API key for Gemini models
    • xAI: Enter your xAI API key for Grok models
    • Ollama: Enter the server URL of your local Ollama instance (default: http://localhost:11434)
    • LM Studio: Enter the server URL from the Developer section in LM Studio where you started the local server (default: http://localhost:1234)
    • OpenRouter: Enter your OpenRouter API key for access to multiple model providers
  3. Start a thread: Click New chat in the sidebar. New threads open with the ToolHive Assistant agent the first time you use the playground, and inherit the agent, model, MCP tool selection, and skill selection from the thread you used most recently after that.

  4. Pick an agent (optional): Use the agent selector in the chat toolbar to switch to Skill Engineer, a custom agent, or open Manage agents to build a new one. See Choose an agent for a thread.

  5. Add MCP tools and skills (optional): Pick which MCP tools and installed skills the agent can use in this thread. See Configure MCP tools and skills for a thread.

  6. Start chatting: Send a message. The agent uses its system prompt together with the tools and skills you enabled for this thread.

Choose an agent for a thread

Each thread runs against an agent that sets the system prompt and a default toolset. Two agents are built in:

  • ToolHive Assistant is the default. It's tuned to manage your MCP servers, run tools through them, and answer questions about ToolHive itself.
  • Skill Engineer is tuned to design, build, and audit skills.

To switch agents on the active thread, open the agent selector in the chat toolbar and pick one. Agent selection is per-thread, so different threads can run different agents at the same time. New threads inherit the agent you used most recently.

Manage custom agents

You can build your own agents alongside the built-ins. Open the agent selector and choose Manage agents to open the Agents page. From there you can:

  • Create an agent with a name, description, and system prompt.
  • Edit an existing custom agent.
  • Delete a custom agent you no longer need.

Custom agents appear in the agent selector alongside the built-ins so you can pick them for any thread.

Configure MCP tools and skills for a thread

After you pick an agent, choose which MCP tools and skills it can use in the active thread. Both selections are scoped to the thread and persist across reloads. New threads inherit your most recent choices.

MCP tools

Click the tools icon in the chat toolbar to manage which MCP servers and tools are available in the active thread:

  • View all your running MCP servers
  • Enable or disable specific tools from each server
  • Search and filter tools by name or functionality
  • The toolhive mcp server is included by default, providing management capabilities
ToolHive playground tools management showing available MCP tools
tip

For more control over tool availability, use Customize tools to permanently configure which tools are enabled for each registry server across all clients and threads. Playground tool selection applies only to the active thread.

Skills

If you've installed skills, pick which ones the agent can use in this thread alongside its MCP tools. Skill selection is per-thread, same as MCP tools, so different threads can give the same agent different skill sets.

Manage chat threads

The playground keeps each conversation in a separate thread so you can run several sessions in parallel without losing context. Open the sidebar to see your threads, with Starred entries pinned at the top and Recents below. Untitled threads show as New chat until you give them a name.

Each row shows a relative timestamp such as just now, 5m ago, 2h ago, or 3d ago. Older threads show a short date instead.
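For illustration, these labels follow the usual relative-time pattern. A minimal sketch, assuming cutoffs at one minute, one hour, and one day (the exact thresholds in ToolHive are not documented here):

```python
def relative_label(seconds_ago: int) -> str:
    """Return a relative label like the sidebar shows (thresholds are assumptions)."""
    if seconds_ago < 60:
        return "just now"
    if seconds_ago < 3600:
        return f"{seconds_ago // 60}m ago"
    if seconds_ago < 86400:
        return f"{seconds_ago // 3600}h ago"
    return f"{seconds_ago // 86400}d ago"

print(relative_label(300))        # → 5m ago
print(relative_label(3 * 86400))  # → 3d ago
```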

To work with threads:

  • Start a new thread: Click New chat at the top of the sidebar.

  • Rename a thread: Double-click the thread row, or open its Thread options menu and choose Rename. You can also click the title or the pencil icon at the top of the chat to rename the active thread. The thread row's tooltip confirms the double-click action:

    Double-click to rename

  • Star or unstar a thread: Click the star icon next to the thread title, or open Thread options and choose Star or Unstar. Starred threads appear under Starred at the top of the sidebar.

  • Delete a thread: Open Thread options and choose Delete to remove a thread you no longer need. The playground asks for confirmation:

    Delete "<THREAD_NAME>"? This cannot be undone.

    Confirm with Delete, or back out with Cancel.

Work with chat messages

Hover over any message in the chat to reveal message actions:

  • Copy copies the message text to your clipboard. Tool inputs, tool outputs, and internal reasoning are excluded; tool result blocks have their own per-block Copy button.
  • Edit is only available on your own messages. It pre-fills the composer with the message text so you can revise and resend it.

The behavior of Edit depends on whether the assistant is currently responding:

  • Idle: the edited message is sent as a new message at the end of the thread. The original message stays in the history.
  • Streaming (editing your last message): the composer shows a chip reading Editing last message - submit to rewind and retry, and the submit button switches to a refresh icon. Submitting cancels the in-flight response, drops the partial assistant reply and the original user message, and sends your edited text as a fresh turn.

To exit edit mode, click cancel on the chip or clear the composer.
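The idle-versus-streaming behavior above can be sketched as a small state transition on the message history (an illustrative model, not ToolHive's actual code):

```python
def submit_edit(history, edited_text, streaming):
    """history is a list of (role, text) tuples; returns the updated history.

    Streaming: rewind - drop the partial assistant reply and the original
    user message, then send the edited text as a fresh turn.
    Idle: the original message stays; the edit is appended as a new message.
    """
    if streaming:
        while history and history[-1][0] == "assistant":
            history.pop()          # drop the partial assistant reply
        if history and history[-1][0] == "user":
            history.pop()          # drop the original user message
    history.append(("user", edited_text))
    return history

h = [("user", "hi"), ("assistant", "partial...")]
print(submit_edit(h, "hello", streaming=True))  # → [('user', 'hello')]
```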

Queue a message while a response is streaming

If you type into the composer while the assistant is still streaming, the submit button switches to a send icon. Clicking it queues your message instead of stopping the response. The composer clears and shows a chip:

Queued: <PREVIEW> - sends when the current response finishes

When the current response finishes, the queued message sends automatically. Click the X on the chip to cancel the queued message at any time. Only one message can be queued at a time; submitting a second one replaces the queued slot. Switching threads also clears the queue.

If the streaming response fails instead of finishing cleanly, the queued message stays in the chip but isn't sent automatically. Click the X to discard it.
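The single-slot queue semantics described above (last submission wins, auto-send on clean completion, manual discard otherwise) can be modeled like this (a sketch, not ToolHive's implementation):

```python
class MessageQueueSlot:
    """One pending message at most; queuing again replaces the slot."""

    def __init__(self):
        self.pending = None

    def queue(self, text):
        self.pending = text     # submitting a second message replaces the first

    def cancel(self):
        self.pending = None     # clicking the X, or switching threads

    def on_response_finished(self, send):
        # Auto-send only when the streaming response finishes cleanly.
        if self.pending is not None:
            send(self.pending)
            self.pending = None

slot = MessageQueueSlot()
slot.queue("first follow-up")
slot.queue("second follow-up")  # replaces the first
sent = []
slot.on_response_finished(sent.append)
print(sent)  # → ['second follow-up']
```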

See per-message cost

For assistant messages that use a paid provider (OpenAI, Anthropic, Google, xAI, OpenRouter), the playground shows an estimated USD cost next to the token totals (for example, 100 → 50 = 150 • $0.0012). Hover the totals to see a breakdown of input, cached, output, and total cost.

Pricing comes from models.dev and is cached locally and refreshed daily. Local providers like Ollama and LM Studio, and any model without published pricing, render without a cost line.
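As a rough sketch of how such an estimate is derived, assuming per-million-token pricing (the prices below are hypothetical, chosen only to reproduce the example figure; the cached-token tier is omitted for brevity):

```python
def estimate_cost_usd(input_tokens, output_tokens,
                      input_price_per_m, output_price_per_m):
    """Estimated cost: token counts times per-million-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Hypothetical prices: $3 per 1M input tokens, $18 per 1M output tokens.
cost = estimate_cost_usd(100, 50, 3.0, 18.0)
print(f"100 → 50 = 150 • ${cost:.4f}")  # → 100 → 50 = 150 • $0.0012
```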

note

These figures are estimates for guidance only. Refer to your provider's billing dashboard for authoritative usage and charges.

Attach files to a message

Add images and PDFs to a message so the agent can read them while it works with your MCP tools. The composer accepts up to 5 files per message, each 10 MB or smaller, and supports image files and PDFs.

To attach files:

  1. Open the composer toolbar menu and choose Add images or PDFs, or drag files onto the playground window. Drag-and-drop is enabled across the entire playground.

  2. Type your prompt and send the message. If you send a message that only contains attachments, the playground records the message text as:

    Sent with attachments

In the chat history, the playground previews each attachment alongside the message:

  • Images appear inline. Click an image to open it in a larger modal preview.
  • PDFs show as 📎 <FILE_NAME> with a Download link so you can save the original file.

If a file is rejected, the playground shows a toast that explains why:

  • When you exceed the per-message limit:

    You reached the maximum number of files

    You can only upload up to 5 files

  • When a file is over 10 MB:

    File size too large

    The file size must be less than 10MB

  • When a file isn't an image or a PDF:

    File type not supported

    Only images and PDFs are supported
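The acceptance rules above can be modeled as a small validation function (a sketch of the documented limits; the function name and MIME-type handling are illustrative, not ToolHive's actual code):

```python
MAX_FILES = 5
MAX_BYTES = 10 * 1024 * 1024  # 10 MB

def validate_attachment(files_already_attached, size_bytes, mime_type):
    """Return a rejection reason mirroring the toasts above, or None if accepted."""
    if files_already_attached >= MAX_FILES:
        return "You reached the maximum number of files"
    if size_bytes > MAX_BYTES:
        return "File size too large"
    if not (mime_type.startswith("image/") or mime_type == "application/pdf"):
        return "File type not supported"
    return None

print(validate_attachment(0, 1024, "image/png"))  # → None
print(validate_attachment(5, 1024, "image/png"))  # → You reached the maximum number of files
```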

The composer placeholder reflects the playground state:

  • Before you select a model:

    Select an AI model to get started

  • After you select a model:

    Type your message...

Example workflows

The playground supports any chat-driven task you want an agent for. A few common starting points:

Manage MCP servers through conversation

The desktop app starts a dedicated MCP server (toolhive mcp) that orchestrates ToolHive operations through natural language. With the default ToolHive Assistant agent, you can list, start, stop, and inspect servers without leaving the chat:

Can you list all my MCP servers and show their current status?
Start the fetch MCP server for me
Stop all unhealthy MCP servers
Show me the logs for the fetch MCP server

The agent calls the matching toolhive mcp tools and shows the results inline, giving you a unified interface and an audit trail in the same place as any other tool execution.

ToolHive playground showing AI response with MCP tool execution results

Test MCP server functionality

Use the playground to validate that an MCP server works as expected before you connect external clients to it. Enable the server's tools in the active thread, then prompt the agent to call them:

Use the GitHub MCP server to search for recent issues in the
microsoft/vscode repository

If the GitHub MCP server is running, the agent makes the appropriate API calls and returns formatted results. The playground shows each tool execution inline:

  • Tool name and description: what tool was called and its purpose
  • Input parameters: the exact parameters passed to the tool
  • Execution status: whether the tool succeeded or failed
  • Response data: the complete response from the tool
  • Timing information: how long the tool took to execute

This makes it easy to spot tool implementation or configuration issues.

Build and audit skills

Switch to the Skill Engineer agent on a thread to design new skills, refine an existing one, or audit a skill's behavior. See Skills for details on skill formats and how to install or build your own.

Provider security

  • Use dedicated API keys for testing that have appropriate rate limits.
  • Regularly rotate API keys used in development environments.
  • Consider using API keys with restricted permissions for testing purposes.
  • When using local providers like Ollama or LM Studio, ensure the server URLs are only accessible on your local network to prevent unauthorized access.

Agents, servers, and tools

  • Start only the MCP servers you need so that agents only see relevant tools.
  • Save reusable prompts as custom agents so you don't have to retype the system prompt for every new thread.
  • Use the playground to validate new server configurations before connecting them to external AI clients.

Thread and attachment hygiene

  • Delete unused threads so the sidebar stays focused on the work you actually return to.
  • Star the conversations you want to keep close at hand. Otherwise they get pushed down as new chats arrive in Recents.
  • Attachments are sent to your AI provider. Strip credentials, customer information, and other sensitive content from PDFs and screenshots before sharing them.

Next steps

  • Browse the Skills section to install or build skills you can enable in a thread
  • Set up client configuration to connect ToolHive to external AI applications
  • Set up secrets management for secure handling of API keys and tokens
  • Explore network isolation for enhanced security when testing untrusted MCP servers
  • Discover more MCP servers to add to your agent threads in the registry

Troubleshooting

Provider not working

If a provider isn't working:

  1. For API key-based providers (OpenAI, Anthropic, Google, xAI, OpenRouter):

    • If you see a 401 or "invalid API key" error, double-check the key in the provider's API keys dashboard. The key may have been rotated, revoked, or scoped to the wrong project.
    • If you see a 429 or quota error, check your billing and usage in the provider's dashboard.
    • Confirm the key has access to the model you selected.
  2. For local providers (Ollama, LM Studio):

    • Verify the server is running and reachable at the configured URL, including the port (for example, http://localhost:11434).
    • For LM Studio, confirm you started the server from the Developer section.
    • Check that no firewall or VPN is blocking localhost traffic.
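A quick reachability check against the default local endpoints can narrow this down (a sketch: /api/tags is Ollama's model-listing endpoint and /v1/models is LM Studio's OpenAI-compatible listing endpoint; adjust the URLs if you changed the defaults):

```python
import urllib.request

# Default local endpoints from the provider settings above.
ENDPOINTS = {
    "Ollama": "http://localhost:11434/api/tags",
    "LM Studio": "http://localhost:1234/v1/models",
}

def is_reachable(url, timeout=2.0):
    """True if the server answers with a non-error HTTP status; False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # connection refused, timeout, HTTP error, DNS failure
        return False

for name, url in ENDPOINTS.items():
    state = "reachable" if is_reachable(url) else "not reachable"
    print(f"{name}: {state} at {url}")
```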

MCP tools not appearing

If your MCP server tools aren't showing up:

  1. Verify the MCP server is running on the MCP Servers page.
  2. Click the tools icon in the playground and confirm the server's tools are enabled for this thread.
  3. Restart the MCP server if it shows as unhealthy.
  4. Check the server logs for errors.

Tool execution failing

If tools fail to execute:

  1. Check the tool's parameter requirements in the audit log.
  2. Verify any required secrets or environment variables are configured for the server. See Secrets management.
  3. Ensure the MCP server has the permissions it needs (network access, file system access). See Network isolation.
  4. Review the server logs for detailed error information.