# Go AI Client Library

> A comprehensive, multi-provider Go library for interacting with various AI models through unified interfaces. It supports large language models (LLMs), embedding models, image generation, audio generation (text-to-speech), speech-to-text, and rerankers from multiple providers, including Anthropic, OpenAI, Google, AWS, Voyage AI, xAI, and ElevenLabs.

## Docs

- [Home](https://joakimcarlsson.github.io/ai/): A comprehensive, multi-provider Go library for interacting with various AI models through unified interfaces.

### Getting Started

- [Installation](https://joakimcarlsson.github.io/ai/getting-started/installation/): Requires Go 1.25 or later.
- [Quick Start](https://joakimcarlsson.github.io/ai/getting-started/quick-start/): A first working example; see the Agent Framework section for the full guide.

### Providers

- [Overview](https://joakimcarlsson.github.io/ai/providers/overview/)
- [LLM](https://joakimcarlsson.github.io/ai/providers/llm/)
- [Embeddings](https://joakimcarlsson.github.io/ai/providers/embeddings/): Embed document chunks with awareness of their surrounding context.
- [Image Generation](https://joakimcarlsson.github.io/ai/providers/image-generation/)
- [Audio](https://joakimcarlsson.github.io/ai/providers/audio/): Character-level timing information for subtitles, word highlighting, or lip sync.
- [Speech-to-Text](https://joakimcarlsson.github.io/ai/providers/speech-to-text/)
- [Rerankers](https://joakimcarlsson.github.io/ai/providers/rerankers/)
- [Fill-in-the-Middle](https://joakimcarlsson.github.io/ai/providers/fim/): Code completion from a prompt (code before the cursor) and an optional suffix (code after the cursor), with the model filling in the middle.
- [Vision](https://joakimcarlsson.github.io/ai/providers/vision/): Send images to LLMs for analysis using URL references or raw binary data.
### Agent Framework

- [Overview](https://joakimcarlsson.github.io/ai/agent/overview/): The agent package provides multi-agent orchestration with automatic tool execution, session management, persistent memory, sub-agents, handoffs, fan-out, and context strategies.
- [Session Management](https://joakimcarlsson.github.io/ai/agent/sessions/): Sessions persist conversation history across multiple `Chat()` calls.
- [Persistent Memory](https://joakimcarlsson.github.io/ai/agent/memory/): Memory enables cross-conversation fact storage and retrieval using vector-based semantic search.
- [Streaming](https://joakimcarlsson.github.io/ai/agent/streaming/): `ChatStream` returns a channel of events for real-time response handling.
- [Hooks](https://joakimcarlsson.github.io/ai/agent/hooks/): Hooks let you observe, modify, or block agent behavior at key points in the execution pipeline.
- [Tool Confirmation](https://joakimcarlsson.github.io/ai/agent/confirmation/): The confirmation protocol lets tools require human approval before executing.
- [Sub-Agents](https://joakimcarlsson.github.io/ai/agent/sub-agents/): Sub-agents let an orchestrator delegate tasks to specialized child agents.
- [Background Agents](https://joakimcarlsson.github.io/ai/agent/background-agents/): Background agents let the orchestrator launch sub-agents asynchronously.
- [Handoffs](https://joakimcarlsson.github.io/ai/agent/handoffs/): Handoffs transfer full control from one agent to another.
- [Fan-Out](https://joakimcarlsson.github.io/ai/agent/fan-out/): Fan-out distributes multiple tasks to worker agents in parallel and collects results.
- [Continue/Resume](https://joakimcarlsson.github.io/ai/agent/continue/): `Continue()` lets you manually execute tool calls and feed results back into the agent loop.
- [Context Strategies](https://joakimcarlsson.github.io/ai/agent/context-strategies/): Context strategies automatically manage the context window when conversations grow beyond token limits.
- [Toolsets](https://joakimcarlsson.github.io/ai/agent/toolsets/): Toolsets group multiple tools under a name with optional dynamic filtering.
- [Instruction Templates](https://joakimcarlsson.github.io/ai/agent/instruction-templates/): Dynamic system prompts using template variables or runtime-generated instructions.

### Integrations

- [PostgreSQL](https://joakimcarlsson.github.io/ai/integrations/postgres/): PostgreSQL-backed session store for persistent conversation history.
- [SQLite](https://joakimcarlsson.github.io/ai/integrations/sqlite/): SQLite-backed session store for lightweight persistent conversation history.
- [pgvector](https://joakimcarlsson.github.io/ai/integrations/pgvector/): PostgreSQL-backed memory store using [pgvector](https://github.com/pgvector/pgvector) for semantic vector search.

### Advanced

- [Batch Processing](https://joakimcarlsson.github.io/ai/advanced/batch-processing/): Process bulk LLM and embedding requests efficiently using provider-native batch APIs or bounded concurrent execution.
- [BYOM](https://joakimcarlsson.github.io/ai/advanced/byom/): Use Ollama, LocalAI, vLLM, LM Studio, or any OpenAI-compatible inference server.
- [MCP Integration](https://joakimcarlsson.github.io/ai/advanced/mcp/): Integrates with the official [Model Context Protocol Go SDK](https://github.com/modelcontextprotocol/go-sdk) for seamless access to MCP servers and their tools.
- [Tool Calling](https://joakimcarlsson.github.io/ai/advanced/tools/): For simple tools that are just a function, `functiontool.New` skips the struct boilerplate.
- [Structured Output](https://joakimcarlsson.github.io/ai/advanced/structured-output/): Constrained generation that forces the LLM to return valid JSON matching a schema.
- [Cost Tracking](https://joakimcarlsson.github.io/ai/advanced/cost-tracking/): All models include built-in pricing information for cost calculation.
- [Prompt Templates](https://joakimcarlsson.github.io/ai/advanced/prompt-templates/): A template engine for building dynamic prompts with variable substitution, built-in functions, caching, and validation.
- [OpenTelemetry Tracing](https://joakimcarlsson.github.io/ai/advanced/tracing/): Built-in OpenTelemetry instrumentation for all provider calls and agent execution.
- [Configuration](https://joakimcarlsson.github.io/ai/advanced/configuration/): All LLM providers include automatic retry with exponential backoff and jitter.