BYOM (Bring Your Own Model)

Use Ollama, LocalAI, vLLM, LM Studio, or any OpenAI-compatible inference server.

Setup

// 1. Create model
llamaModel := model.NewCustomModel(
    model.WithModelID("llama3.2"),
    model.WithAPIModel("llama3.2:latest"),
)

// 2. Register provider
ollama := llm.RegisterCustomProvider("ollama", llm.CustomProviderConfig{
    BaseURL:      "http://localhost:11434/v1",
    DefaultModel: llamaModel,
})

// 3. Use it
client, _ := llm.NewLLM(ollama)
response, _ := client.SendMessages(ctx, messages, nil)

Supported Servers

Any server that exposes an OpenAI-compatible API works:

  • Ollama: http://localhost:11434/v1
  • LocalAI: http://localhost:8080/v1
  • vLLM: http://localhost:8000/v1
  • LM Studio: http://localhost:1234/v1
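"OpenAI-compatible" means the server accepts the OpenAI chat-completions request shape at POST {BaseURL}/chat/completions — which is what the registered provider talks to under the hood. As a sketch of what that wire format looks like (the struct and helper names below are illustrative, not part of this library), a request to Ollama could be built by hand like this:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatRequest mirrors the minimal OpenAI chat-completions payload
// that OpenAI-compatible servers accept.
type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// newChatRequest builds a POST to an OpenAI-compatible
// /chat/completions endpoint. baseURL is the same value you would
// pass as BaseURL above, e.g. "http://localhost:11434/v1" for Ollama.
func newChatRequest(baseURL, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []message{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost,
		baseURL+"/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newChatRequest("http://localhost:11434/v1", "llama3.2:latest", "Hello!")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
	// POST http://localhost:11434/v1/chat/completions
}
```

Swapping servers is just a matter of changing the base URL — the payload stays the same, which is why one CustomProviderConfig covers all of them.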

See example/byom/main.go for a complete example.