Model Providers Overview¶
Swarms supports a vast array of model providers, giving you the flexibility to choose the best model for your specific use case. Whether you need high-performance inference, cost-effective solutions, or specialized capabilities, Swarms has you covered.
Supported Model Providers¶
| Provider | Description | Documentation |
|---|---|---|
| OpenAI | Industry-leading language models including GPT-4, GPT-4o, and GPT-4o-mini. Perfect for general-purpose tasks, creative writing, and complex reasoning. | OpenAI Integration |
| Anthropic/Claude | Advanced AI models known for their safety, helpfulness, and reasoning capabilities. Claude models excel at analysis, coding, and creative tasks. | Claude Integration |
| Groq | Ultra-fast inference platform offering real-time AI responses. Ideal for applications requiring low latency and high throughput. | Groq Integration |
| Cohere | Enterprise-grade language models with strong performance on business applications, text generation, and semantic search. | Cohere Integration |
| DeepSeek | Advanced reasoning models including the DeepSeek Reasoner (R1). Excellent for complex problem-solving and analytical tasks. | DeepSeek Integration |
| Ollama | Local model deployment platform allowing you to run open-source models on your own infrastructure. No API keys required. | Ollama Integration |
| OpenRouter | Unified API gateway providing access to hundreds of models from various providers through a single interface. | OpenRouter Integration |
| XAI | xAI's Grok models offering unique capabilities for research, analysis, and creative tasks with advanced reasoning abilities. | XAI Integration |
| Llama4 | Meta's latest open-source language models including Llama-4-Maverick and Llama-4-Scout variants with expert routing capabilities. | Llama4 Integration |
| Azure OpenAI | Enterprise-grade OpenAI models through Microsoft's cloud infrastructure with enhanced security, compliance, and enterprise features. | Azure Integration |
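Across providers, the main thing that changes in your code is the `model_name` string. As a rough orientation, here is a hypothetical lookup table of example identifiers; the exact strings are assumptions based on LiteLLM-style `provider/model` naming, so check each provider's integration guide for the current list:

```python
# Illustrative only: example model_name strings per provider.
# These identifiers are assumptions, not an authoritative list.
EXAMPLE_MODEL_NAMES = {
    "openai": "gpt-4o-mini",
    "anthropic": "claude-3-sonnet-20240229",
    "groq": "groq/llama-3.1-8b-instant",
    "deepseek": "deepseek/deepseek-reasoner",
    "ollama": "ollama/llama2",
}

def example_model_for(provider: str) -> str:
    """Return an example model_name string for a provider key."""
    try:
        return EXAMPLE_MODEL_NAMES[provider.lower()]
    except KeyError:
        raise ValueError(f"No example recorded for provider: {provider}")
```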
Quick Start¶
All model providers follow a consistent pattern in Swarms. Here's the basic template:
```python
from swarms import Agent
import os
from dotenv import load_dotenv

load_dotenv()

# Initialize agent with your chosen model
agent = Agent(
    agent_name="Your-Agent-Name",
    model_name="gpt-4o-mini",  # Varies by provider
    system_prompt="Your system prompt here",
    agent_description="Description of what your agent does.",
)

# Run your agent
response = agent.run("Your query here")
```
Model Selection Guide¶
For High-Performance Applications¶
- **OpenAI GPT-4o**: Best overall performance and reasoning
- **Anthropic Claude**: Excellent safety and analysis capabilities
- **DeepSeek R1**: Advanced reasoning and problem-solving
For Cost-Effective Solutions¶
- **OpenAI GPT-4o-mini**: Great performance at lower cost
- **Ollama**: Free local deployment
- **OpenRouter**: Access to cost-effective models
For Real-Time Applications¶
- **Groq**: Ultra-fast inference
For Specialized Tasks¶
- **Llama4**: Expert routing for complex workflows
- **XAI Grok**: Advanced research capabilities
- **Cohere**: Strong business applications
Environment Setup¶
Most providers require API keys. Add them to your .env file:
```bash
# OpenAI
OPENAI_API_KEY=your_openai_key

# Anthropic
ANTHROPIC_API_KEY=your_anthropic_key

# Groq
GROQ_API_KEY=your_groq_key

# Cohere
COHERE_API_KEY=your_cohere_key

# DeepSeek
DEEPSEEK_API_KEY=your_deepseek_key

# OpenRouter
OPENROUTER_API_KEY=your_openrouter_key

# XAI
XAI_API_KEY=your_xai_key

# Azure OpenAI
AZURE_API_KEY=your_azure_openai_api_key
AZURE_API_BASE=https://your-resource-name.openai.azure.com/
AZURE_API_VERSION=2024-02-15-preview
```
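A missing key typically surfaces as an opaque authentication error at request time, so it can help to fail fast at startup. A minimal sketch, assuming the environment variable names listed above; `require_env_keys` is a hypothetical helper, not part of Swarms:

```python
import os

def require_env_keys(*names: str) -> None:
    """Raise early if any required environment variable is unset or empty."""
    # Hypothetical helper for illustration; not part of the Swarms API.
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise EnvironmentError(
            f"Missing environment variables: {', '.join(missing)}"
        )

# Example: an OpenAI-backed agent needs only OPENAI_API_KEY
# require_env_keys("OPENAI_API_KEY")
```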
**No API Key Required**

Ollama can be run locally without API keys, making it perfect for development and testing.
Custom Wrappers for Local Models¶
For advanced use cases, you can create custom wrapper classes for local model deployments. This approach gives you full control over the LLM interface:
vLLM Custom Wrapper¶
vLLM is a high-performance LLM inference engine. Create a custom wrapper to use it with Swarms:
```python
from swarms import Agent
from vllm_wrapper import VLLMWrapper

# Create custom vLLM wrapper
vllm_llm = VLLMWrapper(
    model_name="meta-llama/Llama-2-7b-chat-hf",
    temperature=0.7,
)

# Use with Agent
agent = Agent(
    agent_name="vLLM-Agent",
    llm=vllm_llm,  # Pass custom wrapper
    max_loops=1,
)
```
See the vLLM Custom Wrapper Guide for detailed instructions.
Ollama Custom Wrapper¶
Create a custom wrapper for Ollama to have full control over the interface:
```python
from swarms import Agent
from ollama_wrapper import OllamaWrapper

# Create custom Ollama wrapper
ollama_llm = OllamaWrapper(
    model_name="llama2",
    temperature=0.7,
)

# Use with Agent
agent = Agent(
    agent_name="Ollama-Agent",
    llm=ollama_llm,  # Pass custom wrapper
    max_loops=1,
)
```
See the Ollama Guide for both simple and custom wrapper approaches.
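Both wrappers above follow the same shape: hold a client plus sampling settings, and expose a method the agent can call with a task string. A minimal sketch of that shape, where the `run(task) -> str` interface is an assumption about what `Agent` expects from a custom `llm`, and `EchoBackend` is a stand-in for a real vLLM engine or Ollama client:

```python
class EchoBackend:
    """Stand-in for a real inference client (vLLM engine, Ollama API, ...)."""

    def generate(self, prompt: str) -> str:
        return f"[echo] {prompt}"


class LocalModelWrapper:
    """Hypothetical custom-wrapper sketch: stores a backend and sampling
    settings, and exposes run() for the agent to call."""

    def __init__(self, backend, model_name: str, temperature: float = 0.7):
        self.backend = backend
        self.model_name = model_name
        self.temperature = temperature

    def run(self, task: str) -> str:
        # A real wrapper would pass model_name and temperature to the backend.
        return self.backend.generate(task)


wrapper = LocalModelWrapper(EchoBackend(), model_name="llama2")
```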
Advanced Features¶
Multi-Model Workflows¶
Swarms allows you to create workflows that use different models for different tasks:
```python
from swarms import Agent, ConcurrentWorkflow

# Research agent using Claude for analysis
research_agent = Agent(
    agent_name="Research-Agent",
    model_name="claude-3-sonnet-20240229",
    system_prompt="You are a research expert.",
)

# Creative agent using GPT-4.1 for content generation
creative_agent = Agent(
    agent_name="Creative-Agent",
    model_name="gpt-4.1",
    system_prompt="You are a creative content expert.",
)

# Workflow combining both agents
workflow = ConcurrentWorkflow(
    name="Research-Creative-Workflow",
    agents=[research_agent, creative_agent],
)
```
Model Routing¶
Automatically route tasks to the most appropriate model:
```python
from swarms import Agent, ModelRouter

# Define model preferences for different task types
model_router = ModelRouter(
    models={
        "analysis": "claude-3-sonnet-20240229",
        "creative": "gpt-4.1",
        "fast": "gpt-4o-mini",
        "local": "ollama/llama2",
    }
)

# Agent will automatically choose the best model
agent = Agent(
    agent_name="Smart-Agent",
    llm=model_router,
    system_prompt="You are a versatile assistant.",
)
```
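Conceptually, the routing step is a lookup from a task category to a model name. A toy sketch of that idea, where `classify_task` and its keyword rules are entirely hypothetical; ModelRouter's actual selection logic may differ:

```python
# Same preference table as the ModelRouter example above.
MODEL_PREFERENCES = {
    "analysis": "claude-3-sonnet-20240229",
    "creative": "gpt-4.1",
    "fast": "gpt-4o-mini",
    "local": "ollama/llama2",
}

def classify_task(task: str) -> str:
    """Naive keyword-based classifier, for illustration only."""
    lowered = task.lower()
    if any(word in lowered for word in ("analyze", "compare", "evaluate")):
        return "analysis"
    if any(word in lowered for word in ("write", "story", "poem")):
        return "creative"
    return "fast"

def route_model(task: str) -> str:
    """Pick a model_name for a task using the preference table."""
    return MODEL_PREFERENCES[classify_task(task)]
```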
Getting Help¶
- **Documentation**: Each provider has detailed documentation with examples
- **Community**: Join the Swarms community for support and best practices
- **Issues**: Report bugs and request features on GitHub
- **Discussions**: Share your use cases and learn from others
**Ready to Get Started?**
Choose a model provider from the table above and follow the detailed integration guide. Each provider offers unique capabilities that can enhance your Swarms applications.