Browser Use natively supports 15+ LLM providers. Most providers accept any model string. Check each provider’s docs to see which models are available.
Which model should I use? See our benchmark results and recommendations for detailed comparisons across real-world browser tasks.
ChatBrowserUse() is our optimized in-house model, matching the accuracy of top models while completing tasks 3-5x faster. See our blog post for details.
```python
from browser_use import Agent, ChatBrowserUse

# Initialize the model (defaults to bu-latest)
llm = ChatBrowserUse()

# Or use the premium model
llm = ChatBrowserUse(model='bu-2-0')

# Create agent with the model
agent = Agent(
    task="...",  # Your task here
    llm=llm,
)
```
Required environment variables:
Get your API key from the Browser Use Cloud. New users get 5 free tasks.
Available Models
bu-latest or bu-1-0: Default model
bu-2-0: Latest premium model with improved capabilities
Pricing
ChatBrowserUse offers competitive pricing per 1 million tokens:
bu-1-0 / bu-latest (Default)
| Token Type | Price per 1M tokens |
|---|---|
| Input tokens | $0.20 |
| Cached tokens | $0.02 |
| Output tokens | $2.00 |
bu-2-0 (Premium)
| Token Type | Price per 1M tokens |
|---|---|
| Input tokens | $0.60 |
| Cached tokens | $0.06 |
| Output tokens | $3.50 |
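As a quick sanity check, the table prices convert to dollars like this (illustrative helper, not part of the library; the token counts in the example are made up):

```python
# Per-1M-token prices from the tables above, in USD
PRICES = {
    "bu-1-0": {"input": 0.20, "cached": 0.02, "output": 2.00},
    "bu-2-0": {"input": 0.60, "cached": 0.06, "output": 3.50},
}

def estimate_cost(model: str, input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for the given token counts."""
    p = PRICES[model]
    return (
        input_tokens * p["input"]
        + cached_tokens * p["cached"]
        + output_tokens * p["output"]
    ) / 1_000_000

# e.g. 500k input, 200k cached, 50k output tokens on the default model
print(round(estimate_cost("bu-1-0", 500_000, 200_000, 50_000), 4))  # ≈ $0.204
```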
Available models. Also supports Gemma models and Vertex AI via ChatGoogle(model="...", vertexai=True).
As of 2025-05, GEMINI_API_KEY is deprecated; use GOOGLE_API_KEY instead.
```python
from dotenv import load_dotenv

from browser_use import Agent, ChatGoogle

# Read GOOGLE_API_KEY into env
load_dotenv()

# Initialize the model
llm = ChatGoogle(model='gemini-2.5-flash')

# Create agent with the model
agent = Agent(
    task="Your task here",
    llm=llm,
)
```
Required environment variables:
Available models
```python
from browser_use import Agent, ChatOpenAI

# Initialize the model
llm = ChatOpenAI(
    model="gpt-5",
)

# Create agent with the model
agent = Agent(
    task="...",  # Your task here
    llm=llm,
)
```
Required environment variables:
You can use any OpenAI compatible model by passing the model name to the
ChatOpenAI class using a custom URL (or any other parameter that would go
into the normal OpenAI API call).
Available models. Coordinate clicking is automatically enabled for claude-sonnet-4-* and claude-opus-4-* models.
```python
from browser_use import Agent, ChatAnthropic

# Initialize the model
llm = ChatAnthropic(
    model="claude-sonnet-4-6",
)

# Create agent with the model
agent = Agent(
    task="...",  # Your task here
    llm=llm,
)
```
Required environment variables:
Available models
```python
from browser_use import Agent, ChatAzureOpenAI

# Initialize the model
llm = ChatAzureOpenAI(
    model="o4-mini",
)

# Create agent with the model
agent = Agent(
    task="...",  # Your task here
    llm=llm,
)
```
Required environment variables:
```
AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com/
AZURE_OPENAI_API_KEY=
```
Using the Responses API (for GPT-5.1 Codex models)
Azure OpenAI now requires api_version >= 2025-03-01-preview for certain models like gpt-5.1-codex-mini.
These models only support the Responses API instead of the Chat Completions API.
Browser Use automatically detects and uses the Responses API for these models:
gpt-5.1-codex, gpt-5.1-codex-mini, gpt-5.1-codex-max
gpt-5-codex, codex-mini-latest
computer-use-preview
```python
from browser_use import Agent, ChatAzureOpenAI

# Auto-detection (recommended) - uses Responses API for gpt-5.1-codex-mini
llm = ChatAzureOpenAI(
    model="gpt-5.1-codex-mini",
    api_version="2025-03-01-preview",  # Required for Responses API
)

# Or explicitly enable/disable Responses API for any model
llm = ChatAzureOpenAI(
    model="gpt-4o",
    api_version="2025-03-01-preview",
    use_responses_api=True,  # Force Responses API (True/False/'auto')
)

agent = Agent(
    task="...",
    llm=llm,
)
```
The use_responses_api parameter accepts:
'auto' (default): Automatically uses Responses API for models that require it
True: Force use of the Responses API
False: Force use of the Chat Completions API
Available models. AWS Bedrock provides access to multiple model providers through a single API. We support both a general AWS Bedrock client and provider-specific convenience classes. Install with pip install "browser-use[aws]".
General AWS Bedrock (supports all providers)
```python
from browser_use import Agent
from browser_use.llm import ChatAWSBedrock

# Works with any Bedrock model (Anthropic, Meta, AI21, etc.)
llm = ChatAWSBedrock(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",  # or any Bedrock model
    aws_region="us-east-1",
)

# Create agent with the model
agent = Agent(
    task="Your task here",
    llm=llm,
)
```
Anthropic Claude via AWS Bedrock (convenience class)
```python
from browser_use import Agent
from browser_use.llm import ChatAnthropicBedrock

# Anthropic-specific class with Claude defaults
llm = ChatAnthropicBedrock(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",
    aws_region="us-east-1",
)

# Create agent with the model
agent = Agent(
    task="Your task here",
    llm=llm,
)
```
AWS Authentication
Required environment variables:
```
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=us-east-1
```
You can also use AWS profiles or IAM roles instead of environment variables. The implementation supports:
- Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION)
- AWS profiles and credential files
- IAM roles (when running on EC2)
- Session tokens for temporary credentials
- AWS SSO authentication (aws_sso_auth=True)
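For example, SSO-based authentication from the list above can be enabled with the aws_sso_auth flag (sketch only; run `aws sso login` first so a valid SSO session exists):

```python
from browser_use import Agent
from browser_use.llm import ChatAWSBedrock

# Authenticate via AWS SSO instead of static access keys
llm = ChatAWSBedrock(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",
    aws_region="us-east-1",
    aws_sso_auth=True,
)

agent = Agent(
    task="Your task here",
    llm=llm,
)
```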
Available models
```python
from browser_use import Agent, ChatGroq

llm = ChatGroq(model="meta-llama/llama-4-maverick-17b-128e-instruct")

agent = Agent(
    task="Your task here",
    llm=llm,
)
```
Required environment variables:
Oracle Cloud Infrastructure (OCI) example
Available models. OCI provides access to various generative AI models including Meta Llama, Cohere, and other providers through their Generative AI service. Install with pip install "browser-use[oci]".
```python
from browser_use import Agent, ChatOCIRaw

# Initialize the OCI model
llm = ChatOCIRaw(
    model_id="ocid1.generativeaimodel.oc1.us-chicago-1.amaaaaaask7dceya...",
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    compartment_id="ocid1.tenancy.oc1..aaaaaaaayeiis5uk2nuubznrekd...",
    provider="meta",  # or "cohere"
    temperature=0.7,
    max_tokens=800,
    top_p=0.9,
    auth_type="API_KEY",
    auth_profile="DEFAULT",
)

# Create agent with the model
agent = Agent(
    task="Your task here",
    llm=llm,
)
```
Required setup:
- Set up the OCI configuration file at ~/.oci/config
- Have access to OCI Generative AI models in your tenancy
- Install the OCI Python SDK: uv add oci or pip install oci
Authentication methods supported:
API_KEY: Uses API key authentication (default)
INSTANCE_PRINCIPAL: Uses instance principal authentication
RESOURCE_PRINCIPAL: Uses resource principal authentication
Ollama
Available models.
- Install Ollama: https://github.com/ollama/ollama
- Run ollama serve to start the server
- In a new terminal, pull the model you want to use: ollama pull llama3.1:8b (4.9 GB download)
```python
from browser_use import Agent, ChatOllama

llm = ChatOllama(model="llama3.1:8b")
```
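The local model then plugs into an Agent like any other provider; a minimal run loop might look like this (sketch, assuming the ollama serve process from step 2 is still running and the agent's async `run()` entry point):

```python
import asyncio

from browser_use import Agent, ChatOllama

async def main():
    llm = ChatOllama(model="llama3.1:8b")
    agent = Agent(task="Your task here", llm=llm)
    await agent.run()

asyncio.run(main())
```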
Langchain
Example of how to use LangChain with Browser Use.
Currently, only qwen-vl-max is recommended for Browser Use. Other Qwen models, including qwen-max, have issues with the action schema format.
Smaller Qwen models may return incorrect action schema formats (e.g., actions: [{"navigate": "google.com"}] instead of [{"navigate": {"url": "google.com"}}]). If you want to use other models, add concrete examples of the correct action format to your prompt.
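To illustrate the difference, the sketch below normalizes the malformed shape into the expected nested form. This is illustrative only, not part of the library; the "url" parameter name is an assumption for this example, and real action schemas vary per action type:

```python
# Convert a bare-value action like {"navigate": "google.com"}
# into the nested form {"navigate": {"url": "google.com"}}.
def normalize_action(action: dict, param_name: str = "url") -> dict:
    name, params = next(iter(action.items()))
    if not isinstance(params, dict):  # bare value instead of a params object
        params = {param_name: params}
    return {name: params}

print(normalize_action({"navigate": "google.com"}))
# {'navigate': {'url': 'google.com'}}
```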
```python
import os

from dotenv import load_dotenv

from browser_use import Agent, ChatOpenAI

load_dotenv()

# Get API key from https://modelstudio.console.alibabacloud.com/?tab=playground#/api-key
api_key = os.getenv('ALIBABA_CLOUD')
base_url = 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1'

llm = ChatOpenAI(model='qwen-vl-max', api_key=api_key, base_url=base_url)

agent = Agent(
    task="Your task here",
    llm=llm,
    use_vision=True,
)
```
Required environment variables:
```python
import os

from dotenv import load_dotenv

from browser_use import Agent, ChatOpenAI

load_dotenv()

# Get API key from https://www.modelscope.cn/docs/model-service/API-Inference/intro
api_key = os.getenv('MODELSCOPE_API_KEY')
base_url = 'https://api-inference.modelscope.cn/v1/'

llm = ChatOpenAI(model='Qwen/Qwen2.5-VL-72B-Instruct', api_key=api_key, base_url=base_url)

agent = Agent(
    task="Your task here",
    llm=llm,
    use_vision=True,
)
```
Required environment variables:
Vercel AI Gateway example
Available models. Vercel AI Gateway provides an OpenAI-compatible API endpoint that acts as a proxy to various AI providers, with features like rate limiting, caching, and monitoring.
```python
import os

from dotenv import load_dotenv

from browser_use import Agent, ChatVercel

load_dotenv()

# Get API key (https://vercel.com/ai-gateway)
api_key = os.getenv('VERCEL_API_KEY')
if not api_key:
    raise ValueError('VERCEL_API_KEY is not set')

# Basic usage
llm = ChatVercel(
    model='openai/gpt-4o',
    api_key=api_key,
)

# With provider options - control which providers are used and in what order.
# This will try Vertex AI first, then fall back to Anthropic if Vertex fails.
llm_with_provider_options = ChatVercel(
    model='anthropic/claude-sonnet-4',
    api_key=api_key,
    provider_options={
        'gateway': {
            'order': ['vertex', 'anthropic']  # Try Vertex AI first, then Anthropic
        }
    },
)

agent = Agent(
    task="Your task here",
    llm=llm,
)
```
Required environment variables:
Available models
```python
from browser_use import Agent, ChatDeepSeek

llm = ChatDeepSeek(model="deepseek-chat")

agent = Agent(
    task="Your task here",
    llm=llm,
)
```
Required environment variables:
Available models
```python
from browser_use import Agent, ChatMistral

llm = ChatMistral(model="mistral-large-latest")

agent = Agent(
    task="Your task here",
    llm=llm,
)
```
Required environment variables:
Available models
```python
from browser_use import Agent, ChatCerebras

llm = ChatCerebras(model="llama3.3-70b")

agent = Agent(
    task="Your task here",
    llm=llm,
)
```
Required environment variables:
Available models. Access 300+ models from any provider through a single API.
```python
from browser_use import Agent, ChatOpenRouter

llm = ChatOpenRouter(model="anthropic/claude-sonnet-4-6")

agent = Agent(
    task="Your task here",
    llm=llm,
)
```
Required environment variables:
LiteLLM
Requires separate install (pip install litellm). Supports any LiteLLM model string — useful when you need a provider not covered by the native integrations above.
```python
from browser_use import Agent
from browser_use.llm.litellm import ChatLiteLLM

llm = ChatLiteLLM(model="openai/gpt-5")

agent = Agent(
    task="Your task here",
    llm=llm,
)
```
Other OpenAI-Compatible Providers
Any provider with an OpenAI-compatible endpoint works via ChatOpenAI with a custom base_url:
Examples available:
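As a sketch of the pattern (the endpoint URL, model name, and PROVIDER_API_KEY variable below are placeholders, not a real provider):

```python
import os

from browser_use import Agent, ChatOpenAI

# Substitute your provider's base URL, model name, and API key variable
llm = ChatOpenAI(
    model="your-model-name",
    base_url="https://api.example.com/v1",
    api_key=os.getenv("PROVIDER_API_KEY"),
)

agent = Agent(
    task="Your task here",
    llm=llm,
)
```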