## Recommendations

- Best accuracy: `O3`
- Fastest: `llama4` on Groq
- Balanced (fast + cheap + clever): `gemini-flash-latest` or `gpt-4.1-mini`
## OpenAI example

The `O3` model is recommended for best performance.
.env
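A minimal `.env` for this example, assuming the standard `OPENAI_API_KEY` variable name:

```bash
OPENAI_API_KEY=
```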
You can use any OpenAI-compatible model by passing the model name to the `ChatOpenAI` class using a custom URL (or any other parameter that would go into the normal OpenAI API call).

## Anthropic example
.env
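A minimal `.env` for this example, assuming the standard `ANTHROPIC_API_KEY` variable name:

```bash
ANTHROPIC_API_KEY=
```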
## Azure OpenAI example
.env
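A minimal `.env` for this example; the variable names below follow the common Azure OpenAI convention and are an assumption:

```bash
AZURE_OPENAI_ENDPOINT=
AZURE_OPENAI_KEY=
```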
## Gemini example

> [!IMPORTANT]
> `GEMINI_API_KEY` was the old environment variable name; as of 2025-05 it should be called `GOOGLE_API_KEY`.
.env
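A minimal `.env` for this example, using the `GOOGLE_API_KEY` name noted above:

```bash
GOOGLE_API_KEY=
```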
## AWS Bedrock example

AWS Bedrock provides access to multiple model providers through a single API. We support both a general AWS Bedrock client and provider-specific convenience classes.

### General AWS Bedrock (supports all providers)

### Anthropic Claude via AWS Bedrock (convenience class)
### AWS Authentication

Supported credential sources:

- Environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`)
- AWS profiles and credential files
- IAM roles (when running on EC2)
- Session tokens for temporary credentials
- AWS SSO authentication (`aws_sso_auth=True`)

Required environment variables:

.env
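If you authenticate with static credentials, a minimal `.env` might look like this (the region value is only an illustration):

```bash
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=us-east-1
```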
## Groq example
.env
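A minimal `.env` for this example, assuming the standard `GROQ_API_KEY` variable name:

```bash
GROQ_API_KEY=
```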
## Ollama

- Install Ollama: https://github.com/ollama/ollama
- Run `ollama serve` to start the server.
- In a new terminal, install the model you want to use: `ollama pull llama3.1:8b` (this download is about 4.9GB).
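Because Ollama exposes an OpenAI-compatible endpoint, you can point the `ChatOpenAI` class described above at the local server once it is running. A sketch of the parameters involved (the dummy `api_key` value is an assumption; a local Ollama server does not validate keys):

```python
# Sketch: parameters you might pass to an OpenAI-compatible client
# (such as the ChatOpenAI class) to reach a locally running Ollama server.
ollama_params = {
    "model": "llama3.1:8b",                   # the model pulled in the steps above
    "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    "api_key": "ollama",                      # placeholder; local Ollama ignores it
}

print(ollama_params["base_url"])  # → http://localhost:11434/v1
```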
## Langchain

Example of how to use Langchain with Browser Use.

## Qwen example

Currently, only `qwen-vl-max` is recommended for Browser Use. Other Qwen models, including `qwen-max`, have issues with the action schema format.
Smaller Qwen models may return incorrect action schema formats (e.g., `actions: [{"go_to_url": "google.com"}]` instead of `[{"go_to_url": {"url": "google.com"}}]`). If you want to use other models, add concrete examples of the correct action format to your prompt.
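The difference between the two shapes can be checked mechanically. The validator below is a hypothetical sketch, not Browser Use's actual schema validation; it only illustrates that well-formed actions wrap their parameters in an object:

```python
def is_valid_action(action: dict) -> bool:
    """Return True if every action maps its name to a dict of parameters.

    Hypothetical check for illustration only: well-formed actions wrap
    parameters in an object, e.g. {"go_to_url": {"url": "google.com"}},
    while malformed output from smaller models passes the value directly,
    e.g. {"go_to_url": "google.com"}.
    """
    return all(isinstance(params, dict) for params in action.values())

print(is_valid_action({"go_to_url": {"url": "google.com"}}))  # → True
print(is_valid_action({"go_to_url": "google.com"}))           # → False
```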
.env
## ModelScope example
.env