# Agent Settings

Learn how to configure the agent.

## Overview

The `Agent` class is the core component of Browser Use that handles browser automation. Here are the main configuration options you can use when initializing an agent.
## Basic Settings

### Required Parameters

- `task`: The instruction for the agent to execute
- `llm`: A LangChain chat model instance. See LangChain Models for supported models.
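A minimal initialization might look like this (the model name and task are illustrative; an OpenAI API key is assumed to be set in the environment):

```python
from langchain_openai import ChatOpenAI
from browser_use import Agent

agent = Agent(
    task="Compare the price of gpt-4o and DeepSeek-V3",
    llm=ChatOpenAI(model="gpt-4o"),
)
```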
## Agent Behavior

Control how the agent operates.

### Behavior Parameters

- `controller`: Registry of functions the agent can call. Defaults to the base `Controller`. See Custom Functions for details.
- `use_vision`: Enable/disable vision capabilities. Defaults to `True`.
  - When enabled, the model processes visual information from web pages
  - Disable to reduce costs or use models without vision support
  - For GPT-4o, image processing costs approximately 800-1000 tokens (~$0.002 USD) per image, though this depends on the configured screen size
- `save_conversation_path`: Path to save the complete conversation history. Useful for debugging.
- `override_system_message`: Completely replace the default system prompt with a custom one.
- `extend_system_message`: Add additional instructions to the default system prompt.
Vision capabilities are recommended for better web interaction understanding, but can be disabled to reduce costs or when using models without vision support.
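A sketch combining these options, assuming an `llm` instance like the one above (the path and prompt text are placeholders):

```python
from browser_use import Agent

agent = Agent(
    task="your task",
    llm=llm,
    use_vision=True,                             # process screenshots of web pages
    save_conversation_path="logs/conversation",  # save chat history for debugging
    extend_system_message="Always close cookie banners first.",  # appended to the default prompt
)
```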
## Reuse Existing Browser Context

By default, browser-use launches its own built-in browser using Playwright Chromium. You can also connect to a remote browser or pass any of the following existing Playwright objects to the `Agent`: `page`, `browser_context`, `browser`, `browser_session`, or `browser_profile`. These all get passed down to create a `BrowserSession` for the `Agent`.
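For instance, a sketch of handing over an existing Playwright `page` (assuming you already created it with Playwright's async API and have an `llm` instance):

```python
from browser_use import Agent

# `page` is an existing playwright.async_api.Page created elsewhere in your code
agent = Agent(
    task="your task",
    llm=llm,
    page=page,  # browser, browser_context, browser_session, and browser_profile work the same way
)
```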
For example, you can connect to an existing browser over CDP.
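A sketch, assuming the browser was started with `--remote-debugging-port=9222` and that your version exposes a `cdp_url` parameter on `BrowserSession`:

```python
from browser_use import Agent, BrowserSession

# connect over the Chrome DevTools Protocol
browser_session = BrowserSession(cdp_url="http://localhost:9222")

agent = Agent(
    task="your task",
    llm=llm,
    browser_session=browser_session,
)
```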
Or you can connect to a locally running Chrome instance.
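A sketch for driving a locally installed Chrome; treat the `executable_path` parameter name and the binary path as assumptions to verify against your version's docs:

```python
from browser_use import Agent, BrowserSession

browser_session = BrowserSession(
    # macOS path shown; adjust for your platform
    executable_path="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
)

agent = Agent(
    task="your task",
    llm=llm,
    browser_session=browser_session,
)
```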
See Connect to your Browser for more info.
You can reuse the same `BrowserSession` after an agent has completed running. If you do nothing, the browser will be automatically closed on `run()` completion, but only if browser-use launched it.
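A sketch of sharing one session across two agents (API details may differ slightly between versions):

```python
from browser_use import Agent, BrowserSession

# inside an async function
browser_session = BrowserSession()  # created by you, so run() will not close it

agent1 = Agent(task="first task", llm=llm, browser_session=browser_session)
await agent1.run()

# the browser is still open; hand the same session to a second agent
agent2 = Agent(task="second task", llm=llm, browser_session=browser_session)
await agent2.run()
```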
## Running the Agent

The agent is executed using the async `run()` method.
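For example, a minimal script (the model name is illustrative):

```python
import asyncio

from langchain_openai import ChatOpenAI
from browser_use import Agent

async def main():
    agent = Agent(task="your task", llm=ChatOpenAI(model="gpt-4o"))
    history = await agent.run(max_steps=100)
    print(history.final_result())

asyncio.run(main())
```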
- `max_steps` (default: `100`): Maximum number of steps the agent can take during execution. This prevents infinite loops and helps control execution time.
## Agent History

The method returns an `AgentHistoryList` object containing the complete execution history. This history is invaluable for debugging, analysis, and creating reproducible scripts.

The `AgentHistoryList` provides many helper methods to analyze the execution:

- `final_result()`: Get the final extracted content
- `is_done()`: Check if the agent completed successfully
- `has_errors()`: Check if any errors occurred
- `model_thoughts()`: Get the agent’s reasoning process
- `action_results()`: Get results of all actions
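For example, after a run completes:

```python
# inside an async function, with `agent` constructed as above
history = await agent.run()

if history.is_done() and not history.has_errors():
    print("Final result:", history.final_result())

for thought in history.model_thoughts():
    print(thought)
```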
For a complete list of helper methods and detailed history analysis capabilities, refer to the `AgentHistoryList` source code.
## Run initial actions without LLM

With the example below you can run initial actions without the LLM. Specify each action as a dictionary where the key is the action name and the value is the action parameters. You can find all our actions in the Controller source code.
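A sketch using two actions from the default controller, `open_tab` and `scroll_down` (verify the exact names and parameters against the Controller source for your version):

```python
from browser_use import Agent

initial_actions = [
    {"open_tab": {"url": "https://en.wikipedia.org/wiki/Randomness"}},
    {"scroll_down": {"amount": 1000}},
]

agent = Agent(
    task="What theories are displayed on the page?",
    initial_actions=initial_actions,
    llm=llm,  # llm defined elsewhere
)
```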
## Run with message context

You can configure the agent and provide a separate message to help the LLM understand the task better.
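For example (the model choice is illustrative):

```python
from langchain_openai import ChatOpenAI
from browser_use import Agent

agent = Agent(
    task="your task",
    message_context="Additional information about the task",
    llm=ChatOpenAI(model="gpt-4o"),
)
```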
## Run with planner model

You can configure the agent to use a separate planner model for high-level task planning.
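A sketch with illustrative model choices:

```python
from langchain_openai import ChatOpenAI
from browser_use import Agent

llm = ChatOpenAI(model="gpt-4o")
planner_llm = ChatOpenAI(model="gpt-4o-mini")  # smaller, cheaper model for planning

agent = Agent(
    task="your task",
    llm=llm,
    planner_llm=planner_llm,
    use_vision_for_planner=False,  # disable vision for the planner
    planner_interval=4,            # plan every 4 steps
)
```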
### Planner Parameters

- `planner_llm`: A LangChain chat model instance used for high-level task planning. Can be a smaller/cheaper model than the main LLM.
- `use_vision_for_planner`: Enable/disable vision capabilities for the planner model. Defaults to `True`.
- `planner_interval`: Number of steps between planning phases. Defaults to `1`.
Using a separate planner model can help:
- Reduce costs by using a smaller model for high-level planning
- Improve task decomposition and strategic thinking
- Better handle complex, multi-step tasks
The planner model is optional. If not specified, the agent runs without a separate planning phase.
## Optional Parameters

- `message_context`: Additional information about the task to help the LLM understand it better.
- `initial_actions`: List of initial actions to run before the main task.
- `max_actions_per_step`: Maximum number of actions to run in a step. Defaults to `10`.
- `max_failures`: Maximum number of failures before giving up. Defaults to `3`.
- `retry_delay`: Time to wait between retries in seconds when rate limited. Defaults to `10`.
- `generate_gif`: Enable/disable GIF generation. Defaults to `False`. Set to `True` or a string path to save the GIF.
## Memory Management

Browser Use includes a procedural memory system using Mem0 that automatically summarizes the agent’s conversation history at regular intervals to optimize context window usage during long tasks.
### Memory Parameters

- `enable_memory`: Enable/disable the procedural memory system. Defaults to `True`.
- `memory_config`: A `MemoryConfig` Pydantic model instance (required if `enable_memory` is `True`). Dictionary format is not supported.
### Using MemoryConfig

You must configure the memory system using the `MemoryConfig` Pydantic model for a type-safe approach.
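A minimal sketch, assuming an existing `llm` (the import path for `MemoryConfig` may vary by version):

```python
from browser_use import Agent
from browser_use.agent.memory import MemoryConfig  # import path may vary by version

agent = Agent(
    task="your task",
    llm=llm,
    enable_memory=True,
    memory_config=MemoryConfig(
        llm_instance=llm,           # Mem0 reuses the agent's LLM
        agent_id="my_custom_agent",
        memory_interval=15,         # summarize every 15 steps
    ),
)
```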
The `MemoryConfig` model provides these configuration options:
#### Memory Settings

- `agent_id`: Unique identifier for the agent (default: `"browser_use_agent"`). Essential for persistent memory sessions if using a persistent vector store.
- `memory_interval`: Number of steps between memory summarization (default: `10`)
#### LLM Settings (for Mem0’s internal operations)

- `llm_instance`: The LangChain `BaseChatModel` instance that Mem0 will use for its internal summarization and processing. You must pass the same LLM instance used by the main agent, or another compatible one, here.
#### Embedder Settings

- `embedder_provider`: Provider for embeddings (`'openai'`, `'gemini'`, `'ollama'`, or `'huggingface'`)
- `embedder_model`: Model name for the embedder
- `embedder_dims`: Dimensions for the embeddings
#### Vector Store Settings

- `vector_store_provider`: Choose the vector store backend. Supported options include: `'faiss'` (default), `'qdrant'`, `'pinecone'`, `'supabase'`, `'elasticsearch'`, `'chroma'`, `'weaviate'`, `'milvus'`, `'pgvector'`, `'upstash_vector'`, `'vertex_ai_vector_search'`, `'azure_ai_search'`, `'lancedb'`, `'mongodb'`, `'redis'`, `'memory'` (in-memory, non-persistent).
- `vector_store_collection_name`: (Optional) Specify a custom name for the collection or index in your vector store. If not provided, a default name is generated (especially for local stores like FAISS/Chroma) or used by Mem0.
- `vector_store_base_path`: Path for local vector stores like FAISS or Chroma (e.g., `/tmp/mem0`). Default is `/tmp/mem0`.
- `vector_store_config_override`: (Optional) A dictionary to provide or override specific configuration parameters required by Mem0 for the chosen `vector_store_provider`. This is where you’d put connection details like `host`, `port`, `api_key`, `url`, `environment`, etc., for cloud-based or server-based vector stores.
The model automatically sets appropriate defaults based on the LLM being used:

- For `ChatOpenAI`: Uses OpenAI’s `text-embedding-3-small` embeddings
- For `ChatGoogleGenerativeAI`: Uses Gemini’s `models/text-embedding-004` embeddings
- For `ChatOllama`: Uses Ollama’s `nomic-embed-text` embeddings
- Default: Uses Hugging Face’s `all-MiniLM-L6-v2` embeddings
Important:

- Always pass a properly constructed `MemoryConfig` object to the `memory_config` parameter.
- Ensure the `llm_instance` is provided to `MemoryConfig` so Mem0 can perform its operations.
- For persistent memory across agent runs or for shared memory, choose a scalable vector store provider (like Qdrant, Pinecone, etc.) and configure it correctly using `vector_store_provider` and `vector_store_config_override`. The default `'faiss'` provider stores data locally in `vector_store_base_path`.
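As a sketch, pointing the memory system at a server-based store such as Qdrant might look like this; the keys inside `vector_store_config_override` are defined by Mem0, so treat them as illustrative:

```python
from browser_use.agent.memory import MemoryConfig  # import path may vary by version

memory_config = MemoryConfig(
    llm_instance=llm,             # llm defined elsewhere
    agent_id="persistent_agent",  # stable id so memories can be found across runs
    vector_store_provider="qdrant",
    vector_store_collection_name="browser_use_memories",
    vector_store_config_override={
        "host": "localhost",      # connection details for your Qdrant server
        "port": 6333,
    },
)
```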
### How Memory Works

When enabled, the agent periodically compresses its conversation history into concise summaries:

- Every `memory_interval` steps, the agent reviews its recent interactions.
- It uses Mem0 (configured with your chosen LLM and vector store) to create a procedural memory summary.
- The original messages in the agent’s active context are replaced with this summary, reducing token usage.
- This process helps maintain important context while freeing up the context window for new information.
### Disabling Memory

If you want to disable the memory system (for debugging or for shorter tasks), set `enable_memory` to `False`.
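For example, with an existing `llm`:

```python
from browser_use import Agent

agent = Agent(
    task="your task",
    llm=llm,
    enable_memory=False,
)
```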
Disabling memory may be useful for debugging or short tasks, but for longer tasks, it can lead to context window overflow as the conversation history grows. The memory system helps maintain performance during extended sessions.