Gemini CLI documentation
Gemini CLI brings the power of Gemini models directly into your terminal. Use it to understand code, automate tasks, and build workflows with your local project context.
Install
npm install -g @google/gemini-cli
Get started
Jump into Gemini CLI.
- Quickstart: Your first session with Gemini CLI.
- Installation: How to install Gemini CLI on your system.
- Authentication: Setup instructions for personal and enterprise accounts.
- CLI cheatsheet: A quick reference for common commands and options.
- Gemini 3 on Gemini CLI: Learn about Gemini 3 support in Gemini CLI.
Use Gemini CLI
User-focused guides and tutorials for daily development workflows.
- File management: How to work with local files and directories.
- Get started with Agent Skills: Equip Gemini CLI with specialized expertise.
- Manage context and memory: Managing persistent instructions and facts.
- Execute shell commands: Executing system commands safely.
- Manage sessions and history: Resuming, managing, and rewinding conversations.
- Plan tasks with todos: Using todos for complex workflows.
- Web search and fetch: Searching and fetching content from the web.
- Set up an MCP server: Connect Gemini CLI to Model Context Protocol (MCP) servers.
- Automate tasks: Run Gemini CLI in scripts and automated workflows.
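To illustrate the MCP setup mentioned above: MCP servers are declared in settings.json under an mcpServers block. The server name, package, and environment variable below are hypothetical placeholders, not a real server — a minimal sketch might look like:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": {
        "EXAMPLE_API_KEY": "$EXAMPLE_API_KEY"
      }
    }
  }
}
```

Gemini CLI launches the configured command and makes the tools the server advertises available to the model.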
Features
Technical documentation for each capability of Gemini CLI.
- Agent Skills: Give the agent specialized expertise for specific tasks.
- Checkpointing: Automatic session snapshots.
- Headless mode: Programmatic and scripting interface.
- Hooks: Customize Gemini CLI behavior with scripts.
- IDE integration: Integrate Gemini CLI with your favorite IDE.
- MCP servers: Connect external tools and data sources via the Model Context Protocol.
- Model routing: Automatic fallback resilience.
- Model selection: Choose the best model for your needs.
- Plan mode: Use a safe, read-only mode for planning complex changes.
- Subagents: Use specialized agents for specific tasks.
- Remote subagents: Connect to and use remote agents.
- Rewind: Rewind and replay sessions.
- Sandboxing: Isolate tool execution.
- Settings: Full configuration reference.
- Telemetry: Usage and performance metric details.
- Token caching: Performance optimization.
Configuration
Settings and customization options for Gemini CLI.
- Custom commands: Personalized shortcuts.
- Enterprise configuration: Professional environment controls.
- Ignore files (.geminiignore): Exclusion pattern reference.
- Model configuration: Fine-tune generation parameters like temperature and thinking budget.
- Project context (GEMINI.md): Technical hierarchy of context files.
- System prompt override: Instruction replacement logic.
- Themes: UI personalization technical guide.
- Trusted folders: Security permission logic.
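As a small illustration for the ignore-file reference above, .geminiignore follows gitignore-style pattern syntax. The entries below are illustrative examples, not required contents:

```
# Keep bulky or sensitive paths out of Gemini CLI's file context
node_modules/
dist/
*.log
.env
```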
Reference
Deep technical documentation and API specifications.
- Command reference: Detailed slash command guide.
- Configuration reference: Settings and environment variables.
- Keyboard shortcuts: Productivity tips.
- Memory import processor: How Gemini CLI processes memory from various sources.
- Policy engine: Fine-grained execution control.
- Tools reference: Information on how tools are defined, registered, and used.
Resources
Support, release history, and legal information.
- FAQ: Answers to frequently asked questions.
- Quota and pricing: Limits and billing details.
- Terms and privacy: Official notices and terms.
- Troubleshooting: Common issues and solutions.
- Uninstall: How to uninstall Gemini CLI.
Development
- Contribution guide: How to contribute to Gemini CLI.
- Integration testing: Running integration tests.
- Issue and PR automation: Automation for issues and pull requests.
- Local development: Setting up a local development environment.
- NPM package structure: The structure of the NPM packages.
Releases
- Release notes: Release notes for all versions.
- Stable release: The latest stable release.
- Preview release: The latest preview release.
Stable release
Released: April 09, 2026
For most users, the latest stable release is recommended. Install it with:
npm install -g @google/gemini-cli
Highlights
- Dynamic Sandbox Expansion: Implemented dynamic sandbox expansion and worktree support for both Linux and Windows, enhancing development flexibility in restricted environments.
- Tool-Based Topic Grouping (Chapters): Introduced “Chapters” to logically group agent interactions based on tool usage and intent, providing a clearer narrative flow in long sessions.
- Enhanced Browser Agent: Added persistent session management, dynamic read-only tool discovery, and sandbox-aware initialization for the browser agent.
- Security & Permission Hardening: Implemented secret visibility lockdown for environment files and integrated integrity controls for Windows sandboxing.
What’s Changed
- fix(acp): handle all InvalidStreamError types gracefully in prompt #24540
- feat(acp): add support for /about command #24649
- feat(acp): add /help command #24839
- feat(evals): centralize test agents into test-utils for reuse by @Samee24 in #23616
- revert: chore(config): disable agents by default by @abhipatel12 in #23672
- fix(plan): update telemetry attribute keys and add timestamp by @Adib234 in #23685
- fix(core): prevent premature MCP discovery completion by @jackwotherspoon in #23637
- feat(browser): add maxActionsPerTask for browser agent setting by @cynthialong0-0 in #23216
- fix(core): improve agent loader error formatting for empty paths by @adamfweidman in #23690
- fix(cli): only show updating spinner when auto-update is in progress by @scidomino in #23709
- Refine onboarding metrics to log the duration explicitly and use the tier name. by @yunaseoul in #23678
- chore(tools): add toJSON to tools and invocations to reduce logging verbosity by @alisa-alisa in #22899
- fix(cli): stabilize copy mode to prevent flickering and cursor resets by @mattKorwel in #22584
- fix(test): move flaky ctrl-c-exit test to non-blocking suite by @mattKorwel in #23732
- feat(skills): add ci skill for automated failure replication by @mattKorwel in #23720
- feat(sandbox): implement forbiddenPaths for OS-specific sandbox managers by @ehedlund in #23282
- fix(core): conditionally expose additional_permissions in shell tool by @galz10 in #23729
- refactor(core): standardize OS-specific sandbox tests and extract linux helper methods by @ehedlund in #23715
- format recently added script by @scidomino in #23739
- fix(ui): prevent over-eager slash subcommand completion by @keithguerin in #20136
- Fix dynamic model routing for gemini 3.1 pro to customtools model by @kevinjwang1 in #23641
- feat(core): support inline agentCardJson for remote agents by @adamfweidman in #23743
- fix(cli): skip console log/info in headless mode by @cynthialong0-0 in #22739
- test(core): install bubblewrap on Linux CI for sandbox integration tests by @ehedlund in #23583
- docs(reference): split tools table into category sections by @sheikhlimon in #21516
- fix(browser): detect embedded URLs in query params to prevent allowedDomains bypass by @tony-shi in #23225
- fix(browser): add proxy bypass constraint to domain restriction system prompt by @tony-shi in #23229
- fix(policy): relax write_file argsPattern in plan mode to allow paths without session ID by @Adib234 in #23695
- docs: fix grammar in CONTRIBUTING and numbering in sandbox docs by @splint-disk-8i in #23448
- fix(acp): allow attachments by adding a permission prompt by @sripasg in #23680
- fix(core): thread AbortSignal to chat compression requests (#20405) by @SH20RAJ in #20778
- feat(core): implement Windows sandbox dynamic expansion Phase 1 and 2.1 by @scidomino in #23691
- Add note about root privileges in sandbox docs by @diodesign in #23314
- docs(core): document agent_card_json string literal options for remote agents by @adamfweidman in #23797
- fix(cli): resolve TTY hang on headless environments by unconditionally resuming process.stdin before React Ink launch by @cocosheng-g in #23673
- fix(ui): cleanup estimated string length hacks in composer by @keithguerin in #23694
- feat(browser): dynamically discover read-only tools by @cynthialong0-0 in #23805
- docs: clarify policy requirement for general.plan.directory in settings schema by @jerop in #23784
- Revert “perf(cli): optimize --version startup time (#23671)” by @scidomino in #23812
- don’t silence errors from wombat by @scidomino in #23822
- fix(ui): prevent escape key from cancelling requests in shell mode by @PrasannaPal21 in #21245
- Changelog for v0.36.0-preview.0 by @gemini-cli-robot in #23702
- feat(core,ui): Add experiment-gated support for gemini flash 3.1 lite by @chrstnb in #23794
- Changelog for v0.36.0-preview.3 by @gemini-cli-robot in #23827
- new linting check: github-actions-pinning by @alisa-alisa in #23808
- fix(cli): show helpful guidance when no skills are available by @Niralisj in #23785
- fix: Chat logs and errors handle tail tool calls correctly by @googlestrobe in #22460
- Don’t try removing a tag from a non-existent release. by @scidomino in #23830
- fix(cli): allow ask question dialog to take full window height by @jacob314 in #23693
- fix(core): strip leading underscores from error types in telemetry by @yunaseoul in #23824
- Changelog for v0.35.0 by @gemini-cli-robot in #23819
- feat(evals): add reliability harvester and 500/503 retry support by @alisa-alisa in #23626
- feat(sandbox): dynamic Linux sandbox expansion and worktree support by @galz10 in #23692
- Merge examples of use into quickstart documentation by @diodesign in #23319
- fix(cli): prioritize primary name matches in slash command search by @sehoon38 in #23850
- Changelog for v0.35.1 by @gemini-cli-robot in #23840
- fix(browser): keep input blocker active across navigations by @kunal-10-cloud in #22562
- feat(core): new skill to look for duplicated code while reviewing PRs by @devr0306 in #23704
- fix(core): replace hardcoded non-interactive ASK_USER denial with explicit policy rules by @ruomengz in #23668
- fix(plan): after exiting plan mode switches model to a flash model by @Adib234 in #23885
- feat(gcp): add development worker infrastructure by @mattKorwel in #23814
- fix(a2a-server): A2A server should execute ask policies in interactive mode by @kschaab in #23831
- feat(core): define TrajectoryProvider interface by @sehoon38 in #23050
- Docs: Update quotas and pricing by @jkcinouye in #23835
- fix(core): allow disabling environment variable redaction by @galz10 in #23927
- feat(cli): enable notifications cross-platform via terminal bell fallback by @genneth in #21618
- feat(sandbox): implement secret visibility lockdown for env files by @DavidAPierce in #23712
- fix(core): remove shell outputChunks buffer caching to prevent memory bloat and sanitize prompt input by @spencer426 in #23751
- feat(core): implement persistent browser session management by @kunal-10-cloud in #21306
- refactor(core): delegate sandbox denial parsing to SandboxManager by @scidomino in #23928
- dep(update) Update Ink version to 6.5.0 by @jacob314 in #23843
- Docs: Update ‘docs-writer’ skill for relative links by @jkcinouye in #21463
- Changelog for v0.36.0-preview.4 by @gemini-cli-robot in #23935
- fix(acp): Update allow approval policy flow for ACP clients to fix config persistence and compatible with TUI by @sripasg in #23818
- Changelog for v0.35.2 by @gemini-cli-robot in #23960
- ACP integration documents by @g-samroberts in #22254
- fix(core): explicitly set error names to avoid bundling renaming issues by @yunaseoul in #23913
- feat(core): subagent isolation and cleanup hardening by @abhipatel12 in #23903
- disable extension-reload test by @scidomino in #24018
- feat(core): add forbiddenPaths to GlobalSandboxOptions and refactor createSandboxManager by @ehedlund in #23936
- refactor(core): improve ignore resolution and fix directory-matching bug by @ehedlund in #23816
- revert(core): support custom base URL via env vars by @spencer426 in #23976
- Increase memory limited for eslint. by @jacob314 in #24022
- fix(acp): prevent crash on empty response in ACP mode by @sripasg in #23952
- feat(core): Land AgentHistoryProvider. by @joshualitt in #23978
- fix(core): switch to subshells for shell tool wrapping to fix heredocs and edge cases by @abhipatel12 in #24024
- Debug command. by @jacob314 in #23851
- Changelog for v0.36.0-preview.5 by @gemini-cli-robot in #24046
- Fix test flakes by globally mocking ink-spinner by @jacob314 in #24044
- Enable network access in sandbox configuration by @galz10 in #24055
- feat(context): add configurable memoryBoundaryMarkers setting by @SandyTao520 in #24020
- feat(core): implement windows sandbox expansion and denial detection by @scidomino in #24027
- fix(core): resolve ACP Operation Aborted Errors in grep_search by @ivanporty in #23821
- fix(hooks): prevent SessionEnd from firing twice in non-interactive mode by @krishdef7 in #22139
- Re-word intro to Gemini 3 page. by @g-samroberts in #24069
- fix(cli): resolve layout contention and flashing loop in StatusRow by @keithguerin in #24065
- fix(sandbox): implement Windows Mandatory Integrity Control for GeminiSandbox by @galz10 in #24057
- feat(core): implement tool-based topic grouping (Chapters) by @Abhijit-2592 in #23150
- feat(cli): support ‘tab to queue’ for messages while generating by @gundermanc in #24052
- feat(core): agnostic background task UI with CompletionBehavior by @adamfweidman in #22740
- UX for topic narration tool by @gundermanc in #24079
- fix: shellcheck warnings in scripts by @scidomino in #24035
- test(evals): add comprehensive subagent delegation evaluations by @abhipatel12 in #24132
- fix(a2a-server): prioritize ADC before evaluating headless constraints for auth initialization by @spencer426 in #23614
- Text can be added after /plan command by @rambleraptor in #22833
- fix(cli): resolve missing F12 logs via global console store by @scidomino in #24235
- fix broken tests by @scidomino in #24279
- fix(evals): add update_topic behavioral eval by @gundermanc in #24223
- feat(core): Unified Context Management and Tool Distillation. by @joshualitt in #24157
- Default enable narration for the team. by @gundermanc in #24224
- fix(core): ensure default agents provide tools and use model-specific schemas by @abhipatel12 in #24268
- feat(cli): show Flash Lite Preview model regardless of user tier by @sehoon38 in #23904
- feat(cli): implement compact tool output by @jwhelangoog in #20974
- Add security settings for tool sandboxing by @galz10 in #23923
- chore(test-utils): switch integration tests to use PREVIEW_GEMINI_MODEL by @sehoon38 in #24276
- feat(core): enable topic update narration for legacy models by @Abhijit-2592 in #24241
- feat(core): add project-level memory scope to save_memory tool by @SandyTao520 in #24161
- test(integration): fix plan mode write denial test false positive by @sehoon38 in #24299
- feat(plan): support Plan mode in untrusted folders by @Adib234 in #17586
- fix(core): enable mid-stream retries for all models and re-enable compression test by @sehoon38 in #24302
- Changelog for v0.36.0-preview.6 by @gemini-cli-robot in #24082
- Changelog for v0.35.3 by @gemini-cli-robot in #24083
- feat(cli): add auth info to footer by @sehoon38 in #24042
- fix(browser): reset action counter for each agent session and let it ignore internal actions by @cynthialong0-0 in #24228
- feat(plan): promote planning feature to stable by @ruomengz in #24282
- fix(browser): terminate subagent immediately on domain restriction violations by @gsquared94 in #24313
- feat(cli): add UI to update extensions by @ruomengz in #23682
- Fix(browser): terminate immediately for “browser is already running” error by @cynthialong0-0 in #24233
- docs: Add ‘plan’ option to approval mode in CLI reference by @YifanRuan in #24134
- fix(core): batch macOS seatbelt rules into a profile file to prevent ARG_MAX errors by @ehedlund in #24255
- fix(core): fix race condition between browser agent and main closing process by @cynthialong0-0 in #24340
- perf(build): optimize build scripts for parallel execution and remove redundant checks by @sehoon38 in #24307
- ci: install bubblewrap on Linux for release workflows by @ehedlund in #24347
- chore(release): allow bundling for all builds, including stable by @sehoon38 in #24305
- Revert “Add security settings for tool sandboxing” by @jerop in #24357
- docs: update subagents docs to not be experimental by @abhipatel12 in #24343
- fix(core): implement __read and __write commands in sandbox managers by @galz10 in #24283
- don’t try to remove tags in dry run by @scidomino in #24356
- fix(config): disable JIT context loading by default by @SandyTao520 in #24364
- test(sandbox): add integration test for dynamic permission expansion by @galz10 in #24359
- docs(policy): remove unsupported mcpName wildcard edge case by @abhipatel12 in #24133
- docs: fix broken GEMINI.md link in CONTRIBUTING.md by @Panchal-Tirth in #24182
- feat(core): infrastructure for event-driven subagent history by @abhipatel12 in #23914
- fix(core): resolve Plan Mode deadlock during plan file creation due to sandbox restrictions by @DavidAPierce in #24047
- fix(core): fix browser agent UX issues and improve E2E test reliability by @gsquared94 in #24312
- fix(ui): wrap topic and intent fields in TopicMessage by @jwhelangoog in #24386
- refactor(core): Centralize context management logic into src/context by @joshualitt in #24380
- fix(core): pin AuthType.GATEWAY to use Gemini 3.1 Pro/Flash Lite by default by @sripasg in #24375
- feat(ui): add Tokyo Night theme by @danrneal in #24054
- fix(cli): refactor test config loading and mock debugLogger in test-setup by @mattKorwel in #24389
- Set memoryManager to false in settings.json by @mattKorwel in #24393
- ink 6.6.3 by @jacob314 in #24372
- fix(core): resolve subagent chat recording gaps and directory inheritance by @abhipatel12 in #24368
- fix(cli): cap shell output at 10 MB to prevent RangeError crash by @ProthamD in #24168
- feat(plan): conditionally add enter/exit plan mode tools based on current mode by @ruomengz in #24378
- feat(core): prioritize discussion before formal plan approval by @jerop in #24423
- fix(ui): add accelerated scrolling on alternate buffer mode by @devr0306 in #23940
- feat(core): populate sandbox forbidden paths with project ignore file contents by @ehedlund in #24038
- fix(core): ensure blue border overlay and input blocker to act correctly depending on browser agent activities by @cynthialong0-0 in #24385
- fix(ui): removed additional vertical padding for tables by @devr0306 in #24381
- fix(build): upload full bundle directory archive to GitHub releases by @sehoon38 in #24403
- fix(build): wire bundle:browser-mcp into bundle pipeline by @gsquared94 in #24424
- feat(browser): add sandbox-aware browser agent initialization by @gsquared94 in #24419
- feat(core): enhance tracker task schemas for detailed titles and descriptions by @anj-s in #23902
- refactor(core): Unified context management settings schema by @joshualitt in #24391
- feat(core): update browser agent prompt to check open pages first when bringing up by @cynthialong0-0 in #24431
- fix(acp) refactor(core,cli): centralize model discovery logic in ModelConfigService by @sripasg in #24392
- Changelog for v0.36.0-preview.7 by @gemini-cli-robot in #24346
- fix: update task tracker storage location in system prompt by @anj-s in #24034
- feat(browser): supersede stale snapshots to reclaim context-window tokens by @gsquared94 in #24440
- docs(core): add subagent tool isolation draft doc by @akh64bit in #23275
- fix(patch): cherry-pick 64c928f to release/v0.37.0-preview.0-pr-23257 to patch version v0.37.0-preview.0 and create version 0.37.0-preview.1 by @gemini-cli-robot in #24561
- fix(patch): cherry-pick cb7f7d6 to release/v0.37.0-preview.1-pr-24342 to patch version v0.37.0-preview.1 and create version 0.37.0-preview.2 by @gemini-cli-robot in #24842
Full Changelog: https://github.com/google-gemini/gemini-cli/compare/v0.36.0...v0.37.1
Preview release
Released: April 08, 2026
Our preview release includes the latest new and experimental features. It may not be as stable as our weekly stable release.
To install the preview release:
npm install -g @google/gemini-cli@preview
Highlights
- Context Management: Introduced a Context Compression Service to optimize context window usage and landed a background memory service for skill extraction.
- Enhanced Security: Implemented context-aware persistent policy approvals for smarter tool permissions and enabled web_fetch in plan mode with user confirmation.
- Workflow Monitoring: Added background process monitoring and inspection tools for better visibility into long-running tasks.
- UI/UX Refinements: Enhanced the tool confirmation UI, selection layout, and added support for selective topic expansion and click-to-expand.
- Core Stability: Improved sandbox reliability on Linux and Windows, resolved shebang compatibility issues, and fixed various crashes in the CLI and core services.
What’s Changed
- fix(cli): refresh slash command list after /skills reload by @NTaylorMullen in #24454
- Update README.md for links. by @g-samroberts in #22759
- fix(core): ensure complete_task tool calls are recorded in chat history by @abhipatel12 in #24437
- feat(policy): explicitly allow web_fetch in plan mode with ask_user by @Adib234 in #24456
- fix(core): refactor linux sandbox to fix ARG_MAX crashes by @ehedlund in #24286
- feat(config): add experimental.adk.agentSessionNoninteractiveEnabled setting by @adamfweidman in #24439
- Changelog for v0.36.0-preview.8 by @gemini-cli-robot in #24453
- feat(cli): change default loadingPhrases to ‘off’ to hide tips by @keithguerin in #24342
- fix(cli): ensure agent stops when all declinable tools are cancelled by @NTaylorMullen in #24479
- fix(core): enhance sandbox usability and fix build error by @galz10 in #24460
- Terminal Serializer Optimization by @jacob314 in #24485
- Auto configure memory. by @jacob314 in #24474
- Unused error variables in catch block are not allowed by @alisa-alisa in #24487
- feat(core): add background memory service for skill extraction by @SandyTao520 in #24274
- feat: implement high-signal PR regression check for evaluations by @alisa-alisa in #23937
- Fix shell output display by @jacob314 in #24490
- fix(ui): resolve unwanted vertical spacing around various tool output treatments by @jwhelangoog in #24449
- revert(cli): bring back input box and footer visibility in copy mode by @sehoon38 in #24504
- fix(cli): prevent crash in AnsiOutputText when handling non-array data by @sehoon38 in #24498
- feat(cli): support default values for environment variables by @ruomengz in #24469
- Implement background process monitoring and inspection tools by @cocosheng-g in #23799
- docs(browser-agent): update stale browser agent documentation by @gsquared94 in #24463
- fix: enable browser_agent in integration tests and add localhost fixture tests by @gsquared94 in #24523
- fix(browser): handle computer-use model detection for analyze_screenshot by @gsquared94 in #24502
- feat(core): Land ContextCompressionService by @joshualitt in #24483
- feat(core): scope subagent workspace directories via AsyncLocalStorage by @SandyTao520 in #24445
- Update ink version to 6.6.7 by @jacob314 in #24514
- fix(acp): handle all InvalidStreamError types gracefully in prompt by @sripasg in #24540
- Fix crash when vim editor is not found in PATH on Windows by @Nagajyothi-tammisetti in #22423
- fix(core): move project memory dir under tmp directory by @SandyTao520 in #24542
- Enable ‘Other’ option for yesno question type by @ruomengz in #24545
- fix(cli): clear stale retry/loading state after cancellation (#21096) by @Aaxhirrr in #21960
- Changelog for v0.37.0-preview.0 by @gemini-cli-robot in #24464
- feat(core): implement context-aware persistent policy approvals by @jerop in #23257
- docs: move agent disabling instructions and update remote agent status by @jackwotherspoon in #24559
- feat(cli): migrate nonInteractiveCli to LegacyAgentSession by @adamfweidman in #22987
- fix(core): unsafe type assertions in Core File System #19712 by @aniketsaurav18 in #19739
- fix(ui): hide model quota in /stats and refactor quota display by @danzaharia1 in #24206
- Changelog for v0.36.0 by @gemini-cli-robot in #24558
- Changelog for v0.37.0-preview.1 by @gemini-cli-robot in #24568
- docs: add missing .md extensions to internal doc links by @ishaan-arora-1 in #24145
- fix(ui): fixed table styling by @devr0306 in #24565
- fix(core): pass includeDirectories to sandbox configuration by @galz10 in #24573
- feat(ui): enable “TerminalBuffer” mode to solve flicker by @jacob314 in #24512
- docs: clarify release coordination by @scidomino in #24575
- fix(core): remove broken PowerShell translation and fix native __write in Windows sandbox by @scidomino in #24571
- Add instructions for how to start react in prod and force react to prod mode by @jacob314 in #24590
- feat(cli): minimalist sandbox status labels by @galz10 in #24582
- Feat/browser agent metrics by @kunal-10-cloud in #24210
- test: fix Windows CI execution and resolve exposed platform failures by @ehedlund in #24476
- feat(core,cli): prioritize summary for topics (#24608) by @Abhijit-2592 in #24609
- show color by @jacob314 in #24613
- feat(cli): enable compact tool output by default (#24509) by @jwhelangoog in #24510
- fix(core): inject skill system instructions into subagent prompts if activated by @abhipatel12 in #24620
- fix(core): improve windows sandbox reliability and fix integration tests by @ehedlund in #24480
- fix(core): ensure sandbox approvals are correctly persisted and matched for proactive expansions by @galz10 in #24577
- feat(cli) Scrollbar for input prompt by @jacob314 in #21992
- Do not run pr-eval workflow when no steering changes detected by @alisa-alisa in #24621
- Fix restoration of topic headers. by @gundermanc in #24650
- feat(core): discourage update topic tool for simple tasks by @Samee24 in #24640
- fix(core): ensure global temp directory is always in sandbox allowed paths by @galz10 in #24638
- fix(core): detect uninitialized lines by @jacob314 in #24646
- docs: update sandboxing documentation and toolSandboxing settings by @galz10 in #24655
- feat(cli): enhance tool confirmation UI and selection layout by @galz10 in #24376
- feat(acp): add support for /about command by @sripasg in #24649
- feat(cli): add role specific metrics to /stats by @cynthialong0-0 in #24659
- split context by @jacob314 in #24623
- fix(cli): remove -S from shebang to fix Windows and BSD execution by @scidomino in #24756
- Fix issue where topic headers can be posted back to back by @gundermanc in #24759
- fix(core): handle partial llm_request in BeforeModel hook override by @krishdef7 in #22326
- fix(ui): improve narration suppression and reduce flicker by @gundermanc in #24635
- fix(ui): fixed auth race condition causing logo to flicker by @devr0306 in #24652
- fix(browser): remove premature browser cleanup after subagent invocation by @gsquared94 in #24753
- Revert “feat(core,cli): prioritize summary for topics (#24608)” by @Abhijit-2592 in #24777
- relax tool sandboxing overrides for plan mode to match defaults. by @DavidAPierce in #24762
- fix(cli): respect global environment variable allowlist by @scidomino in #24767
- fix(cli): ensure skills list outputs to stdout in non-interactive environments by @spencer426 in #24566
- Add an eval for and fix unsafe cloning behavior. by @gundermanc in #24457
- fix(policy): allow complete_task in plan mode by @abhipatel12 in #24771
- feat(telemetry): add browser agent clearcut metrics by @gsquared94 in #24688
- feat(cli): support selective topic expansion and click-to-expand by @Abhijit-2592 in #24793
- temporarily disable sandbox integration test on windows by @ehedlund in #24786
- Remove flakey test by @scidomino in #24837
- Alisa/approve button by @alisa-alisa in #24645
- feat(hooks): display hook system messages in UI by @mbleigh in #24616
- fix(core): propagate BeforeModel hook model override end-to-end by @krishdef7 in #24784
- chore: fix formatting for behavioral eval skill reference file by @abhipatel12 in #24846
- fix: use directory junctions on Windows for skill linking by @enjoykumawat in #24823
- fix(cli): prevent multiple banner increments on remount by @sehoon38 in #24843
- feat(acp): add /help command by @sripasg in #24839
- fix(core): remove tmux alternate buffer warning by @jackwotherspoon in #24852
- Improve sandbox error matching and caching by @DavidAPierce in #24550
- feat(core): add agent protocol UI types and experimental flag by @mbleigh in #24275
- feat(core): use experiment flags for default fetch timeouts by @yunaseoul in #24261
- Revert “fix(ui): improve narration suppression and reduce flicker (#2… by @gundermanc in #24857
- refactor(cli): remove duplication in interactive shell awaiting input hint by @JayadityaGit in #24801
- refactor(core): make LegacyAgentSession dependencies optional by @mbleigh in #24287
- Changelog for v0.37.0-preview.2 by @gemini-cli-robot in #24848
- fix(cli): always show shell command description or actual command by @jacob314 in #24774
- Added flag for ept size and increased default size by @devr0306 in #24859
- fix(core): dispose Scheduler to prevent McpProgress listener leak by @Anjaligarhwal in #24870
- fix(cli): switch default back to terminalBuffer=false and fix regressions introduced for that mode by @jacob314 in #24873
- feat(cli): switch to ctrl+g from ctrl-x by @jacob314 in #24861
- fix: isolate concurrent browser agent instances by @gsquared94 in #24794
- docs: update MCP server OAuth redirect port documentation by @adamfweidman in #24844
Full Changelog: https://github.com/google-gemini/gemini-cli/compare/v0.37.0-preview.2...v0.38.0-preview.0
Gemini CLI has three major release channels: nightly, preview, and stable. For most users, we recommend the stable release.
On this page, you can find information about the current releases and announcements for each release channel.
For the full changelog, refer to Releases - google-gemini/gemini-cli on GitHub.
Current releases
| Release channel | Notes |
|---|---|
| Nightly | Nightly release with the most recent changes. |
| Preview | Experimental features ready for early feedback. |
| Stable | Stable, recommended for general use. |
Announcements: v0.37.0 - 2026-04-08
- Dynamic Sandbox Expansion: Implemented dynamic sandbox expansion and worktree support for Linux and Windows, improving developer workflows in isolated environments (#23692 by @galz10, #23691 by @scidomino).
- Chapters Narrative Flow: Introduced tool-based topic grouping (“Chapters”) to provide better session structure and narrative continuity (#23150 by @Abhijit-2592, #24079 by @gundermanc).
- Advanced Browser Capabilities: Enhanced the browser agent with persistent sessions and dynamic tool discovery (#21306 by @kunal-10-cloud, #23805 by @cynthialong0-0).
Announcements: v0.36.0 - 2026-04-01
- Multi-Registry Architecture and Sandboxing: Introduced a multi-registry architecture and implemented native macOS Seatbelt and Windows sandboxing for enhanced subagent security (#22712, #22718 by @akh64bit, #22832 by @ehedlund, #21807 by @mattKorwel).
- Refreshed Composer UX: Implemented a refreshed user experience for the Composer layout and improved terminal interaction robustness (#21212, #23286 by @jwhelangoog).
- Git Worktree Support: Added native support for Git worktrees, allowing for isolated parallel sessions (#22973, #23265 by @jerop).
- Subagent Context and Feedback: Enhanced subagents with JIT context injection and resilient tool rejection with contextual feedback (#23032, #22951 by @abhipatel12).
Announcements: v0.35.0 - 2026-03-24
- Customizable Keyboard Shortcuts: Users can now customize their keyboard shortcuts, including support for literal character keybindings and the extended Kitty protocol (#21945, #21972 by @scidomino).
- Vim Mode Improvements: Added missing motions (X, ~, r, f/F/t/T) and yank/paste support with the unnamed register (#21932, #22026 by @aanari).
- Tool Isolation and Sandboxing: Introduced `SandboxManager` to isolate process-spawning tools and added Linux bubblewrap/seccomp sandboxing support (#21774, #22231 by @galz10, #22680 by @DavidAPierce).
- JIT Context Discovery: Implemented Just-In-Time context discovery for file system tools to improve model performance and accuracy (#22082, #22736 by @SandyTao520).
Announcements: v0.34.0 - 2026-03-17
- Plan Mode Enabled by Default: Plan Mode is now enabled by default to help you break down complex tasks and execute them systematically (#21713 by @jerop).
- Sandboxing Enhancements: We’ve added native gVisor (runsc) and experimental LXC container sandboxing support for safer execution environments (#21062 by @Zheyuan-Lin, #20735 by @h30s).
Announcements: v0.33.0 - 2026-03-11
- Agent Architecture Enhancements: Introduced HTTP authentication for A2A remote agents and authenticated A2A agent card discovery (#20510 by @SandyTao520, #20622 by @SandyTao520).
- Plan Mode Updates: Expanded Plan Mode with built-in research subagents, annotation support for feedback, and a new `copy` subcommand (#20972 by @Adib234, #20988 by @ruomengz).
- CLI UX & Admin Controls: Redesigned the header to be compact with an ASCII icon, inverted the context window display to show usage, and enabled a 30-day default retention for chat history (#18713 by @keithguerin, #20853 by @skeshive).
Announcements: v0.32.0 - 2026-03-03
- Generalist Agent: The generalist agent is now enabled to improve task delegation and routing (#19665 by @joshualitt).
- Model Steering in Workspace: Added support for model steering directly in the workspace (#20343 by @joshualitt).
- Plan Mode Enhancements: Users can now open and modify plans in an external editor, and the planning workflow has been adapted to handle complex tasks more effectively with multi-select options (#20348 by @Adib234, #20465 by @jerop).
- Interactive Shell Autocompletion: Introduced interactive shell autocompletion for a more seamless experience (#20082 by @mrpmohiburrahman).
- Parallel Extension Loading: Extensions are now loaded in parallel to improve startup times (#20229 by @scidomino).
Announcements: v0.31.0 - 2026-02-27
- Gemini 3.1 Pro Preview: Gemini CLI now supports the new Gemini 3.1 Pro Preview model (#19676 by @sehoon38).
- Experimental Browser Agent: We’ve introduced a new experimental browser agent to interact with web pages (#19284 by @gsquared94).
- Policy Engine Updates: The policy engine now supports project-level policies, MCP server wildcards, and tool annotation matching (#18682 by @Abhijit-2592, #20024 by @jerop).
- Web Fetch Improvements: We’ve implemented an experimental direct web fetch feature and added rate limiting to mitigate DDoS risks (#19557 by @mbleigh, #19567 by @mattKorwel).
Announcements: v0.30.0 - 2026-02-25
- SDK & Custom Skills: Introduced the initial SDK package, enabling dynamic system instructions, `SessionContext` for SDK tool calls, and support for custom skills (#18861 by @mbleigh).
- Policy Engine Enhancements: Added a new `--policy` flag for user-defined policies, introduced strict seatbelt profiles, and deprecated `--allowed-tools` in favor of the policy engine (#18500 by @allenhutchison).
- UI & Themes: Added a generic searchable list for settings and extensions, new Solarized themes, text wrapping for markdown tables, and a clean UI toggle prototype (#19064 by @rmedranollamas).
- Vim & Terminal Interaction: Improved Vim support to feel more complete and added support for Ctrl-Z terminal suspension (#18755 by @ppgranger, #18931 by @scidomino).
Announcements: v0.29.0 - 2026-02-17
- Plan Mode: A new comprehensive planning capability with `/plan`, the `enter_plan_mode` tool, and dedicated documentation (#17698 by @Adib234, #18324 by @jerop).
- Gemini 3 Default: We’ve removed the preview flag and enabled Gemini 3 by default for all users (#18414 by @sehoon38).
- Extension Exploration: New UI and settings to explore and manage extensions more easily (#18686 by @sripasg).
- Admin Control: Administrators can now allowlist specific MCP server configurations (#18311 by @skeshive).
Announcements: v0.28.0 - 2026-02-10
- IDE Support: Gemini CLI now supports the Positron IDE (#15047 by @kapsner).
- Customization: You can now use custom themes in extensions, and we’ve implemented automatic theme switching based on your terminal’s background (#17327 by @spencer426, #17976 by @Abhijit-2592).
- Authentication: We’ve added interactive and non-interactive consent for OAuth, and you can now include your auth method in bug reports (#17699 by @ehedlund, #17569 by @erikus).
Announcements: v0.27.0 - 2026-02-03
- Event-Driven Architecture: The CLI now uses a new event-driven scheduler for tool execution, resulting in a more responsive and performant experience (#17078 by @abhipatel12).
- Enhanced User Experience: This release includes queued tool confirmations and expandable large text pastes for a smoother workflow.
- New `/rewind` Command: Easily navigate your session history with the new `/rewind` command (#15720 by @Adib234).
- Linux Clipboard Support: You can now paste images on Linux with Wayland and X11 (#17144 by @devr0306).
Announcements: v0.26.0 - 2026-01-27
- Agents and Skills: We’ve introduced a new `skill-creator` skill (#16394 by @NTaylorMullen), enabled agent skills by default, and added a generalist agent to improve task routing (#16638 by @joshualitt).
- UI/UX Improvements: You can now “Rewind” through your conversation history (#15717 by @Adib234).
- Core and Scheduler Refactoring: The core scheduler has been significantly refactored to improve performance and reliability (#16895 by @abhipatel12), and numerous performance and stability fixes have been included.
Announcements: v0.25.0 - 2026-01-20
- Skills and Agents Improvements: We’ve enhanced the `activate_skill` tool, added a new `pr-creator` skill (#16232 by @NTaylorMullen), enabled skills by default, improved the `cli_help` agent (#16100 by @scidomino), and added a new `/agents refresh` command (#16204 by @joshualitt).
- UI/UX Refinements: You’ll notice more transparent feedback for skills (#15954 by @NTaylorMullen), the ability to switch focus between the shell and input with Tab (#14332 by @jacob314), and dynamic terminal tab titles (#16378 by @NTaylorMullen).
- Core Functionality & Performance: This release includes support for built-in agent skills (#16045 by @NTaylorMullen), refined Gemini 3 system instructions (#16139 by @NTaylorMullen), caching for ignore instances to improve performance (#16185 by @EricRahm), and enhanced retry mechanisms (#16489 by @sehoon38).
- Bug Fixes and Stability: We’ve squashed numerous bugs across the CLI, core, and workflows, addressing issues with subagent delegation, unicode character crashes, and sticky header regressions.
Announcements: v0.24.0 - 2026-01-14
- Agent Skills: We’ve introduced significant advancements in Agent Skills. This includes initial documentation and tutorials to help you get started, alongside enhanced support for remote agents, allowing for more distributed and powerful automation within Gemini CLI. (#15869 by @NTaylorMullen), (#16013 by @adamweidman)
- Improved UI/UX: The user interface has received several updates, featuring visual indicators for hook execution, a more refined display for settings, and the ability to use the Tab key to effortlessly switch focus between the shell and input areas. (#15408 by @abhipatel12), (#14332 by @galz10)
- Enhanced Security: Security has been a major focus, with default folder trust now set to untrusted for increased safety. The Policy Engine has been improved to allow specific modes in user and administrator policies, and granular allowlisting for shell commands has been implemented, providing finer control over tool execution. (#15943 by @galz10), (#15977 by @NTaylorMullen)
- Core Functionality: This release includes a mandatory MessageBus injection, marking Phase 3 of a hard migration to a more robust internal communication system. We’ve also added support for built-in skills with the CLI itself, and enhanced model routing to effectively utilize subagents. (#15776 by @abhipatel12), (#16300 by @NTaylorMullen)
- Terminal Features: Terminal interactions are more seamless with new features like OSC 52 paste support, along with fixes for Windows clipboard paste issues and general improvements to pasting in Windows terminals. (#15336 by @scidomino), (#15932 by @scidomino)
- New Commands: To manage the new features, we’ve added several new commands: `/agents refresh` to update agent configurations, `/skills reload` to refresh skill definitions, and `/skills install` / `/skills uninstall` for easier management of your Agent Skills. (#16204 by @NTaylorMullen), (#15865 by @NTaylorMullen), (#16377 by @NTaylorMullen)
Announcements: v0.23.0 - 2026-01-07
- 🎉 Experimental Agent Skills Support in Preview: Gemini CLI now supports Agent Skills in our preview builds. This is an early preview where we’re looking for feedback!
  - Install Preview: `npm install -g @google/gemini-cli`
  - Enable in `/settings`
  - Docs: https://geminicli.com/docs/cli/skills/
- Gemini CLI wrapped: Run `npx gemini-wrapped` to visualize your usage stats, top models, languages, and more!
- Windows clipboard image support: Windows users can now paste images directly from their clipboard into the CLI using `Alt+V`. (pr by @sgeraldes)
- Terminal background color detection: Automatically detects your terminal’s background color to select compatible themes and provide accessibility warnings. (pr by @jacob314)
- Session logout: Use the new `/logout` command to instantly clear credentials and reset your authentication state for seamless account switching. (pr by @CN-Scars)
Announcements: v0.22.0 - 2025-12-22
- 🎉 Free Tier + Gemini 3: Free tier users now all have access to Gemini 3 Pro & Flash. Enable in `/settings` by toggling “Preview Features” to `true`.
- 🎉 Gemini CLI + Colab: Gemini CLI is now pre-installed in Colab. It can be used headlessly in notebook cells or interactively in the built-in terminal. (pic)
- 🎉 Gemini CLI Extensions:
  - Conductor: Planning++. Gemini works with you to build out a detailed plan, pulling in extra details as needed, ultimately giving the LLM guardrails with artifacts. Measure twice, implement once!
    - Install: `gemini extensions install https://github.com/gemini-cli-extensions/conductor`
    - Blog: https://developers.googleblog.com/conductor-introducing-context-driven-development-for-gemini-cli/
  - Endor Labs: Perform code analysis, vulnerability scanning, and dependency checks using natural language.
    - Install: `gemini extensions install https://github.com/endorlabs/gemini-extension`
Announcements: v0.21.0 - 2025-12-15
- ⚡️⚡️⚡️ Gemini 3 Flash + Gemini CLI: Better, faster, and cheaper than 2.5 Pro, and in some scenarios better than 3 Pro! Paid-tier users and free-tier users who were on the wait list can enable Preview Features in `/settings`.
  - For more information: Gemini 3 Flash is now available in Gemini CLI.
- 🎉 Gemini CLI Extensions:
  - Rill: Use natural language to analyze Rill data, enabling the exploration of metrics and trends without the need for manual queries.
    - Install: `gemini extensions install https://github.com/rilldata/rill-gemini-extension`
  - Browserbase: Interact with web pages, take screenshots, extract information, and perform automated actions with atomic precision.
    - Install: `gemini extensions install https://github.com/browserbase/mcp-server-browserbase`
- Quota Visibility: The `/stats` command now displays quota information for all available models, including those not used in the current session. (@sehoon38)
- Fuzzy Setting Search: Users can now quickly find settings using fuzzy search within the settings dialog. (@sehoon38)
- MCP Resource Support: Users can now discover, view, and search through resources using the `@` command. (@MrLesk)
- Auto-execute Simple Slash Commands: Simple slash commands are now executed immediately on enter. (@jackwotherspoon)
Announcements: v0.20.0 - 2025-12-01
- Multi-file Drag & Drop: Users can now drag and drop multiple files into the terminal, and the CLI will automatically prefix each valid path with `@`. (pr by @jackwotherspoon)
- Persistent “Always Allow” Policies: Users can now save “Always Allow” decisions for tool executions, with granular control over specific shell commands and multi-cloud platform tools. (pr by @allenhutchison)
Announcements: v0.19.0 - 2025-11-24
- 🎉 New extensions:
  - Eleven Labs: Create, play, and manage your audio tracks with the Eleven Labs Gemini CLI extension:
    - Install: `gemini extensions install https://github.com/elevenlabs/elevenlabs-mcp`
- Zed integration: Users can now leverage Gemini 3 within the Zed integration after enabling “Preview Features” in their CLI’s `/settings`. (pr by @benbrandt)
- Interactive shell:
  - Click-to-Focus: When the “Use Alternate Buffer” setting is enabled, users can click within the embedded shell output to focus it for input. (pr by @galz10)
  - Loading phrase: Clearly indicates when the interactive shell is awaiting user input. (vid, pr by @jackwotherspoon)
Announcements: v0.18.0 - 2025-11-17
- 🎉 New extensions:
  - Google Workspace: Integrate Gemini CLI with your Workspace data. Write docs, build slides, chat with others, or even get your calc on in Sheets:
    - Install: `gemini extensions install https://github.com/gemini-cli-extensions/workspace`
  - Redis: Manage and search data in Redis with natural language:
    - Install: `gemini extensions install https://github.com/redis/mcp-redis`
  - Anomalo: Query your data warehouse table metadata and quality status through commands and natural language:
    - Install: `gemini extensions install https://github.com/datagravity-ai/anomalo-gemini-extension`
- Experimental permission improvements: We are now experimenting with a new policy engine in Gemini CLI. This allows users and administrators to create fine-grained policies for tool calls. Currently behind a flag; see the policy engine documentation for more information.
- Gemini 3 support for paid: Gemini 3 support has been rolled out to all API key, Google AI Pro or Google AI Ultra (for individuals, not businesses), and Gemini Code Assist Enterprise users. Enable it via `/settings` and toggling on Preview Features.
- Updated UI rollback: We’ve temporarily rolled back our updated UI to give it more time to bake. This means for a time you won’t have embedded scrolling or mouse support. You can re-enable it with `/settings` -> Use Alternate Screen Buffer -> `true`.
- Model in history: Users can now toggle in `/settings` to display the model in their chat history. (gif, pr by @scidomino)
- Multi-uninstall: Users can now uninstall multiple extensions with a single command. (pic, pr by @JayadityaGit)
Announcements: v0.16.0 - 2025-11-10
- Gemini 3 + Gemini CLI: launch 🚀🚀🚀
- Data Commons Gemini CLI Extension: A new extension that lets you query open-source statistical data from datacommons.org. To get started, you’ll need a Data Commons API key and uv installed. These and other details can be found at https://github.com/gemini-cli-extensions/datacommons.
Announcements: v0.15.0 - 2025-11-03
- 🎉 Seamless scrollable UI and mouse support: We’ve given Gemini CLI a major facelift to make your terminal experience smoother and much more polished. You now get a flicker-free display with sticky headers that keep important context visible and a stable input prompt that doesn’t jump around. We even added mouse support so you can click right where you need to type! (gif, @jacob314)
- 🎉 New partner extensions:
  - Arize: Seamlessly instrument AI applications with Arize AX and grant direct access to Arize support:
    - Install: `gemini extensions install https://github.com/Arize-ai/arize-tracing-assistant`
  - Chronosphere: Retrieve logs, metrics, traces, events, and specific entities:
    - Install: `gemini extensions install https://github.com/chronosphereio/chronosphere-mcp`
  - Transmit: Comprehensive context, validation, and automated fixes for creating production-ready authentication and identity workflows:
    - Install: `gemini extensions install https://github.com/TransmitSecurity/transmit-security-journey-builder`
- Todo planning: Complex questions now get broken down into todo lists that the model can manage and check off. (gif, pr by @anj-s)
- Disable GitHub extensions: Users can now prevent the installation and loading of extensions from GitHub. (pr by @kevinjwang1)
- Extensions restart: Users can now explicitly restart extensions using the `/extensions restart` command. (pr by @jakemac53)
- Better Angular support: Angular workflows should now be more seamless. (pr by @MarkTechson)
- Validate command: Users can now check that local extensions are formatted correctly. (pr by @kevinjwang1)
Announcements: v0.12.0 - 2025-10-27
- 🎉 New partner extensions:
  - 🤗 Hugging Face extension: Access the Hugging Face hub. (gif)
    - Install: `gemini extensions install https://github.com/huggingface/hf-mcp-server`
  - Monday.com extension: Analyze your sprints, update your task boards, etc. (gif)
    - Install: `gemini extensions install https://github.com/mondaycom/mcp`
  - Data Commons extension: Query public datasets or ground responses on data from Data Commons. (gif)
    - Install: `gemini extensions install https://github.com/gemini-cli-extensions/datacommons`
- Model selection: Choose the Gemini model for your session with `/model`. (pic, pr by @abhipatel12)
- Model routing: Gemini CLI will now intelligently pick the best model for the task. Simple queries will be sent to Flash, while complex analytical or creative tasks will still use the power of Pro. This ensures your quota lasts longer. You can always opt out of this via `/model`. (pr by @abhipatel12)
- Codebase investigator subagent: We now have a new built-in subagent that will explore your workspace and surface relevant information to improve overall performance. (pr by @abhipatel12, pr by @silviojr)
- Explore extensions with `/extension`: Users can now open the extensions page in their default browser directly from the CLI using the `/extension explore` command. (pr by @JayadityaGit)
- Configurable compression: Users can modify the context compression threshold in `/settings` (decimal with percentage display). The default has been made more proactive. (pr by @scidomino)
- API key authentication: Users can now securely enter and store their Gemini API key via a new dialog, eliminating the need for environment variables and repeated entry. (pr by @galz10)
- Sequential approval: Users can now approve multiple tool calls sequentially during execution. (pr by @joshualitt)
Announcements: v0.11.0 - 2025-10-20
- 🎉 Gemini CLI Jules Extension: Use Gemini CLI to orchestrate Jules. Spawn remote workers, delegate tedious tasks, or check in on running jobs!
  - Install: `gemini extensions install https://github.com/gemini-cli-extensions/jules`
  - Announcement: https://developers.googleblog.com/en/introducing-the-jules-extension-for-gemini-cli/
- Stream JSON output: Stream real-time JSONL events with `--output-format stream-json` to monitor AI agent progress when run headlessly. (gif, pr by @anj-s)
- Markdown toggle: Users can now switch between rendered and raw markdown display using `alt+m` or `ctrl+m`. (gif, pr by @srivatsj)
- Queued message editing: Users can now quickly edit queued messages by pressing the up arrow key when the input is empty. (gif, pr by @akhil29)
- JSON web fetch: Non-HTML content like JSON APIs or raw source code is now properly shown to the model (previously only HTML was supported). (gif, pr by @abhipatel12)
- Non-interactive MCP commands: Users can now run MCP slash commands in non-interactive mode: `gemini "/some-mcp-prompt"`. (pr by @capachino)
- Removal of deprecated flags: We’ve finally removed a number of deprecated flags to clean up Gemini CLI’s invocation profile:
  - `--all-files`/`-a` in favor of `@` from within Gemini CLI. (pr by @allenhutchison)
  - `--telemetry-*` flags in favor of environment variables. (pr by @allenhutchison)
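The `--output-format stream-json` events arrive as one JSON object per line (JSONL). A minimal consumer sketch in Python; the event field names used in the sample (`type`, `text`) are illustrative assumptions, not the CLI's documented schema:

```python
import json

def parse_jsonl_events(lines):
    """Parse JSONL event lines (one JSON object per line) into dicts."""
    events = []
    for line in lines:
        line = line.strip()
        if line:  # skip blank lines
            events.append(json.loads(line))
    return events

# In a real pipeline the lines would come from a headless run, e.g.:
#   gemini -p "summarize this repo" --output-format stream-json
# The sample below stands in for that output; its fields are assumptions.
sample = [
    '{"type": "message", "text": "Analyzing repository..."}',
    '{"type": "result", "text": "Done."}',
]
events = parse_jsonl_events(sample)
print(len(events), events[-1]["type"])
```

Piping the CLI's headless output into a script like this lets you react to events as they stream in, rather than waiting for the run to finish.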
Announcements: v0.10.0 - 2025-10-13
- Polish: The team has been heads down fixing bugs and investing heavily in polishing existing flows, tools, and interactions.
- Interactive Shell Tool calling: Gemini CLI can now also execute interactive tools if needed (pr by @galz10).
- Alt+Key support: Enables broader support for Alt+Key keyboard shortcuts across different terminals. (pr by @srivatsj).
- Telemetry Diff stats: Track line changes made by the model and user during file operations via OTEL. (pr by @jerop).
Announcements: v0.9.0 - 2025-10-06
- 🎉 Interactive Shell: Run interactive commands like `vim`, `rebase -i`, or even `gemini` 😎 directly in Gemini CLI.
- Install pre-release extensions: Install the latest `--pre-release` versions of extensions. Useful when an extension’s release hasn’t been marked as “latest”. (pr by @jakemac53)
- Simplified extension creation: Create a new, empty extension. Templates are no longer required. (pr by @chrstnb)
- OpenTelemetry GenAI metrics: Aligns telemetry with industry-standard semantic conventions for improved interoperability. (spec, pr by @jerop)
- List memory files: Quickly find the location of your long-term memory files with `/memory list`. (pr by @sgnagnarella)
Announcements: v0.8.0 - 2025-09-29
- 🎉 Announcing Gemini CLI Extensions 🎉
  - Completely customize your Gemini CLI experience to fit your workflow.
  - Build and share your own Gemini CLI extensions with the world.
  - Launching with a growing catalog of community, partner, and Google-built extensions.
  - Check out extensions from key launch partners.
  - Easy install: `gemini extensions install <github url|folder path>`
  - Easy management: `gemini extensions install|uninstall|link`, `gemini extensions enable|disable`, `gemini extensions list|update|new`
  - Or use commands while running with `/extensions list|update`.
  - Everything you need to know: Now open for building: Introducing Gemini CLI extensions.
- 🎉 Our New Home Page & Better Documentation 🎉
- Check out our new home page for better getting started material, reference documentation, extensions and more!
- Homepage: https://geminicli.com
- ‼️NEW documentation: https://geminicli.com/docs (Have any suggestions?)
- Extensions: https://geminicli.com/extensions
- Non-Interactive Allowed Tools: `--allowed-tools` will now also work in non-interactive mode. (pr by @mistergarrison)
- Terminal Title Status: See the CLI’s real-time status and thoughts directly in the terminal window’s title by setting `showStatusInTitle: true`. (pr by @Fridayxiao)
- Small features, polish, reliability & bug fixes: A large number of changes, smaller features, UI updates, reliability and bug fixes, plus general polish made it in this week!
Announcements: v0.7.0 - 2025-09-22
- 🎉 Build your own Gemini CLI IDE plugin: We’ve published a spec for creating IDE plugins to enable rich context-aware experiences and native in-editor diffing in your IDE of choice. (pr by @skeshive)
- 🎉 Gemini CLI extensions
- Telemetry config via environment: Manage telemetry settings using environment variables for a more flexible setup. (docs, pr by @jerop)
- Experimental todos: Track and display progress on complex tasks with a managed checklist. Off by default, but can be enabled via `"useWriteTodos": true`. (pr by @anj-s)
- Share chat support for tools: Using `/chat share` will now also render function calls and responses in the final markdown file. (pr by @rramkumar1)
- Citations: Now enabled for all users. (pr by @scidomino)
- Custom commands in Headless Mode: Run custom slash commands directly from the command line in non-interactive mode: `gemini "/joke Chuck Norris"`. (pr by @capachino)
- Small features, polish, reliability & bug fixes: A large number of changes, smaller features, UI updates, reliability and bug fixes, plus general polish made it in this week!
Announcements: v0.6.0 - 2025-09-15
- 🎉 Higher limits for Google AI Pro and Ultra subscribers: We’re psyched to finally announce that Google AI Pro and AI Ultra subscribers now get access to significantly higher 2.5 quota limits for Gemini CLI!
- 🎉 Gemini CLI Databases and BigQuery Extensions: Connect Gemini CLI to all of your cloud data.
- Announcement and how to get started with each of the below extensions: https://cloud.google.com/blog/products/databases/gemini-cli-extensions-for-google-data-cloud?e=48754805
- AlloyDB: Interact, manage and observe AlloyDB for PostgreSQL databases (manage, observe)
- BigQuery: Connect and query your BigQuery datasets or utilize a sub-agent for contextual insights (query, sub-agent)
- Cloud SQL: Interact, manage and observe Cloud SQL for PostgreSQL (manage, observe), Cloud SQL for MySQL (manage, observe) and Cloud SQL for SQL Server (manage, observe) databases.
- Dataplex: Discover, manage, and govern data and AI artifacts (extension)
- Firestore: Interact with Firestore databases, collections and documents (extension)
- Looker: Query data, run Looks and create dashboards (extension)
- MySQL: Interact with MySQL databases (extension)
- Postgres: Interact with PostgreSQL databases (extension)
- Spanner: Interact with Spanner databases (extension)
- SQL Server: Interact with SQL Server databases (extension)
- MCP Toolbox: Configure and load custom tools for more than 30 data sources (extension)
- JSON output mode: Have Gemini CLI output JSON with `--output-format json` when invoked headlessly, for easy parsing and post-processing. Includes response, stats, and errors. (pr by @jerop)
- Keybinding triggered approvals: When you use shortcuts (`shift+y` or `shift+tab`) to activate YOLO/auto-edit modes, any pending confirmation dialogs will now be approved. (pr by @bulkypanda)
- Chat sharing: Convert the current conversation to a Markdown or JSON file with `/chat share <file.md|file.json>`. (pr by @rramkumar1)
- Prompt search: Search your prompt history using `ctrl+r`. (pr by @Aisha630)
- Input undo/redo: Recover accidentally deleted text in the input prompt using `ctrl+z` (undo) and `ctrl+shift+z` (redo). (pr by @masiafrest)
- Loop detection confirmation: When loops are detected, you are now presented with a dialog to disable detection for the current session. (pr by @SandyTao520)
- Direct to Google Cloud Telemetry: Directly send telemetry to Google Cloud for a simpler and more streamlined setup. (pr by @jerop)
- Visual Mode Indicator Revamp: ‘shell’, ‘accept edits’ and ‘yolo’ modes now have colors to match their impact / usage. Input box now also updates. (shell, accept-edits, yolo, pr by @miguelsolorio)
- Small features, polish, reliability & bug fixes: A large number of changes, smaller features, UI updates, reliability and bug fixes, plus general polish made it in this week!
Announcements: v0.5.0 - 2025-09-08
- 🎉 FastMCP + Gemini CLI 🎉: Quickly install and manage your Gemini CLI MCP servers with FastMCP. (video, pr by @jackwotherspoon)
  - Getting started: https://gofastmcp.com/integrations/gemini-cli
- Positional Prompt for Non-Interactive: Seamlessly invoke Gemini CLI headlessly via `gemini "Hello"`. Synonymous with passing `-p`. (gif, pr by @allenhutchison)
- Experimental Tool output truncation: Enable truncating shell tool outputs and saving the full output to a file by setting `"enableToolOutputTruncation": true`. (pr by @SandyTao520)
- Edit Tool improvements: Gemini CLI’s ability to edit files should now be far more capable. (pr by @silviojr)
- Custom witty messages: The feature you’ve all been waiting for… personalized witty loading messages via `"ui": { "customWittyPhrases": ["YOLO"] }` in `settings.json`. (pr by @JayadityaGit)
- Nested .gitignore File Handling: Nested `.gitignore` files are now respected. (pr by @gsquared94)
- Enforced authentication: System administrators can now mandate a specific authentication method via `"enforcedAuthType": "oauth-personal|gemini-api-key|…"` in `settings.json`. (pr by @chrstnb)
- A2A development-tool extension: An RFC for an Agent2Agent (A2A) powered extension for developer tool use cases. (feedback, pr by @skeshive)
- **Hands-on Codelab:** https://codelabs.developers.google.com/gemini-cli-hands-on
- Small features, polish, reliability & bug fixes: A large number of changes, smaller features, UI updates, reliability and bug fixes, plus general polish made it in this week!
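Several of the v0.5.0 toggles above live in `settings.json`. As a minimal sketch combining the documented keys (the exact placement of each key within the file may vary between CLI versions, so treat this as illustrative rather than authoritative):

```json
{
  "enableToolOutputTruncation": true,
  "enforcedAuthType": "gemini-api-key",
  "ui": { "customWittyPhrases": ["YOLO"] }
}
```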
Announcements: v0.4.0 - 2025-09-01
- 🎉 Gemini CLI Cloud Run and Security Integrations 🎉: Automate app deployment and security analysis with the Cloud Run and Security extension integrations. Once installed, deploy your app to the cloud with `/deploy` and find and fix security vulnerabilities with `/security:analyze`.
  - Announcement and how to get started: https://cloud.google.com/blog/products/ai-machine-learning/automate-app-deployment-and-security-analysis-with-new-gemini-cli-extensions
- Experimental
  - Edit Tool: Give our new edit tool a try by setting `"useSmartEdit": true` in `settings.json`! (feedback, pr by @silviojr)
  - Model talking to itself fix: We’ve removed a model workaround that would encourage Gemini CLI to continue conversations on your behalf. If this change is disruptive, it can be disabled via `"skipNextSpeakerCheck": false` in your `settings.json`. (feedback, pr by @SandyTao520)
  - Prompt completion: Get real-time AI suggestions to complete your prompts as you type. Enable it with `"general": { "enablePromptCompletion": true }` and share your feedback! (gif, pr by @3ks)
- Footer visibility configuration: Customize the CLI’s footer look and feel in `settings.json`. (pr by @miguelsolorio)
  - `hideCWD`: hide the current working directory.
  - `hideSandboxStatus`: hide the sandbox status.
  - `hideModelInfo`: hide current model information.
  - `hideContextSummary`: hide the request context summary.
- Citations: Users with enterprise Code Assist licenses will now see citations in their responses by default. Enable this yourself with `"showCitations": true`. (pr by @scidomino)
- Pro Quota Dialog: Handle daily Pro model usage limits with an interactive dialog that lets you immediately switch auth or fall back. (pr by @JayadityaGit)
- Custom commands @: Embed local file or directory content directly into your custom command prompts using the `@{path}` syntax. (gif, pr by @abhipatel12)
- 2.5 Flash Lite support: You can now use the `gemini-2.5-flash-lite` model for Gemini CLI via `gemini -m …`. (gif, pr by @psinha40898)
- CLI streamlining: We have deprecated a number of command-line arguments in favor of `settings.json` alternatives. We will remove these arguments in a future release. See the PR for the full list of deprecations. (pr by @allenhutchison)
- JSON session summary: Track and save detailed CLI session statistics to a JSON file for performance analysis with `--session-summary <path>`. (pr by @leehagoodjames)
- Robust keyboard handling: More reliable and consistent behavior for arrow keys, special keys (Home, End, etc.), and modifier combinations across various terminals. (pr by @deepankarsharma)
- MCP loading indicator: Provides visual feedback during CLI initialization when connecting to multiple servers. (pr by @swissspidy)
- Small features, polish, reliability & bug fixes: A large amount of changes, smaller features, UI updates, reliability and bug fixes + general polish made it in this week!
ACP (Agent Client Protocol) mode is a special operational mode of Gemini CLI designed for programmatic control, primarily for IDE and other developer tool integrations. It uses a JSON-RPC protocol over stdio to communicate between the Gemini CLI agent and a client.
To start Gemini CLI in ACP mode, use the --acp flag:
gemini --acp
Agent Client Protocol (ACP)
ACP is an open protocol that standardizes how AI coding agents communicate with code editors and IDEs. It addresses the challenge of fragmented distribution, where agents traditionally needed custom integrations for each client. With ACP, developers can implement their agent once, and it becomes compatible with any ACP-compliant editor.
For a comprehensive introduction to ACP, including its architecture and benefits, refer to the official ACP Introduction documentation.
Existing integrations using ACP
The ACP Agent Registry simplifies the distribution and management of ACP-compatible agents across various IDEs. Gemini CLI is an ACP-compatible agent and can be found in this registry.
For more general information about the registry, and how to use it with specific IDEs like JetBrains and Zed, refer to the IDE Integration documentation.
You can also find more information on the official ACP Agent Registry page.
Architecture and protocol basics
ACP mode establishes a client-server relationship between your tool (the client) and Gemini CLI (the server).
- Communication: The entire communication happens over standard input/output (stdio) using the JSON-RPC 2.0 protocol.
- Client’s role: The client is responsible for sending requests (for example, prompts) and handling responses and notifications from Gemini CLI.
- Gemini CLI’s role: In ACP mode, Gemini CLI listens for incoming JSON-RPC requests, processes them, and sends back responses.
The core of the ACP implementation can be found in `packages/cli/src/acp/acpClient.ts`.
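To make the transport concrete, here is a minimal sketch of how a client might frame a JSON-RPC 2.0 request for the agent’s stdin. The newline-delimited framing and the `initialize` params shown are illustrative assumptions, not the actual ACP schema; consult the ACP specification for the exact message shapes.

```python
import json

def make_request(req_id: int, method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request as a single line of JSON.

    Newline-delimited framing is an assumption for illustration; the
    ACP spec defines the actual transport details.
    """
    message = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return json.dumps(message) + "\n"

# Hypothetical first message a client could write to the agent's stdin
# after spawning `gemini --acp`; the params here are placeholders.
line = make_request(1, "initialize", {"protocolVersion": 1})
print(line, end="")
```

A real client would write such lines to the spawned process’s stdin and read responses and notifications back from its stdout.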
Extending with MCP
ACP can be used with the Model Context Protocol (MCP). This lets an ACP client (like an IDE) expose its own functionality as “tools” that the Gemini model can use.
- The client implements an MCP server that advertises its tools.
- During the ACP `initialize` handshake, the client provides the connection details for its MCP server.
- Gemini CLI connects to the MCP server, discovers the available tools, and makes them available to the AI model.
- When the model decides to use one of these tools, Gemini CLI sends a tool call request to the MCP server.
This mechanism enables a powerful, two-way integration where the agent can leverage the IDE’s capabilities to perform tasks. The MCP client logic is in `packages/core/src/tools/mcp-client.ts`.
Capabilities and supported methods
The ACP protocol exposes a number of methods for ACP clients (for example, IDEs) to control Gemini CLI.
Core methods
- `initialize`: Establishes the initial connection and lets the client register its MCP server.
- `authenticate`: Authenticates the user.
- `newSession`: Starts a new chat session.
- `loadSession`: Loads a previous session.
- `prompt`: Sends a prompt to the agent.
- `cancel`: Cancels an ongoing prompt.
Session control
- `setSessionMode`: Allows changing the approval level for tool calls (for example, to `auto-approve`).
- `unstable_setSessionModel`: Changes the model for the current session.
File system proxy
ACP includes a proxied file system service. This means that when the agent needs to read or write files, it does so through the ACP client. This is a security feature that ensures the agent only has access to the files that the client (and by extension, the user) has explicitly allowed.
Debugging and telemetry
You can get insights into the ACP communication and the agent’s behavior through debugging logs and telemetry.
Debugging logs
To enable general debugging logs, start Gemini CLI with the `--debug` flag:
gemini --acp --debug
Telemetry
For more detailed telemetry, you can use the following environment variables to capture telemetry data to a file:
- `GEMINI_TELEMETRY_ENABLED=true`
- `GEMINI_TELEMETRY_TARGET=local`
- `GEMINI_TELEMETRY_OUTFILE=/path/to/your/log.json`
This will write a JSON log file containing detailed information about all the events happening within the agent, including ACP requests and responses. The integration test `integration-tests/acp-telemetry.test.ts` provides a working example of how to set this up.
Gemini CLI includes a Checkpointing feature that automatically saves a snapshot of your project’s state before any file modifications are made by AI-powered tools. This lets you safely experiment with and apply code changes, knowing you can instantly revert to the state before the tool was run.
How it works
When you approve a tool that modifies the file system (like `write_file` or `replace`), the CLI automatically creates a “checkpoint.” This checkpoint includes:
- A Git snapshot: A commit is made in a special, shadow Git repository located in your home directory (`~/.gemini/history/<project_hash>`). This snapshot captures the complete state of your project files at that moment. It does not interfere with your own project’s Git repository.
- Conversation history: The entire conversation you’ve had with the agent up to that point is saved.
- The tool call: The specific tool call that was about to be executed is also stored.
If you want to undo the change or simply go back, you can use the /restore
command. Restoring a checkpoint will:
- Revert all files in your project to the state captured in the snapshot.
- Restore the conversation history in the CLI.
- Re-propose the original tool call, allowing you to run it again, modify it, or simply ignore it.
All checkpoint data, including the Git snapshot and conversation history, is
stored locally on your machine. The Git snapshot is stored in the shadow
repository while the conversation history and tool calls are saved in a JSON
file in your project’s temporary directory, typically located at
~/.gemini/tmp/<project_hash>/checkpoints.
Enabling the feature
The Checkpointing feature is disabled by default. To enable it, you need to edit your `settings.json` file.
Add the following key to your settings.json:
{ "general": { "checkpointing": { "enabled": true } }}
Using the /restore command
Once enabled, checkpoints are created automatically. To manage them, you use the `/restore` command.
List available checkpoints
To see a list of all saved checkpoints for the current project, simply run:
/restore
The CLI will display a list of available checkpoint files. These file names are typically composed of a timestamp, the name of the file being modified, and the name of the tool that was about to be run (for example, `2025-06-22T10-00-00_000Z-my-file.txt-write_file`).
Restore a specific checkpoint
To restore your project to a specific checkpoint, use the checkpoint file from the list:
/restore <checkpoint_file>
For example:
/restore 2025-06-22T10-00-00_000Z-my-file.txt-write_file
After running the command, your files and conversation will be immediately restored to the state they were in when the checkpoint was created, and the original tool prompt will reappear.
This page provides a reference for commonly used Gemini CLI commands, options, and parameters.
CLI commands
| Command | Description | Example |
|---|---|---|
| `gemini` | Start interactive REPL | `gemini` |
| `gemini -p "query"` | Query non-interactively | `gemini -p "summarize README.md"` |
| `gemini "query"` | Query and continue interactively | `gemini "explain this project"` |
| `cat file \| gemini` | Process piped content | `cat logs.txt \| gemini`, `Get-Content logs.txt \| gemini` |
| `gemini -i "query"` | Execute and continue interactively | `gemini -i "What is the purpose of this project?"` |
| `gemini -r "latest"` | Continue most recent session | `gemini -r "latest"` |
| `gemini -r "latest" "query"` | Continue session with a new prompt | `gemini -r "latest" "Check for type errors"` |
| `gemini -r "<session-id>" "query"` | Resume session by ID | `gemini -r "abc123" "Finish this PR"` |
| `gemini update` | Update to latest version | `gemini update` |
| `gemini extensions` | Manage extensions | See Extensions Management |
| `gemini mcp` | Configure MCP servers | See MCP Server Management |
Positional arguments
| Argument | Type | Description |
|---|---|---|
| `query` | string (variadic) | Positional prompt. Defaults to interactive mode in a TTY. Use `-p`/`--prompt` for non-interactive execution. |
Interactive commands
These commands are available within the interactive REPL.

| Command | Description |
|---|---|
| `/skills reload` | Reload discovered skills from disk |
| `/agents reload` | Reload the agent registry |
| `/commands reload` | Reload custom slash commands |
| `/memory reload` | Reload context files (for example, GEMINI.md) |
| `/mcp reload` | Restart and reload MCP servers |
| `/extensions reload` | Reload all active extensions |
| `/help` | Show help for all commands |
| `/quit` | Exit the interactive session |
CLI Options
| Option | Alias | Type | Default | Description |
|---|---|---|---|---|
| `--debug` | `-d` | boolean | `false` | Run in debug mode with verbose logging |
| `--version` | `-v` | - | - | Show CLI version number and exit |
| `--help` | `-h` | - | - | Show help information |
| `--model` | `-m` | string | `auto` | Model to use. See Model Selection for available values. |
| `--prompt` | `-p` | string | - | Prompt text. Appended to stdin input if provided. Forces non-interactive mode. |
| `--prompt-interactive` | `-i` | string | - | Execute prompt and continue in interactive mode |
| `--worktree` | `-w` | string | - | Start Gemini in a new git worktree. If no name is provided, one is generated automatically. Requires `experimental.worktrees: true` in settings. |
| `--sandbox` | `-s` | boolean | `false` | Run in a sandboxed environment for safer execution |
| `--approval-mode` | - | string | `default` | Approval mode for tool execution. Choices: `default`, `auto_edit`, `yolo`, `plan` |
| `--yolo` | `-y` | boolean | `false` | Deprecated. Auto-approve all actions. Use `--approval-mode=yolo` instead. |
| `--experimental-acp` | - | boolean | - | Start in ACP (Agent Client Protocol) mode. Experimental feature. |
| `--experimental-zed-integration` | - | boolean | - | Run in Zed editor integration mode. Experimental feature. |
| `--allowed-mcp-server-names` | - | array | - | Allowed MCP server names (comma-separated or multiple flags) |
| `--allowed-tools` | - | array | - | Deprecated. Use the Policy Engine instead. Tools that are allowed to run without confirmation (comma-separated or multiple flags) |
| `--extensions` | `-e` | array | - | List of extensions to use. If not provided, all extensions are enabled (comma-separated or multiple flags) |
| `--list-extensions` | `-l` | boolean | - | List all available extensions and exit |
| `--resume` | `-r` | string | - | Resume a previous session. Use `"latest"` for most recent or index number (for example `--resume 5`) |
| `--list-sessions` | - | boolean | - | List available sessions for the current project and exit |
| `--delete-session` | - | string | - | Delete a session by index number (use `--list-sessions` to see available sessions) |
| `--include-directories` | - | array | - | Additional directories to include in the workspace (comma-separated or multiple flags) |
| `--screen-reader` | - | boolean | - | Enable screen reader mode for accessibility |
| `--output-format` | `-o` | string | `text` | The format of the CLI output. Choices: `text`, `json`, `stream-json` |
Model selection
The `--model` (or `-m`) flag lets you specify which Gemini model to use. You can use either model aliases (user-friendly names) or concrete model names.
Model aliases
These are convenient shortcuts that map to specific models:

| Alias | Resolves To | Description |
|---|---|---|
| `auto` | `gemini-2.5-pro` or `gemini-3-pro-preview` | Default. Resolves to the preview model if preview features are enabled, otherwise resolves to the standard pro model. |
| `pro` | `gemini-2.5-pro` or `gemini-3-pro-preview` | For complex reasoning tasks. Uses preview model if enabled. |
| `flash` | `gemini-2.5-flash` | Fast, balanced model for most tasks. |
| `flash-lite` | `gemini-2.5-flash-lite` | Fastest model for simple tasks. |
Extensions management
| Command | Description | Example |
|---|---|---|
| `gemini extensions install <source>` | Install extension from Git URL or local path | `gemini extensions install https://github.com/user/my-extension` |
| `gemini extensions install <source> --ref <ref>` | Install from specific branch/tag/commit | `gemini extensions install https://github.com/user/my-extension --ref develop` |
| `gemini extensions install <source> --auto-update` | Install with auto-update enabled | `gemini extensions install https://github.com/user/my-extension --auto-update` |
| `gemini extensions uninstall <name>` | Uninstall one or more extensions | `gemini extensions uninstall my-extension` |
| `gemini extensions list` | List all installed extensions | `gemini extensions list` |
| `gemini extensions update <name>` | Update a specific extension | `gemini extensions update my-extension` |
| `gemini extensions update --all` | Update all extensions | `gemini extensions update --all` |
| `gemini extensions enable <name>` | Enable an extension | `gemini extensions enable my-extension` |
| `gemini extensions disable <name>` | Disable an extension | `gemini extensions disable my-extension` |
| `gemini extensions link <path>` | Link local extension for development | `gemini extensions link /path/to/extension` |
| `gemini extensions new <path>` | Create new extension from template | `gemini extensions new ./my-extension` |
| `gemini extensions validate <path>` | Validate extension structure | `gemini extensions validate ./my-extension` |
See Extensions Documentation for more details.
MCP server management
| Command | Description | Example |
|---|---|---|
| `gemini mcp add <name> <command>` | Add stdio-based MCP server | `gemini mcp add github npx -y @modelcontextprotocol/server-github` |
| `gemini mcp add <name> <url> --transport http` | Add HTTP-based MCP server | `gemini mcp add api-server http://localhost:3000 --transport http` |
| `gemini mcp add <name> <command> --env KEY=value` | Add with environment variables | `gemini mcp add slack node server.js --env SLACK_TOKEN=xoxb-xxx` |
| `gemini mcp add <name> <command> --scope user` | Add with user scope | `gemini mcp add db node db-server.js --scope user` |
| `gemini mcp add <name> <command> --include-tools tool1,tool2` | Add with specific tools | `gemini mcp add github npx -y @modelcontextprotocol/server-github --include-tools list_repos,get_pr` |
| `gemini mcp remove <name>` | Remove an MCP server | `gemini mcp remove github` |
| `gemini mcp list` | List all configured MCP servers | `gemini mcp list` |
See MCP Server Integration for more details.
Skills management
| Command | Description | Example |
|---|---|---|
| `gemini skills list` | List all discovered agent skills | `gemini skills list` |
| `gemini skills install <source>` | Install skill from Git, path, or file | `gemini skills install https://github.com/u/repo` |
| `gemini skills link <path>` | Link local agent skills via symlink | `gemini skills link /path/to/my-skills` |
| `gemini skills uninstall <name>` | Uninstall an agent skill | `gemini skills uninstall my-skill` |
| `gemini skills enable <name>` | Enable an agent skill | `gemini skills enable my-skill` |
| `gemini skills disable <name>` | Disable an agent skill | `gemini skills disable my-skill` |
| `gemini skills enable --all` | Enable all skills | `gemini skills enable --all` |
| `gemini skills disable --all` | Disable all skills | `gemini skills disable --all` |
See Agent Skills Documentation for more details.
Custom commands let you save and reuse your favorite or most frequently used prompts as personal shortcuts within Gemini CLI. You can create commands that are specific to a single project or commands that are available globally across all your projects, streamlining your workflow and ensuring consistency.
File locations and precedence
Gemini CLI discovers commands from two locations, loaded in a specific order:
- User commands (global): Located in `~/.gemini/commands/`. These commands are available in any project you are working on.
- Project commands (local): Located in `<your-project-root>/.gemini/commands/`. These commands are specific to the current project and can be checked into version control to be shared with your team.
If a command in the project directory has the same name as a command in the user directory, the project command will always be used. This allows projects to override global commands with project-specific versions.
Naming and namespacing
The name of a command is determined by its file path relative to its commands directory. Subdirectories are used to create namespaced commands, with the path separator (`/` or `\`) being converted to a colon (`:`).
- A file at `~/.gemini/commands/test.toml` becomes the command `/test`.
- A file at `<project>/.gemini/commands/git/commit.toml` becomes the namespaced command `/git:commit`.
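The naming rule above can be sketched in Python. This is an illustration of the mapping only, not the CLI’s actual implementation:

```python
from pathlib import PureWindowsPath

def command_name(rel_path: str) -> str:
    """Map a path relative to a commands directory to a slash-command name:
    drop the .toml extension and turn path separators into ':'."""
    # PureWindowsPath splits on both '/' and '\', covering both separators.
    parts = list(PureWindowsPath(rel_path).parts)
    parts[-1] = parts[-1].removesuffix(".toml")
    return "/" + ":".join(parts)

print(command_name("test.toml"))        # /test
print(command_name("git/commit.toml"))  # /git:commit
```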
TOML file format (v1)
Your command definition files must be written in the TOML format and use the `.toml` file extension.
Required fields
- `prompt` (String): The prompt that will be sent to the Gemini model when the command is executed. This can be a single-line or multi-line string.
Optional fields
- `description` (String): A brief, one-line description of what the command does. This text will be displayed next to your command in the `/help` menu. If you omit this field, a generic description will be generated from the filename.
Handling arguments
Section titled “Handling arguments”Custom commands support two powerful methods for handling arguments. The CLI
automatically chooses the correct method based on the content of your
command’s
prompt.
1. Context-aware injection with `{{args}}`

If your prompt contains the special placeholder `{{args}}`, the CLI will replace that placeholder with the text the user typed after the command name.
The behavior of this injection depends on where it is used:
A. Raw injection (outside shell commands)
When used in the main body of the prompt, the arguments are injected exactly as the user typed them.
Example (git/fix.toml):
# Invoked via: /git:fix "Button is misaligned"
description = "Generates a fix for a given issue."prompt = "Please provide a code fix for the issue described here: {{args}}."
The model receives:
Please provide a code fix for the issue described here: "Button is misaligned".
B. Using arguments in shell commands (inside !{...} blocks)
When you use `{{args}}` inside a shell injection block (`!{...}`), the arguments are automatically shell-escaped before replacement. This lets you safely pass arguments to shell commands, ensuring the resulting command is syntactically correct and secure while preventing command injection vulnerabilities.
Example (/grep-code.toml):
prompt = """Please summarize the findings for the pattern `{{args}}`.
Search Results:!{grep -r {{args}} .}"""
When you run `/grep-code It's complicated`:
- The CLI sees `{{args}}` used both outside and inside `!{...}`.
- Outside: The first `{{args}}` is replaced raw with `It's complicated`.
- Inside: The second `{{args}}` is replaced with the escaped version (for example, on Linux: `"It\'s complicated"`).
- The command executed is `grep -r "It's complicated" .`.
- The CLI prompts you to confirm this exact, secure command before execution.
- The final prompt is sent.
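The escaping step can be approximated with Python’s standard `shlex` module. This is only an analogy for how shell-escaping keeps the argument intact; the CLI’s actual escaper and quoting style may differ:

```python
import shlex

args = "It's complicated"

# Outside !{...}: raw injection, exactly as typed.
raw = f"Please summarize the findings for the pattern `{args}`."

# Inside !{...}: the argument is shell-escaped before splicing.
escaped_cmd = f"grep -r {shlex.quote(args)} ."
print(escaped_cmd)

# The escaped command still parses back to the original argument.
assert shlex.split(escaped_cmd)[2] == "It's complicated"
```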
2. Default argument handling
Section titled “2. Default argument handling”If your prompt does not contain the
special placeholder {{args}}, the
CLI uses a default behavior for handling arguments.
If you provide arguments to the command (for example, /mycommand arg1), the
CLI will append the full command you typed to the end of the prompt,
separated
by two newlines. This allows the model to see both the original instructions
and
the specific arguments you just provided.
If you do not provide any arguments (for example, /mycommand), the prompt
is sent to the model exactly as it is, with nothing appended.
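A small sketch of this default behavior, using a hypothetical helper and the two-newline separator described above:

```python
def resolve_prompt(prompt: str, typed_command: str, has_args: bool) -> str:
    """Default handling when the prompt has no {{args}} placeholder:
    append the full typed command after two newlines, or nothing at all."""
    if has_args:
        return prompt + "\n\n" + typed_command
    return prompt

print(resolve_prompt("Summarize the changes.", "/mycommand arg1", True))
```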
Example (changelog.toml):
This example shows how to create a robust command by defining a role for the model, explaining where to find the user’s input, and specifying the expected format and behavior.
# In: <project>/.gemini/commands/changelog.toml
# Invoked via: /changelog 1.2.0 added "Support for default argument parsing."

description = "Adds a new entry to the project's CHANGELOG.md file."
prompt = """
# Task: Update Changelog

You are an expert maintainer of this software project. A user has invoked a command to add a new entry to the changelog.

**The user's raw command is appended below your instructions.**

Your task is to parse the `<version>`, `<change_type>`, and `<message>` from their input and use the `write_file` tool to correctly update the `CHANGELOG.md` file.

## Expected Format

The command follows this format: `/changelog <version> <type> <message>`

- `<type>` must be one of: "added", "changed", "fixed", "removed".

## Behavior

1. Read the `CHANGELOG.md` file.
2. Find the section for the specified `<version>`.
3. Add the `<message>` under the correct `<type>` heading.
4. If the version or type section doesn't exist, create it.
5. Adhere strictly to the "Keep a Changelog" format.
"""
When you run `/changelog 1.2.0 added "New feature"`, the final text sent to the model will be the original prompt followed by two newlines and the command you typed.
3. Executing shell commands with `!{...}`
You can make your commands dynamic by executing shell commands directly within your prompt and injecting their output. This is ideal for gathering context from your local environment, like reading file content or checking the status of Git.
When a custom command attempts to execute a shell command, Gemini CLI will now prompt you for confirmation before proceeding. This is a security measure to ensure that only intended commands can be run.
How it works:
- Inject commands: Use the `!{...}` syntax.
- Argument substitution: If `{{args}}` is present inside the block, it is automatically shell-escaped (see Context-Aware Injection above).
- Robust parsing: The parser correctly handles complex shell commands that include nested braces, such as JSON payloads. The content inside `!{...}` must have balanced braces (`{` and `}`). If you need to execute a command containing unbalanced braces, consider wrapping it in an external script file and calling the script within the `!{...}` block.
- Security check and confirmation: The CLI performs a security check on the final, resolved command (after arguments are escaped and substituted). A dialog will appear showing the exact command(s) to be executed.
- Execution and error reporting: The command is executed. If the command fails, the output injected into the prompt will include the error messages (stderr) followed by a status line, for example, `[Shell command exited with code 1]`. This helps the model understand the context of the failure.
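The error-reporting behavior can be sketched like this. The helper is hypothetical; only the status-line format quoted above comes from the docs:

```python
import subprocess
import sys

def inject_shell_output(cmd: list[str]) -> str:
    """Run a command, capture its output, and append a status line on
    failure so the model can see why the command did not succeed."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    output = result.stdout + result.stderr
    if result.returncode != 0:
        output += f"[Shell command exited with code {result.returncode}]"
    return output

# A deliberately failing command: the injected text ends with the status line.
print(inject_shell_output([sys.executable, "-c", "import sys; sys.exit(1)"]))
```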
Example (git/commit.toml):
This command gets the staged git diff and uses it to ask the model to write a commit message.
# In: <project>/.gemini/commands/git/commit.toml
# Invoked via: /git:commit

description = "Generates a Git commit message based on staged changes."

# The prompt uses !{...} to execute the command and inject its output.
prompt = """
Please generate a Conventional Commit message based on the following git diff:

```diff
!{git diff --staged}
```
"""
When you run `/git:commit`, the CLI first executes `git diff --staged`, then replaces `!{git diff --staged}` with the output of that command before sending the final, complete prompt to the model.
4. Injecting file content with `@{...}`
You can directly embed the content of a file or a directory listing into your prompt using the `@{...}` syntax. This is useful for creating commands that operate on specific files.
How it works:
- File injection: `@{path/to/file.txt}` is replaced by the content of `file.txt`.
- Multimodal support: If the path points to a supported image (for example, PNG, JPEG), PDF, audio, or video file, it will be correctly encoded and injected as multimodal input. Other binary files are handled gracefully and skipped.
- Directory listing: `@{path/to/dir}` is traversed and each file present within the directory and all subdirectories is inserted into the prompt. This respects `.gitignore` and `.geminiignore` if enabled.
- Workspace-aware: The command searches for the path in the current directory and any other workspace directories. Absolute paths are allowed if they are within the workspace.
- Processing order: File content injection with `@{...}` is processed before shell commands (`!{...}`) and argument substitution (`{{args}}`).
- Parsing: The parser requires the content inside `@{...}` (the path) to have balanced braces (`{` and `}`).
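For intuition, a simplified `@{...}` substitution for plain-text files might look like this. The real CLI also handles directories, multimodal files, and ignore rules; here the regex deliberately rejects braces inside the path, mirroring the balanced-braces rule:

```python
import re
from pathlib import Path

def inject_files(prompt: str, read=lambda p: Path(p).read_text()) -> str:
    """Replace each @{path} placeholder with the file's content.
    The pattern [^{}]+ disallows braces inside the path."""
    return re.sub(r"@\{([^{}]+)\}", lambda m: read(m.group(1)), prompt)

# Demo with a fake reader so no real file is needed.
demo = inject_files("Guide:\n@{docs/best-practices.md}",
                    read=lambda p: f"<contents of {p}>")
print(demo)
```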
Example (review.toml):
This command injects the content of a fixed best practices file (`docs/best-practices.md`) and uses the user’s arguments to provide context for the review.
# In: <project>/.gemini/commands/review.toml
# Invoked via: /review FileCommandLoader.ts

description = "Reviews the provided context using a best practice guide."
prompt = """
You are an expert code reviewer.

Your task is to review {{args}}.

Use the following best practices when providing your review:

@{docs/best-practices.md}
"""
When you run /review FileCommandLoader.ts, the @{docs/best-practices.md}
placeholder is replaced by the content of that file, and {{args}} is replaced
by the text you provided, before the final prompt is sent to the model.
Example: A “Pure Function” refactoring command
Section titled “Example: A “Pure Function” refactoring command”Let’s create a global command that asks the model to refactor a piece of code.
1. Create the file and directories:
First, ensure the user commands directory exists, then create a refactor
subdirectory for organization and the final TOML file.
macOS/Linux
mkdir -p ~/.gemini/commands/refactor
touch ~/.gemini/commands/refactor/pure.toml
Windows (PowerShell)
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.gemini\commands\refactor"
New-Item -ItemType File -Force -Path "$env:USERPROFILE\.gemini\commands\refactor\pure.toml"
2. Add the content to the file:
Open `~/.gemini/commands/refactor/pure.toml` in your editor and add the following content. We are including the optional description for best practice.
# This command will be invoked via: /refactor:pure
description = "Asks the model to refactor the current context into a pure function."
prompt = """Please analyze the code I've provided in the current context.Refactor it into a pure function.
Your response should include:1. The refactored, pure function code block.2. A brief explanation of the key changes you made and why they contribute to purity."""
3. Run the command:
That’s it! You can now run your command in the CLI. First, you might add a file to the context, and then invoke your command:
> @my-messy-function.js
> /refactor:pure
Gemini CLI will then execute the multi-line prompt defined in your TOML file.
This document outlines configuration patterns and best practices for deploying and managing Gemini CLI in an enterprise environment. By leveraging system-level settings, administrators can enforce security policies, manage tool access, and ensure a consistent experience for all users.
Centralized configuration: The system settings file
The most powerful tools for enterprise administration are the system-wide settings files. These files allow you to define a baseline configuration (`system-defaults.json`) and a set of overrides (`settings.json`) that apply to all users on a machine. For a complete overview of configuration options, see the Configuration documentation.
Settings are merged from four files. The precedence order for single-value settings (like `theme`) is:
1. System Defaults (`system-defaults.json`)
2. User Settings (`~/.gemini/settings.json`)
3. Workspace Settings (`<project>/.gemini/settings.json`)
4. System Overrides (`settings.json`)
This means the System Overrides file has the final say. For settings that are arrays (`includeDirectories`) or objects (`mcpServers`), the values are merged.
Example of merging and precedence:
Here is how settings from different levels are combined.
- System defaults `system-defaults.json`:

  {
    "ui": { "theme": "default-corporate-theme" },
    "context": { "includeDirectories": ["/etc/gemini-cli/common-context"] }
  }

- User `settings.json` (`~/.gemini/settings.json`):

  {
    "ui": { "theme": "user-preferred-dark-theme" },
    "mcpServers": {
      "corp-server": { "command": "/usr/local/bin/corp-server-dev" },
      "user-tool": { "command": "npm start --prefix ~/tools/my-tool" }
    },
    "context": { "includeDirectories": ["~/gemini-context"] }
  }

- Workspace `settings.json` (`<project>/.gemini/settings.json`):

  {
    "ui": { "theme": "project-specific-light-theme" },
    "mcpServers": { "project-tool": { "command": "npm start" } },
    "context": { "includeDirectories": ["./project-context"] }
  }

- System overrides `settings.json`:

  {
    "ui": { "theme": "system-enforced-theme" },
    "mcpServers": { "corp-server": { "command": "/usr/local/bin/corp-server-prod" } },
    "context": { "includeDirectories": ["/etc/gemini-cli/global-context"] }
  }
This results in the following merged configuration:
- Final merged configuration:
{"ui": {"theme": "system-enforced-theme"},"mcpServers": {"corp-server": {"command": "/usr/local/bin/corp-server-prod"},"user-tool": {"command": "npm start --prefix ~/tools/my-tool"},"project-tool": {"command": "npm start"}},"context": {"includeDirectories": ["/etc/gemini-cli/common-context","~/gemini-context","./project-context","/etc/gemini-cli/global-context"]}}
Why:
- `theme`: The value from the system overrides (`system-enforced-theme`) is used, as it has the highest precedence.
- `mcpServers`: The objects are merged. The `corp-server` definition from the system overrides takes precedence over the user’s definition. The unique `user-tool` and `project-tool` are included.
- `includeDirectories`: The arrays are concatenated in the order of System Defaults, User, Workspace, and then System Overrides.
- Location:
  - Linux: `/etc/gemini-cli/settings.json`
  - Windows: `C:\ProgramData\gemini-cli\settings.json`
  - macOS: `/Library/Application Support/GeminiCli/settings.json`
  - The path can be overridden using the `GEMINI_CLI_SYSTEM_SETTINGS_PATH` environment variable.
- Control: This file should be managed by system administrators and protected with appropriate file permissions to prevent unauthorized modification by users.
By using the system settings file, you can enforce the security and configuration patterns described below.
Enforcing system settings with a wrapper script
While the `GEMINI_CLI_SYSTEM_SETTINGS_PATH` environment variable provides flexibility, a user could potentially override it to point to a different settings file, bypassing the centrally managed configuration. To mitigate this, enterprises can deploy a wrapper script or alias that ensures the environment variable is always set to the corporate-controlled path.
This approach ensures that no matter how the user calls the gemini command,
the enterprise settings are always loaded with the highest precedence.
Example wrapper script:
Administrators can create a script named `gemini` and place it in a directory that appears earlier in the user's `PATH` than the actual Gemini CLI binary (for example, `/usr/local/bin/gemini`).
```bash
#!/bin/bash

# Enforce the path to the corporate system settings file.
# This ensures that the company's configuration is always applied.
export GEMINI_CLI_SYSTEM_SETTINGS_PATH="/etc/gemini-cli/settings.json"

# Find the original gemini executable.
# This is a simple example; a more robust solution might be needed
# depending on the installation method.
REAL_GEMINI_PATH=$(type -aP gemini | grep -v "^$(type -P gemini)$" | head -n 1)

if [ -z "$REAL_GEMINI_PATH" ]; then
  echo "Error: The original 'gemini' executable was not found." >&2
  exit 1
fi

# Pass all arguments to the real Gemini CLI executable.
exec "$REAL_GEMINI_PATH" "$@"
```
By deploying this script, the `GEMINI_CLI_SYSTEM_SETTINGS_PATH` is set within the script's environment, and the `exec` command replaces the script process with the actual Gemini CLI process, which inherits the environment variable. This makes it significantly more difficult for a user to bypass the enforced settings.
PowerShell Profile (Windows alternative):
On Windows, administrators can achieve similar results by adding the environment variable to the system-wide or user-specific PowerShell profile:
```powershell
Add-Content -Path $PROFILE -Value '$env:GEMINI_CLI_SYSTEM_SETTINGS_PATH="C:\ProgramData\gemini-cli\settings.json"'
```
User isolation in shared environments
In shared compute environments (like ML experiment runners or shared build servers), you can isolate Gemini CLI state by overriding the user's home directory.
By default, Gemini CLI stores configuration and history in `~/.gemini`. You can use the `GEMINI_CLI_HOME` environment variable to point to a unique directory for a specific user or job. The CLI will create a `.gemini` folder inside the specified path.
macOS/Linux
```bash
# Isolate state for a specific job
export GEMINI_CLI_HOME="/tmp/gemini-job-123"
gemini
```
Windows (PowerShell)
```powershell
# Isolate state for a specific job
$env:GEMINI_CLI_HOME="C:\temp\gemini-job-123"
gemini
```
Restricting tool access
You can significantly enhance security by controlling which tools the Gemini model can use. This is achieved through the `tools.core` setting and the Policy Engine. For a list of available tools, see the Tools reference.
Allowlisting with coreTools
The most secure approach is to explicitly add the tools and commands that users are permitted to execute to an allowlist. This prevents the use of any tool not on the approved list.
Example: Allow only safe, read-only file operations and listing files.
```json
{
  "tools": {
    "core": ["ReadFileTool", "GlobTool", "ShellTool(ls)"]
  }
}
```
Blocklisting with excludeTools (Deprecated)
Deprecated: Use the Policy Engine for more robust control.
Alternatively, you can add specific tools that are considered dangerous in your environment to a blocklist.
Example: Prevent the use of the shell tool for removing files.
```json
{
  "tools": {
    "exclude": ["ShellTool(rm -rf)"]
  }
}
```
Disabling YOLO mode
To ensure that users cannot bypass the confirmation prompt for tool execution, you can disable YOLO mode at the policy level. This adds a critical layer of safety, as it prevents the model from executing tools without explicit user approval.
Example: Force all tool executions to require user confirmation.
```json
{
  "security": {
    "disableYoloMode": true
  }
}
```
This setting is highly recommended in an enterprise environment to prevent unintended tool execution.
Managing custom tools (MCP servers)
If your organization uses custom tools via Model Context Protocol (MCP) servers, it is crucial to understand how server configurations are managed to apply security policies effectively.
How MCP server configurations are merged
Gemini CLI loads `settings.json` files from three levels: System, Workspace, and User. When it comes to the `mcpServers` object, these configurations are merged:
- Merging: The lists of servers from all three levels are combined into a single list.
- Precedence: If a server with the same name is defined at multiple levels (for example, a server named `corp-api` exists in both system and user settings), the definition from the highest-precedence level is used. The order of precedence is: System > Workspace > User.
This means a user cannot override the definition of a server that is already defined in the system-level settings. However, they can add new servers with unique names.
Enforcing a catalog of tools
The security of your MCP tool ecosystem depends on a combination of defining the canonical servers and adding their names to an allowlist.
Restricting tools within an MCP server
For even greater security, especially when dealing with third-party MCP servers, you can restrict which specific tools from a server are exposed to the model. This is done using the `includeTools` and `excludeTools` properties within a server's definition. This lets you use a subset of tools from a server without allowing potentially dangerous ones.
Following the principle of least privilege, it is highly recommended to use `includeTools` to create an allowlist of only the necessary tools.
Example: Only allow the `code-search` and `get-ticket-details` tools from a third-party MCP server, even if the server offers other tools like `delete-ticket`.
```json
{
  "mcp": {
    "allowed": ["third-party-analyzer"]
  },
  "mcpServers": {
    "third-party-analyzer": {
      "command": "/usr/local/bin/start-3p-analyzer.sh",
      "includeTools": ["code-search", "get-ticket-details"]
    }
  }
}
```
More secure pattern: Define and add to allowlist in system settings
To create a secure, centrally-managed catalog of tools, the system administrator must do both of the following in the system-level `settings.json` file:

- Define the full configuration for every approved server in the `mcpServers` object. This ensures that even if a user defines a server with the same name, the secure system-level definition will take precedence.
- Add the names of those servers to an allowlist using the `mcp.allowed` setting. This is a critical security step that prevents users from running any servers that are not on this list. If this setting is omitted, the CLI will merge and allow any server defined by the user.
Example system `settings.json`:

- Add the names of all approved servers to an allowlist. This will prevent users from adding their own servers.
- Provide the canonical definition for each server on the allowlist.
```json
{
  "mcp": {
    "allowed": ["corp-data-api", "source-code-analyzer"]
  },
  "mcpServers": {
    "corp-data-api": {
      "command": "/usr/local/bin/start-corp-api.sh",
      "timeout": 5000
    },
    "source-code-analyzer": {
      "command": "/usr/local/bin/start-analyzer.sh"
    }
  }
}
```
This pattern is more secure because it uses both definition and an allowlist. Any server a user defines will either be overridden by the system definition (if it has the same name) or blocked because its name is not in the `mcp.allowed` list.
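The merge-then-filter rule above can be modeled with a short sketch. The helper below is hypothetical (not the CLI's actual code); it only illustrates why both the system definition and the allowlist matter:

```python
def effective_mcp_servers(system, workspace, user, allowed=None):
    """Merge mcpServers maps with System > Workspace > User precedence,
    then drop any server whose name is not on the mcp.allowed list."""
    merged = {}
    # Apply lower-precedence levels first so higher ones overwrite same names.
    for level in (user, workspace, system):
        merged.update(level)
    if allowed is None:
        return merged  # no allowlist: user-added servers survive
    return {name: cfg for name, cfg in merged.items() if name in allowed}

system = {"corp-data-api": {"command": "/usr/local/bin/start-corp-api.sh"}}
user = {
    "my-own-server": {"command": "npm start"},
    # Attempted shadowing of the corporate server; the system definition wins.
    "corp-data-api": {"command": "/tmp/unapproved.sh"},
}

safe = effective_mcp_servers(system, {}, user, allowed=["corp-data-api"])
```

With the allowlist present, `my-own-server` is filtered out and `corp-data-api` resolves to the system definition; without it, the user's extra server would survive the merge.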
Less secure pattern: Omitting the allowlist
If the administrator defines the `mcpServers` object but fails to also specify the `mcp.allowed` allowlist, users may add their own servers.
Example system settings.json:
This configuration defines servers but does not enforce the allowlist. The administrator has NOT included the `mcp.allowed` setting.
```json
{
  "mcpServers": {
    "corp-data-api": {
      "command": "/usr/local/bin/start-corp-api.sh"
    }
  }
}
```
In this scenario, a user can add their own server in their local `settings.json`. Because there is no `mcp.allowed` list to filter the merged results, the user's server will be added to the list of available tools and allowed to run.
Enforcing sandboxing for security
To mitigate the risk of potentially harmful operations, you can enforce the use of sandboxing for all tool execution. The sandbox isolates tool execution in a containerized environment.
Example: Force all tool execution to happen within a Docker sandbox.
```json
{
  "tools": {
    "sandbox": "docker"
  }
}
```
You can also specify a custom, hardened Docker image for the sandbox by building a custom `sandbox.Dockerfile` as described in the Sandboxing documentation.
Controlling network access via proxy
In corporate environments with strict network policies, you can configure Gemini CLI to route all outbound traffic through a corporate proxy. This can be set via an environment variable, but it can also be enforced for custom tools via the `mcpServers` configuration.
Example (for an MCP server):
```json
{
  "mcpServers": {
    "proxied-server": {
      "command": "node",
      "args": ["mcp_server.js"],
      "env": {
        "HTTP_PROXY": "http://proxy.example.com:8080",
        "HTTPS_PROXY": "http://proxy.example.com:8080"
      }
    }
  }
}
```
Telemetry and auditing
For auditing and monitoring purposes, you can configure Gemini CLI to send telemetry data to a central location. This lets you track tool usage and other events. For more information, see the telemetry documentation.
Example: Enable telemetry and send it to a local OTLP collector. If `otlpEndpoint` is not specified, it defaults to `http://localhost:4317`.
```json
{
  "telemetry": {
    "enabled": true,
    "target": "gcp",
    "logPrompts": false
  }
}
```
Authentication
You can enforce a specific authentication method for all users by setting the `security.auth.enforcedType` in the system-level `settings.json` file. This prevents users from choosing a different authentication method. See the Authentication docs for more details.
Example: Enforce the use of Google login for all users.
```json
{
  "security": {
    "auth": {
      "enforcedType": "oauth-personal"
    }
  }
}
```
If a user has a different authentication method configured, they will be prompted to switch to the enforced method. In non-interactive mode, the CLI will exit with an error if the configured authentication method does not match the enforced one.
Restricting logins to corporate domains
For enterprises using Google Workspace, you can enforce that users only authenticate with their corporate Google accounts. This is a network-level control that is configured on a proxy server, not within Gemini CLI itself. It works by intercepting authentication requests to Google and adding a special HTTP header.
This policy prevents users from logging in with personal Gmail accounts or other non-corporate Google accounts.
For detailed instructions, see the Google Workspace Admin Help article on blocking access to consumer accounts.
The general steps are as follows:
- Intercept Requests: Configure your web proxy to intercept all requests to `google.com`.
- Add HTTP Header: For each intercepted request, add the `X-GoogApps-Allowed-Domains` HTTP header.
- Specify Domains: The value of the header should be a comma-separated list of your approved Google Workspace domain names.
Example header:
```
X-GoogApps-Allowed-Domains: my-corporate-domain.com, secondary-domain.com
```
When this header is present, Google’s authentication service will only allow logins from accounts belonging to the specified domains.
Putting it all together: example system settings.json
Here is an example of a system `settings.json` file that combines several of the patterns discussed above to create a secure, controlled environment for Gemini CLI.
```json
{
  "tools": {
    "sandbox": "docker",
    "core": [
      "ReadFileTool",
      "GlobTool",
      "ShellTool(ls)",
      "ShellTool(cat)",
      "ShellTool(grep)"
    ]
  },
  "mcp": {
    "allowed": ["corp-tools"]
  },
  "mcpServers": {
    "corp-tools": {
      "command": "/opt/gemini-tools/start.sh",
      "timeout": 5000
    }
  },
  "telemetry": {
    "enabled": true,
    "target": "gcp",
    "otlpEndpoint": "https://telemetry-prod.example.com:4317",
    "logPrompts": false
  },
  "advanced": {
    "bugCommand": {
      "urlTemplate": "https://servicedesk.example.com/new-ticket?title={title}&details={info}"
    }
  },
  "privacy": {
    "usageStatisticsEnabled": false
  }
}
```
This configuration:
- Forces all tool execution into a Docker sandbox.
- Strictly uses an allowlist for a small set of safe shell commands and file tools.
- Defines and allows a single corporate MCP server for custom tools.
- Enables telemetry for auditing, without logging prompt content.
- Redirects the `/bug` command to an internal ticketing system.
- Disables general usage statistics collection.
This document provides an overview of the Gemini Ignore (`.geminiignore`) feature of Gemini CLI.
Gemini CLI includes the ability to automatically ignore files, similar to `.gitignore` (used by Git) and `.aiexclude` (used by Gemini Code Assist). Adding paths to your `.geminiignore` file will exclude them from tools that support this feature, although they will still be visible to other services (such as Git).
How it works
When you add a path to your `.geminiignore` file, tools that respect this file will exclude matching files and directories from their operations. For example, when you use the `@` command to share files, any paths in your `.geminiignore` file will be automatically excluded.
For the most part, `.geminiignore` follows the conventions of `.gitignore` files:

- Blank lines and lines starting with `#` are ignored.
- Standard glob patterns are supported (such as `*`, `?`, and `[]`).
- Putting a `/` at the end will only match directories.
- Putting a `/` at the beginning anchors the path relative to the `.geminiignore` file.
- `!` negates a pattern.
You can update your `.geminiignore` file at any time. To apply the changes, you must restart your Gemini CLI session.
How to use .geminiignore
To enable `.geminiignore`:

- Create a file named `.geminiignore` in the root of your project directory.

To add a file or directory to `.geminiignore`:

- Open your `.geminiignore` file.
- Add the path or file you want to ignore, for example: `/archive/` or `apikeys.txt`.
.geminiignore examples
You can use `.geminiignore` to ignore directories and files:

```
# Exclude your /packages/ directory and all subdirectories
/packages/

# Exclude your apikeys.txt file
apikeys.txt
```
You can use wildcards in your `.geminiignore` file with `*`:

```
# Exclude all .md files
*.md
```
Finally, you can exclude files and directories from exclusion with `!`:

```
# Exclude all .md files except README.md
*.md
!README.md
```
To remove paths from your `.geminiignore` file, delete the relevant lines.
Context files, which use the default name `GEMINI.md`, are a powerful feature for providing instructional context to the Gemini model. You can use these files to give project-specific instructions, define a persona, or provide coding style guides to make the AI's responses more accurate and tailored to your needs.
Instead of repeating instructions in every prompt, you can define them once in a context file.
Understand the context hierarchy
The CLI uses a hierarchical system to source context. It loads various context files from several locations, concatenates the contents of all found files, and sends them to the model with every prompt. The CLI loads files in the following order:
- Global context file:
  - Location: `~/.gemini/GEMINI.md` (in your user home directory).
  - Scope: Provides default instructions for all your projects.
- Environment and workspace context files:
  - Location: The CLI searches for `GEMINI.md` files in your configured workspace directories and their parent directories.
  - Scope: Provides context relevant to the projects you are currently working on.
- Just-in-time (JIT) context files:
  - Location: When a tool accesses a file or directory, the CLI automatically scans for `GEMINI.md` files in that directory and its ancestors up to a trusted root.
  - Scope: Lets the model discover highly specific instructions for particular components only when they are needed.
The CLI footer displays the number of loaded context files, which gives you a quick visual cue of the active instructional context.
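The hierarchy above can be approximated with a short sketch. `collect_context_files` and `build_context` are hypothetical helpers, not part of Gemini CLI; they only illustrate the walk-up-and-concatenate behavior:

```python
from pathlib import Path

def collect_context_files(start: Path, trusted_root: Path, name: str = "GEMINI.md"):
    """Walk from a directory up to the trusted root, collecting context
    files. Files closest to the root come first, so the most specific
    instructions are concatenated last."""
    found = []
    current = start if start.is_dir() else start.parent
    while True:
        candidate = current / name
        if candidate.is_file():
            found.append(candidate)
        if current == trusted_root or current == current.parent:
            break
        current = current.parent
    return list(reversed(found))  # root-level context first

def build_context(paths):
    """Concatenate all found files into one instructional context block."""
    return "\n\n".join(p.read_text() for p in paths)
```

Passing the directory a tool is working in, plus the project root, yields the ordered list of context files whose concatenation would be sent with the prompt.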
Example GEMINI.md file
Here is an example of what you can include in a `GEMINI.md` file at the root of a TypeScript project:
```markdown
# Project: My TypeScript Library

## General Instructions

- When you generate new TypeScript code, follow the existing coding style.
- Ensure all new functions and classes have JSDoc comments.
- Prefer functional programming paradigms where appropriate.

## Coding Style

- Use 2 spaces for indentation.
- Prefix interface names with `I` (for example, `IUserService`).
- Always use strict equality (`===` and `!==`).
```
Manage context with the /memory command
You can interact with the loaded context files by using the /memory command.
- `/memory show`: Displays the full, concatenated content of the current hierarchical memory. This lets you inspect the exact instructional context being provided to the model.
- `/memory reload`: Forces a re-scan and reload of all `GEMINI.md` files from all configured locations.
- `/memory add <text>`: Appends your text to your global `~/.gemini/GEMINI.md` file. This lets you add persistent memories on the fly.
Modularize context with imports
You can break down large `GEMINI.md` files into smaller, more manageable components by importing content from other files using the `@file.md` syntax. This feature supports both relative and absolute paths.
Example GEMINI.md with imports:
```markdown
# Main GEMINI.md file

This is the main content.

@./components/instructions.md

More content here.

@../shared/style-guide.md
```
For more details, see the Memory Import Processor documentation.
Customize the context file name
While `GEMINI.md` is the default filename, you can configure this in your `settings.json` file. To specify a different name or a list of names, use the `context.fileName` property.
Example settings.json:
```json
{
  "context": {
    "fileName": ["AGENTS.md", "CONTEXT.md", "GEMINI.md"]
  }
}
```
Next steps
- Learn about Ignoring files to exclude content from the context system.
- Explore the Memory tool to save persistent memories.
- See how to use Custom commands to automate common prompts.
This guide details the Model Configuration system within Gemini CLI. Designed for researchers, AI quality engineers, and advanced users, this system provides a rigorous framework for managing generative model hyperparameters and behaviors.
1. System Overview
The Model Configuration system (`ModelConfigService`) enables deterministic control over model generation. It decouples the requested model identifier (for example, a CLI flag or agent request) from the underlying API configuration. This allows for:
- Precise Hyperparameter Tuning: Direct control over `temperature`, `topP`, `thinkingBudget`, and other SDK-level parameters.
- Environment-Specific Behavior: Distinct configurations for different operating contexts (for example, testing vs. production).
- Agent-Scoped Customization: Applying specific settings only when a particular agent is active.
The system operates on two core primitives: Aliases and Overrides.
2. Configuration Primitives
These settings are located under the `modelConfigs` key in your configuration file.
Aliases (customAliases)
Aliases are named, reusable configuration presets. Users should define their own aliases (or override system defaults) in the `customAliases` map.
- Inheritance: An alias can extend another alias via `extends` (including system defaults like `chat-base`), inheriting its `modelConfig`. Child aliases can overwrite or augment inherited settings.
- Abstract Aliases: An alias is not required to specify a concrete `model` if it serves purely as a base for other aliases.
Example Hierarchy:
```json
"modelConfigs": {
  "customAliases": {
    "base": {
      "modelConfig": {
        "generateContentConfig": { "temperature": 0.0 }
      }
    },
    "chat-base": {
      "extends": "base",
      "modelConfig": {
        "generateContentConfig": { "temperature": 0.7 }
      }
    }
  }
}
```
Overrides (overrides)
Overrides are conditional rules that inject configuration based on the runtime context. They are evaluated dynamically for each model request.
- Match Criteria: Overrides apply when the request context matches the specified `match` properties.
  - `model`: Matches the requested model name or alias.
  - `overrideScope`: Matches the distinct scope of the request (typically the agent name, for example, `codebaseInvestigator`).
Example Override:
```json
"modelConfigs": {
  "overrides": [
    {
      "match": { "overrideScope": "codebaseInvestigator" },
      "modelConfig": {
        "generateContentConfig": { "temperature": 0.1 }
      }
    }
  ]
}
```
3. Resolution Strategy
The `ModelConfigService` resolves the final configuration through a two-step process:
Step 1: Alias Resolution
The requested model string is looked up in the merged map of system aliases and user `customAliases`.
- If found, the system recursively resolves the `extends` chain.
- Settings are merged from parent to child (child wins).
- This results in a base `ResolvedModelConfig`.
- If not found, the requested string is treated as the raw model name.
Step 2: Override Application
The system evaluates the `overrides` list against the request context (`model` and `overrideScope`).
- Filtering: All matching overrides are identified.
- Sorting: Matches are prioritized by specificity (the number of matched keys in the `match` object).
  - Specific matches (for example, `model` + `overrideScope`) override broad matches (for example, `model` only).
  - Tie-breaking: If specificity is equal, the order of definition in the `overrides` array is preserved (last one wins).
- Merging: The configurations from the sorted overrides are merged sequentially onto the base configuration.
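The two-step resolution can be modeled in a short sketch. The helper names and data shapes below are illustrative, not the actual `ModelConfigService` API:

```python
def deep_merge(base, extra):
    """Recursively merge dicts; values in `extra` win."""
    out = dict(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

def resolve_alias(name, aliases):
    """Step 1: follow the extends chain, merging parent -> child (child wins)."""
    alias = aliases.get(name)
    if alias is None:
        return {"model": name}  # not an alias: treat as raw model name
    parent = resolve_alias(alias["extends"], aliases) if "extends" in alias else {}
    return deep_merge(parent, alias.get("modelConfig", {}))

def apply_overrides(config, overrides, context):
    """Step 2: filter by match, sort by specificity, merge in order.
    The stable sort preserves definition order on ties (last one wins)."""
    matching = [o for o in overrides
                if all(context.get(k) == v for k, v in o["match"].items())]
    matching.sort(key=lambda o: len(o["match"]))  # more specific merged last
    for override in matching:
        config = deep_merge(config, override["modelConfig"])
    return config
```

Resolving `chat-base` from the alias example above and then applying the `codebaseInvestigator` override would yield a configuration whose temperature is the override's value.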
4. Configuration Reference
The configuration follows the `ModelConfigServiceConfig` interface.
ModelConfig Object
Defines the actual parameters for the model.
| Property | Type | Description |
|---|---|---|
| `model` | string | The identifier of the model to be called (for example, `gemini-2.5-pro`). |
| `generateContentConfig` | object | The configuration object passed to the `@google/genai` SDK. |
GenerateContentConfig (Common Parameters)
Directly maps to the SDK's `GenerateContentConfig`. Common parameters include:
- `temperature`: (number) Controls output randomness. Lower values (0.0) are deterministic; higher values (>0.7) are creative.
- `topP`: (number) Nucleus sampling probability.
- `maxOutputTokens`: (number) Limit on generated response length.
- `thinkingConfig`: (object) Configuration for models with reasoning capabilities (for example, `thinkingBudget`, `includeThoughts`).
5. Practical Examples
Defining a Deterministic Baseline
Create an alias for tasks requiring high precision, extending the standard chat configuration but enforcing zero temperature.
```json
"modelConfigs": {
  "customAliases": {
    "precise-mode": {
      "extends": "chat-base",
      "modelConfig": {
        "generateContentConfig": {
          "temperature": 0.0,
          "topP": 1.0
        }
      }
    }
  }
}
```
Agent-Specific Parameter Injection
Enforce extended thinking budgets for a specific agent without altering the global default, for example for the `codebaseInvestigator`.
```json
"modelConfigs": {
  "overrides": [
    {
      "match": { "overrideScope": "codebaseInvestigator" },
      "modelConfig": {
        "generateContentConfig": {
          "thinkingConfig": { "thinkingBudget": 4096 }
        }
      }
    }
  ]
}
```
Experimental Model Evaluation
Route traffic for a specific alias to a preview model for A/B testing, without changing client code.
```json
"modelConfigs": {
  "overrides": [
    {
      "match": { "model": "gemini-2.5-pro" },
      "modelConfig": { "model": "gemini-2.5-pro-experimental-001" }
    }
  ]
}
```
When working on multiple tasks at once, you can use Git worktrees to give each Gemini session its own copy of the codebase. Git worktrees create separate working directories that each have their own files and branch while sharing the same repository history. This prevents changes in one session from colliding with another.
Learn more about session management.
Learn more in the official Git worktree documentation.
How to enable Git worktrees
Git worktrees are an experimental feature. You must enable them in your settings using the `/settings` command or by manually editing your `settings.json` file.
- Use the `/settings` command.
- Search for and set Enable Git Worktrees to `true`.
Alternatively, add the following to your settings.json:
```json
{
  "experimental": {
    "worktrees": true
  }
}
```
How to use Git worktrees
Section titled “How to use Git worktrees”Use the --worktree (-w) flag
to create an isolated worktree and start Gemini
CLI in it.
- Start with a specific name: The value you pass becomes both the directory name (within `.gemini/worktrees/`) and the branch name.

  ```bash
  gemini --worktree feature-search
  ```
- Start with a random name: If you omit the name, Gemini generates a random one automatically (for example, `worktree-a1b2c3d4`).

  ```bash
  gemini --worktree
  ```
How to exit a Git worktree session
When you exit a worktree session (using `/quit` or `Ctrl+C`), Gemini leaves the worktree intact so your work is not lost. This includes your uncommitted changes (modified files, staged changes, or untracked files) and any new commits you have made.
Gemini prioritizes a fast and safe exit: it does not automatically delete your worktree or branch. You are responsible for cleaning up your worktrees manually once you are finished with them.
When you exit, Gemini displays instructions on how to resume your work or how to manually remove the worktree if you no longer need it.
Resuming work in a Git worktree
To resume a session in a worktree, navigate to the worktree directory and start Gemini CLI with the `--resume` flag and the session ID:
```bash
cd .gemini/worktrees/feature-search
gemini --resume <session_id>
```
Managing Git worktrees manually
For more control over worktree location and branch configuration, or to clean up a preserved worktree, you can use Git directly:
- Clean up a preserved Git worktree:

  ```bash
  git worktree remove .gemini/worktrees/feature-search --force
  git branch -D worktree-feature-search
  ```
- Create a Git worktree manually:

  ```bash
  git worktree add ../project-feature-search -b feature-search
  cd ../project-feature-search && gemini
  ```
Headless mode provides a programmatic interface to Gemini CLI, returning structured text or JSON output without an interactive terminal UI.
Technical reference
Headless mode is triggered when the CLI is run in a non-TTY environment or when providing a query with the `-p` (or `--prompt`) flag.
Output formats
You can specify the output format using the `--output-format` flag.
JSON output
Returns a single JSON object containing the response and usage statistics.
- Schema:
  - `response`: (string) The model's final answer.
  - `stats`: (object) Token usage and API latency metrics.
  - `error`: (object, optional) Error details if the request failed.
Streaming JSON output
Returns a stream of newline-delimited JSON (JSONL) events.
- Event types:
  - `init`: Session metadata (session ID, model).
  - `message`: User and assistant message chunks.
  - `tool_use`: Tool call requests with arguments.
  - `tool_result`: Output from executed tools.
  - `error`: Non-fatal warnings and system errors.
  - `result`: Final outcome with aggregated statistics and per-model token usage breakdowns.
Exit codes
The CLI returns standard exit codes to indicate the result of the headless execution:
- `0`: Success.
- `1`: General error or API failure.
- `42`: Input error (invalid prompt or arguments).
- `53`: Turn limit exceeded.
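A minimal wrapper for scripting against headless mode, using the documented `-p` and `--output-format json` flags and the exit codes above; the helper functions themselves are illustrative, not part of the CLI:

```python
import json
import subprocess

# Exit codes documented for headless mode.
EXIT_CODES = {
    0: "success",
    1: "general error or API failure",
    42: "input error (invalid prompt or arguments)",
    53: "turn limit exceeded",
}

def interpret_exit(code: int) -> str:
    """Map a headless-mode exit code to its documented meaning."""
    return EXIT_CODES.get(code, f"unknown exit code {code}")

def run_headless(prompt: str) -> dict:
    """Run Gemini CLI headlessly and parse its JSON output.
    Raises RuntimeError with the documented meaning on failure."""
    proc = subprocess.run(
        ["gemini", "-p", prompt, "--output-format", "json"],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        raise RuntimeError(f"gemini failed: {interpret_exit(proc.returncode)}")
    return json.loads(proc.stdout)  # contains "response" and "stats"

# Example (requires gemini on PATH):
#   result = run_headless("Summarize the README in one sentence.")
#   print(result["response"])
```

Branching on `returncode` rather than scraping stderr makes retries and error reporting deterministic in CI scripts.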
Next steps
- Follow the Automation tutorial for practical scripting examples.
- See the CLI reference for all available flags.
Gemini CLI includes a model routing feature that automatically switches to a fallback model in case of a model failure. This feature is enabled by default and provides resilience when the primary model is unavailable.
How it works
Model routing is managed by the `ModelAvailabilityService`, which monitors model health and automatically routes requests to available models based on defined policies.
- Model failure: If the currently selected model fails (for example, due to quota or server errors), the CLI will initiate the fallback process.
- User consent: Depending on the failure and the model's policy, the CLI may prompt you to switch to a fallback model (by default, it always prompts you). Some internal utility calls (such as prompt completion and classification) use a silent fallback chain for `gemini-2.5-flash-lite` and will fall back to `gemini-2.5-flash` and `gemini-2.5-pro` without prompting or changing the configured model.
- Model switch: If approved, or if the policy allows for silent fallback, the CLI will use an available fallback model for the current turn or the remainder of the session.
Local Model Routing (Experimental)
Gemini CLI supports using a local model for routing decisions. When configured, Gemini CLI uses a locally running Gemma model to make routing decisions instead of sending them to a hosted model. This feature can help reduce costs associated with hosted model usage while offering similar routing latency and quality.
To use this feature, the local Gemma model must be served behind a Gemini API and accessible via HTTP at an endpoint configured in `settings.json`. For more details on how to configure local model routing, see Local Model Routing.
Model selection precedence
The model used by Gemini CLI is determined by the following order of precedence:
--modelcommand-line flag: A model specified with the--modelflag when launching the CLI will always be used.GEMINI_MODELenvironment variable: If the--modelflag is not used, the CLI will use the model specified in theGEMINI_MODELenvironment variable.model.nameinsettings.json: If neither of the above are set, the model specified in themodel.nameproperty of yoursettings.jsonfile will be used.- Local model (experimental): If the Gemma local model
router is enabled
in your
settings.jsonfile, the CLI will use the local Gemma model (instead of Gemini models) to route the request to an appropriate model. - Default model: If none of the above are set, the
default model will be
used. The default model is
auto
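For example, to pin the model at the `settings.json` level of this precedence chain (the model name shown is just one of the options listed later on this page):

```json
{
  "model": {
    "name": "gemini-2.5-flash"
  }
}
```

A `--model` flag or `GEMINI_MODEL` environment variable would still override this value.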
Model steering lets you provide real-time guidance and feedback to Gemini CLI while it is actively executing a task. This lets you correct course, add missing context, or skip unnecessary steps without having to stop and restart the agent.
Model steering is particularly useful during complex Plan Mode workflows or long-running subagent executions where you want to ensure the agent stays on the right track.
Enabling model steering
Model steering is an experimental feature and is disabled by default. You can enable it using the `/settings` command or by updating your `settings.json` file.
- Type `/settings` in Gemini CLI.
- Search for Model Steering.
- Set the value to true.
Alternatively, add the following to your settings.json:
```json
{
  "experimental": {
    "modelSteering": true
  }
}
```
Using model steering
When model steering is enabled, Gemini CLI treats any text you type while the agent is working as a steering hint.
- Start a task (for example, “Refactor the database service”).
- While the agent is working (the spinner is visible), type your feedback in the input box.
- Press Enter.
Gemini CLI acknowledges your hint with a brief message and injects it directly into the model’s context for the very next turn. The model then re-evaluates its current plan and adjusts its actions accordingly.
Common use cases
You can use steering hints to guide the model in several ways:
- Correcting a path: “Actually, the utilities are in `src/common/utils`.”
- Skipping a step: “Skip the unit tests for now and just focus on the implementation.”
- Adding context: “The `User` type is defined in `packages/core/types.ts`.”
- Redirecting the effort: “Stop searching the codebase and start drafting the plan now.”
- Handling ambiguity: “Use the existing `Logger` class instead of creating a new one.”
How it works
When you submit a steering hint, Gemini CLI performs the following actions:
- Immediate acknowledgment: It uses a small, fast model to generate a one-sentence acknowledgment so you know your hint was received.
- Context injection: It prepends an internal instruction
to your hint that
tells the main agent to:
- Re-evaluate the active plan.
- Classify the update (for example, as a new task or extra context).
- Apply minimal-diff changes to affected tasks.
- Real-time update: The hint is delivered to the agent at the beginning of its next turn, ensuring the most immediate course correction possible.
Next steps
- Tackle complex tasks with Plan Mode.
- Build custom Agent Skills.
Select your Gemini CLI model. The `/model` command lets you configure the model used by Gemini CLI, giving you more control over your results. Use Pro models for complex tasks and reasoning, Flash models for high-speed results, or the (recommended) Auto setting to choose the best model for your tasks.
How to use the /model command
Use the following command in Gemini CLI:
/model
Running this command will open a dialog with your options:
| Option | Description | Models |
|---|---|---|
| Auto (Gemini 3) | Let the system choose the best Gemini 3 model for your task. | gemini-3-pro-preview, gemini-3-flash-preview |
| Auto (Gemini 2.5) | Let the system choose the best Gemini 2.5 model for your task. | gemini-2.5-pro, gemini-2.5-flash |
| Manual | Select a specific model. | Any available model. |
We recommend selecting one of the above Auto options. However, you can select Manual to select a specific model from those available.
You can also use the --model flag to specify a
particular Gemini model on
startup. For more details, refer to the
configuration documentation.
Changes to these settings will be applied to all subsequent interactions with Gemini CLI.
Best practices for model selection
- Default to Auto. For most users, the Auto option provides a balance between speed and performance, automatically selecting the correct model based on the complexity of the task. Example: Developing a web application could include a mix of complex tasks (building architecture and scaffolding the project) and simple tasks (generating CSS).
- Switch to Pro if you aren’t getting the results you want. If you think you need your model to be a little “smarter,” you can manually select Pro. Pro provides the highest levels of reasoning and creativity. Example: A complex or multi-stage debugging task.
- Switch to Flash or Flash-Lite if you need faster results. If you need a simple response quickly, Flash or Flash-Lite is the best option. Example: Converting a JSON object to a YAML string.
Gemini CLI can send system notifications to alert you when a session completes or when it needs your attention, such as when it’s waiting for you to approve a tool call.
Notifications are particularly useful when running long-running tasks or using Plan Mode, letting you switch to other windows while Gemini CLI works in the background.
Requirements
Terminal support
The CLI uses the OSC 9 terminal escape sequence to trigger system notifications. This is supported by several modern terminal emulators, including iTerm2, WezTerm, Ghostty, and Kitty. If your terminal does not support OSC 9 notifications, Gemini CLI falls back to a terminal bell (BEL) to get your attention. Most terminals respond to BEL with a taskbar flash or system alert sound.
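You can check your terminal’s OSC 9 support directly by emitting the escape sequence yourself (a minimal sketch; the message text is arbitrary):

```shell
# Send an OSC 9 notification: ESC ] 9 ; <message> BEL
# If your terminal supports OSC 9, a system notification appears;
# otherwise, nothing visible happens.
printf '\033]9;%s\007' "Gemini CLI: action required"
```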
Enable notifications
Notifications are disabled by default. You can enable them using the `/settings` command or by updating your `settings.json` file.

- Open the settings dialog by typing `/settings` in an interactive session.
- Navigate to the General category.
- Toggle the Enable Notifications setting to On.
Alternatively, add the following to your settings.json:
```json
{
  "general": {
    "enableNotifications": true
  }
}
```
Types of notifications
Gemini CLI sends notifications for the following events:
- Action required: Triggered when the model is waiting for user input or tool approval. This helps you know when the CLI has paused and needs you to intervene.
- Session complete: Triggered when a session finishes successfully. This is useful for tracking the completion of automated tasks.
Plan Mode is a read-only environment for architecting robust solutions before implementation. With Plan Mode, you can:
- Research: Explore the project in a read-only state to prevent accidental changes.
- Design: Understand problems, evaluate trade-offs, and choose a solution.
- Plan: Align on an execution strategy before any code is modified.
Plan Mode is enabled by default. You can manage this setting using the `/settings` command.
How to enter Plan Mode
Plan Mode integrates seamlessly into your workflow, letting you switch between planning and execution as needed.
You can either configure Gemini CLI to start in Plan Mode by default or enter Plan Mode manually during a session.
Launch in Plan Mode
To start Gemini CLI directly in Plan Mode by default:

- Use the `/settings` command.
- Set Default Approval Mode to `Plan`.

To launch Gemini CLI in Plan Mode once:

- Use `gemini --approval-mode=plan` when launching Gemini CLI.
Enter Plan Mode manually
To start Plan Mode while using Gemini CLI:

- Keyboard shortcut: Press `Shift+Tab` to cycle through approval modes (Default -> Auto-Edit -> Plan). Plan Mode is automatically removed from the rotation when Gemini CLI is actively processing or showing confirmation dialogs.
- Command: Type `/plan [goal]` in the input box. The `[goal]` is optional; for example, `/plan implement authentication` switches to Plan Mode and immediately submits the prompt to the model.
- Natural language: Ask Gemini CLI to “start a plan for…”. Gemini CLI calls the `enter_plan_mode` tool to switch modes. This tool is not available when Gemini CLI is in YOLO mode.
How to use Plan Mode
Plan Mode lets you collaborate with Gemini CLI to design a solution before Gemini CLI takes action.
- Provide a goal: Start by describing what you want to achieve. Gemini CLI then enters Plan Mode (if it’s not already) to research the task.
- Discuss and agree on strategy: As Gemini CLI analyzes your codebase, it discusses its findings and proposed strategy with you to ensure alignment. It may ask you questions or present different implementation options using `ask_user`. Gemini CLI stops and waits for your confirmation before drafting the formal plan. You should reach an informal agreement on the approach before proceeding.
- Review the plan: Once you’ve agreed on the strategy, Gemini CLI creates a detailed implementation plan as a Markdown file in your plans directory.
  - View: You can open and read this file to understand the proposed changes.
  - Edit: Press `Ctrl+X` to open the plan directly in your configured external editor.
- Approve or iterate: Gemini CLI presents the finalized plan for your formal approval.
  - Approve: If you’re satisfied with the plan, approve it to start the implementation immediately: Yes, automatically accept edits or Yes, manually accept edits.
  - Iterate: If the plan needs adjustments, provide feedback in the input box or edit the plan file directly. Gemini CLI refines the strategy and updates the plan.
  - Cancel: You can cancel your plan with `Esc`.
For more complex or specialized planning tasks, you can customize the planning workflow with skills.
Collaborative plan editing
You can collaborate with Gemini CLI by making direct changes or leaving comments in the implementation plan. This is often faster and more precise than describing complex changes in natural language.
- Open the plan: Press `Ctrl+X` when Gemini CLI presents a plan for review.
- Edit or comment: The plan opens in your configured external editor (for example, VS Code or Vim). You can:
  - Modify steps: Directly reorder, delete, or rewrite implementation steps.
  - Leave comments: Add inline questions or feedback (for example, “Wait, shouldn’t we use the existing `Logger` class here?”).
- Save and close: Save your changes and close the editor.
- Review and refine: Gemini CLI automatically detects the changes, reviews your comments, and adjusts the implementation strategy. It then presents the refined plan for your final approval.
How to exit Plan Mode
You can exit Plan Mode at any time, whether you have finalized a plan or want to switch back to another mode.

- Approve a plan: When Gemini CLI presents a finalized plan, approving it automatically exits Plan Mode and starts the implementation.
- Keyboard shortcut: Press `Shift+Tab` to cycle to the desired mode.
- Natural language: Ask Gemini CLI to “exit plan mode” or “stop planning.”
Tool Restrictions
Plan Mode enforces strict safety policies to prevent accidental changes. These are the only allowed tools:

- FileSystem (Read): `read_file`, `list_directory`, `glob`
- Search: `grep_search`, `google_web_search`, `web_fetch` (requires explicit confirmation), `get_internal_docs`
- Research Subagents: `codebase_investigator`, `cli_help`
- Interaction: `ask_user`
- MCP tools (Read): Read-only MCP tools (for example, `github_read_issue`, `postgres_read_schema`) are allowed.
- Planning (Write): `write_file` and `replace`, only allowed for `.md` files in the `~/.gemini/tmp/<project>/<session-id>/plans/` directory or your custom plans directory.
- Memory: `save_memory`
- Skills: `activate_skill` (loads specialized instructions and resources in a read-only manner)
Customization and best practices
Plan Mode is secure by default, but you can adapt it to fit your specific workflows. You can customize how Gemini CLI plans by using skills, adjusting safety policies, changing where plans are stored, or adding hooks.
Custom planning with skills
You can use Agent Skills to customize how Gemini CLI approaches planning for specific types of tasks. When a skill is activated during Plan Mode, its specialized instructions and procedural workflows guide the research, design, and planning phases.
For example:
- A “Database Migration” skill could ensure the plan includes data safety checks and rollback strategies.
- A “Security Audit” skill could prompt Gemini CLI to look for specific vulnerabilities during codebase exploration.
- A “Frontend Design” skill could guide Gemini CLI to use specific UI components and accessibility standards in its proposal.
To use a skill in Plan Mode, you can explicitly ask Gemini CLI to “use the `<skill-name>` skill to plan…”, or Gemini CLI may autonomously activate it based on the task description.
Custom policies
Plan Mode’s default tool restrictions are managed by the policy engine and defined in the built-in `plan.toml` file. The built-in policy (Tier 1) enforces the read-only state, but you can customize these rules by creating your own policies in your `~/.gemini/policies/` directory (Tier 2).
Global vs. mode-specific rules
Section titled “Global vs. mode-specific rules”As described in the
policy engine
documentation, any
rule that does not explicitly specify modes is
considered “always active” and
will apply to Plan Mode as well.
To maintain the integrity of Plan Mode as a safe research environment, persistent tool approvals are context-aware. Approvals granted in modes like Default or Auto-Edit do not apply to Plan Mode, ensuring that tools trusted for implementation don’t automatically execute while you’re researching. However, approvals granted while in Plan Mode are treated as intentional choices for global trust and apply to all modes.
If you want to restrict a rule to other modes but not to Plan Mode, you must explicitly specify the target modes. For example, to allow `npm test` in Default and Auto-Edit modes but not in Plan Mode:
```toml
[[rule]]
toolName = "run_shell_command"
commandPrefix = "npm test"
decision = "allow"
priority = 100
# By omitting "plan", this rule will not be active in Plan Mode.
modes = ["default", "autoEdit"]
```
Example: Automatically approve read-only MCP tools
By default, read-only MCP tools require user confirmation in Plan Mode. You can use `toolAnnotations` and the `mcpName` wildcard to customize this behavior for your specific environment.
~/.gemini/policies/mcp-read-only.toml
```toml
[[rule]]
toolName = "*"
mcpName = "*"
toolAnnotations = { readOnlyHint = true }
decision = "allow"
priority = 100
modes = ["plan"]
```
For more information on how the policy engine works, see the policy engine docs.
Example: Allow git commands in Plan Mode
This rule lets you check the repository status and see changes while in Plan Mode.
~/.gemini/policies/git-research.toml
```toml
[[rule]]
toolName = "run_shell_command"
commandPrefix = ["git status", "git diff"]
decision = "allow"
priority = 100
modes = ["plan"]
```
Example: Enable custom subagents in Plan Mode
Built-in research subagents like `codebase_investigator` and `cli_help` are enabled by default in Plan Mode. You can enable additional custom subagents by adding a rule to your policy.
~/.gemini/policies/research-subagents.toml
```toml
[[rule]]
toolName = "my_custom_subagent"
decision = "allow"
priority = 100
modes = ["plan"]
```
Tell Gemini CLI it can use these tools in your prompt, for example: “You can check ongoing changes in git.”
Custom plan directory and policies
By default, planning artifacts are stored in a managed temporary directory outside your project: `~/.gemini/tmp/<project>/<session-id>/plans/`. You can configure a custom directory for plans in your `settings.json`. For example, to store plans in a `.gemini/plans` directory within your project:
```json
{
  "general": {
    "plan": {
      "directory": ".gemini/plans"
    }
  }
}
```
To maintain the safety of Plan Mode, user-configured paths for the plans directory are restricted to the project root. This ensures that custom planning locations defined within a project’s workspace cannot be used to escape and overwrite sensitive files elsewhere. Any user-configured directory must reside within the project boundary.
Using a custom directory requires updating your policy engine configuration to allow `write_file` and `replace` in that specific location. For example, to allow writing to the `.gemini/plans` directory within your project, create a policy file at `~/.gemini/policies/plan-custom-directory.toml`:
```toml
[[rule]]
toolName = ["write_file", "replace"]
decision = "allow"
priority = 100
modes = ["plan"]
# Adjust the pattern to match your custom directory.
# This example matches any .md file in a .gemini/plans directory within the project.
argsPattern = "\"file_path\":\"[^\"]+[\\\\/]+\\.gemini[\\\\/]+plans[\\\\/]+[\\w-]+\\.md\""
```
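To sanity-check an `argsPattern` before relying on it, you can test the unescaped regular expression against a sample tool-arguments string (a sketch assuming GNU grep; the file path shown is hypothetical):

```shell
# The regex as it reads after TOML string unescaping
pattern='"file_path":"[^"]+[\\/]+\.gemini[\\/]+plans[\\/]+[\w-]+\.md"'

# Hypothetical write_file arguments for a plan inside .gemini/plans
args='{"file_path":"/home/user/project/.gemini/plans/refactor-auth.md"}'

# Prints "matched" if the rule would apply to these arguments
echo "$args" | grep -qE "$pattern" && echo "matched"
```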
Using hooks with Plan Mode
You can use the hook system to automate parts of the planning workflow or enforce additional checks when Gemini CLI transitions into or out of Plan Mode.
Hooks such as `BeforeTool` or `AfterTool` can be configured to intercept the `enter_plan_mode` and `exit_plan_mode` tool calls.
Example: Archive approved plans to GCS (AfterTool)
If your organizational policy requires a record of all execution plans, you can use an `AfterTool` hook to securely copy the plan artifact to Google Cloud Storage whenever Gemini CLI exits Plan Mode to start the implementation.
.gemini/hooks/archive-plan.sh:
```bash
#!/usr/bin/env bash
# Extract the plan path from the tool input JSON
plan_path=$(jq -r '.tool_input.plan_path // empty')

if [ -f "$plan_path" ]; then
  # Generate a unique filename using a timestamp
  filename="$(date +%s)_$(basename "$plan_path")"

  # Upload the plan to GCS in the background so it doesn't block the CLI
  gsutil cp "$plan_path" "gs://my-audit-bucket/gemini-plans/$filename" > /dev/null 2>&1 &
fi

# AfterTool hooks should generally allow the flow to continue
echo '{"decision": "allow"}'
```
To register this AfterTool hook, add it to your settings.json:
```json
{
  "hooks": {
    "AfterTool": [
      {
        "matcher": "exit_plan_mode",
        "hooks": [
          {
            "name": "archive-plan",
            "type": "command",
            "command": "./.gemini/hooks/archive-plan.sh"
          }
        ]
      }
    ]
  }
}
```
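You can sanity-check the path-extraction step of the hook locally by piping a sample `AfterTool` payload into `jq` (assuming `jq` is installed; the payload shape and plan path here are illustrative):

```shell
# A sample tool-output payload like the one the hook reads on stdin
payload='{"tool_input": {"plan_path": "/tmp/example-plan.md"}}'

# The same extraction the hook script performs
plan_path=$(echo "$payload" | jq -r '.tool_input.plan_path // empty')
echo "$plan_path"
```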
Commands
- `/plan copy`: Copy the currently approved plan to your clipboard.
Planning workflows
Plan Mode provides building blocks for structured research and design. These are implemented as extensions using core planning tools like `enter_plan_mode`, `exit_plan_mode`, and `ask_user`.
Built-in planning workflow
The built-in planner uses an adaptive workflow to analyze your project, consult you on trade-offs via `ask_user`, and draft a plan for your approval.
Custom planning workflows
You can install or create specialized planners to suit your workflow.
Conductor
Conductor is designed for spec-driven development. It organizes work into “tracks” and stores persistent artifacts in your project’s `conductor/` directory:

- Automate transitions: Switches to read-only mode via `enter_plan_mode`.
- Streamline decisions: Uses `ask_user` for architectural choices.
- Maintain project context: Stores artifacts in the project directory using custom plan directories and policies.
- Handoff execution: Transitions to implementation via `exit_plan_mode`.
Build your own
Since Plan Mode is built on modular building blocks, you can develop your own custom planning workflow as an extension. By leveraging core tools and custom policies, you can define how Gemini CLI researches and stores plans for your specific domain.
To build a custom planning workflow, you can use:
- Tool usage: Use core tools like `enter_plan_mode`, `ask_user`, and `exit_plan_mode` to manage the research and design process.
- Customization: Set your own storage locations and policy rules using custom plan directories and custom policies.
By using Plan Mode as its execution environment, your custom methodology can enforce read-only safety during the design phase while benefiting from high-reasoning model routing.
Automatic Model Routing
When using an auto model, Gemini CLI automatically optimizes model routing based on the current phase of your task:
- Planning Phase: While in Plan Mode, the CLI routes requests to a high-reasoning Pro model to ensure robust architectural decisions and high-quality plans.
- Implementation Phase: Once a plan is approved and you exit Plan Mode, the CLI detects the existence of the approved plan and automatically switches to a high-speed Flash model. This provides a faster, more responsive experience during the implementation of the plan.
This behavior is enabled by default to provide the best balance of quality and performance. You can disable this automatic switching in your settings:
```json
{
  "general": {
    "plan": {
      "modelRouting": false
    }
  }
}
```
Cleanup
By default, Gemini CLI automatically cleans up old session data, including all associated plan files and task trackers.
- Default behavior: Sessions (and their plans) are retained for 30 days.
- Configuration: You can customize this behavior via the `/settings` command (search for Session Retention) or in your `settings.json` file. See session retention for more details.
Manual deletion also removes all associated artifacts:
- Command Line: Use `gemini --delete-session <index|id>`.
- Session Browser: Type `/resume`, navigate to a session, and press `x`.
If you use a custom plans directory, those files are not automatically deleted and must be managed manually.
Non-interactive execution
When running Gemini CLI in non-interactive environments (such as headless scripts or CI/CD pipelines), Plan Mode optimizes for automated workflows:
- Automatic transitions: The policy engine automatically approves the `enter_plan_mode` and `exit_plan_mode` tools without prompting for user confirmation.
- Automated implementation: When exiting Plan Mode to execute the plan, Gemini CLI automatically switches to YOLO mode instead of the standard Default mode. This allows the CLI to execute the implementation steps automatically without hanging on interactive tool approvals.
Example:
```shell
gemini --approval-mode plan -p "Analyze telemetry and suggest improvements"
```
The `/rewind` command lets you go back to a previous state in your conversation and, optionally, revert any file changes made by the AI during those interactions. This is a powerful tool for undoing mistakes, exploring different approaches, or simply cleaning up your session history.
To use the rewind feature, type `/rewind` into the input prompt and press Enter.
Alternatively, you can use the keyboard shortcut: Press Esc twice.
Interface
When you trigger a rewind, an interactive list of your previous interactions appears.
- Select interaction: Use the Up/Down arrow keys to navigate through the list. The most recent interactions are at the bottom.
- Preview: As you select an interaction, you’ll see a preview of the user prompt and, if applicable, the number of files changed during that step.
- Confirm selection: Press Enter on the interaction you want to rewind back to.
- Action selection: After selecting an interaction, you’ll be presented with a confirmation dialog with up to three options:
  - Rewind conversation and revert code changes: Reverts both the chat history and the file modifications to the state before the selected interaction.
  - Rewind conversation: Only reverts the chat history. File changes are kept.
  - Revert code changes: Only reverts the file modifications. The chat history is kept.
  - Do nothing (esc): Cancels the rewind operation.
If no code changes were made since the selected point, the options related to reverting code changes will be hidden.
Key considerations
- Destructive action: Rewinding is a destructive action for your current session history and potentially your files. Use it with care.
- Agent awareness: When you rewind the conversation, the AI model loses all memory of the interactions that were removed. If you only revert code changes, you may need to inform the model that the files have changed.
- Manual edits: Rewinding only affects file changes made by the AI’s edit tools. It does not undo manual edits you’ve made or changes triggered by the shell tool (`!`).
- Compression: Rewind works across chat compression points by reconstructing the history from stored session data.
This document provides a guide to sandboxing in Gemini CLI, including prerequisites, quickstart, and configuration.
Prerequisites
Before using sandboxing, you need to install and set up Gemini CLI:
```shell
npm install -g @google/gemini-cli
```
To verify the installation:
```shell
gemini --version
```
Overview of sandboxing
Sandboxing isolates potentially dangerous operations (such as shell commands or file modifications) from your host system, providing a security barrier between AI operations and your environment.
The benefits of sandboxing include:
- Security: Prevent accidental system damage or data loss.
- Isolation: Limit file system access to project directory.
- Consistency: Ensure reproducible environments across different systems.
- Safety: Reduce risk when working with untrusted code or experimental commands.
Sandboxing methods
Your ideal method of sandboxing may differ depending on your platform and your preferred container solution.
1. macOS Seatbelt (macOS only)
Lightweight, built-in sandboxing using `sandbox-exec`.

Default profile: `permissive-open`, which restricts writes outside the project directory but allows most other operations.
2. Container-based (Docker/Podman)
Cross-platform sandboxing with complete process isolation.
Note: Requires building the sandbox image locally or using a published image from your organization’s registry.
3. Windows Native Sandbox (Windows only)
… Troubleshooting and Side Effects:

The Windows Native sandbox uses the `icacls` command to set a “Low Mandatory Level” on files and directories it needs to write to.
- Persistence: These integrity level changes are persistent on the filesystem. Even after the sandbox session ends, files created or modified by the sandbox will retain their “Low” integrity level.
- Manual Reset: If you need to reset the integrity level of a file or directory, you can use:

  ```
  icacls "C:\path\to\dir" /setintegritylevel Medium
  ```

- System Folders: The sandbox manager automatically skips setting integrity levels on system folders (like `C:\Windows`) for safety.
4. gVisor / runsc (Linux only)
Strongest isolation available: runs containers inside a user-space kernel via gVisor. gVisor intercepts all container system calls and handles them in a sandboxed kernel written in Go, providing a strong security barrier between AI operations and the host OS.
Prerequisites:
- Linux (gVisor supports Linux only)
- Docker installed and running
- gVisor/runsc runtime configured
When you set `sandbox: "runsc"`, Gemini CLI runs `docker run --runtime=runsc ...` to execute containers with gVisor isolation. `runsc` is not auto-detected; you must specify it explicitly (for example, `GEMINI_SANDBOX=runsc` or `sandbox: "runsc"`).
To set up runsc:
- Install the runsc binary.
- Configure the Docker daemon to use the runsc runtime.
- Verify the installation.
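Step 2 typically means registering `runsc` as a runtime in Docker's daemon configuration, for example in `/etc/docker/daemon.json` (the binary path shown assumes `runsc` was installed to `/usr/local/bin`):

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

After restarting the Docker daemon, running a container such as `docker run --rm --runtime=runsc hello-world` is a quick way to verify the runtime works.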
5. LXC/LXD (Linux only, experimental)
Full-system container sandboxing using LXC/LXD. Unlike Docker/Podman, LXC containers run a complete Linux system with systemd, snapd, and other system services. This is ideal for tools that don’t work in standard Docker containers, such as Snapcraft and Rockcraft.
Prerequisites:
- Linux only.
- LXC/LXD must be installed (`snap install lxd` or `apt install lxd`).
- A container must be created and running before starting Gemini CLI. Gemini does not create the container automatically.
Quick setup:
```shell
# Initialize LXD (first time only)
lxd init --auto

# Create and start an Ubuntu container
lxc launch ubuntu:24.04 gemini-sandbox

# Enable LXC sandboxing
export GEMINI_SANDBOX=lxc
gemini -p "build the project"
```
Custom container name:
```shell
export GEMINI_SANDBOX=lxc
export GEMINI_SANDBOX_IMAGE=my-snapcraft-container
gemini -p "build the snap"
```
Limitations:
- Linux only (LXC is not available on macOS or Windows).
- The container must already exist and be running.
- The workspace directory is bind-mounted into the container at the same absolute path — the path must be writable inside the container.
- Used with tools like Snapcraft or Rockcraft that require a full system.
Tool sandboxing
Tool-level sandboxing provides granular isolation for individual tool executions (like `shell_exec` and `write_file`) instead of sandboxing the entire Gemini CLI process.
This approach offers better integration with your local environment for non-tool tasks (like UI rendering and configuration loading) while still providing security for tool-driven operations.
How to turn off tool sandboxing
If you experience issues with tool sandboxing or prefer full-process isolation, you can disable it by setting `security.toolSandboxing` to `false` in your `settings.json` file.
```json
{
  "security": {
    "toolSandboxing": false
  }
}
```
Sandbox expansion
Sandbox expansion is a dynamic permission system that lets Gemini CLI request additional permissions for a command when needed.
When a sandboxed command fails due to permission restrictions (like restricted file paths or network access), or when a command is proactively identified as requiring extra permissions (like `npm install`), Gemini CLI presents you with a “Sandbox Expansion Request.”
How sandbox expansion works
- Detection: Gemini CLI detects a sandbox denial or proactively identifies a command that requires extra permissions.
- Request: A modal dialog is shown, explaining which additional permissions (e.g., specific directories or network access) are required.
- Approval: If you approve the expansion, the command is executed with the extended permissions for that specific run.
This mechanism ensures you don’t have to manually re-run commands with more permissive sandbox settings, while still maintaining control over what the AI can access.
Quickstart
```shell
# Enable sandboxing with command flag
gemini -s -p "analyze the code structure"
```
Use environment variable:

macOS/Linux:

```shell
export GEMINI_SANDBOX=true
gemini -p "run the test suite"
```

Windows (PowerShell):

```powershell
$env:GEMINI_SANDBOX="true"
gemini -p "run the test suite"
```

Configure in settings.json:

```json
{
  "tools": {
    "sandbox": "docker"
  }
}
```
Configuration
Enable sandboxing (in order of precedence)

1. Command flag: `-s` or `--sandbox`
2. Environment variable: `GEMINI_SANDBOX=true|docker|podman|sandbox-exec|runsc|lxc`
3. Settings file: `"sandbox": true` in the `tools` object of your `settings.json` file (for example, `{"tools": {"sandbox": true}}`).
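The resolution order above can be pictured as a simple fallback chain. The function below is an illustrative sketch, not the CLI's actual code:

```shell
# Hypothetical sketch of the documented precedence: flag > env var > settings.
resolve_sandbox() {
  flag="$1"; env_value="$2"; settings_value="$3"
  if [ -n "$flag" ]; then
    echo "$flag"                     # -s / --sandbox wins over everything
  elif [ -n "$env_value" ]; then
    echo "$env_value"                # GEMINI_SANDBOX comes next
  else
    echo "${settings_value:-false}"  # settings.json, else sandboxing stays off
  fi
}

resolve_sandbox "docker" "podman" "true"   # prints "docker": the flag wins
```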
macOS Seatbelt profiles
Built-in profiles (set via the `SEATBELT_PROFILE` environment variable):

- `permissive-open` (default): Write restrictions, network allowed
- `permissive-proxied`: Write restrictions, network via proxy
- `restrictive-open`: Strict restrictions, network allowed
- `restrictive-proxied`: Strict restrictions, network via proxy
- `strict-open`: Read and write restrictions, network allowed
- `strict-proxied`: Read and write restrictions, network via proxy
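For example, to run a session under one of the stricter built-in profiles (the prompt text here is only an example):

```shell
# macOS only: choose a stricter built-in Seatbelt profile for this shell.
export SEATBELT_PROFILE=restrictive-proxied
# Then start a sandboxed session as usual:
# gemini -s -p "audit the dependency tree"
```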
Custom sandbox flags
For container-based sandboxing, you can inject custom flags into the `docker` or `podman` command using the `SANDBOX_FLAGS` environment variable. This is useful for advanced configurations, such as disabling security features for specific use cases.
Example (Podman):
To disable SELinux labeling for volume mounts, you can set the following:
macOS/Linux
```shell
export SANDBOX_FLAGS="--security-opt label=disable"
```
Windows (PowerShell)
```powershell
$env:SANDBOX_FLAGS="--security-opt label=disable"
```
Multiple flags can be provided as a space-separated string:
macOS/Linux
```shell
export SANDBOX_FLAGS="--flag1 --flag2=value"
```
Windows (PowerShell)
```powershell
$env:SANDBOX_FLAGS="--flag1 --flag2=value"
```
Linux UID/GID handling
The sandbox automatically handles user permissions on Linux. Override these permissions with:
macOS/Linux
```shell
export SANDBOX_SET_UID_GID=true   # Force host UID/GID
export SANDBOX_SET_UID_GID=false  # Disable UID/GID mapping
```
Windows (PowerShell)
```powershell
$env:SANDBOX_SET_UID_GID="true"   # Force host UID/GID
$env:SANDBOX_SET_UID_GID="false"  # Disable UID/GID mapping
```
Troubleshooting
Common issues

“Operation not permitted”

- The operation requires access outside the sandbox.
- Try a more permissive profile or add mount points.
Missing commands

- Add them to a custom Dockerfile. Automatic `BUILD_SANDBOX` builds are only available when running Gemini CLI from source; npm installs need a prebuilt image instead.
- Install them via `sandbox.bashrc`.
Network issues
- Check sandbox profile allows network.
- Verify proxy configuration.
Debug mode
```shell
DEBUG=1 gemini -s -p "debug command"
```
Inspect sandbox
```shell
# Check environment
gemini -s -p "run shell command: env | grep SANDBOX"

# List mounts
gemini -s -p "run shell command: mount | grep workspace"
```
Security notes
- Sandboxing reduces but doesn’t eliminate all risks.
- Use the most restrictive profile that allows your work.
- Container overhead is minimal after first build.
- GUI applications may not work in sandboxes.
Related documentation
- Configuration: Full configuration options.
- Commands: Available commands.
- Troubleshooting: General troubleshooting.
Session management saves your conversation history so you can resume your work where you left off. Use these features to review past interactions, manage history across different projects, and configure how long data is retained.
Automatic saving
Your session history is recorded automatically as you interact with the model. This background process ensures your work is preserved even if you interrupt a session.
- What is saved: The complete conversation history, including:
  - Your prompts and the model’s responses.
  - All tool executions (inputs and outputs).
  - Token usage statistics (input, output, cached, etc.).
  - Assistant thoughts and reasoning summaries (when available).
- Location: Sessions are stored in `~/.gemini/tmp/<project_hash>/chats/`, where `<project_hash>` is a unique identifier based on your project’s root directory.
- Scope: Sessions are project-specific. Switching directories to a different project switches to that project’s session history.
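As a rough illustration of how a per-project directory could be keyed to a path, here is a hypothetical sketch; the CLI's actual `<project_hash>` algorithm is an internal detail, and the SHA-256 truncation below is an assumption for demonstration only:

```shell
# Hypothetical sketch only: derive a per-project chats directory from the
# project root path. The real <project_hash> derivation may differ.
project_root="/work/my-app"
project_hash=$(printf '%s' "$project_root" | sha256sum | cut -c1-16)
echo "$HOME/.gemini/tmp/$project_hash/chats"
```

The point is simply that the same project root always maps to the same session directory, and different roots map to different ones.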
Resuming sessions
You can resume a previous session to continue the conversation with all prior context restored. Resuming is supported both through command-line flags and an interactive browser.
From the command line
When starting Gemini CLI, use the `--resume` (or `-r`) flag to load existing sessions.
- Resume latest:

  ```shell
  gemini --resume
  ```

  This immediately loads the most recent session.

- Resume by index: List available sessions first (see Listing sessions), then use the index number:

  ```shell
  gemini --resume 1
  ```

- Resume by ID: You can also provide the full session UUID:

  ```shell
  gemini --resume a1b2c3d4-e5f6-7890-abcd-ef1234567890
  ```
From the interactive interface
While the CLI is running, use the `/resume` slash command to open the Session Browser:
/resume
When typing `/resume` (or `/chat`) in slash completion, commands are grouped under titled separators:

- `-- auto --` (session browser): `list` is selectable and opens the session browser.
- `-- checkpoints --` (manual tagged checkpoint commands).

Unique prefixes such as `/resum` and `/cha` resolve to the same grouped menu.
The Session Browser provides an interactive interface where you can perform the following actions:
- Browse: Scroll through a list of your past sessions.
- Preview: See details like the session date, message count, and the first user prompt.
- Search: Press `/` to enter search mode, then type to filter sessions by ID or content.
- Select: Press Enter to resume the selected session.
- Esc: Press Esc to exit the Session Browser.
Manual chat checkpoints
For named branch points inside a session, use chat checkpoints:
```
/resume save decision-point
/resume list
/resume resume decision-point
```
Compatibility aliases:

- `/chat ...` works for the same commands.
- `/resume checkpoints ...` also remains supported during migration.
Parallel sessions with Git worktrees
When working on multiple tasks at once, you can use Git worktrees to give each Gemini session its own copy of the codebase. This prevents changes in one session from colliding with another.
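A minimal sketch of that workflow, assuming you are inside a Git repository; the branch and directory names here are examples (the demo builds a throwaway repo so it is self-contained):

```shell
set -e
# Demo in a throwaway repository; in practice, run the worktree commands
# inside your own project.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# One extra checkout per parallel task; each gets its own branch:
git worktree add "$repo-feature-auth" -b feature-auth

# Start a separate Gemini session inside the new worktree, e.g.:
#   cd "$repo-feature-auth" && gemini
git worktree list
```

Each worktree is a full checkout sharing the same repository, so commits made in one session are visible to the others without the working files ever overlapping.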
Managing sessions
You can list and delete sessions to keep your history organized and manage disk space.
Listing sessions
To see a list of all available sessions for the current project from the command line, use the `--list-sessions` flag:

```shell
gemini --list-sessions
```
Output example:
```
Available sessions for this project (3):

1. Fix bug in auth (2 days ago) [a1b2c3d4]
2. Refactor database schema (5 hours ago) [e5f67890]
3. Update documentation (Just now) [abcd1234]
```
Deleting sessions
You can remove old or unwanted sessions to free up space or declutter your history.
From the command line: Use the `--delete-session` flag with an index or ID:

```shell
gemini --delete-session 2
```
From the Session Browser:
1. Open the browser with `/resume`.
2. Navigate to the session you want to remove.
3. Press `x`.
Configuration
You can configure how Gemini CLI manages your session history in your `settings.json` file. These settings let you control retention policies and session lengths.
Session retention
By default, Gemini CLI automatically cleans up old session data to prevent your history from growing indefinitely. When a session is deleted, Gemini CLI also removes all associated data, including implementation plans, task trackers, tool outputs, and activity logs.
The default policy is to retain sessions for 30 days.
Configuration
You can customize these policies using the `/settings` command or by manually editing your `settings.json` file:

```json
{
  "general": {
    "sessionRetention": {
      "enabled": true,
      "maxAge": "30d",
      "maxCount": 50
    }
  }
}
```
- `enabled`: (boolean) Master switch for session cleanup. Defaults to `true`.
- `maxAge`: (string) Duration to keep sessions (for example, `"24h"`, `"7d"`, `"4w"`). Sessions older than this are deleted. Defaults to `"30d"`.
- `maxCount`: (number) Maximum number of sessions to retain. The oldest sessions exceeding this count are deleted. Defaults to `undefined` (unlimited).
- `minRetention`: (string) Minimum retention period (safety limit). Sessions newer than this period are never deleted by automatic cleanup. Defaults to `"1d"`.
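To make the duration format concrete, here is a hedged sketch of how strings like "30d" or "24h" could be interpreted; the CLI's own parser is internal, and this helper exists only for illustration:

```shell
# Hypothetical sketch: convert a maxAge string like "30d", "24h", or "4w"
# into seconds. The CLI's actual parsing is an internal detail.
to_seconds() {
  value="${1%?}"            # numeric part ("30" from "30d")
  unit="${1#"${1%?}"}"      # trailing unit character ("d" from "30d")
  case "$unit" in
    h) echo $((value * 3600)) ;;
    d) echo $((value * 86400)) ;;
    w) echo $((value * 604800)) ;;
    *) echo "unsupported unit: $unit" >&2; return 1 ;;
  esac
}

to_seconds 30d   # 2592000 seconds: the default retention window
```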
Session limits
You can limit the length of individual sessions to prevent context windows from becoming too large and expensive.
```json
{
  "model": {
    "maxSessionTurns": 100
  }
}
```
- `maxSessionTurns`: (number) The maximum number of turns (user and model exchanges) allowed in a single session. Set to `-1` for unlimited (default).

  Behavior when the limit is reached:

  - Interactive mode: The CLI shows an informational message and stops sending requests to the model. You must manually start a new session.
  - Non-interactive mode: The CLI exits with an error.
Next steps
- Explore the Memory tool to save persistent information across sessions.
- Learn how to Checkpoint your session state.
- Check out the CLI reference for all command-line flags.
Control your Gemini CLI experience with the `/settings` command. The `/settings` command opens a dialog to view and edit all your Gemini CLI settings, including your UI experience, keybindings, and accessibility features.

Your Gemini CLI settings are stored in a `settings.json` file. In addition to using the `/settings` command, you can also edit them in one of the following locations:
- User settings: `~/.gemini/settings.json`
- Workspace settings: `your-project/.gemini/settings.json`
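For example, a workspace-level `.gemini/settings.json` that enables Vim keybindings and hides the banner for everyone working in that project might look like this:

```json
{
  "general": {
    "vimMode": true
  },
  "ui": {
    "hideBanner": true
  }
}
```

Workspace settings apply only inside that project, while user settings apply everywhere.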
Settings reference
Here is a list of all the available settings, grouped by category and ordered as they appear in the UI.
General

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Vim Mode | `general.vimMode` | Enable Vim keybindings. | `false` |
| Default Approval Mode | `general.defaultApprovalMode` | The default approval mode for tool execution. `default` prompts for approval, `auto_edit` auto-approves edit tools, and `plan` is read-only mode. YOLO mode (auto-approve all actions) can only be enabled via the command line (`--yolo` or `--approval-mode=yolo`). | `"default"` |
| Enable Auto Update | `general.enableAutoUpdate` | Enable automatic updates. | `true` |
| Enable Notifications | `general.enableNotifications` | Enable run-event notifications for action-required prompts and session completion. | `false` |
| Enable Plan Mode | `general.plan.enabled` | Enable Plan Mode for read-only safety during planning. | `true` |
| Plan Directory | `general.plan.directory` | The directory where planning artifacts are stored. If not specified, defaults to the system temporary directory. A custom directory requires a policy to allow write access in Plan Mode. | `undefined` |
| Plan Model Routing | `general.plan.modelRouting` | Automatically switch between Pro and Flash models based on Plan Mode status. Uses Pro for the planning phase and Flash for the implementation phase. | `true` |
| Retry Fetch Errors | `general.retryFetchErrors` | Retry on “exception TypeError: fetch failed sending request” errors. | `true` |
| Max Chat Model Attempts | `general.maxAttempts` | Maximum number of attempts for requests to the main chat model. Cannot exceed 10. | `10` |
| Debug Keystroke Logging | `general.debugKeystrokeLogging` | Enable debug logging of keystrokes to the console. | `false` |
| Enable Session Cleanup | `general.sessionRetention.enabled` | Enable automatic session cleanup. | `true` |
| Keep chat history | `general.sessionRetention.maxAge` | Automatically delete chats older than this time period (e.g., `"30d"`, `"7d"`, `"24h"`, `"1w"`). | `"30d"` |
Output

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Output Format | `output.format` | The format of the CLI output. Can be `text` or `json`. | `"text"` |
UI

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Auto Theme Switching | `ui.autoThemeSwitching` | Automatically switch between default light and dark themes based on terminal background color. | `true` |
| Terminal Background Polling Interval | `ui.terminalBackgroundPollingInterval` | Interval in seconds to poll the terminal background color. | `60` |
| Hide Window Title | `ui.hideWindowTitle` | Hide the window title bar. | `false` |
| Inline Thinking | `ui.inlineThinkingMode` | Display model thinking inline: `off` or `full`. | `"off"` |
| Show Thoughts in Title | `ui.showStatusInTitle` | Show Gemini CLI model thoughts in the terminal window title during the working phase. | `false` |
| Dynamic Window Title | `ui.dynamicWindowTitle` | Update the terminal window title with current status icons (Ready: ◇, Action Required: ✋, Working: ✦). | `true` |
| Show Home Directory Warning | `ui.showHomeDirectoryWarning` | Show a warning when running Gemini CLI in the home directory. | `true` |
| Show Compatibility Warnings | `ui.showCompatibilityWarnings` | Show warnings about terminal or OS compatibility issues. | `true` |
| Hide Tips | `ui.hideTips` | Hide helpful tips in the UI. | `false` |
| Escape Pasted @ Symbols | `ui.escapePastedAtSymbols` | When enabled, `@` symbols in pasted text are escaped to prevent unintended `@path` expansion. | `false` |
| Show Shortcuts Hint | `ui.showShortcutsHint` | Show the “? for shortcuts” hint above the input. | `true` |
| Compact Tool Output | `ui.compactToolOutput` | Display tool outputs (like directory listings and file reads) in a compact, structured format. | `true` |
| Hide Banner | `ui.hideBanner` | Hide the application banner. | `false` |
| Hide Context Summary | `ui.hideContextSummary` | Hide the context summary (GEMINI.md, MCP servers) above the input. | `false` |
| Hide CWD | `ui.footer.hideCWD` | Hide the current working directory in the footer. | `false` |
| Hide Sandbox Status | `ui.footer.hideSandboxStatus` | Hide the sandbox status indicator in the footer. | `false` |
| Hide Model Info | `ui.footer.hideModelInfo` | Hide the model name and context usage in the footer. | `false` |
| Hide Context Window Percentage | `ui.footer.hideContextPercentage` | Hide the context window usage percentage. | `true` |
| Hide Footer | `ui.hideFooter` | Hide the footer from the UI. | `false` |
| Show Memory Usage | `ui.showMemoryUsage` | Display memory usage information in the UI. | `false` |
| Show Line Numbers | `ui.showLineNumbers` | Show line numbers in the chat. | `true` |
| Show Citations | `ui.showCitations` | Show citations for generated text in the chat. | `false` |
| Show Model Info In Chat | `ui.showModelInfoInChat` | Show the model name in the chat for each model turn. | `false` |
| Show User Identity | `ui.showUserIdentity` | Show the signed-in user’s identity (e.g., email) in the UI. | `true` |
| Use Alternate Screen Buffer | `ui.useAlternateBuffer` | Use an alternate screen buffer for the UI, preserving shell history. | `false` |
| Render Process | `ui.renderProcess` | Enable the Ink render process for the UI. | `true` |
| Terminal Buffer | `ui.terminalBuffer` | Use the new terminal buffer architecture for rendering. | `false` |
| Use Background Color | `ui.useBackgroundColor` | Whether to use background colors in the UI. | `true` |
| Incremental Rendering | `ui.incrementalRendering` | Enable incremental rendering for the UI. Reduces flickering but may cause rendering artifacts. Only supported when `useAlternateBuffer` is enabled. | `true` |
| Show Spinner | `ui.showSpinner` | Show the spinner during operations. | `true` |
| Loading Phrases | `ui.loadingPhrases` | What to show while the model is working: tips, witty comments, all, or off. | `"off"` |
| Error Verbosity | `ui.errorVerbosity` | Controls whether recoverable errors are hidden (`low`) or fully shown (`full`). | `"low"` |
| Screen Reader Mode | `ui.accessibility.screenReader` | Render output in plain text to be more screen-reader accessible. | `false` |
IDE

| UI Label | Setting | Description | Default |
|---|---|---|---|
| IDE Mode | `ide.enabled` | Enable IDE integration mode. | `false` |
Billing

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Overage Strategy | `billing.overageStrategy` | How to handle quota exhaustion when AI credits are available. `ask` prompts each time, `always` automatically uses credits, `never` disables credit usage. | `"ask"` |
Model

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Model | `model.name` | The Gemini model to use for conversations. | `undefined` |
| Max Session Turns | `model.maxSessionTurns` | Maximum number of user/model/tool turns to keep in a session. `-1` means unlimited. | `-1` |
| Context Compression Threshold | `model.compressionThreshold` | The fraction of context usage at which to trigger context compression (e.g., 0.2, 0.3). | `0.5` |
| Disable Loop Detection | `model.disableLoopDetection` | Disable automatic detection and prevention of infinite loops. | `false` |
| Skip Next Speaker Check | `model.skipNextSpeakerCheck` | Skip the next speaker check. | `true` |
Agents

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Confirm Sensitive Actions | `agents.browser.confirmSensitiveActions` | Require manual confirmation for sensitive browser actions (e.g., `fill_form`, `evaluate_script`). | `false` |
| Block File Uploads | `agents.browser.blockFileUploads` | Hard-block file upload requests from the browser agent. | `false` |
Context

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Memory Discovery Max Dirs | `context.discoveryMaxDirs` | Maximum number of directories to search for memory. | `200` |
| Load Memory From Include Directories | `context.loadMemoryFromIncludeDirectories` | Controls how `/memory reload` loads GEMINI.md files. When `true`, include directories are scanned; when `false`, only the current directory is used. | `false` |
| Respect .gitignore | `context.fileFiltering.respectGitIgnore` | Respect .gitignore files when searching. | `true` |
| Respect .geminiignore | `context.fileFiltering.respectGeminiIgnore` | Respect .geminiignore files when searching. | `true` |
| Enable Recursive File Search | `context.fileFiltering.enableRecursiveFileSearch` | Enable recursive file search when completing `@` references in the prompt. | `true` |
| Enable Fuzzy Search | `context.fileFiltering.enableFuzzySearch` | Enable fuzzy search when searching for files. | `true` |
| Custom Ignore File Paths | `context.fileFiltering.customIgnoreFilePaths` | Additional ignore file paths to respect. These files take precedence over .geminiignore and .gitignore. Files earlier in the array take precedence over files later in the array (e.g., the first file takes precedence over the second). | `[]` |
Tools

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Sandbox Allowed Paths | `tools.sandboxAllowedPaths` | List of additional paths that the sandbox is allowed to access. | `[]` |
| Sandbox Network Access | `tools.sandboxNetworkAccess` | Whether the sandbox is allowed to access the network. | `false` |
| Enable Interactive Shell | `tools.shell.enableInteractiveShell` | Use node-pty for an interactive shell experience. Fallback to child_process still applies. | `true` |
| Show Color | `tools.shell.showColor` | Show color in shell output. | `true` |
| Use Ripgrep | `tools.useRipgrep` | Use ripgrep for file content search instead of the fallback implementation. Provides faster search performance. | `true` |
| Tool Output Truncation Threshold | `tools.truncateToolOutputThreshold` | Maximum characters to show when truncating large tool outputs. Set to 0 or negative to disable truncation. | `40000` |
| Disable LLM Correction | `tools.disableLLMCorrection` | Disable LLM-based error correction for edit tools. When enabled, tools fail immediately if exact string matches are not found, instead of attempting to self-correct. | `true` |
Security

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Tool Sandboxing | `security.toolSandboxing` | Tool-level sandboxing. Isolates individual tools instead of the entire CLI process. | `false` |
| Disable YOLO Mode | `security.disableYoloMode` | Disable YOLO mode, even if enabled by a flag. | `false` |
| Disable Always Allow | `security.disableAlwaysAllow` | Disable “Always allow” options in tool confirmation dialogs. | `false` |
| Allow Permanent Tool Approval | `security.enablePermanentToolApproval` | Enable the “Allow for all future sessions” option in tool confirmation dialogs. | `false` |
| Auto-add to Policy by Default | `security.autoAddToPolicyByDefault` | When enabled, the “Allow for all future sessions” option becomes the default choice for low-risk tools in trusted workspaces. | `false` |
| Block Extensions from Git | `security.blockGitExtensions` | Block installing and loading extensions from Git. | `false` |
| Extension Source Regex Allowlist | `security.allowedExtensions` | List of regex patterns for allowed extensions. If nonempty, only extensions that match the patterns in this list are allowed. Overrides the `blockGitExtensions` setting. | `[]` |
| Folder Trust | `security.folderTrust.enabled` | Tracks whether Folder Trust is enabled. | `true` |
| Enable Environment Variable Redaction | `security.environmentVariableRedaction.enabled` | Enable redaction of environment variables that may contain secrets. | `false` |
| Enable Context-Aware Security | `security.enableConseca` | Enable the context-aware security checker. This feature uses an LLM to dynamically generate and enforce security policies for tool use based on your prompt, providing an additional layer of protection against unintended actions. | `false` |
Advanced

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Auto Configure Max Old Space Size | `advanced.autoConfigureMemory` | Automatically configure Node.js memory limits. Note: Because memory is allocated during the initial process boot, this setting is only read from the global user settings file and ignores workspace-level overrides. | `true` |
Experimental

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Enable Git Worktrees | `experimental.worktrees` | Enable automated Git worktree management for parallel work. | `false` |
| Use OSC 52 Paste | `experimental.useOSC52Paste` | Use OSC 52 for pasting. This may be more robust than the default system when using remote terminal sessions (if your terminal is configured to allow it). | `false` |
| Use OSC 52 Copy | `experimental.useOSC52Copy` | Use OSC 52 for copying. This may be more robust than the default system when using remote terminal sessions (if your terminal is configured to allow it). | `false` |
| Model Steering | `experimental.modelSteering` | Enable model steering (user hints) to guide the model during tool execution. | `false` |
| Direct Web Fetch | `experimental.directWebFetch` | Enable web fetch behavior that bypasses LLM summarization. | `false` |
| Memory Manager Agent | `experimental.memoryManager` | Replace the built-in `save_memory` tool with a memory manager subagent that supports adding, removing, de-duplicating, and organizing memories. | `false` |
| Generalist Profile | `experimental.generalistProfile` | Use the generalist profile to manage agent contexts. Suitable for general coding and software development tasks. | `false` |
| Enable Context Management | `experimental.contextManagement` | Enable logic for context management. | `false` |
| Topic & Update Narration | `experimental.topicUpdateNarration` | Enable the experimental Topic & Update communication model for reduced chattiness and structured progress reporting. | `false` |
Skills

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Enable Agent Skills | `skills.enabled` | Enable Agent Skills. | `true` |
HooksConfig

| UI Label | Setting | Description | Default |
|---|---|---|---|
| Enable Hooks | `hooksConfig.enabled` | Canonical toggle for the hooks system. When disabled, no hooks are executed. | `true` |
| Hook Notifications | `hooksConfig.notifications` | Show visual indicators when hooks are executing. | `true` |
Agent Skills allow you to extend Gemini CLI with specialized expertise, procedural workflows, and task-specific resources. Based on the Agent Skills open standard, a “skill” is a self-contained directory that packages instructions and assets into a discoverable capability.
Overview
Unlike general context files (GEMINI.md), which provide persistent workspace-wide background, Skills represent on-demand expertise. This allows Gemini to maintain a vast library of specialized capabilities, such as security auditing, cloud deployments, or codebase migrations, without cluttering the model’s immediate context window.
Gemini autonomously decides when to employ a skill based on your request and the skill’s description. When a relevant skill is identified, the model “pulls in” the full instructions and resources required to complete the task using the `activate_skill` tool.
Key Benefits
- Shared Expertise: Package complex workflows (like a specific team’s PR review process) into a folder that anyone can use.
- Repeatable Workflows: Ensure complex multi-step tasks are performed consistently by providing a procedural framework.
- Resource Bundling: Include scripts, templates, or example data alongside instructions so the agent has everything it needs.
- Progressive Disclosure: Only skill metadata (name and description) is loaded initially. Detailed instructions and resources are only disclosed when the model explicitly activates the skill, saving context tokens.
Skill Discovery Tiers
Gemini CLI discovers skills from three primary locations:

- Workspace Skills: Located in `.gemini/skills/` or the `.agents/skills/` alias. Workspace skills are typically committed to version control and shared with the team.
- User Skills: Located in `~/.gemini/skills/` or the `~/.agents/skills/` alias. These are personal skills available across all your workspaces.
- Extension Skills: Skills bundled within installed extensions.
Precedence: If multiple skills share the same name, higher-precedence locations override lower ones: Workspace > User > Extension.
Within the same tier (user or workspace), the `.agents/skills/` alias takes precedence over the `.gemini/skills/` directory. This generic alias provides an intuitive path for managing agent-specific expertise that remains compatible across different AI agent tools.
Managing Skills
Section titled “Managing Skills”In an Interactive Session
Use the `/skills` slash command to view and manage available expertise:

- `/skills list` (default): Shows all discovered skills and their status.
- `/skills link <path>`: Links agent skills from a local directory via symlink.
- `/skills disable <name>`: Prevents a specific skill from being used.
- `/skills enable <name>`: Re-enables a disabled skill.
- `/skills reload`: Refreshes the list of discovered skills from all tiers.
From the Terminal
The `gemini skills` command provides management utilities:

```shell
# List all discovered skills
gemini skills list

# Link agent skills from a local directory via symlink.
# Discovers skills (SKILL.md or */SKILL.md) and creates symlinks in
# ~/.gemini/skills (or ~/.agents/skills).
gemini skills link /path/to/my-skills-repo

# Link to the workspace scope (.gemini/skills or .agents/skills)
gemini skills link /path/to/my-skills-repo --scope workspace

# Install a skill from a Git repository, local directory, or zipped skill file (.skill).
# Uses the user scope by default (~/.gemini/skills or ~/.agents/skills).
gemini skills install https://github.com/user/repo.git
gemini skills install /path/to/local/skill
gemini skills install /path/to/local/my-expertise.skill

# Install a specific skill from a monorepo or subdirectory using --path
gemini skills install https://github.com/my-org/my-skills.git --path skills/frontend-design

# Install to the workspace scope (.gemini/skills or .agents/skills)
gemini skills install /path/to/skill --scope workspace

# Uninstall a skill by name
gemini skills uninstall my-expertise --scope workspace

# Enable a skill (globally)
gemini skills enable my-expertise

# Disable a skill. Use --scope to specify workspace or user (defaults to workspace).
gemini skills disable my-expertise --scope workspace
```
How it Works
1. Discovery: At the start of a session, Gemini CLI scans the discovery tiers and injects the name and description of all enabled skills into the system prompt.
2. Activation: When Gemini identifies a task matching a skill’s description, it calls the `activate_skill` tool.
3. Consent: You see a confirmation prompt in the UI detailing the skill’s name, purpose, and the directory path it will gain access to.
4. Injection: Upon your approval:
   - The SKILL.md body and folder structure are added to the conversation history.
   - The skill’s directory is added to the agent’s allowed file paths, granting it permission to read any bundled assets.
5. Execution: The model proceeds with the specialized expertise active. It is instructed to prioritize the skill’s procedural guidance within reason.
Skill activation
Once a skill is activated (typically by Gemini identifying a task that matches the skill’s description and your approval), its specialized instructions and resources are loaded into the agent’s context. A skill remains active and its guidance is prioritized for the duration of the session.
Creating your own skills
To create your own skills, see the Create Agent Skills guide.
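As a rough sketch of the shape of a skill, a minimal `SKILL.md` pairs discovery metadata (the name and description injected into the system prompt) with the full instructions disclosed on activation. The skill shown here is hypothetical, and the exact frontmatter fields are defined by the Agent Skills standard, so follow the Create Agent Skills guide for the authoritative format:

```markdown
---
name: changelog-writer
description: Drafts changelog entries from recent commits in a consistent style.
---

# Changelog Writer

When asked to draft a changelog entry:

1. Inspect the commits since the last release tag.
2. Group changes under Added / Changed / Fixed headings.
3. Keep each bullet to one sentence, written for end users.
```

Any scripts or templates placed alongside `SKILL.md` become readable to the agent once the skill is activated.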
The core system instructions that guide Gemini CLI can be completely replaced with your own Markdown file. This feature is controlled via the `GEMINI_SYSTEM_MD` environment variable.
Overview
The `GEMINI_SYSTEM_MD` variable instructs the CLI to use an external Markdown file for its system prompt, completely overriding the built-in default. This is a full replacement, not a merge. If you use a custom file, none of the original core instructions will apply unless you include them yourself.
This feature is intended for advanced users who need to enforce strict, project-specific behavior or create a customized persona.
How to enable
You can set the environment variable temporarily in your shell, or persist it via a `.gemini/.env` file. See Persisting Environment Variables.

- Use the project default path (`.gemini/system.md`):
  - `GEMINI_SYSTEM_MD=true` or `GEMINI_SYSTEM_MD=1`
  - The CLI reads `./.gemini/system.md` (relative to your current project directory).
- Use a custom file path:
  - `GEMINI_SYSTEM_MD=/absolute/path/to/my-system.md`
  - Relative paths are supported and resolved from the current working directory.
  - Tilde expansion is supported (for example, `~/my-system.md`).
- Disable the override (use the built-in prompt):
  - `GEMINI_SYSTEM_MD=false` or `GEMINI_SYSTEM_MD=0`, or unset the variable.
If the override is enabled but the target file does not exist, the CLI errors with: `missing system prompt file '<path>'`.
Quick examples
- One-off session using a project file:

  ```shell
  GEMINI_SYSTEM_MD=1 gemini
  ```

- Persist for a project using `.gemini/.env`: Create `.gemini/system.md`, then add to `.gemini/.env`:

  ```shell
  GEMINI_SYSTEM_MD=1
  ```

- Use a custom file under your home directory:

  ```shell
  GEMINI_SYSTEM_MD=~/prompts/SYSTEM.md gemini
  ```
UI indicator
When `GEMINI_SYSTEM_MD` is active, the CLI shows a `|⌐■_■|` indicator in the UI to signal custom system-prompt mode.
Variable Substitution
When using a custom system prompt file, you can use the following variables to dynamically include built-in content:

- `${AgentSkills}`: Injects a complete section (including header) of all available agent skills.
- `${SubAgents}`: Injects a complete section (including header) of available sub-agents.
- `${AvailableTools}`: Injects a bulleted list of all currently enabled tool names.
- Tool Name Variables: Inject the actual name of a tool using the pattern `${toolName}_ToolName` (for example, `${write_file_ToolName}` or `${run_shell_command_ToolName}`). This pattern is generated dynamically for all available tools.
Example
# Custom System Prompt
You are a helpful assistant. ${AgentSkills}${SubAgents}

## Tooling
The following tools are available to you: ${AvailableTools}

You can use ${write_file_ToolName} to save logs.
Export the default prompt (recommended)
Before overriding, export the current default prompt so you can review required safety and workflow rules.
- Write the built-in prompt to the project default path: GEMINI_WRITE_SYSTEM_MD=1 gemini
- Or write to a custom path: GEMINI_WRITE_SYSTEM_MD=~/prompts/DEFAULT_SYSTEM.md gemini
This creates the file and writes the current built‑in system prompt to it.
Best practices: SYSTEM.md vs GEMINI.md
- SYSTEM.md (firmware):
- Non‑negotiable operational rules: safety, tool‑use protocols, approvals, and mechanics that keep the CLI reliable.
- Stable across tasks and projects (or per project when needed).
- GEMINI.md (strategy):
- Persona, goals, methodologies, and project/domain context.
- Evolves per task; relies on SYSTEM.md for safe execution.
Keep SYSTEM.md minimal but complete for safety and tool operation. Keep GEMINI.md focused on high‑level guidance and project specifics.
Troubleshooting
- Error: missing system prompt file '…'
  - Ensure the referenced path exists and is readable.
  - For GEMINI_SYSTEM_MD=1|true, create ./.gemini/system.md in your project.
- Override not taking effect
  - Confirm the variable is loaded (use .gemini/.env or export it in your shell).
  - Paths are resolved from the current working directory; try an absolute path.
- Restore defaults
  - Unset GEMINI_SYSTEM_MD or set it to 0/false.
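The troubleshooting steps above can be collected into a small shell preflight. This is an illustrative sketch, not part of the CLI: check_system_md and probe_path are hypothetical helpers that mirror how the override resolves the project default path.

```shell
# Report how the GEMINI_SYSTEM_MD override will be interpreted.
check_system_md() {
  case "${GEMINI_SYSTEM_MD-}" in
    "")      echo "override not set" ;;
    false|0) echo "override disabled" ;;
    true|1)  probe_path ".gemini/system.md" ;;
    *)       probe_path "$GEMINI_SYSTEM_MD" ;;
  esac
}

# Mirror the CLI's startup check: the file must exist and be readable.
probe_path() {
  if [ -r "$1" ]; then
    echo "override active: $1"
  else
    echo "missing system prompt file: $1"
  fi
}
```

Run check_system_md from your project root before launching gemini to confirm the override resolves the way you expect.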
Observability is the key to turning experimental AI into reliable software. Gemini CLI provides built-in support for OpenTelemetry, transforming every agent interaction into a rich stream of logs, metrics, and traces. This three-pillar approach gives you the high-fidelity visibility needed to understand agent behavior, optimize performance, and ensure reliability across your entire workflow.
Whether you are debugging a complex tool interaction locally or monitoring enterprise-wide usage in the cloud, Gemini CLI’s observability system provides the actionable intelligence needed to move from “black box” AI to predictable, high-performance systems.
OpenTelemetry integration
Gemini CLI integrates with OpenTelemetry, a vendor-neutral, industry-standard observability framework.
The observability system provides:
- Universal compatibility: Export to any OpenTelemetry backend (Google Cloud, Jaeger, Prometheus, Datadog, etc.).
- Standardized data: Use consistent formats and collection methods across your toolchain.
- Future-proof integration: Connect with existing and future observability infrastructure.
- No vendor lock-in: Switch between backends without changing your instrumentation.
Configuration
You control telemetry behavior through the .gemini/settings.json file.
Environment variables can override these settings.
| Setting | Environment Variable | Description | Values | Default |
|---|---|---|---|---|
| enabled | GEMINI_TELEMETRY_ENABLED | Enable or disable telemetry | true/false | false |
| target | GEMINI_TELEMETRY_TARGET | Where to send telemetry data | "gcp"/"local" | "local" |
| otlpEndpoint | GEMINI_TELEMETRY_OTLP_ENDPOINT | OTLP collector endpoint | URL string | http://localhost:4317 |
| otlpProtocol | GEMINI_TELEMETRY_OTLP_PROTOCOL | OTLP transport protocol | "grpc"/"http" | "grpc" |
| outfile | GEMINI_TELEMETRY_OUTFILE | Save telemetry to a file (overrides otlpEndpoint) | file path | - |
| logPrompts | GEMINI_TELEMETRY_LOG_PROMPTS | Include prompts in telemetry logs | true/false | true |
| useCollector | GEMINI_TELEMETRY_USE_COLLECTOR | Use an external OTLP collector (advanced) | true/false | false |
| useCliAuth | GEMINI_TELEMETRY_USE_CLI_AUTH | Use CLI credentials for telemetry (GCP target only) | true/false | false |
| - | GEMINI_CLI_SURFACE | Optional custom label for traffic reporting | string | - |
Note on boolean environment variables: for boolean settings such as enabled, setting the environment variable to true or 1 enables the feature.
For detailed configuration information, see the Configuration guide.
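For example, a .gemini/settings.json that combines settings from the table above to send telemetry to a local OTLP collector while keeping prompt text out of the logs might look like this:

```json
{
  "telemetry": {
    "enabled": true,
    "target": "local",
    "otlpEndpoint": "http://localhost:4317",
    "logPrompts": false
  }
}
```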
Google Cloud telemetry
You can export telemetry data directly to Google Cloud Trace, Cloud Monitoring, and Cloud Logging.
Prerequisites
You must complete several setup steps before enabling Google Cloud telemetry.
1. Set your Google Cloud project ID.
   - To send telemetry to a separate project:
     - macOS/Linux: export OTLP_GOOGLE_CLOUD_PROJECT="your-telemetry-project-id"
     - Windows (PowerShell): $env:OTLP_GOOGLE_CLOUD_PROJECT="your-telemetry-project-id"
   - To send telemetry to the same project as inference:
     - macOS/Linux: export GOOGLE_CLOUD_PROJECT="your-project-id"
     - Windows (PowerShell): $env:GOOGLE_CLOUD_PROJECT="your-project-id"
2. Authenticate with Google Cloud using one of these methods:
   - Method A: Application Default Credentials (ADC). Use this method for service accounts or standard gcloud authentication.
     - For user accounts: gcloud auth application-default login
     - For service accounts (macOS/Linux): export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account.json"
     - For service accounts (Windows PowerShell): $env:GOOGLE_APPLICATION_CREDENTIALS="C:\path\to\your\service-account.json"
   - Method B: CLI auth (direct export only). The simplest method for local users; Gemini CLI uses the same OAuth credentials you used for login. To enable it, set useCliAuth: true in your .gemini/settings.json:
     {"telemetry": {"enabled": true, "target": "gcp", "useCliAuth": true}}
3. Ensure your account or service account has these IAM roles:
   - Cloud Trace Agent
   - Monitoring Metric Writer
   - Logs Writer
4. Enable the required Google Cloud APIs:
   gcloud services enable \
     cloudtrace.googleapis.com \
     monitoring.googleapis.com \
     logging.googleapis.com \
     --project="$OTLP_GOOGLE_CLOUD_PROJECT"
Direct export
We recommend direct export, which sends telemetry straight to Google Cloud services.
1. Enable telemetry in .gemini/settings.json:
   {"telemetry": {"enabled": true, "target": "gcp"}}
2. Run Gemini CLI and send prompts.
3. View logs, metrics, and traces in the Google Cloud Console. See View Google Cloud telemetry for details.
View Google Cloud telemetry
After you enable telemetry and run Gemini CLI, you can view your data in the Google Cloud Console.
- Logs: Logs Explorer
- Metrics: Metrics Explorer
- Traces: Trace Explorer
For detailed information on how to use these tools, see the following official Google Cloud documentation:
- View and analyze logs with Logs Explorer
- Create charts with Metrics Explorer
- Find and explore traces
Monitoring dashboards
Gemini CLI provides a pre-configured Google Cloud Monitoring dashboard to visualize your telemetry.
Find this dashboard under Google Cloud Monitoring Dashboard Templates as “Gemini CLI Monitoring”.



To learn more, see Instant insights: Gemini CLI’s pre-configured monitoring dashboards.
Local telemetry
You can capture telemetry data locally for development and debugging. We recommend file-based output for local development.
1. Enable telemetry in .gemini/settings.json:
   {"telemetry": {"enabled": true, "target": "local", "outfile": ".gemini/telemetry.log"}}
2. Run Gemini CLI and send prompts.
3. View logs and metrics in .gemini/telemetry.log.
For advanced local telemetry setups (such as Jaeger or Genkit), see the Local development guide.
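If you use the outfile option above, a short script can summarize what the CLI recorded. This is a sketch, not part of the CLI: it assumes one JSON object per line and looks for the event name under a name or body key, so adjust the keys to match the actual file contents.

```python
import json

def count_events(path):
    """Tally telemetry event names from a local outfile.

    Assumes one JSON object per line with the event name under
    "name" or "body"; real output may differ, so adjust the keys.
    """
    counts = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip lines that are not standalone JSON
            name = record.get("name") or record.get("body") or "unknown"
            counts[name] = counts.get(name, 0) + 1
    return counts
```

For example, point it at .gemini/telemetry.log after a session to see how many tool calls and prompts were recorded.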
Client identification
Section titled “Client identification”Gemini CLI includes identifiers in its User-Agent
header to help you
differentiate and report on API traffic from different environments (for
example, identifying calls from Gemini Code Assist versus a standard
terminal).
Automatic identification
Most integrated environments are identified automatically without additional configuration. The identifier is included as a prefix to the User-Agent and as a "surface" tag in the parenthetical metadata.
| Environment | User-Agent Prefix | Surface Tag |
|---|---|---|
| Gemini Code Assist (Agent Mode) | GeminiCLI-a2a-server | vscode |
| Zed (via ACP) | GeminiCLI-acp-zed | zed |
| Xcode (via ACP) | GeminiCLI-acp-xcode | xcode |
| IntelliJ IDEA (via ACP) | GeminiCLI-acp-intellijidea | jetbrains |
| Standard terminal | GeminiCLI | terminal |
Example User-Agent:
GeminiCLI-a2a-server/0.34.0/gemini-pro (linux; x64; vscode)
Custom identification
You can provide a custom identifier for your own scripts or automation by setting the GEMINI_CLI_SURFACE environment variable. This is useful for tracking specific internal tools or distribution channels in your GCP logs.
- macOS/Linux: export GEMINI_CLI_SURFACE="my-custom-tool"
- Windows (PowerShell): $env:GEMINI_CLI_SURFACE="my-custom-tool"
When set, the value appears at the end of the User-Agent parenthetical:
GeminiCLI/0.34.0/gemini-pro (linux; x64; my-custom-tool)
Logs, metrics, and traces
This section describes the structure of the logs, metrics, and traces generated by Gemini CLI.
Gemini CLI includes session.id, installation.id, active_approval_mode, and user.email (when authenticated) as common attributes on all data.
Logs provide timestamped records of specific events. Gemini CLI logs events across several categories.
Sessions
Session logs capture startup configuration and prompt submissions.
gemini_cli.config
Emitted at startup with the CLI configuration.
Attributes: model (string), embedding_model (string), sandbox_enabled (boolean), core_tools_enabled (string), approval_mode (string), api_key_enabled (boolean), vertex_ai_enabled (boolean), log_user_prompts_enabled (boolean), file_filtering_respect_git_ignore (boolean), debug_mode (boolean), mcp_servers (string), mcp_servers_count (int), mcp_tools (string), mcp_tools_count (int), output_format (string), extensions (string), extension_ids (string), extensions_count (int), auth_type (string), worktree_active (boolean), github_workflow_name (string, optional), github_repository_hash (string, optional), github_event_name (string, optional), github_pr_number (string, optional), github_issue_number (string, optional), github_custom_tracking_id (string, optional)
gemini_cli.user_prompt
Emitted when you submit a prompt.
Attributes: prompt_length (int), prompt_id (string), prompt (string; excluded if telemetry.logPrompts is false), auth_type (string)
Approval mode
These logs track changes to and usage of different approval modes.
approval_mode_switch
Logs when you change the approval mode.
Attributes: from_mode (string), to_mode (string)
approval_mode_duration
Records time spent in an approval mode.
Attributes: mode (string), duration_ms (int)
plan_execution
Logs when you execute a plan and switch from plan mode to active execution.
Attributes: approval_mode (string)
Tool logs capture executions, truncation, and edit behavior.
gemini_cli.tool_call
Emitted for each tool (function) call.
Attributes: function_name (string), function_args (string), duration_ms (int), success (boolean), decision (string: "accept", "reject", "auto_accept", or "modify"), error (string, optional), error_type (string, optional), prompt_id (string), tool_type (string: "native" or "mcp"), mcp_server_name (string, optional), extension_name (string, optional), extension_id (string, optional), content_length (int, optional), start_time (number, optional), end_time (number, optional), metadata (object, optional; may include model_added_lines (number), model_removed_lines (number), user_added_lines (number), user_removed_lines (number), and ask_user (object))
gemini_cli.tool_output_truncated
Logs when tool output is truncated.
Attributes: tool_name (string), original_content_length (int), truncated_content_length (int), threshold (int), lines (int), prompt_id (string)
gemini_cli.edit_strategy
Records the chosen edit strategy.
Attributes: strategy (string)
gemini_cli.edit_correction
Records the result of an edit correction.
Attributes: correction (string: "success" or "failure")
gen_ai.client.inference.operation.details
Provides detailed GenAI operation data aligned with OpenTelemetry conventions.
Attributes: gen_ai.request.model (string), gen_ai.provider.name (string), gen_ai.operation.name (string), gen_ai.input.messages (JSON string), gen_ai.output.messages (JSON string), gen_ai.response.finish_reasons (array of strings), gen_ai.usage.input_tokens (int), gen_ai.usage.output_tokens (int), gen_ai.request.temperature (float), gen_ai.request.top_p (float), gen_ai.request.top_k (int), gen_ai.request.max_tokens (int), gen_ai.system_instructions (JSON string), server.address (string), server.port (int)
File logs track operations performed by tools.
gemini_cli.file_operation
Emitted for each file creation, read, or update.
Attributes: tool_name (string), operation (string: "create", "read", or "update"), lines (int, optional), mimetype (string, optional), extension (string, optional), programming_language (string, optional)
API logs capture requests, responses, and errors from the Gemini API.
gemini_cli.api_request
Emitted for each request sent to the Gemini API.
Attributes: model (string), prompt_id (string), role (string: "user", "model", or "system"), request_text (string, optional)
gemini_cli.api_response
Emitted for each response received from the Gemini API.
Attributes: model (string), status_code (int or string), duration_ms (int), input_token_count (int), output_token_count (int), cached_content_token_count (int), thoughts_token_count (int), tool_token_count (int), total_token_count (int), prompt_id (string), auth_type (string), finish_reasons (array of strings), response_text (string, optional)
gemini_cli.api_error
Logs when an API request fails.
Attributes: error.message (string), model_name (string), duration (int), prompt_id (string), auth_type (string), error_type (string, optional), status_code (int or string, optional), role (string, optional)
gemini_cli.malformed_json_response
Logs when a JSON response cannot be parsed.
Attributes: model (string)
Model routing
These logs track how Gemini CLI selects and routes requests to models.
gemini_cli.slash_command
Logs slash command execution.
Attributes: command (string), subcommand (string, optional), status (string: "success" or "error")
gemini_cli.slash_command.model
Logs model selection via slash command.
Attributes: model_name (string)
gemini_cli.model_routing
Records model router decisions and reasoning.
Attributes: decision_model (string), decision_source (string), routing_latency_ms (int), reasoning (string, optional), failed (boolean), error_message (string, optional), approval_mode (string)
Chat and streaming
These logs track chat context compression and streaming chunk errors.
gemini_cli.chat_compression
Logs chat context compression events.
Attributes: tokens_before (int), tokens_after (int)
gemini_cli.chat.invalid_chunk
Logs invalid chunks received in a stream.
Attributes: error_message (string, optional)
gemini_cli.chat.content_retry
Logs retries due to content errors.
Attributes: attempt_number (int), error_type (string), retry_delay_ms (int), model (string)
gemini_cli.chat.content_retry_failure
Logs when all content retries fail.
Attributes: total_attempts (int), final_error_type (string), total_duration_ms (int, optional), model (string)
gemini_cli.conversation_finished
Logs when a conversation session ends.
Attributes: approvalMode (string), turnCount (int)
Resilience
Resilience logs record fallback mechanisms and recovery attempts.
gemini_cli.flash_fallback
Logs a switch to a flash model fallback.
Attributes: auth_type (string)
gemini_cli.ripgrep_fallback
Logs fallback to standard grep.
Attributes: error (string, optional)
gemini_cli.web_fetch_fallback_attempt
Logs web-fetch fallback attempts.
Attributes: reason (string: "private_ip" or "primary_failed")
gemini_cli.agent.recovery_attempt
Logs attempts to recover from agent errors.
Attributes: agent_name (string), attempt_number (int), success (boolean), error_type (string, optional)
Extensions
Extension logs track lifecycle events and settings changes.
gemini_cli.extension_install
Logs when you install an extension.
Attributes: extension_name (string), extension_version (string), extension_source (string), status (string)
gemini_cli.extension_uninstall
Logs when you uninstall an extension.
Attributes: extension_name (string), status (string)
gemini_cli.extension_enable
Logs when you enable an extension.
Attributes: extension_name (string), setting_scope (string)
gemini_cli.extension_disable
Logs when you disable an extension.
Attributes: extension_name (string), setting_scope (string)
Agent runs
Agent logs track the lifecycle of agent executions.
gemini_cli.agent.start
Logs when an agent run begins.
Attributes: agent_id (string), agent_name (string)
gemini_cli.agent.finish
Logs when an agent run completes.
Attributes: agent_id (string), agent_name (string), duration_ms (int), turn_count (int), terminate_reason (string)
IDE logs capture connectivity events for the IDE companion.
gemini_cli.ide_connection
Logs IDE companion connections.
Attributes: connection_type (string)
UI logs track terminal rendering issues.
kitty_sequence_overflow
Logs terminal control sequence overflows.
Attributes: sequence_length (int), truncated_sequence (string)
Miscellaneous
gemini_cli.rewind
Logs when the conversation state is rewound.
Attributes: outcome (string)
gemini_cli.conseca.verdict
Logs security verdicts from ConSeca.
Attributes: verdict (string), decision (string: "accept", "reject", or "modify"), reason (string, optional), tool_name (string, optional)
gemini_cli.hook_call
Logs execution of lifecycle hooks.
Attributes: hook_name (string), hook_type (string), duration_ms (int), success (boolean)
gemini_cli.tool_output_masking
Logs when tool output is masked for privacy.
Attributes: tokens_before (int), tokens_after (int), masked_count (int), total_prunable_tokens (int)
gemini_cli.keychain.availability
Logs keychain availability checks.
Attributes: available (boolean)
gemini_cli.startup_stats
Logs detailed startup performance statistics.
Attributes: phases (JSON array of startup phases), os_platform (string), os_release (string), is_docker (boolean)
Metrics
Metrics provide numerical measurements of behavior over time.
Custom metrics
Gemini CLI exports several custom metrics.
Sessions
gemini_cli.session.count
Incremented once per CLI startup.
Onboarding
These metrics track the onboarding flow from the start of authentication to successful user onboarding.
- gemini_cli.onboarding.start (Counter, Int): Incremented when the authentication flow begins.
- gemini_cli.onboarding.success (Counter, Int): Incremented when the user onboarding flow completes successfully. Attributes: user_tier (string).
gemini_cli.tool.call.count
Counts tool calls.
Attributes: function_name (string), success (boolean), decision (string: "accept", "reject", "modify", or "auto_accept"), tool_type (string: "mcp" or "native")
gemini_cli.tool.call.latency
Measures tool call latency (in ms).
Attributes: function_name (string)
gemini_cli.api.request.count
Counts all API requests.
Attributes: model (string), status_code (int or string), error_type (string, optional)
gemini_cli.api.request.latency
Measures API request latency (in ms).
Attributes: model (string)
Token usage
gemini_cli.token.usage
Counts input, output, thought, cache, and tool tokens.
Attributes: model (string), type (string: "input", "output", "thought", "cache", or "tool")
gemini_cli.file.operation.count
Counts file operations.
Attributes: operation (string: "create", "read", or "update"), lines (int, optional), mimetype (string, optional), extension (string, optional), programming_language (string, optional)
gemini_cli.lines.changed
Counts added or removed lines.
Attributes: function_name (string, optional), type (string: "added" or "removed")
Chat and streaming
gemini_cli.chat_compression
Counts compression operations.
Attributes: tokens_before (int), tokens_after (int)
gemini_cli.chat.invalid_chunk.count
Counts invalid stream chunks.
gemini_cli.chat.content_retry.count
Counts content error retries.
gemini_cli.chat.content_retry_failure.count
Counts requests where all retries failed.
Model routing
gemini_cli.slash_command.model.call_count
Counts model selections.
Attributes: slash_command.model.model_name (string)
gemini_cli.model_routing.latency
Measures routing decision latency.
Attributes: routing.decision_model (string), routing.decision_source (string), routing.approval_mode (string)
gemini_cli.model_routing.failure.count
Counts routing failures.
Attributes: routing.decision_source (string), routing.error_message (string), routing.approval_mode (string)
Agent runs
gemini_cli.agent.run.count
Counts agent runs.
Attributes: agent_name (string), terminate_reason (string)
gemini_cli.agent.duration
Measures agent run duration.
Attributes: agent_name (string)
gemini_cli.agent.turns
Counts turns per agent run.
Attributes: agent_name (string)
Approval mode
gemini_cli.plan.execution.count
Counts plan executions.
Attributes: approval_mode (string)
gemini_cli.ui.flicker.count
Counts terminal flicker events.
Performance
Gemini CLI provides detailed performance metrics for advanced monitoring.
gemini_cli.startup.duration
Measures startup time by phase.
Attributes: phase (string), details (map, optional)
gemini_cli.memory.usage
Measures heap and RSS memory.
Attributes: memory_type (string: "heap_used", "heap_total", "external", or "rss"), component (string, optional)
gemini_cli.cpu.usage
Measures CPU usage percentage.
Attributes: component (string, optional)
gemini_cli.tool.queue.depth
Measures tool execution queue depth.
gemini_cli.tool.execution.breakdown
Breaks down tool time by phase.
Attributes: function_name (string), phase (string: "validation", "preparation", "execution", or "result_processing")
GenAI semantic convention
These metrics follow the standard OpenTelemetry GenAI semantic conventions.
- gen_ai.client.token.usage: Counts tokens used per operation.
- gen_ai.client.operation.duration: Measures operation duration in seconds.
Traces
Traces provide an “under-the-hood” view of agent and backend operations. Use traces to debug tool interactions and optimize performance.
Every trace captures rich metadata via the following standard span attributes:
- gen_ai.operation.name: The high-level operation (for example, tool_call, llm_call, user_prompt, system_prompt, agent_call, or schedule_tool_calls).
- gen_ai.agent.name: Set to gemini-cli.
- gen_ai.agent.description: The service agent description.
- gen_ai.input.messages: Input data or metadata.
- gen_ai.output.messages: Output data or results.
- gen_ai.request.model: The request model name.
- gen_ai.response.model: The response model name.
- gen_ai.prompt.name: The prompt name.
- gen_ai.tool.name: The executed tool name.
- gen_ai.tool.call_id: A unique ID for the tool call.
- gen_ai.tool.description: The tool description.
- gen_ai.tool.definitions: Tool definitions in JSON format.
- gen_ai.usage.input_tokens: The number of input tokens.
- gen_ai.usage.output_tokens: The number of output tokens.
- gen_ai.system_instructions: System instructions in JSON format.
- gen_ai.conversation.id: The CLI session ID.
For more details on semantic conventions for events, see the OpenTelemetry documentation.
Gemini CLI supports a variety of themes to customize its color scheme and appearance. You can change the theme to suit your preferences via the /theme command or the "theme" configuration setting.
Available themes
Gemini CLI comes with a selection of predefined themes, which you can list using the /theme command within Gemini CLI:
- Dark themes: ANSI, Atom One, Ayu, Default, Dracula, GitHub, Holiday, Shades Of Purple, Solarized Dark, Tokyo Night
- Light themes: ANSI Light, Ayu Light, Default Light, GitHub Light, Google Code, Solarized Light, Xcode
Changing themes
1. Enter /theme in Gemini CLI.
2. A dialog or selection prompt appears, listing the available themes.
3. Use the arrow keys to select a theme. Some interfaces offer a live preview or highlight as you select.
4. Confirm your selection to apply the theme.
Theme persistence
Selected themes are saved in Gemini CLI’s configuration so your preference is remembered across sessions.
Custom color themes
Gemini CLI lets you create your own custom color themes by specifying them in your settings.json file. This gives you full control over the color palette used in the CLI.
How to define a custom theme
Add a customThemes block to your user, project, or system settings.json file. Each custom theme is defined as an object with a unique name and a set of nested configuration objects. For example:

{
  "ui": {
    "customThemes": {
      "MyCustomTheme": {
        "name": "MyCustomTheme",
        "type": "custom",
        "background": { "primary": "#181818" },
        "text": { "primary": "#f0f0f0", "secondary": "#a0a0a0" }
      }
    }
  }
}
Configuration objects:
- text: Defines text colors.
  - primary: The default text color.
  - secondary: Used for less prominent text.
  - link: Color for URLs and links.
  - accent: Used for highlights and emphasis.
  - response: Takes precedence over primary when rendering model responses.
- background: Defines background colors.
  - primary: The main background color of the UI.
  - diff.added: Background for added lines in diffs.
  - diff.removed: Background for removed lines in diffs.
- border: Defines border colors.
  - default: The standard border color.
  - focused: Border color when an element is focused.
- status: Colors for status indicators.
  - success: Used for successful operations.
  - warning: Used for warnings.
  - error: Used for errors.
- ui: Other UI elements.
  - comment: Color for code comments.
  - symbol: Color for code symbols and operators.
  - gradient: An array of colors used for gradient effects.
Required properties:
- name: Must match the key in the customThemes object and be a string.
- type: Must be the string "custom".
While all sub-properties are technically optional, we recommend providing at least background.primary, text.primary, text.secondary, and the various accent colors via text.link, text.accent, and status to ensure a cohesive UI.
You can use either hex codes (for example, #FF0000) or standard CSS color names (for example, coral, teal, blue) for any color value. See CSS color names for a full list of supported names.
You can define multiple custom themes by adding more entries to the customThemes object.
Loading themes from a file
In addition to defining custom themes in settings.json, you can load a theme directly from a JSON file by specifying the file path in your settings.json. This is useful for sharing themes or keeping them separate from your main configuration.
To load a theme from a file, set the theme property in your settings.json to the path of your theme file:

{
  "ui": {
    "theme": "/path/to/your/theme.json"
  }
}
The theme file must be a valid JSON file that follows the same structure as a
custom theme defined in settings.json.
Example my-theme.json:
Example my-theme.json:

{
  "name": "Gruvbox Dark",
  "type": "custom",
  "background": {
    "primary": "#282828",
    "diff": { "added": "#2b3312", "removed": "#341212" }
  },
  "text": {
    "primary": "#ebdbb2",
    "secondary": "#a89984",
    "link": "#83a598",
    "accent": "#d3869b"
  },
  "border": { "default": "#3c3836", "focused": "#458588" },
  "status": { "success": "#b8bb26", "warning": "#fabd2f", "error": "#fb4934" },
  "ui": {
    "comment": "#928374",
    "symbol": "#8ec07c",
    "gradient": ["#cc241d", "#d65d0e", "#d79921"]
  }
}
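Before pointing settings.json at a theme file, you can sanity-check it with a short script. This is an illustrative sketch, not a CLI feature: validate_theme is a hypothetical helper that only checks the documented required properties and hex-style color values.

```python
import json
import re

# Accept 3- or 6-digit hex color codes, e.g. "#fff" or "#282828".
HEX = re.compile(r"^#[0-9a-fA-F]{3}(?:[0-9a-fA-F]{3})?$")

def validate_theme(path):
    """Return a list of problems found in a custom theme JSON file.

    Only checks the documented required properties ("name", "type")
    and that values starting with "#" look like hex color codes.
    CSS color names are left unchecked.
    """
    with open(path, encoding="utf-8") as fh:
        theme = json.load(fh)
    problems = []
    if not isinstance(theme.get("name"), str):
        problems.append('"name" must be a string')
    if theme.get("type") != "custom":
        problems.append('"type" must be the string "custom"')

    def walk(node):
        if isinstance(node, dict):
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)
        elif isinstance(node, str) and node.startswith("#"):
            if not HEX.match(node):
                problems.append(f"suspicious color value: {node}")

    walk(theme)
    return problems
```

An empty return value means the file passed these basic checks; the CLI itself remains the source of truth for what it accepts.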
Using your custom theme
- Select your custom theme using the /theme command in Gemini CLI. Your custom theme will appear in the theme selection dialog.
- Or set it as the default by adding "theme": "MyCustomTheme" to the ui object in your settings.json.
- Custom themes can be set at the user, project, or system level, and follow the same configuration precedence as other settings.
Themes from extensions
Extensions can also provide custom themes. Once an extension is installed and enabled, its themes are automatically added to the selection list in the /theme command.
Themes from extensions appear with the extension name in parentheses to help you identify their source, for example: shades-of-green (green-extension).
Dark themes
Screenshots of the dark themes (ANSI, Atom One, Ayu, Default, Dracula, GitHub, Holiday, Shades Of Purple, Solarized Dark, Tokyo Night) are available in the online documentation.
Light themes
Screenshots of the light themes (ANSI Light, Ayu Light, Default Light, GitHub Light, Google Code, Solarized Light) are available in the online documentation.
Gemini CLI automatically optimizes API costs through token caching when using API key authentication (Gemini API key or Vertex AI). This feature reuses previous system instructions and context to reduce the number of tokens processed in subsequent requests.
Token caching is available for:
- API key users (Gemini API key)
- Vertex AI users (with project and location setup)
Token caching is not available for:
- OAuth users (Google Personal/Enterprise accounts) - the Code Assist API does not support cached content creation at this time
You can view your token usage and cached token savings using the `/stats` command. When cached tokens are available, they will be displayed in the stats output.
The Trusted Folders feature is a security setting that gives you control over which projects can use the full capabilities of Gemini CLI. It prevents potentially malicious code from running by asking you to approve a folder before the CLI loads any project-specific configurations from it.
Enabling the feature
The Trusted Folders feature is disabled by default. To use it, you must first enable it in your settings.
Add the following to your user settings.json file:
```json
{
  "security": {
    "folderTrust": {
      "enabled": true
    }
  }
}
```
How it works: The trust dialog
Once the feature is enabled, the first time you run Gemini CLI from a folder, a dialog will automatically appear, prompting you to make a choice:
- Trust folder: Grants full trust to the current folder (for example, `my-project`).
- Trust parent folder: Grants trust to the parent directory (for example, `safe-projects`), which automatically trusts all of its subdirectories as well. This is useful if you keep all your safe projects in one place.
- Don’t trust: Marks the folder as untrusted. The CLI will operate in a restricted “safe mode.”
Your choice is saved in a central file (`~/.gemini/trustedFolders.json`), so you will only be asked once per folder.
Understanding folder contents: The discovery phase
Before you make a choice, Gemini CLI performs a discovery phase to scan the folder for potential configurations. This information is displayed in the trust dialog to help you make an informed decision.
The discovery UI lists the following categories of items found in the project:
- Commands: Custom `.toml` command definitions that add new functionality.
- MCP Servers: Configured Model Context Protocol servers that the CLI will attempt to connect to.
- Hooks: System or custom hooks that can intercept and modify CLI behavior.
- Skills: Local agent skills that provide specialized capabilities.
- Setting overrides: Any project-specific configurations that override your global user settings.
Security warnings and errors
The trust dialog also highlights critical information that requires your attention:
- Security Warnings: The CLI will explicitly flag potentially dangerous settings, such as auto-approving certain tools or disabling the security sandbox.
- Discovery Errors: If the CLI encounters issues while scanning the folder (for example, a malformed `settings.json` file), these errors will be displayed prominently.
By reviewing these details, you can ensure that you only grant trust to projects that you know are safe.
Why trust matters: The impact of an untrusted workspace
When a folder is untrusted, Gemini CLI runs in a restricted “safe mode” to protect you. In this mode, the following features are disabled:
- Workspace settings are ignored: The CLI will not load the `.gemini/settings.json` file from the project. This prevents the loading of custom tools and other potentially dangerous configurations.
- Environment variables are ignored: The CLI will not load any `.env` files from the project.
- Extension management is restricted: You cannot install, update, or uninstall extensions.
- Tool auto-acceptance is disabled: You will always be prompted before any tool is run, even if you have auto-acceptance enabled globally.
- Automatic memory loading is disabled: The CLI will not automatically load files into context from directories specified in local settings.
- MCP servers do not connect: The CLI will not attempt to connect to any Model Context Protocol (MCP) servers.
- Custom commands are not loaded: The CLI will not load any custom commands from `.toml` files, including both project-specific and global user commands.
Granting trust to a folder unlocks the full functionality of Gemini CLI for that workspace.
Managing your trust settings
If you need to change a decision or see all your settings, you have a couple of options:
- Change the current folder’s trust: Run the `/permissions` command from within the CLI. This will bring up the same interactive dialog, allowing you to change the trust level for the current folder.
- View all trust rules: To see a complete list of all your trusted and untrusted folder rules, you can inspect the contents of the `~/.gemini/trustedFolders.json` file in your home directory.
The trust check process (advanced)
For advanced users, it’s helpful to know the exact order of operations for how trust is determined:
1. IDE trust signal: If you are using the IDE Integration, the CLI first asks the IDE if the workspace is trusted. The IDE’s response takes highest priority.
2. Local trust file: If the IDE is not connected, the CLI checks the central `~/.gemini/trustedFolders.json` file.
Automate tasks with Gemini CLI. Learn how to use headless mode, pipe data into Gemini CLI, automate workflows with shell scripts, and generate structured JSON output for other applications.
Prerequisites
- Gemini CLI installed and authenticated.
- Familiarity with shell scripting (Bash/Zsh).
Why headless mode?
Headless mode runs Gemini CLI once and exits. It’s perfect for:
- CI/CD: Analyzing pull requests automatically.
- Batch processing: Summarizing a large number of log files.
- Tool building: Creating your own “AI wrapper” scripts.
How to use headless mode
Run Gemini CLI in headless mode by providing a prompt with the `-p` (or `--prompt`) flag. This bypasses the interactive chat interface and prints the response to standard output (stdout). Positional arguments without the flag default to interactive mode, unless the input or output is piped or redirected.
Run a single command:
```shell
gemini -p "Write a poem about TypeScript"
```
How to pipe input to Gemini CLI
Feed data into Gemini using the standard Unix pipe (`|`). Gemini reads the standard input (stdin) as context and answers your question using standard output.
Pipe a file:

macOS/Linux

```shell
cat error.log | gemini -p "Explain why this failed"
```

Windows (PowerShell)

```powershell
Get-Content error.log | gemini -p "Explain why this failed"
```

Pipe a command:

```shell
git diff | gemini -p "Write a commit message for these changes"
```
Use Gemini CLI output in scripts
Because Gemini prints to stdout, you can chain it with other tools or save the results to a file.
Scenario: Bulk documentation generator
You have a folder of Python scripts and want to generate a Markdown documentation file for each one.
1. Save the following code as `generate_docs.sh` (or `generate_docs.ps1` for Windows):

   macOS/Linux (`generate_docs.sh`)

   ```shell
   #!/bin/bash
   # Loop through all Python files
   for file in *.py; do
     echo "Generating docs for $file..."
     # Ask Gemini CLI to generate the documentation and print it to stdout
     gemini -p "Generate a Markdown documentation summary for @$file. Print the result to standard output." > "${file%.py}.md"
   done
   ```

   Windows PowerShell (`generate_docs.ps1`)

   ```powershell
   # Loop through all Python files
   Get-ChildItem -Filter *.py | ForEach-Object {
     Write-Host "Generating docs for $($_.Name)..."
     $newName = $_.Name -replace '\.py$', '.md'
     # Ask Gemini CLI to generate the documentation and print it to stdout
     gemini -p "Generate a Markdown documentation summary for @$($_.Name). Print the result to standard output." | Out-File -FilePath $newName -Encoding utf8
   }
   ```

2. Make the script executable and run it in your directory:

   macOS/Linux

   ```shell
   chmod +x generate_docs.sh
   ./generate_docs.sh
   ```

   Windows (PowerShell)

   ```powershell
   .\generate_docs.ps1
   ```

   This creates a corresponding Markdown file for every Python file in the folder.
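The renaming in the bash script relies on shell parameter expansion: `${file%.py}` strips the shortest trailing `.py` match before `.md` is appended. A standalone sketch (no Gemini CLI required):

```shell
file="analyze_logs.py"
# "%.py" removes the trailing ".py" suffix, then ".md" is appended
out="${file%.py}.md"
echo "$out"   # prints: analyze_logs.md
```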
Extract structured JSON data
When writing a script, you often need structured data (JSON) to pass to tools like `jq`. To get pure JSON data from the model, combine the `--output-format json` flag with `jq` to parse the `response` field.
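Conceptually, the JSON output wraps the model’s text in a `response` field, which is why the scripts below pipe through `jq -r '.response'`. A rough sketch of the shape (only the `response` field is shown; the full schema, including usage stats, is documented in the Headless mode reference and may differ):

```json
{
  "response": "The model's answer as a single string"
}
```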
Scenario: Extract and return structured data
Section titled “Scenario: Extract and return structured data”-
Save the following script as
generate_json.sh(orgenerate_json.ps1for Windows):macOS/Linux (
generate_json.sh)#!/bin/bash# Ensure we are in a project rootif [ ! -f "package.json" ]; thenecho "Error: package.json not found."exit 1fi# Extract datagemini --output-format json "Return a raw JSON object with keys 'version' and 'deps' from @package.json" | jq -r '.response' > data.jsonWindows PowerShell (
generate_json.ps1)Terminal window # Ensure we are in a project rootif (-not (Test-Path "package.json")) {Write-Error "Error: package.json not found."exit 1}# Extract data (requires jq installed, or you can use ConvertFrom-Json)$output = gemini --output-format json "Return a raw JSON object with keys 'version' and 'deps' from @package.json" | ConvertFrom-Json$output.response | Out-File -FilePath data.json -Encoding utf8 -
2. Run the script:

   macOS/Linux

   ```shell
   chmod +x generate_json.sh
   ./generate_json.sh
   ```

   Windows (PowerShell)

   ```powershell
   .\generate_json.ps1
   ```

3. Check `data.json`. The file should look like this:

   ```json
   {
     "version": "1.0.0",
     "deps": {
       "react": "^18.2.0"
     }
   }
   ```
Build your own custom AI tools
Use headless mode to perform custom, automated AI tasks.
Scenario: Create a “Smart Commit” alias
You can add a function to your shell configuration to create a git commit wrapper that writes the message for you.
macOS/Linux (Bash/Zsh)
1. Open your `.zshrc` file (or `.bashrc` if you use Bash) in your preferred text editor.

   ```shell
   nano ~/.zshrc
   ```

   Note: If you use VS Code, you can run `code ~/.zshrc`.

2. Scroll to the very bottom of the file and paste this code:

   ```shell
   function gcommit() {
     # Get the diff of staged changes
     diff=$(git diff --staged)
     if [ -z "$diff" ]; then
       echo "No staged changes to commit."
       return 1
     fi
     # Ask Gemini to write the message
     echo "Generating commit message..."
     msg=$(echo "$diff" | gemini -p "Write a concise Conventional Commit message for this diff. Output ONLY the message.")
     # Commit with the generated message
     git commit -m "$msg"
   }
   ```

   Save your file and exit.

3. Run this command to make the function available immediately:

   ```shell
   source ~/.zshrc
   ```
Windows (PowerShell)
1. Open your PowerShell profile in your preferred text editor.

   ```powershell
   notepad $PROFILE
   ```

2. Scroll to the very bottom of the file and paste this code:

   ```powershell
   function gcommit {
     # Get the diff of staged changes
     $diff = git diff --staged
     if (-not $diff) {
       Write-Host "No staged changes to commit."
       return
     }
     # Ask Gemini to write the message
     Write-Host "Generating commit message..."
     $msg = $diff | gemini -p "Write a concise Conventional Commit message for this diff. Output ONLY the message."
     # Commit with the generated message
     git commit -m "$msg"
   }
   ```

   Save your file and exit.

3. Run this command to make the function available immediately:

   ```powershell
   . $PROFILE
   ```

4. Use your new command:

   ```powershell
   gcommit
   ```

   Gemini CLI will analyze your staged changes and commit them with a generated message.
Next steps
- Explore the Headless mode reference for full JSON schema details.
- Learn about Shell commands to let the agent run scripts instead of just writing them.
Explore, analyze, and modify your codebase using Gemini CLI. In this guide, you’ll learn how to provide Gemini CLI with files and directories, modify and create files, and control what Gemini CLI can see.
Prerequisites
- Gemini CLI installed and authenticated.
- A project directory to work with (for example, a git repository).
Providing context by reading files
Gemini CLI will generally try to read relevant files, sometimes prompting you for access (depending on your settings). To ensure that Gemini CLI uses a file, you can also include it directly.
Direct file inclusion (@)
If you know the path to the file you want to work on, use the `@` symbol. This forces the CLI to read the file immediately and inject its content into your prompt.
`@src/components/UserProfile.tsx Explain how this component handles user data.`
Working with multiple files
Section titled “Working with multiple files”Complex features often span multiple files. You can chain @ references to give
the agent a complete picture of the dependencies.
`@src/components/UserProfile.tsx @src/types/User.ts Refactor the component to use the updated User interface.`
Including entire directories
For broad questions or refactoring, you can include an entire directory. Be careful with large folders, as this consumes more tokens.
`@src/utils/ Check these utility functions for any deprecated API usage.`
How to find files (Exploration)
If you don’t know the exact file path, you can ask Gemini CLI to find it for you. This is useful when navigating a new codebase or looking for specific logic.
Scenario: Find a component definition
You know there’s a UserProfile component, but you don’t know where it lives.
`Find the file that defines the UserProfile component.`
Gemini uses the `glob` or `list_directory` tools to search your project structure. It will return the specific path (for example, `src/components/UserProfile.tsx`), which you can then use with `@` in your next turn.
How to modify code
Once Gemini CLI has context, you can direct it to make specific edits. The agent is capable of complex refactoring, not just simple text replacement.
`Update @src/components/UserProfile.tsx to show a loading spinner if the user data is null.`
Gemini CLI uses the `replace` tool to propose a targeted code change.
Creating new files
You can also ask the agent to create entirely new files or folder structures.
`Create a new file @src/components/LoadingSpinner.tsx with a simple Tailwind CSS spinner.`
Gemini CLI uses the `write_file` tool to generate the new file from scratch.
Review and confirm changes
Gemini CLI prioritizes safety. Before any file is modified, it presents a unified diff of the proposed changes.
```diff
- if (!user) return null;
+ if (!user) return <LoadingSpinner />;
```
- Red lines (-): Code that will be removed.
- Green lines (+): Code that will be added.
Press y to confirm and apply the change to your local file system. If the diff doesn’t look right, press n to cancel and refine your prompt.
Verify the result
After the edit is complete, verify the fix. You can simply read the file again or, better yet, run your project’s tests.
`Run the tests for the UserProfile component.`
Gemini CLI uses the `run_shell_command` tool to execute your test runner (for example, `npm test` or `jest`). This ensures the changes didn’t break existing functionality.
Advanced: Controlling what Gemini sees
By default, Gemini CLI respects your `.gitignore` file. It won’t read or search through `node_modules`, build artifacts, or other ignored paths.

If you have sensitive files (like `.env`) or large assets that you want to keep hidden from the AI without ignoring them in Git, you can create a `.geminiignore` file in your project root.
Example .geminiignore:
```
.env
local-db-dump.sql
private-notes.md
```
Next steps
- Learn how to Manage context and memory to keep your agent smarter over long sessions.
- See Execute shell commands for more on running tests and builds.
- Explore the technical File system reference for advanced tool parameters.
Connect Gemini CLI to your external databases and services. In this guide, you’ll learn how to extend Gemini CLI’s capabilities by installing the GitHub MCP server and using it to manage your repositories.
Prerequisites
- Gemini CLI installed.
- Docker: Required for this specific example (many MCP servers run as Docker containers).
- GitHub token: A Personal Access Token (PAT) with repo permissions.
How to prepare your credentials
Most MCP servers require authentication. For GitHub, you need a PAT.
- Create a fine-grained PAT.
- Grant it Read access to Metadata and Contents, and Read/Write access to Issues and Pull Requests.
- Store it in your environment:
macOS/Linux

```shell
export GITHUB_PERSONAL_ACCESS_TOKEN="github_pat_..."
```

Windows (PowerShell)

```powershell
$env:GITHUB_PERSONAL_ACCESS_TOKEN="github_pat_..."
```
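As a quick sanity check before wiring up the server, you can verify the variable is visible to child processes. This `check_token` helper is a hypothetical convenience, not part of Gemini CLI:

```shell
# Prints a confirmation if the token is set; fails with an error if not.
check_token() {
  if [ -n "$GITHUB_PERSONAL_ACCESS_TOKEN" ]; then
    echo "Token is set"
  else
    echo "GITHUB_PERSONAL_ACCESS_TOKEN is not set" >&2
    return 1
  fi
}

# Example invocation with a placeholder value:
GITHUB_PERSONAL_ACCESS_TOKEN="github_pat_example" check_token   # prints: Token is set
```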
How to configure Gemini CLI
You tell Gemini about new servers by editing your `settings.json`.
1. Open `~/.gemini/settings.json` (or the project-specific `.gemini/settings.json`).
2. Add the `mcpServers` block. This tells Gemini: “Run this Docker container and talk to it.”
```json
{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server:latest"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_PERSONAL_ACCESS_TOKEN}"
      }
    }
  }
}
```
How to verify the connection
Restart Gemini CLI. It will automatically try to start the defined servers.
Command: /mcp list
You should see: ✓ github: docker ... - Connected
If you see `Disconnected` or an error, check that Docker is running and your API token is valid.
How to use the new tools
Now that the server is running, the agent has new capabilities (“tools”). You don’t need to learn special commands; just ask in natural language.
Scenario: Listing pull requests
Section titled “Scenario: Listing pull requests”Prompt: List the open PRs in the google/gemini-cli repository.
The agent will:
- Recognize the request matches a GitHub tool.
- Call `mcp_github_list_pull_requests`.
- Present the data to you.
Scenario: Creating an issue
Prompt:
Create an issue in my repo titled "Bug: Login fails" with the description "See logs".
Troubleshooting
- Server won’t start? Try running the docker command manually in your terminal to see if it prints an error (for example, “image not found”).
- Tools not found? Run `/mcp reload` to force the CLI to re-query the server for its capabilities.
Next steps
- Explore the MCP servers reference to learn about SSE and HTTP transports for remote servers.
- Browse the official MCP server list to find connectors for Slack, Postgres, Google Drive, and more.
Control what Gemini CLI knows about you and your projects. In this guide, you’ll learn how to define project-wide rules with `GEMINI.md`, teach the agent persistent facts, and inspect the active context.
Prerequisites
- Gemini CLI installed and authenticated.
- A project directory where you want to enforce specific rules.
Why manage context?
Gemini CLI is powerful but general. It doesn’t know your preferred testing framework, your indentation style, or your preference against `any` in TypeScript. Context management solves this by giving the agent persistent memory.
You’ll use these features when you want to:
- Enforce standards: Ensure every generated file matches your team’s style guide.
- Set a persona: Tell the agent to act as a “Senior Rust Engineer” or “QA Specialist.”
- Remember facts: Save details like “My database port is 5432” so you don’t have to repeat them.
How to define project-wide rules (GEMINI.md)
The most powerful way to control the agent’s behavior is through `GEMINI.md` files. These are Markdown files containing instructions that are automatically loaded into every conversation.
Scenario: Create a project context file
Section titled “Scenario: Create a project context file”-
In the root of your project, create a file named
GEMINI.md. -
Add your instructions:
# Project Instructions- **Framework:** We use React with Vite.- **Styling:** Use Tailwind CSS for all styling. Do not write custom CSS.- **Testing:** All new components must include a Vitest unit test.- **Tone:** Be concise. Don't explain basic React concepts. -
Start a new session. Gemini CLI will now know these rules automatically.
Scenario: Using the hierarchy
Context is loaded hierarchically. This lets you have general rules for everything and specific rules for sub-projects.

- Global: `~/.gemini/GEMINI.md` (rules for every project you work on).
- Project root: `./GEMINI.md` (rules for the current repository).
- Subdirectory: `./src/GEMINI.md` (rules specific to the `src` folder).
Example: You might set “Always use strict typing” in your global config, but “Use Python 3.11” only in your backend repository.
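To make that example concrete, the two files might look like this (contents are illustrative, not a prescribed format). A global `~/.gemini/GEMINI.md`:

```markdown
- **Typing:** Always use strict typing.
```

And the backend repository’s `./GEMINI.md`:

```markdown
- **Runtime:** Use Python 3.11.
```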
How to teach the agent facts (Memory)
Sometimes you don’t want to write a config file. You just want to tell the agent something once and have it remember forever. You can do this naturally in chat.
Scenario: Saving a memory
Just tell the agent to remember something.
Prompt: Remember that I prefer using 'const' over 'let' wherever possible.
The agent will use the `save_memory` tool to store this fact in your global memory file.
Prompt: Save the fact that the staging server IP is 10.0.0.5.
Scenario: Using memory in conversation
Once a fact is saved, you don’t need to invoke it explicitly. The agent “knows” it.
Next Prompt: Write a script to deploy to staging.
Agent Response: “I’ll write a script to deploy to 10.0.0.5…”
How to manage and inspect context
As your project grows, you might want to see exactly what instructions the agent is following.
Scenario: View active context
Section titled “Scenario: View active context”To see the full, concatenated set of instructions currently loaded (from all
GEMINI.md files and saved memories), use the /memory show command.
Command: /memory show
This prints the raw text the model receives at the start of the session. It’s excellent for debugging why the agent might be ignoring a rule.
Scenario: Refresh context
If you edit a `GEMINI.md` file while a session is running, the agent won’t know immediately. Force a reload with:
Command: /memory reload
Best practices
- Keep it focused: Avoid adding excessive content to `GEMINI.md`. Keep instructions actionable and relevant to code generation.
- Use negative constraints: Explicitly telling the agent what not to do (for example, “Do not use class components”) is often more effective than vague positive instructions.
- Review often: Periodically check your `GEMINI.md` files to remove outdated rules.
Next steps
- Learn about Session management to see how short-term history works.
- Explore the Command reference for more `/memory` options.
- Read the technical spec for Project context.
Architecting a complex solution requires precision. By combining Plan Mode’s structured environment with model steering’s real-time feedback, you can guide Gemini CLI through the research and design phases to ensure the final implementation plan is exactly what you need.
Prerequisites
- Gemini CLI installed and authenticated.
- Plan Mode enabled in your settings.
- Model steering enabled in your settings.
Why combine Plan Mode and model steering?
Plan Mode typically follows a linear path: research, propose, and draft. Adding model steering lets you:
- Direct the research: Correct the agent if it’s looking in the wrong directory or missing a key dependency.
- Iterate mid-draft: Suggest a different architectural pattern while the agent is still writing the plan.
- Speed up the loop: Avoid waiting for a full research turn to finish before providing critical context.
Step 1: Start a complex task
Enter Plan Mode and start a task that requires research.
Prompt: /plan I want to implement a new notification service using Redis.
Gemini CLI enters Plan Mode and starts researching your existing codebase to identify where the new service should live.
Step 2: Steer the research phase
As you see the agent calling tools like `list_directory` or `grep_search`, you might realize it’s missing the relevant context.
Action: While the spinner is active, type your hint:
"Don't forget to check packages/common/queues for the existing Redis config."
Result: Gemini CLI acknowledges your hint and immediately incorporates it into its research. You’ll see it start exploring the directory you suggested in its very next turn.
Step 3: Refine the design mid-turn
After research, the agent starts drafting the implementation plan. If you notice it’s proposing a design that doesn’t align with your goals, steer it.
Action: Type:
"Actually, let's use a Publisher/Subscriber pattern instead of a simple queue for this service."
Result: The agent stops drafting the current version of the plan, re-evaluates the design based on your feedback, and starts a new draft that uses the Pub/Sub pattern.
Step 4: Approve and implement
Once the agent has used your hints to craft the perfect plan, review the final `.md` file.
Action: Type: "Looks perfect. Let's start the implementation."
Gemini CLI exits Plan Mode and transitions to the implementation phase. Because the plan was refined in real-time with your feedback, the agent can now execute each step with higher confidence and fewer errors.
Tips for effective steering
- Be specific: Instead of “do it differently,” try “use the existing `Logger` class in `src/utils`.”
- Steer early: Providing feedback during the research phase is more efficient than waiting for the final plan to be drafted.
- Use for context: Steering is a great way to provide knowledge that might not be obvious from reading the code (for example, “We are planning to deprecate this module next month”).
Next steps
- Explore Agent Skills to add specialized expertise to your planning turns.
- See the Model steering reference for technical details.
Resume, browse, and rewind your conversations with Gemini CLI. In this guide, you’ll learn how to switch between tasks, manage your session history, and undo mistakes using the rewind feature.
Prerequisites
- Gemini CLI installed and authenticated.
- At least one active or past session.
How to resume where you left off
It’s common to switch context. Maybe you’re waiting for a build and want to work on a different feature. Gemini makes it easy to jump back in.
Scenario: Resume the last session
The fastest way to pick up your most recent work is with the `--resume` flag (or `-r`).

```shell
gemini -r
```
This restores your chat history and memory, so you can say “Continue with the next step” immediately.
Scenario: Browse past sessions
If you want to find a specific conversation from yesterday, use the interactive browser.
Command: /resume
This opens a searchable list of all your past sessions. You’ll see:
- A timestamp (for example, “2 hours ago”).
- The first user message (helping you identify the topic).
- The number of turns in the conversation.
Select a session and press Enter to load it.
How to manage your workspace
Over time, you’ll accumulate a lot of history. Keeping your session list clean helps you find what you need.
Scenario: Deleting sessions
In the `/resume` browser, navigate to a session you no longer need and press `x`. This permanently deletes the history for that specific conversation.
You can also manage sessions from the command line:
```shell
# List all sessions with their IDs
gemini --list-sessions

# Delete a specific session by ID or index
gemini --delete-session 1
```
How to rewind time (Undo mistakes)
Gemini CLI’s Rewind feature is like Ctrl+Z for your workflow.
Scenario: Triggering rewind
At any point in a chat, type `/rewind` or press `Esc` twice.
Scenario: Choosing a restore point
You’ll see a list of your recent interactions. Select the point before the undesired changes occurred.
Scenario: Choosing what to revert
Gemini gives you granular control over the undo process. You can choose to:
- Rewind conversation: Only remove the chat history. The files stay changed. (Useful if the code is good but the chat got off track).
- Revert code changes: Keep the chat history but undo the file edits. (Useful if you want to keep the context but retry the implementation).
- Rewind both: Restore everything to exactly how it was.
How to fork conversations
Sometimes you want to try two different approaches to the same problem.
- Start a session and get to a decision point.
- Save the current state with `/resume save decision-point`.
- Try your first approach.
- Later, use `/resume resume decision-point` to fork the conversation back to that moment and try a different approach.
This creates a new branch of history without losing your original work.
Next steps
- Learn about Checkpointing to understand the underlying safety mechanism.
- Explore Task planning to keep complex sessions organized.
- See the Command reference for `/resume` options, grouped checkpoint menus, and `/chat` compatibility aliases.
Use the CLI to run builds, manage git, and automate system tasks without leaving the conversation. In this guide, you’ll learn how to run commands directly, automate complex workflows, and manage background processes safely.
Prerequisites
- Gemini CLI installed and authenticated.
- Basic familiarity with your system’s shell (Bash, Zsh, PowerShell, and so on).
How to run commands directly (!)
Sometimes you just need to check a file size or git status without asking the
AI
to do it for you. You can pass commands directly to your shell using the
!
prefix.
Example: !ls -la
This executes ls -la immediately and prints the
output to your terminal.
Gemini CLI also records the command and its output in the current session
context, so the model can reference it in follow-up prompts. Very large
outputs
may be truncated.
Scenario: Entering Shell mode
If you’re doing a lot of manual work, toggle “Shell Mode” by typing ! and
pressing Enter. Now, everything you type is sent to the
shell until you exit
(usually by pressing Esc or typing exit).
How to automate complex tasks
You can automate tasks using a combination of Gemini CLI and shell commands.
Scenario: Run tests and fix failures
You want to run tests and fix any failures.
Prompt:
Run the unit tests. If any fail, analyze the error and try to fix the code.
Workflow:
- Gemini calls `run_shell_command('npm test')`.
- You see a confirmation prompt: `Allow command 'npm test'? [y/N]`.
- You press `y`.
- The tests run. If they fail, Gemini reads the error output.
- Gemini uses `read_file` to inspect the failing test.
- Gemini uses `replace` to fix the bug.
- Gemini runs `npm test` again to verify the fix.
This loop lets Gemini work autonomously.
How to manage background processes
You can ask Gemini to start long-running tasks, like development servers or file watchers.
Prompt: Start the React dev server in the background.
Gemini will run the command (for example, npm run dev) and detach it.
Scenario: Viewing active shells
To see what’s running in the background, use the /shells command.
Command: /shells
This opens a dashboard where you can view logs or kill runaway processes.
How to handle interactive commands
Gemini CLI attempts to handle interactive commands (like git add -p or
confirmation prompts) by streaming the output to you. However, for highly
interactive tools (like vim or top), it’s often better to run them yourself
in a separate terminal window or use the ! prefix.
Safety features
Giving an AI access to your shell is powerful but risky. Gemini CLI includes several safety layers.
Confirmation prompts
By default, every shell command requested by the agent requires your explicit approval.
- Allow once: Runs the command one time.
- Allow always: Trusts this specific command for the rest of the session.
- Deny: Stops the agent.
Sandboxing
For maximum security, especially when running untrusted code or exploring new projects, we strongly recommend enabling Sandboxing. This runs all shell commands inside a secure Docker container.
Enable sandboxing: Use the `--sandbox` flag when starting the CLI: `gemini --sandbox`.
Next steps
- Learn about Sandboxing to safely run destructive commands.
- See the Shell tool reference for configuration options like timeouts and working directories.
- Explore Task planning to see how shell commands fit into larger workflows.
Agent Skills extend Gemini CLI with specialized expertise. In this guide, you’ll learn how to create your first skill, bundle custom scripts, and activate them during a session.
How to create a skill
A skill is defined by a directory containing a SKILL.md file. Let’s create an
API Auditor skill that helps you verify if local or remote
endpoints are
responding correctly.
Create the directory structure
Run the following command to create the folders:
macOS/Linux:

```sh
mkdir -p .gemini/skills/api-auditor/scripts
```

Windows (PowerShell):

```powershell
New-Item -ItemType Directory -Force -Path ".gemini\skills\api-auditor\scripts"
```
Create the definition
Create a file at `.gemini/skills/api-auditor/SKILL.md`. This tells the agent when to use the skill and how to behave.

```md
---
name: api-auditor
description: Expertise in auditing and testing API endpoints. Use when the user asks to "check", "test", or "audit" a URL or API.
---

# API Auditor Instructions

You act as a QA engineer specialized in API reliability. When this skill is
active, you MUST:

1. **Audit**: Use the bundled `scripts/audit.js` utility to check the
   status of the provided URL.
2. **Report**: Analyze the output (status codes, latency) and explain any
   failures in plain English.
3. **Secure**: Remind the user if they are testing a sensitive endpoint
   without an `https://` protocol.
```
Add the tool logic
Skills can bundle resources like scripts.

Create a file at `.gemini/skills/api-auditor/scripts/audit.js`. This is the code the agent will run.

```js
const url = process.argv[2];
if (!url) {
  console.error('Usage: node audit.js <url>');
  process.exit(1);
}
console.log(`Auditing ${url}...`);
fetch(url, { method: 'HEAD' })
  .then((r) => console.log(`Result: Success (Status ${r.status})`))
  .catch((e) => console.error(`Result: Failed (${e.message})`));
```
How to verify discovery
Gemini CLI automatically discovers skills in the .gemini/skills directory. You
can also use .agents/skills as a more generic
alternative. Check that it found
your new skill.
Command: /skills list
You should see api-auditor in the list of available
skills.
How to use the skill
Now, try it out. Start a new session and ask a question that triggers the skill’s description.
User: “Can you audit http://geminicli.com”
Gemini recognizes the request matches the api-auditor
description and asks for
permission to activate it.
Model: (After calling activate_skill) “I’ve activated the
api-auditor
skill. I’ll run the audit script now…”
Gemini then uses the run_shell_command tool to
execute your bundled Node
script:
node .gemini/skills/api-auditor/scripts/audit.js http://geminicli.com
Next steps
- Explore the Agent Skills Authoring Guide to learn about more advanced features.
- Learn how to share skills via Extensions.
Keep complex jobs on the rails with Gemini CLI’s built-in task planning. In this guide, you’ll learn how to ask for a plan, execute it step-by-step, and monitor progress with the todo list.
Prerequisites
- Gemini CLI installed and authenticated.
- A complex task in mind (for example, a multi-file refactor or new feature).
Why use task planning?
Standard LLMs have a limited context window and can “forget” the original goal after 10 turns of code generation. Task planning provides:
- Visibility: You see exactly what the agent plans to do before it starts.
- Focus: The agent knows exactly which step it’s working on right now.
- Resilience: If the agent gets stuck, the plan helps it get back on track.
How to ask for a plan
The best way to trigger task planning is to explicitly ask for it.
Prompt:
I want to migrate this project from JavaScript to TypeScript. Please make a plan first.
Gemini will analyze your codebase and use the write_todos tool to generate a
structured list.
Example Plan:
- Create `tsconfig.json`.
- Rename `.js` files to `.ts`.
- Fix type errors in `utils.js`.
- Fix type errors in `server.js`.
- Verify build passes.
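The first step of a plan like this might produce a minimal `tsconfig.json` along the following lines. This is only a sketch: the exact compiler options depend on your project, and `allowJs` is shown here because it permits `.js` and `.ts` files to coexist during an incremental migration.

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "allowJs": true,
    "outDir": "dist"
  },
  "include": ["src"]
}
```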
How to review and iterate
Once the plan is generated, it appears in your CLI. Review it.
- Missing steps? Tell the agent: “You forgot to add a step for installing @types/node.”
- Wrong order? Tell the agent: “Let’s verify the build after each file, not just at the end.”
The agent will update the todo list dynamically.
How to execute the plan
Tell the agent to proceed.
Prompt: Looks good. Start with the first step.
As the agent works, you’ll see the todo list update in real-time above the input box.
- Current focus: The active task is highlighted (for example, `[IN_PROGRESS] Create tsconfig.json`).
- Progress: Completed tasks are marked as done.
How to monitor progress (Ctrl+T)
For a long-running task, the full todo list might be hidden to save space. You can toggle the full view at any time.
Action: Press Ctrl+T.
This shows the complete list, including pending, in-progress, and completed items. It’s a great way to check “how much is left?” without scrolling back up.
How to handle unexpected changes
Plans change. Maybe you discover a library is incompatible halfway through.
Prompt:
Actually, let's skip the 'server.js' refactor for now. It's too risky.
The agent will mark that task as cancelled or remove
it, and move to the next
item. This dynamic adjustment is what makes the todo system powerful—it’s a
living document, not a static text block.
Next steps
- Explore Session management to save your plan and finish it tomorrow.
- See the Todo tool reference for technical schema details.
- Learn about Memory management to persist planning preferences (for example, “Always create a test plan first”).
Access the live internet directly from your prompt. In this guide, you’ll learn how to search for up-to-date documentation, fetch deep context from specific URLs, and apply that knowledge to your code.
Prerequisites
- Gemini CLI installed and authenticated.
- An internet connection.
How to research new technologies
Imagine you want to use a library released yesterday. The model doesn’t know about it yet. You need to teach it.
Scenario: Find documentation
Prompt:
Search for the 'Bun 1.0' release notes and summarize the key changes.
Gemini uses the google_web_search tool to find
relevant pages and synthesizes
an answer. This “grounding” process ensures the agent isn’t hallucinating
features that don’t exist.
Prompt: Find the documentation for the 'React Router v7' loader API.
How to fetch deep context
Search gives you a summary, but sometimes you need the raw details. The
web_fetch tool lets you feed a specific URL directly
into the agent’s context.
Scenario: Reading a blog post
You found a blog post with the exact solution to your bug.
Prompt:
Read https://example.com/fixing-memory-leaks and explain how to apply it to my code.
Gemini will retrieve the page content (stripping away ads and navigation) and use it to answer your question.
Scenario: Comparing sources
You can even fetch multiple pages to compare approaches.
Prompt:
Compare the pagination patterns in https://api.example.com/v1/docs and https://api.example.com/v2/docs.
How to apply knowledge to code
The real power comes when you combine web tools with file editing.
Workflow:
- Search: “How do I implement auth with Supabase?”
- Fetch: “Read this guide: https://supabase.com/docs/guides/auth.”
- Implement: “Great. Now use that pattern to create an `auth.ts` file in my project.”
How to troubleshoot errors
When you hit an obscure error message, paste it into the chat.
Prompt:
I'm getting 'Error: hydration mismatch' in Next.js. Search for recent solutions.
The agent will search sources such as GitHub issues, StackOverflow, and forums to find relevant fixes that might be too new to be in its base training set.
Next steps
- Explore File management to see how to apply the code you generate.
- See the Web search tool reference for citation details.
- See the Web fetch tool reference for technical limitations.
Gemini CLI supports several built-in commands to help you manage your
session,
customize the interface, and control its behavior. These commands are
prefixed
with a forward slash (/), an at symbol (@), or an exclamation mark (!).
Slash commands (/)
Slash commands provide meta-level control over the CLI itself.
Built-in Commands
/about
- Description: Show version info. Share this information when filing issues.
/agents
- Description: Manage local and remote subagents.
- Sub-commands:
  - `list`:
    - Description: Lists all discovered agents, including built-in, local, and remote agents.
    - Usage: `/agents list`
  - `reload` (alias: `refresh`):
    - Description: Rescans agent directories (`~/.gemini/agents` and `.gemini/agents`) and reloads the registry.
    - Usage: `/agents reload`
  - `enable`:
    - Description: Enables a specific subagent.
    - Usage: `/agents enable <agent-name>`
  - `disable`:
    - Description: Disables a specific subagent.
    - Usage: `/agents disable <agent-name>`
  - `config`:
    - Description: Opens a configuration dialog for the specified agent to adjust its model, temperature, or execution limits.
    - Usage: `/agents config <agent-name>`
/auth

- Description: Open a dialog that lets you change the authentication method.
/bug

- Description: File an issue about Gemini CLI. By default, the issue is filed within the GitHub repository for Gemini CLI. The string you enter after `/bug` will become the headline for the bug being filed. The default `/bug` behavior can be modified using the `advanced.bugCommand` setting in your `.gemini/settings.json` files.
/chat

- Description: Alias for `/resume`. Both commands now expose the same session browser action and checkpoint subcommands.
- Menu layout when typing `/chat` (or `/resume`):
  - `-- auto --`: `list` (selecting this opens the auto-saved session browser)
  - `-- checkpoints --`: `list`, `save`, `resume`, `delete`, `share` (manual tagged checkpoints)
  - Unique prefixes (for example `/cha` or `/resu`) resolve to the same grouped menu.
- Sub-commands:
  - `debug`
    - Description: Export the most recent API request as a JSON payload.
  - `delete <tag>`
    - Description: Deletes a saved conversation checkpoint.
    - Equivalent: `/resume delete <tag>`
  - `list`
    - Description: Lists available tags for manually saved checkpoints.
    - Note: This command only lists chats saved within the current project. Because chat history is project-scoped, chats saved in other project directories will not be displayed.
    - Equivalent: `/resume list`
  - `resume <tag>`
    - Description: Resumes a conversation from a previous save.
    - Note: You can only resume chats that were saved within the current project. To resume a chat from a different project, you must run the Gemini CLI from that project’s directory.
    - Equivalent: `/resume resume <tag>`
  - `save <tag>`
    - Description: Saves the current conversation history. You must add a `<tag>` for identifying the conversation state.
    - Details on checkpoint location: The default locations for saved chat checkpoints are:
      - Linux/macOS: `~/.gemini/tmp/<project_hash>/`
      - Windows: `C:\Users\<YourUsername>\.gemini\tmp\<project_hash>\`
    - Behavior: Chats are saved into a project-specific directory, determined by where you run the CLI. Consequently, saved chats are only accessible when working within that same project.
    - Note: These checkpoints are for manually saving and resuming conversation states. For automatic checkpoints created before file modifications, see the Checkpointing documentation.
    - Equivalent: `/resume save <tag>`
  - `share [filename]`
    - Description: Writes the current conversation to a provided Markdown or JSON file. If no filename is provided, then the CLI will generate one.
    - Usage: `/chat share file.md` or `/chat share file.json`
    - Equivalent: `/resume share [filename]`
/clear
- Description: Clear the terminal screen, including the visible session history and scrollback within the CLI. The underlying session data (for history recall) might be preserved depending on the exact implementation, but the visual display is cleared.
- Keyboard shortcut: Press Ctrl+L at any time to perform a clear action.
/commands
- Description: Manage custom slash commands loaded from `.toml` files.
- Sub-commands:
  - `reload`:
    - Description: Reload custom command definitions from all sources (user-level `~/.gemini/commands/`, project-level `<project>/.gemini/commands/`, MCP prompts, and extensions). Use this to pick up new or modified `.toml` files without restarting the CLI.
    - Usage: `/commands reload`
/compress
- Description: Replace the entire chat context with a summary. This saves on tokens used for future tasks while retaining a high level summary of what has happened.
/copy

- Description: Copies the last output produced by Gemini CLI to your clipboard, for easy sharing or reuse.
- Behavior:
- Local sessions use system clipboard tools (pbcopy/xclip/clip).
- Remote sessions (SSH/WSL) use OSC 52 and require terminal support.
- Note: This command requires platform-specific clipboard
tools to be
installed.
  - On Linux, it requires `xclip` or `xsel`. You can typically install them using your system’s package manager.
  - On macOS, it requires `pbcopy`, and on Windows, it requires `clip`. These tools are typically pre-installed on their respective systems.
/directory (or /dir)
- Description: Manage workspace directories for multi-directory support.
- Sub-commands:
  - `add`:
    - Description: Add a directory to the workspace. The path can be absolute or relative to the current working directory. Paths relative to the home directory are also supported.
    - Usage: `/directory add <path1>,<path2>`
    - Note: Disabled in restrictive sandbox profiles. If you’re using one, use `--include-directories` when starting the session instead.
  - `show`:
    - Description: Display all directories added by `/directory add` and `--include-directories`.
    - Usage: `/directory show`
/docs

- Description: Open Gemini CLI documentation in your browser.
/editor
- Description: Open a dialog for selecting supported editors.
/extensions
- Description: Manage extensions. See Gemini CLI Extensions.
- Sub-commands:
  - `config`:
    - Description: Configure extension settings.
  - `disable`:
    - Description: Disable an extension.
  - `enable`:
    - Description: Enable an extension.
  - `explore`:
    - Description: Open extensions page in your browser.
  - `install`:
    - Description: Install an extension from a git repo or local path.
  - `link`:
    - Description: Link an extension from a local path.
  - `list`:
    - Description: List active extensions.
  - `restart`:
    - Description: Restart all extensions.
  - `uninstall`:
    - Description: Uninstall an extension.
  - `update`:
    - Description: Update extensions.
    - Usage: `update <extension-name> | --all`
/help (or /?)
- Description: Display help information about Gemini CLI, including available commands and their usage.
/hooks
- Description: Manage hooks, which allow you to intercept and customize Gemini CLI behavior at specific lifecycle events.
- Sub-commands:
  - `disable-all`:
    - Description: Disable all enabled hooks.
  - `disable <hook-name>`:
    - Description: Disable a hook by name.
  - `enable-all`:
    - Description: Enable all disabled hooks.
  - `enable <hook-name>`:
    - Description: Enable a hook by name.
  - `list` (or `show`, `panel`):
    - Description: Display all registered hooks with their status.
/ide

- Description: Manage IDE integration.
- Sub-commands:
  - `disable`:
    - Description: Disable IDE integration.
  - `enable`:
    - Description: Enable IDE integration.
  - `install`:
    - Description: Install required IDE companion.
  - `status`:
    - Description: Check status of IDE integration.
/init

- Description: Analyzes the current directory and generates a tailored `GEMINI.md` context file, making it simpler to provide project-specific instructions to the Gemini agent.
/mcp

- Description: Manage configured Model Context Protocol (MCP) servers.
- Sub-commands:
  - `auth`:
    - Description: Authenticate with an OAuth-enabled MCP server.
    - Usage: `/mcp auth <server-name>`
    - Details: If `<server-name>` is provided, it initiates the OAuth flow for that server. If no server name is provided, it lists all configured servers that support OAuth authentication.
  - `desc`:
    - Description: List configured MCP servers and tools with descriptions.
  - `disable`:
    - Description: Disable an MCP server.
  - `enable`:
    - Description: Enable a disabled MCP server.
  - `list` (or `ls`):
    - Description: List configured MCP servers and tools. This is the default action if no subcommand is specified.
  - `reload`:
    - Description: Reloads all MCP servers and re-discovers their available tools.
  - `schema`:
    - Description: List configured MCP servers and tools with descriptions and schemas.
/memory
- Description: Manage the AI’s instructional context (hierarchical memory loaded from `GEMINI.md` files).
- Sub-commands:
  - `add`:
    - Description: Adds the following text to the AI’s memory.
    - Usage: `/memory add <text to remember>`
  - `list`:
    - Description: Lists the paths of the `GEMINI.md` files in use for hierarchical memory.
  - `refresh`:
    - Description: Reload the hierarchical instructional memory from all `GEMINI.md` files found in the configured locations (global, project/ancestors, and sub-directories). This command updates the model with the latest `GEMINI.md` content.
  - `show`:
    - Description: Display the full, concatenated content of the current hierarchical memory that has been loaded from all `GEMINI.md` files. This lets you inspect the instructional context being provided to the Gemini model.
- Note: For more details on how `GEMINI.md` files contribute to hierarchical memory, see the CLI Configuration documentation.
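As an illustration, a project-level `GEMINI.md` that feeds this hierarchical memory is plain Markdown instructions. The contents below are hypothetical; write whatever guidance fits your project:

```md
# Project context

- This is a TypeScript monorepo; prefer `pnpm` over `npm` for all commands.
- Always run `pnpm test` before declaring a task complete.
- Follow the existing two-space indentation and avoid default exports.
```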
/model
- Description: Manage model configuration.
- Sub-commands:
  - `manage`:
    - Description: Opens a dialog to configure the model.
  - `set`:
    - Description: Set the model to use.
    - Usage: `/model set <model-name> [--persist]`
/permissions
- Description: Manage folder trust settings and other permissions.
- Sub-commands:
  - `trust`:
    - Description: Manage folder trust settings.
    - Usage: `/permissions trust [<directory-path>]`
/plan

- Description: Switch to Plan Mode (read-only) and view the current plan if one has been generated.
- Note: This feature is enabled by default. It can be disabled via the `general.plan.enabled` setting in your configuration.
- Sub-commands:
  - `copy`:
    - Description: Copy the currently approved plan to your clipboard.
/policies
- Description: Manage policies.
- Sub-commands:
  - `list`:
    - Description: List all active policies grouped by mode.
/privacy
- Description: Display the Privacy Notice and allow users to select whether they consent to the collection of their data for service improvement purposes.
/quit (or /exit)
- Description: Exit Gemini CLI.
/restore
- Description: Restores the project files to the state they were in just before a tool was executed. This is particularly useful for undoing file edits made by a tool. If run without a tool call ID, it will list available checkpoints to restore from.
- Usage: `/restore [tool_call_id]`
- Note: Only available if checkpointing is configured via settings. See Checkpointing documentation for more details.
/rewind
- Description: Navigates backward through the conversation history, letting you review past interactions and potentially revert both chat state and file changes.
- Usage: Press Esc twice as a shortcut.
- Features:
- Select Interaction: Preview user prompts and file changes.
- Action Selection: Choose to rewind history only, revert code changes only, or both.
/resume
- Description: Browse and resume previous conversation sessions, and manage manual chat checkpoints.
- Features:
  - Auto sessions: Run `/resume` to open the interactive session browser for automatically saved conversations.
  - Chat checkpoints: Use checkpoint subcommands directly (`/resume save`, `/resume resume`, etc.).
  - Management: Delete unwanted sessions directly from the browser.
  - Resume: Select any session to resume and continue the conversation.
  - Search: Use `/` to search through conversation content across all sessions.
  - Session browser: Interactive interface showing all saved sessions with timestamps, message counts, and the first user message for context.
  - Sorting: Sort sessions by date or message count.
- Note: All conversations are automatically saved as you chat - no manual saving required. See Session Management for complete details.
- Alias: `/chat` provides the same behavior and subcommands.
- Sub-commands:
  - `list`
    - Description: Lists available tags for manual chat checkpoints.
  - `save <tag>`
    - Description: Saves the current conversation as a tagged checkpoint.
  - `resume <tag>` (alias: `load`)
    - Description: Loads a previously saved tagged checkpoint.
  - `delete <tag>`
    - Description: Deletes a tagged checkpoint.
  - `share [filename]`
    - Description: Exports the current conversation to Markdown or JSON.
  - `debug`
    - Description: Export the most recent API request as a JSON payload (nightly builds).
- Compatibility alias: `/resume checkpoints ...` is still accepted for the same checkpoint commands.
/settings
- Description: Open the settings editor to view and modify Gemini CLI settings.
- Details: This command provides a user-friendly interface for changing settings that control the behavior and appearance of Gemini CLI. It is equivalent to manually editing the `.gemini/settings.json` file, but with validation and guidance to prevent errors. See the settings documentation for a full list of available settings.
- Usage: Simply run `/settings` and the editor will open. You can then browse or search for specific settings, view their current values, and modify them as desired. Changes to some settings are applied immediately, while others require a restart.
/shells (or /bashes)
- Description: Toggle the background shells view. This lets you view and manage long-running processes that you’ve sent to the background.
/setup-github
- Description: Set up GitHub Actions to triage issues and review PRs with Gemini.
/skills
- Description: Manage Agent Skills, which provide on-demand expertise and specialized workflows.
- Sub-commands:
  - `disable <name>`:
    - Description: Disable a specific skill by name.
    - Usage: `/skills disable <name>`
  - `enable <name>`:
    - Description: Enable a specific skill by name.
    - Usage: `/skills enable <name>`
  - `list`:
    - Description: List all discovered skills and their current status (enabled/disabled).
  - `reload`:
    - Description: Refresh the list of discovered skills from all tiers (workspace, user, and extensions).
/stats
- Description: Display detailed statistics for the current Gemini CLI session.
- Sub-commands:
  - `session`:
    - Description: Show session-specific usage statistics, including duration, tool calls, and performance metrics. This is the default view.
  - `model`:
    - Description: Show model-specific usage statistics, including token counts and quota information.
  - `tools`:
    - Description: Show tool-specific usage statistics.
/terminal-setup
- Description: Configure terminal keybindings for multiline input (VS Code, Cursor, Windsurf).
/theme
- Description: Open a dialog that lets you change the visual theme of Gemini CLI.
/tools
- Description: Display a list of tools that are currently available within Gemini CLI.
- Usage: `/tools [desc]`
- Sub-commands:
  - `desc` or `descriptions`:
    - Description: Show detailed descriptions of each tool, including each tool’s name with its full description as provided to the model.
  - `nodesc` or `nodescriptions`:
    - Description: Hide tool descriptions, showing only the tool names.
/upgrade
- Description: Open the Gemini Code Assist upgrade page in your browser. This lets you upgrade your tier for higher usage limits.
- Note: This command is only available when logged in with Google.
/vim

- Description: Toggle vim mode on or off. When vim mode is enabled, the input area supports vim-style navigation and editing commands in both NORMAL and INSERT modes.
- Features:
  - Count support: Prefix commands with numbers (for example, `3h`, `5w`, `10G`)
  - Editing commands: Delete with `x`, change with `c`, insert with `i`, `a`, `o`, `O`; complex operations like `dd`, `cc`, `dw`, `cw`
  - INSERT mode: Standard text input with escape to return to NORMAL mode
  - NORMAL mode: Navigate with `h`, `j`, `k`, `l`; jump by words with `w`, `b`, `e`; go to line start/end with `0`, `$`, `^`; go to specific lines with `G` (or `gg` for first line)
  - Persistent setting: Vim mode preference is saved to `~/.gemini/settings.json` and restored between sessions
  - Repeat last command: Use `.` to repeat the last editing operation
  - Status indicator: When enabled, shows `[NORMAL]` or `[INSERT]` in the footer
Custom commands
Custom commands allow you to create personalized shortcuts for your most-used prompts. For detailed instructions on how to create, manage, and use them, see the dedicated Custom Commands documentation.
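As a sketch, a custom command is a small `.toml` file. The example below assumes the `description` and `prompt` fields described in the Custom Commands documentation; the filename and prompt text here are hypothetical, and the file’s name determines the command name:

```toml
# ~/.gemini/commands/review.toml — invoked as /review (hypothetical example)
description = "Ask for a focused code review of staged changes"
prompt = "Review my staged git changes and point out bugs, style issues, and missing tests."
```

See the Custom Commands documentation for the authoritative schema and supported placeholders.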
Input prompt shortcuts
These shortcuts apply directly to the input prompt for text manipulation.
- Undo:
  - Keyboard shortcut: Press Alt+z or Cmd+z to undo the last action in the input prompt.
- Redo:
  - Keyboard shortcut: Press Shift+Alt+Z or Shift+Cmd+Z to redo the last undone action in the input prompt.
At commands (@)
At commands are used to include the content of files or directories as part of your prompt to Gemini. These commands include git-aware filtering.
- `@<path_to_file_or_directory>`
  - Description: Inject the content of the specified file or files into your current prompt. This is useful for asking questions about specific code, text, or collections of files.
  - Examples:
    - `@path/to/your/file.txt Explain this text.`
    - `@src/my_project/ Summarize the code in this directory.`
    - `What is this file about? @README.md`
- Details:
- If a path to a single file is provided, the content of that file is read.
- If a path to a directory is provided, the command attempts to read the content of files within that directory and any subdirectories.
- Spaces in paths should be escaped with a backslash (for example, `@My\ Documents/file.txt`).
- The command uses the `read_many_files` tool internally. The content is fetched and then inserted into your query before being sent to the Gemini model.
- Git-aware filtering: By default, git-ignored files (like `node_modules/`, `dist/`, `.env`, `.git/`) are excluded. This behavior can be changed via the `context.fileFiltering` settings.
- File types: The command is intended for text-based files. While it might attempt to read any file, binary files or very large files might be skipped or truncated by the underlying `read_many_files` tool to ensure performance and relevance. The tool indicates if files were skipped.
- Output: The CLI will show a tool call message indicating that `read_many_files` was used, along with a message detailing the status and the path(s) that were processed.
- `@` (lone at symbol)
  - Description: If you type a lone `@` symbol without a path, the query is passed as-is to the Gemini model. This might be useful if you are specifically talking about the `@` symbol in your prompt.
Error handling for @ commands
- If the path specified after @ is not found or is invalid, an error message is displayed, and the query might not be sent to the Gemini model, or it is sent without the file content.
- If the read_many_files tool encounters an error (for example, permission issues), this is also reported.
Shell mode and passthrough commands (!)
The ! prefix lets you interact with your system’s shell directly from within Gemini CLI.
- !<shell_command>
  - Description: Executes the given <shell_command> using bash on Linux/macOS, or powershell.exe -NoProfile -Command on Windows (unless you override ComSpec). Any output or errors from the command are displayed in the terminal.
  - Examples:
    - !ls -la (executes ls -la and returns to Gemini CLI)
    - !git status (executes git status and returns to Gemini CLI)
- ! (toggle shell mode)
  - Description: Typing ! on its own toggles shell mode.
  - Entering shell mode:
    - When active, shell mode uses different coloring and a “Shell Mode Indicator”.
    - While in shell mode, text you type is interpreted directly as a shell command.
  - Exiting shell mode:
    - When exited, the UI reverts to its standard appearance and normal Gemini CLI behavior resumes.
- Caution for all ! usage: Commands you execute in shell mode have the same permissions and impact as if you ran them directly in your terminal.
- Environment variable: When a command is executed via ! or in shell mode, the GEMINI_CLI=1 environment variable is set in the subprocess’s environment. This allows scripts or tools to detect whether they are being run from within Gemini CLI.
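A script can branch on this variable to adapt its output. The following is a minimal sketch; only the GEMINI_CLI=1 convention comes from this section, and the script itself is hypothetical:

```shell
#!/bin/sh
# Report whether this script was launched from inside Gemini CLI.
# Gemini CLI sets GEMINI_CLI=1 for ! commands and shell-mode subprocesses.
if [ "${GEMINI_CLI:-}" = "1" ]; then
  echo "inside-gemini-cli"
else
  echo "plain-terminal"
fi
```

This is useful, for example, to suppress interactive prompts or colorized output when a tool detects it is running under the CLI.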
We would love to accept your patches and contributions to this project. This document includes:
- Before you begin: Essential steps to take before becoming a Gemini CLI contributor.
- Code contribution process: How to contribute code to Gemini CLI.
- Development setup and workflow: How to set up your development environment and workflow.
- Documentation contribution process: How to contribute documentation to Gemini CLI.
We’re looking forward to seeing your contributions!
Before you begin
Sign our Contributor License Agreement
Contributions to this project must be accompanied by a Contributor License Agreement (CLA). You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project.
If you or your current employer have already signed the Google CLA (even if it was for a different project), you probably don’t need to do it again.
Visit https://cla.developers.google.com/ to see your current agreements or to sign a new one.
Review our Community Guidelines
This project follows Google’s Open Source Community Guidelines.
Code contribution process
Get started
The process for contributing code is as follows:
- Find an issue that you want to work on. If an issue is tagged as 🔒 Maintainers only, it is reserved for project maintainers, and we will not accept pull requests related to it. In the near future, we will explicitly mark issues looking for contributions using the help-wanted label. If you believe an issue is a good candidate for community contribution, please leave a comment on the issue. A maintainer will review it and apply the help-wanted label if appropriate. Only maintainers should add the help-wanted label to an issue.
- Fork the repository and create a new branch.
- Make your changes in the packages/ directory.
- Ensure all checks pass by running npm run preflight.
- Open a pull request with your changes.
Code reviews
All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose.
To assist with the review process, we provide an automated review tool that helps detect common anti-patterns, testing issues, and other best practices that are easy to miss.
Using the automated review tool
You can run the review tool in two ways:
- Using the helper script (recommended): We provide a script that automatically handles checking out the PR into a separate worktree, installing dependencies, building the project, and launching the review tool.

  ./scripts/review.sh <PR_NUMBER> [model]

  Warning: Before running scripts/review.sh, verify that the code in the PR being reviewed is safe to run and does not contain data exfiltration attacks.
  Authors are strongly encouraged to run this script on their own PRs immediately after creation. This allows you to catch and fix simple issues locally before a maintainer performs a full review.
  Note on models: By default, the script uses the latest Pro model (gemini-3.1-pro-preview). If you do not have enough Pro quota, you can run it with the latest Flash model instead: ./scripts/review.sh <PR_NUMBER> gemini-3-flash-preview.
- Manually from within Gemini CLI: If you already have the PR checked out and built, you can run the tool directly from the CLI prompt:

  /review-frontend <PR_NUMBER>

  Replace <PR_NUMBER> with your pull request number.

Reviewers should use this tool to augment, not replace, their manual review process.
Self-assigning and unassigning issues
To assign an issue to yourself, simply add a comment with the text /assign. To unassign yourself from an issue, add a comment with the text /unassign. The comment must contain only that text and nothing else. These commands will assign or unassign the issue as requested, provided the conditions are met (for example, an issue must be unassigned before it can be assigned).
Please note that you can have a maximum of 3 issues assigned to you at any given time.
Pull request guidelines
To help us review and merge your PRs quickly, please follow these guidelines. PRs that do not meet these standards may be closed.
1. Link to an existing issue
All PRs should be linked to an existing issue in our tracker. This ensures that every change has been discussed and is aligned with the project’s goals before any code is written.
- For bug fixes: The PR should be linked to the bug report issue.
- For features: The PR should be linked to the feature request or proposal issue that has been approved by a maintainer.
If an issue for your change doesn’t exist, we will automatically close your PR along with a comment reminding you to associate the PR with an issue. The ideal workflow starts with an issue that has been reviewed and approved by a maintainer. Please open the issue first and wait for feedback before you start coding.
2. Keep it small and focused
We favor small, atomic PRs that address a single issue or add a single, self-contained feature.
- Do: Create a PR that fixes one specific bug or adds one specific feature.
- Don’t: Bundle multiple unrelated changes (e.g., a bug fix, a new feature, and a refactor) into a single PR.
Large changes should be broken down into a series of smaller, logical PRs that can be reviewed and merged independently.
3. Use draft PRs for work in progress
If you’d like to get early feedback on your work, please use GitHub’s Draft Pull Request feature. This signals to the maintainers that the PR is not yet ready for a formal review but is open for discussion and initial feedback.
4. Ensure all checks pass
Before submitting your PR, ensure that all automated checks are passing by running npm run preflight. This command runs all tests, linting, and other style checks.
5. Update documentation
If your PR introduces a user-facing change (e.g., a new command, a modified flag, or a change in behavior), you must also update the relevant documentation in the /docs directory.
See more about writing documentation: Documentation contribution process.
6. Write clear commit messages and a good PR description
Your PR should have a clear, descriptive title and a detailed description of the changes. Follow the Conventional Commits standard for your commit messages.
- Good PR title: feat(cli): Add --json flag to 'config get' command
- Bad PR title: Made some changes
In the PR description, explain the “why” behind your changes and link to the
relevant issue (e.g., Fixes #123).
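Putting these pieces together, a Conventional Commits-style message for the example title above might look like the following (the body text is illustrative):

```
feat(cli): Add --json flag to 'config get' command

Emits machine-readable output so scripts can parse the config.

Fixes #123
```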
Forking
If you fork the repository, you will be able to run the Build, Test, and Integration test workflows. However, to make the integration tests run, you’ll need to add a GitHub repository secret named GEMINI_API_KEY and set it to a valid API key that you have available. Your key and secret are private to your repo; no one without access can see your key, and you cannot see any secrets related to this repo.

Additionally, you will need to click on the Actions tab and enable workflows for your repository; you’ll find the large blue button in the center of the screen.
Development setup and workflow
This section guides contributors on how to build, modify, and understand the development setup of this project.
Setting up the development environment
Prerequisites:
- Node.js:
  - Development: Please use Node.js ~20.19.0. This specific version is required due to an upstream development dependency issue. You can use a tool like nvm to manage Node.js versions.
  - Production: For running the CLI in a production environment, any version of Node.js >=20 is acceptable.
- Git
Build process
To clone the repository:

git clone https://github.com/google-gemini/gemini-cli.git # Or your fork's URL
cd gemini-cli
To install dependencies defined in package.json as well as root dependencies:
npm install
To build the entire project (all packages):
npm run build
This command typically compiles TypeScript to JavaScript, bundles assets, and prepares the packages for execution. Refer to scripts/build.js and package.json scripts for more details on what happens during the build.
Enabling sandboxing
Sandboxing is highly recommended and requires, at a minimum, setting GEMINI_SANDBOX=true in your ~/.env and ensuring that a sandboxing provider (e.g., macOS Seatbelt, Docker, or Podman) is available. See Sandboxing for details.
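For example, a minimal ~/.env entry for this setup might contain just the setting named above:

```
# ~/.env — enable sandboxing; see the Sandboxing section for providers
GEMINI_SANDBOX=true
```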
To build both the gemini CLI utility and the sandbox container, run build:all from the root directory:
npm run build:all
To skip building the sandbox container, you can use npm run build instead.
Running the CLI
To start the Gemini CLI from the source code (after building), run the following command from the root directory:
npm start
If you’d like to run the source build outside of the gemini-cli folder, you can use npm link path/to/gemini-cli/packages/cli (see: docs) or alias gemini="node path/to/gemini-cli/packages/cli" to run with gemini.
Running tests
This project contains two types of tests: unit tests and integration tests.
Unit tests
To execute the unit test suite for the project:
npm run test
This will run tests located in the packages/core and packages/cli directories. Ensure tests pass before submitting any changes. For a more comprehensive check, it is recommended to run npm run preflight.
Integration tests
The integration tests are designed to validate the end-to-end functionality of the Gemini CLI. They are not run as part of the default npm run test command.
To run the integration tests, use the following command:
npm run test:e2e
For more detailed information on the integration testing framework, please see the Integration Tests documentation.
Linting and preflight checks
To ensure code quality and formatting consistency, run the preflight check:
npm run preflight
This command will run ESLint, Prettier, all tests, and other checks as defined in the project’s package.json.
ProTip: After cloning, create a Git pre-commit hook to ensure your commits are always clean.

cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Run npm preflight and abort the commit on errors
if ! npm run preflight; then
  echo "npm run preflight failed. Commit aborted."
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
Formatting
To separately format the code in this project, run the following command from the root directory:
npm run format
This command uses Prettier to format the code according to the project’s style guidelines.
Linting
To separately lint the code in this project, run the following command from the root directory:
npm run lint
Coding conventions
- Please adhere to the coding style, patterns, and conventions used throughout the existing codebase.
- Consult GEMINI.md (typically found in the project root) for specific instructions related to AI-assisted development, including conventions for React, comments, and Git usage.
- Imports: Pay special attention to import paths. The project uses ESLint to enforce restrictions on relative imports between packages.
Debugging
VS Code
- Run the CLI to interactively debug in VS Code with F5.
- Start the CLI in debug mode from the root directory:

  npm run debug

  This command runs node --inspect-brk dist/gemini.js within the packages/cli directory, pausing execution until a debugger attaches. You can then open chrome://inspect in your Chrome browser to connect to the debugger.
- In VS Code, use the “Attach” launch configuration (found in .vscode/launch.json).
Alternatively, you can use the “Launch Program” configuration in VS Code if you prefer to launch the currently open file directly, but ‘F5’ is generally recommended.
To hit a breakpoint inside the sandbox container run:
DEBUG=1 gemini
Note: If you have DEBUG=true in a project’s .env file, it won’t affect gemini-cli due to automatic exclusion. Use .gemini/.env files for gemini-cli-specific debug settings.
React DevTools
To debug the CLI’s React-based UI, you can use React DevTools.
- Start the Gemini CLI in development mode:

  DEV=true npm start
- Install and run React DevTools version 6 (which matches the CLI’s react-devtools-core). You can either install it globally:

  npm install -g react-devtools@6
  react-devtools

  Or run it directly using npx:

  npx react-devtools@6

Your running CLI application should then connect to React DevTools.
Sandboxing
macOS Seatbelt
On macOS, gemini uses Seatbelt (sandbox-exec) under a permissive-open profile (see packages/cli/src/utils/sandbox-macos-permissive-open.sb) that restricts writes to the project folder but otherwise allows all other operations and outbound network traffic (“open”) by default. You can switch to a strict-open profile (see packages/cli/src/utils/sandbox-macos-strict-open.sb) that restricts both reads and writes to the working directory while allowing outbound network traffic by setting SEATBELT_PROFILE=strict-open in your environment or .env file.
Available built-in profiles are permissive-{open,proxied}, restrictive-{open,proxied}, and strict-{open,proxied} (see below for proxied networking). You can also switch to a custom profile with SEATBELT_PROFILE=<profile> if you also create a file .gemini/sandbox-macos-<profile>.sb under your project settings directory .gemini.
Container-based sandboxing (all platforms)
For stronger container-based sandboxing on macOS or other platforms, you can set GEMINI_SANDBOX=true|docker|podman|<command> in your environment or .env file. The specified command (or, if true, either docker or podman) must be installed on the host machine. Once enabled, npm run build:all will build a minimal container (“sandbox”) image, and npm start will launch inside a fresh instance of that container. The first build can take 20-30s (mostly due to downloading the base image), but after that both build and start overhead should be minimal. Default builds (npm run build) will not rebuild the sandbox.
Container-based sandboxing mounts the project directory (and system temp directory) with read-write access and is started, stopped, and removed automatically as you start and stop Gemini CLI. Files created within the sandbox should be automatically mapped to your user and group on the host machine. You can easily specify additional mounts, ports, or environment variables by setting SANDBOX_{MOUNTS,PORTS,ENV} as needed. You can also fully customize the sandbox for your projects by creating the files .gemini/sandbox.Dockerfile and/or .gemini/sandbox.bashrc under your project settings directory (.gemini) and running gemini with BUILD_SANDBOX=1 to trigger a build of your custom sandbox.
Proxied networking
All sandboxing methods, including macOS Seatbelt using *-proxied profiles, support restricting outbound network traffic through a custom proxy server that can be specified as GEMINI_SANDBOX_PROXY_COMMAND=<command>, where <command> must start a proxy server that listens on :::8877 for relevant requests. See docs/examples/proxy-script.md for a minimal proxy that only allows HTTPS connections to example.com:443 (e.g., curl https://example.com) and declines all other requests. The proxy is started and stopped automatically alongside the sandbox.
Manual publish
We publish an artifact for each commit to our internal registry. But if you need to manually cut a local build, run the following commands:

npm run clean
npm install
npm run auth
npm run prerelease:dev
npm publish --workspaces
Documentation contribution process
Our documentation must be kept up-to-date with our code contributions. We want our documentation to be clear, concise, and helpful to our users. We value:
- Clarity: Use simple and direct language. Avoid jargon where possible.
- Accuracy: Ensure all information is correct and up-to-date.
- Completeness: Cover all aspects of a feature or topic.
- Examples: Provide practical examples to help users understand how to use Gemini CLI.
Getting started
The process for contributing to the documentation is similar to contributing code.
- Fork the repository and create a new branch.
- Make your changes in the /docs directory.
- Preview your changes locally using a Markdown renderer.
- Lint and format your changes. Our preflight check includes linting and formatting for documentation files:

  npm run preflight
- Open a pull request with your changes.
Documentation structure
Our documentation is organized using sidebar.json as the table of contents. When adding new documentation:
- Create your Markdown file in the appropriate directory under /docs.
- Add an entry to sidebar.json in the relevant section.
- Ensure all internal links use relative paths and point to existing files.
Style guide
We follow the Google Developer Documentation Style Guide. Please refer to it for guidance on writing style, tone, and formatting.
Key style points
- Use sentence case for headings.
- Write in second person (“you”) when addressing the reader.
- Use present tense.
- Keep paragraphs short and focused.
- Use code blocks with appropriate language tags for syntax highlighting.
- Include practical examples whenever possible.
Linting and formatting
Section titled “Linting and formatting”We use prettier to enforce a consistent style across
our documentation. The
npm run preflight command will check for any linting
issues.
You can also run the linter and formatter separately:
- npm run lint - Check for linting issues
- npm run format - Auto-format Markdown files
- npm run lint:fix - Auto-fix linting issues where possible
Please make sure your contributions are free of linting errors before submitting a pull request.
Before you submit
Before submitting your documentation pull request, please:
- Run npm run preflight to ensure all checks pass.
- Review your changes for clarity and accuracy.
- Check that all links work correctly.
- Ensure any code examples are tested and functional.
- Sign the Contributor License Agreement (CLA) if you haven’t already.
Need help?
If you have questions about contributing documentation:
- Check our FAQ.
- Review existing documentation for examples.
- Open an issue to discuss your proposed changes.
- Reach out to the maintainers.
We appreciate your contributions to making Gemini CLI documentation better!
Gemini CLI supports connecting to remote subagents using the Agent-to-Agent (A2A) protocol. This allows Gemini CLI to interact with other agents, expanding its capabilities by delegating tasks to remote services.
Gemini CLI can connect to any compliant A2A agent. You can find samples of A2A agents in the following repositories:
Proxy support
Gemini CLI routes traffic to remote agents through an HTTP/HTTPS proxy if one is configured. It uses the general.proxy setting in your settings.json file or the standard environment variables (HTTP_PROXY, HTTPS_PROXY).

{
  "general": {
    "proxy": "http://my-proxy:8080"
  }
}
Defining remote subagents
Remote subagents are defined as Markdown files (.md) with YAML frontmatter. You can place them in:
- Project-level: .gemini/agents/*.md (shared with your team)
- User-level: ~/.gemini/agents/*.md (personal agents)
Configuration schema

| Field | Type | Required | Description |
|---|---|---|---|
| kind | string | Yes | Must be remote. |
| name | string | Yes | A unique name for the agent. Must be a valid slug (lowercase letters, numbers, hyphens, and underscores only). |
| agent_card_url | string | Yes* | The URL of the agent’s A2A card endpoint. Required if agent_card_json is not provided. |
| agent_card_json | string | Yes* | The agent’s A2A card as an inline JSON string. Required if agent_card_url is not provided. |
| auth | object | No | Authentication configuration. See Authentication. |
Single-subagent example

---
kind: remote
name: my-remote-agent
agent_card_url: https://example.com/agent-card
---
Multi-subagent example
The loader explicitly supports multiple remote subagents defined in a single Markdown file.

---
- kind: remote
  name: remote-1
  agent_card_url: https://example.com/1
- kind: remote
  name: remote-2
  agent_card_url: https://example.com/2
---
Inline Agent Card JSON
If you don’t have an endpoint serving the agent card, you can provide the A2A card directly as a JSON string using agent_card_json.
When providing a JSON string in YAML, you must format it properly as a string scalar. You can use single quotes, a block scalar, or double quotes (which require escaping internal double quotes).
Using single quotes
Single quotes allow you to embed unescaped double quotes inside the JSON string. This format is useful for shorter, single-line JSON strings.

---
kind: remote
name: single-quotes-agent
agent_card_json: '{ "protocolVersion": "0.3.0", "name": "Example Agent", "version": "1.0.0", "url": "dummy-url" }'
---
Using a block scalar
The literal block scalar (|) preserves line breaks and is highly recommended for multiline JSON strings, as it avoids quote escaping entirely. The following is a complete, valid Agent Card configuration using dummy values.

---
kind: remote
name: block-scalar-agent
agent_card_json: |
  {
    "protocolVersion": "0.3.0",
    "name": "Example Agent Name",
    "description": "An example agent description for documentation purposes.",
    "version": "1.0.0",
    "url": "dummy-url",
    "preferredTransport": "HTTP+JSON",
    "capabilities": { "streaming": true, "extendedAgentCard": false },
    "defaultInputModes": ["text/plain"],
    "defaultOutputModes": ["application/json"],
    "skills": [
      {
        "id": "ExampleSkill",
        "name": "Example Skill Assistant",
        "description": "A description of what this example skill does.",
        "tags": ["example-tag"],
        "examples": ["Show me an example."]
      }
    ]
  }
---
Using double quotes
Double quotes are also supported, but any internal double quotes in your JSON must be escaped with a backslash.

---
kind: remote
name: double-quotes-agent
agent_card_json: "{ \"protocolVersion\": \"0.3.0\", \"name\": \"Example Agent\", \"version\": \"1.0.0\", \"url\": \"dummy-url\" }"
---
Authentication
Many remote agents require authentication. Gemini CLI supports several authentication methods aligned with the A2A security specification.
Add an auth block to your agent’s frontmatter to configure credentials.
Supported auth types
Gemini CLI supports the following authentication types:

| Type | Description |
|---|---|
| apiKey | Sends a static API key as an HTTP header. |
| http | HTTP authentication (Bearer token, Basic credentials, or any IANA-registered scheme). |
| google-credentials | Google Application Default Credentials (ADC). Automatically selects access or identity tokens. |
| oauth | OAuth 2.0 Authorization Code flow with PKCE. Opens a browser for interactive sign-in. |
Dynamic values
For the apiKey and http auth types, secret values (key, token, username, password, value) support dynamic resolution:

| Format | Description | Example |
|---|---|---|
| $ENV_VAR | Read from an environment variable. | $MY_API_KEY |
| !command | Execute a shell command and use the trimmed output. | !gcloud auth print-token |
| literal | Use the string as-is. | sk-abc123 |
| $$ / !! | Escape prefix. $$FOO becomes the literal $FOO. | $$NOT_AN_ENV_VAR |

Security tip: Prefer $ENV_VAR or !command over embedding secrets directly in agent files, especially for project-level agents checked into version control.
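The resolution rules above can be condensed into a small sketch. This is illustrative only, not Gemini CLI’s actual implementation, and trimming of command output is omitted for brevity:

```shell
# resolve maps a configured secret value to its resolved form,
# following the $ENV_VAR / !command / literal / escape-prefix rules.
resolve() {
  case "$1" in
    '$$'*|'!!'*) printf '%s\n' "${1#?}" ;;              # escape prefix: strip one char
    '$'*)        eval "printf '%s\n' \"\$${1#?}\"" ;;   # environment variable lookup
    '!'*)        eval "${1#?}" ;;                       # shell command output
    *)           printf '%s\n' "$1" ;;                  # literal value
  esac
}

MY_API_KEY=example-key
resolve '$MY_API_KEY'        # example-key
resolve '$$NOT_AN_ENV_VAR'   # $NOT_AN_ENV_VAR
resolve '!echo from-command' # from-command
resolve 'sk-abc123'          # sk-abc123
```

Note how the escape cases are matched before the plain `$` and `!` prefixes, so `$$FOO` never reaches the environment-variable branch.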
API key (apiKey)
Sends an API key as an HTTP header on every request.

| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Must be apiKey. |
| key | string | Yes | The API key value. Supports dynamic values. |
| name | string | No | Header name to send the key in. Default: X-API-Key. |

---
kind: remote
name: my-agent
agent_card_url: https://example.com/agent-card
auth:
  type: apiKey
  key: $MY_API_KEY
---
HTTP authentication (http)
Supports Bearer tokens, Basic auth, and arbitrary IANA-registered HTTP authentication schemes.
Bearer token
Use the following fields to configure a Bearer token:

| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Must be http. |
| scheme | string | Yes | Must be Bearer. |
| token | string | Yes | The bearer token. Supports dynamic values. |

auth:
  type: http
  scheme: Bearer
  token: $MY_BEARER_TOKEN
Basic authentication
Use the following fields to configure Basic authentication:

| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Must be http. |
| scheme | string | Yes | Must be Basic. |
| username | string | Yes | The username. Supports dynamic values. |
| password | string | Yes | The password. Supports dynamic values. |

auth:
  type: http
  scheme: Basic
  username: $MY_USERNAME
  password: $MY_PASSWORD
Raw scheme
For any other IANA-registered scheme (for example, Digest or HOBA), provide the raw authorization value.

| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Must be http. |
| scheme | string | Yes | The scheme name (for example, Digest). |
| value | string | Yes | Raw value sent as Authorization: <scheme> <value>. Supports dynamic values. |

auth:
  type: http
  scheme: Digest
  value: $MY_DIGEST_VALUE
Google Application Default Credentials (google-credentials)
Uses Google Application Default Credentials (ADC) to authenticate with Google Cloud services and Cloud Run endpoints. This is the recommended auth method for agents hosted on Google Cloud infrastructure.

| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Must be google-credentials. |
| scopes | string[] | No | OAuth scopes. Defaults to https://www.googleapis.com/auth/cloud-platform. |

---
kind: remote
name: my-gcp-agent
agent_card_url: https://my-agent-xyz.run.app/.well-known/agent.json
auth:
  type: google-credentials
---
How token selection works
The provider automatically selects the correct token type based on the agent’s host:

| Host pattern | Token type | Use case |
|---|---|---|
| *.googleapis.com | Access token | Google APIs (Agent Engine, Vertex AI, etc.) |
| *.run.app | Identity token | Cloud Run services |

- Access tokens authorize API calls to Google services. They are scoped (default: cloud-platform) and fetched via GoogleAuth.getClient().
- Identity tokens prove the caller’s identity to a service that validates the token’s audience. The audience is set to the target host. These are fetched via GoogleAuth.getIdTokenClient().
Both token types are cached and automatically refreshed before expiry.
google-credentials relies on ADC, which means your environment must have credentials configured. Common setups:
- Local development: Run gcloud auth application-default login to authenticate with your Google account.
- CI / cloud environments: Use a service account. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your service account key file, or use workload identity on GKE / Cloud Run.
Allowed hosts
For security, google-credentials only sends tokens to known Google-owned hosts:
- *.googleapis.com
- *.run.app
Requests to any other host are rejected with an error. If your agent is hosted on a different domain, use one of the other auth types (apiKey, http, or oauth).
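The host-based token selection and allow-list above can be summarized in a small sketch (illustrative only, not the CLI’s implementation):

```shell
# Map an agent host to the kind of Google credential that would be sent.
token_for_host() {
  case "$1" in
    *.googleapis.com) echo "access-token" ;;   # Google APIs
    *.run.app)        echo "identity-token" ;; # Cloud Run services
    *)                echo "rejected" ;;       # not a known Google-owned host
  esac
}

token_for_host us-central1-aiplatform.googleapis.com  # access-token
token_for_host my-agent-xyz.run.app                   # identity-token
token_for_host example.com                            # rejected
```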
Examples
The following examples demonstrate how to configure Google Application Default Credentials.
Cloud Run agent:

---
kind: remote
name: cloud-run-agent
agent_card_url: https://my-agent-xyz.run.app/.well-known/agent.json
auth:
  type: google-credentials
---

Google API with custom scopes:

---
kind: remote
name: vertex-agent
agent_card_url: https://us-central1-aiplatform.googleapis.com/.well-known/agent.json
auth:
  type: google-credentials
  scopes:
    - https://www.googleapis.com/auth/cloud-platform
    - https://www.googleapis.com/auth/compute
---
OAuth 2.0 (oauth)
Section titled “OAuth 2.0
(oauth)”
Performs an interactive OAuth 2.0 Authorization Code flow with PKCE. On first use, Gemini CLI opens your browser for sign-in and persists the resulting tokens for subsequent requests.
| Field | Type | Required | Description |
|---|---|---|---|
| `type` | string | Yes | Must be `oauth`. |
| `client_id` | string | Yes* | OAuth client ID. Required for interactive auth. |
| `client_secret` | string | No* | OAuth client secret. Required by most authorization servers (confidential clients). Can be omitted for public clients that don't require a secret. |
| `scopes` | string[] | No | Requested scopes. Can also be discovered from the agent card. |
| `authorization_url` | string | No | Authorization endpoint. Discovered from the agent card if omitted. |
| `token_url` | string | No | Token endpoint. Discovered from the agent card if omitted. |
```yaml
---
kind: remote
name: oauth-agent
agent_card_url: https://example.com/.well-known/agent.json
auth:
  type: oauth
  client_id: my-client-id.apps.example.com
---
```
If the agent card advertises an oauth2 security
scheme with
authorizationCode flow, the authorization_url, token_url,
and scopes are
automatically discovered. You only need to provide client_id (and
client_secret if required).
Tokens are persisted to disk and refreshed automatically when they expire.
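The PKCE part of the flow pairs a random `code_verifier` with a derived `code_challenge` (RFC 7636, S256 method). Gemini CLI handles this for you; the sketch below only illustrates the derivation:

```typescript
import { createHash, randomBytes } from 'node:crypto';

// RFC 7636 S256: code_challenge = BASE64URL(SHA256(code_verifier)).
// Sketch of the derivation only; the CLI performs the full flow itself.
function base64url(buf: Buffer): string {
  return buf
    .toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}

function makeCodeVerifier(): string {
  // 32 random bytes -> 43-character base64url string (within the 43-128 limit).
  return base64url(randomBytes(32));
}

function makeCodeChallenge(verifier: string): string {
  return base64url(createHash('sha256').update(verifier).digest());
}
```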
Auth validation
When Gemini CLI loads a remote agent, it validates your auth configuration
against the agent card’s declared securitySchemes.
If the agent requires
authentication that you haven’t configured, you’ll see an error describing
what’s needed.
google-credentials is treated as compatible with
http Bearer security
schemes, since it produces Bearer tokens.
Auth retry behavior
All auth providers automatically retry on 401 and
403 responses by
re-fetching credentials (up to 2 retries). This handles cases like expired
tokens or rotated credentials. For apiKey with !command values, the command
is re-executed on retry to fetch a fresh key.
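The behavior above can be sketched as a small retry loop. The names and the synchronous shape are illustrative assumptions, not Gemini CLI's actual API:

```typescript
// Sketch: on a 401/403 response, re-fetch the credential and retry,
// up to 2 retries (3 attempts total), as documented above.
type AuthResponse = { status: number };
type Fetcher = (authHeader: string) => AuthResponse;

function callWithAuthRetry(
  getCredential: () => string, // re-executed on retry (picks up rotated keys)
  fetcher: Fetcher,
  maxRetries = 2,
): AuthResponse {
  let response = fetcher(`Bearer ${getCredential()}`);
  let retries = 0;
  while ((response.status === 401 || response.status === 403) && retries < maxRetries) {
    retries++;
    response = fetcher(`Bearer ${getCredential()}`);
  }
  return response;
}
```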
Agent card fetching and auth
When connecting to a remote agent, Gemini CLI first fetches the agent card
without authentication. If the card endpoint returns a
401 or 403, it
retries the fetch with the configured auth headers. This lets agents expose publicly accessible cards while protecting their task endpoints, or protect both behind auth.
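That fallback can be sketched as: try unauthenticated first, then attach auth headers only on a 401/403. The names here are illustrative, not the CLI's internals:

```typescript
// Sketch of the documented card-fetch fallback (illustrative names).
type CardResponse = { status: number; card?: object };

function fetchAgentCard(
  fetchCard: (headers: Record<string, string>) => CardResponse,
  authHeaders: Record<string, string>,
): CardResponse {
  const anonymous = fetchCard({});
  if (anonymous.status === 401 || anonymous.status === 403) {
    return fetchCard(authHeaders); // Card itself is protected: retry with auth.
  }
  return anonymous; // Publicly readable card.
}
```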
Managing Subagents
Users can manage subagents using the following commands within Gemini CLI:
- `/agents list`: Displays all available local and remote subagents.
- `/agents reload`: Reloads the agent registry. Use this after adding or modifying agent definition files.
- `/agents enable <agent_name>`: Enables a specific subagent.
- `/agents disable <agent_name>`: Disables a specific subagent.
Disabling remote agents
Remote subagents are enabled by default. To disable them, set `enableAgents` to `false` in your `settings.json`:
```json
{
  "experimental": {
    "enableAgents": false
  }
}
```
Subagents are specialized agents that operate within your main Gemini CLI session. They are designed to handle specific, complex tasks—like deep codebase analysis, documentation lookup, or domain-specific reasoning—without cluttering the main agent’s context or toolset.
What are subagents?
Subagents are "specialists" that the main Gemini agent can hire for a specific job.
- Focused context: Each subagent has its own system prompt and persona.
- Specialized tools: Subagents can have a restricted or specialized set of tools.
- Independent context window: Interactions with a subagent happen in a separate context loop, which saves tokens in your main conversation history.
Subagents are exposed to the main agent as a tool of the same name. When the main agent calls the tool, it delegates the task to the subagent. Once the subagent completes its task, it reports back to the main agent with its findings.
How to use subagents
You can use subagents through automatic delegation or by explicitly forcing them in your prompt.
Automatic delegation
Gemini CLI's main agent is instructed to use specialized subagents when a task matches their expertise. For example, if you ask "How does the auth system work?", the main agent may decide to call the `codebase_investigator` subagent to perform the research.
Forcing a subagent (@ syntax)
You can explicitly direct a task to a specific subagent by using the `@` symbol followed by the subagent's name at the beginning of your prompt. This is useful when you want to bypass the main agent's decision-making and go straight to a specialist.
Example:
@codebase_investigator Map out the relationship between the AgentRegistry and the LocalAgentExecutor.
When you use the @ syntax, the CLI injects a system
note that nudges the
primary model to use that specific subagent tool immediately.
Built-in subagents
Gemini CLI comes with the following built-in subagents:
Codebase Investigator
- Name: `codebase_investigator`
- Purpose: Analyze the codebase, reverse engineer, and understand complex dependencies.
- When to use: "How does the authentication system work?", "Map out the dependencies of the `AgentRegistry` class."
- Configuration: Enabled by default. You can override its settings in `settings.json` under `agents.overrides`. Example (forcing a specific model and increasing turns):

```json
{
  "agents": {
    "overrides": {
      "codebase_investigator": {
        "modelConfig": { "model": "gemini-3-flash-preview" },
        "runConfig": { "maxTurns": 50 }
      }
    }
  }
}
```
CLI Help Agent
- Name: `cli_help`
- Purpose: Get expert knowledge about Gemini CLI itself, its commands, configuration, and documentation.
- When to use: "How do I configure a proxy?", "What does the `/rewind` command do?"
- Configuration: Enabled by default.
Generalist Agent
- Name: `generalist_agent`
- Purpose: Route tasks to the appropriate specialized subagent.
- When to use: Implicitly used by the main agent for routing. Not directly invoked by the user.
- Configuration: Enabled by default. No specific configuration options.
Browser Agent (experimental)
- Name: `browser_agent`
- Purpose: Automate web browser tasks (navigating websites, filling forms, clicking buttons, and extracting information from web pages) using the accessibility tree.
- When to use: “Go to example.com and fill out the contact form,” “Extract the pricing table from this page,” “Click the login button and enter my credentials.”
Prerequisites
The browser agent requires:
- Chrome version 144 or later (any recent stable release works).
The underlying `chrome-devtools-mcp` server is bundled with Gemini CLI and launched automatically; no separate installation is needed.
Enabling the browser agent
The browser agent is disabled by default. Enable it in your `settings.json`:
```json
{
  "agents": {
    "overrides": {
      "browser_agent": { "enabled": true }
    }
  }
}
```
Session modes
The `sessionMode` setting controls how Chrome is launched and managed. Set it under `agents.browser`:
```json
{
  "agents": {
    "overrides": {
      "browser_agent": { "enabled": true }
    },
    "browser": {
      "sessionMode": "persistent"
    }
  }
}
```
The available modes are:
| Mode | Description |
|---|---|
| `persistent` | (Default) Launches Chrome with a persistent profile stored at `~/.gemini/cli-browser-profile/`. Cookies, history, and settings are preserved between sessions. |
| `isolated` | Launches Chrome with a temporary profile that is deleted after each session. Use this for clean-state automation. |
| `existing` | Attaches to an already-running Chrome instance. You must enable remote debugging first by navigating to `chrome://inspect/#remote-debugging` in Chrome. No new browser process is launched. |
First-run consent
The first time the browser agent is invoked, Gemini CLI displays a consent dialog. You must accept before the browser session starts. This dialog only appears once.
Configuration reference
All browser-specific settings go under `agents.browser` in your `settings.json`. For full details, see the `agents.browser` configuration reference.
| Setting | Type | Default | Description |
|---|---|---|---|
| `sessionMode` | string | `"persistent"` | How Chrome is managed: `"persistent"`, `"isolated"`, or `"existing"`. |
| `headless` | boolean | `false` | Run Chrome in headless mode (no visible window). |
| `profilePath` | string | — | Custom path to a browser profile directory. |
| `visualModel` | string | — | Model override for the visual agent. |
| `allowedDomains` | string[] | — | Restrict navigation to specific domains (for example, `["github.com"]`). |
| `disableUserInput` | boolean | `true` | Disable user input on the browser window during automation (non-headless only). |
| `maxActionsPerTask` | number | `100` | Maximum tool calls per task. The agent is terminated when the limit is reached. |
| `confirmSensitiveActions` | boolean | `false` | Require manual confirmation for `upload_file` and `evaluate_script`. |
| `blockFileUploads` | boolean | `false` | Hard-block all file upload requests from the agent. |
Automation overlay and input blocking
In non-headless mode, the browser agent injects a visual overlay into the browser window to indicate that automation is in progress. By default, user input (keyboard and mouse) is also blocked to prevent accidental interference. To allow input during automation, set `disableUserInput` to `false`.
Security
The browser agent enforces several layers of security:
- Domain restrictions: When `allowedDomains` is set, the agent can only navigate to the listed domains (and their subdomains when using the `*.` prefix). Attempting to visit a disallowed domain throws a fatal error that immediately terminates the agent. The agent also attempts to detect and block the use of allowed domains as proxies (e.g., via query parameters or fragments) to access restricted content.
- Blocked URL patterns: The underlying MCP server blocks dangerous URL schemes including `file://`, `javascript:`, `data:text/html`, `chrome://extensions`, and `chrome://settings/passwords`.
- Sensitive action confirmation: Form filling (`fill`, `fill_form`) always requires user confirmation through the policy engine, regardless of approval mode. When `confirmSensitiveActions` is `true`, `upload_file` and `evaluate_script` also require confirmation.
- File upload blocking: Set `blockFileUploads` to `true` to hard-block all file upload requests, preventing the agent from uploading any files.
- Action rate limiting: The `maxActionsPerTask` setting (default: 100) limits the total number of tool calls per task to prevent runaway execution.
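The domain-restriction rule can be pictured as a small matcher. This sketch assumes a bare entry matches only its exact host while a `*.` entry also admits subdomains; the CLI's real logic (including proxy detection) is more involved:

```typescript
// Illustrative sketch of the allowedDomains check, not the CLI's actual code.
function isNavigationAllowed(hostname: string, allowedDomains: string[]): boolean {
  return allowedDomains.some((entry) => {
    if (entry.startsWith('*.')) {
      // "*.github.com" admits github.com and any subdomain of it.
      const base = entry.slice(2);
      return hostname === base || hostname.endsWith(`.${base}`);
    }
    return hostname === entry; // Bare entries match exactly.
  });
}
```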
Visual agent
By default, the browser agent interacts with pages through the accessibility tree using element `uid` values. For tasks that require visual identification (for example, "click the yellow button" or "find the red error message"), you can enable the visual agent by setting a `visualModel`:
```json
{
  "agents": {
    "overrides": {
      "browser_agent": { "enabled": true }
    },
    "browser": {
      "visualModel": "gemini-2.5-computer-use-preview-10-2025"
    }
  }
}
```
When enabled, the agent gains access to the `analyze_screenshot` tool, which captures a screenshot and sends it to the vision model for analysis. The model returns coordinates and element descriptions that the browser agent uses with the `click_at` tool for precise, coordinate-based interactions.
Sandbox support
The browser agent adjusts its behavior automatically when running inside a sandbox.
macOS seatbelt (sandbox-exec)
When the CLI runs under the macOS seatbelt sandbox, persistent and isolated
session modes are forced to isolated with headless enabled. This avoids
permission errors caused by seatbelt file-system restrictions on persistent
browser profiles. If sessionMode is set to existing, no override is applied.
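The override logic reads as a small pure function. The types and names below are assumptions for illustration, not the CLI's internals:

```typescript
// Sketch of the documented seatbelt override: persistent and isolated modes
// are forced to isolated + headless; "existing" is left untouched.
type SessionMode = 'persistent' | 'isolated' | 'existing';
interface BrowserConfig {
  sessionMode: SessionMode;
  headless: boolean;
}

function applySeatbeltOverride(config: BrowserConfig, underSeatbelt: boolean): BrowserConfig {
  if (!underSeatbelt || config.sessionMode === 'existing') {
    return config; // No override outside seatbelt or in "existing" mode.
  }
  return { ...config, sessionMode: 'isolated', headless: true };
}
```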
Container sandboxes (Docker / Podman)
Chrome is not available inside the container, so the browser agent is disabled unless `sessionMode` is set to `"existing"`. When enabled with `existing` mode, the agent automatically connects to Chrome on the host via the resolved IP of `host.docker.internal:9222` instead of using local pipe discovery. Port 9222 is currently hardcoded and cannot be customized.
To use the browser agent in a Docker sandbox:
1. Start Chrome on the host with remote debugging enabled:

   ```shell
   # Option A: Launch Chrome from the command line
   google-chrome --remote-debugging-port=9222

   # Option B: Enable in Chrome settings
   # Navigate to chrome://inspect/#remote-debugging and enable
   ```

2. Configure `sessionMode` and allowed domains in your project's `.gemini/settings.json`:

   ```json
   {
     "agents": {
       "overrides": {
         "browser_agent": { "enabled": true }
       },
       "browser": {
         "sessionMode": "existing",
         "allowedDomains": ["example.com"]
       }
     }
   }
   ```

3. Launch the CLI with port forwarding:

   ```shell
   GEMINI_SANDBOX=docker SANDBOX_PORTS=9222 gemini
   ```
Creating custom subagents
You can create your own subagents to automate specific workflows or enforce specific personas.
Agent definition files
Custom agents are defined as Markdown files (`.md`) with YAML frontmatter. You can place them in:
- Project-level: `.gemini/agents/*.md` (shared with your team)
- User-level: `~/.gemini/agents/*.md` (personal agents)
File format
The file MUST start with YAML frontmatter enclosed in triple-dashes (`---`). The body of the Markdown file becomes the agent's system prompt.

Example: `.gemini/agents/security-auditor.md`
```yaml
---
name: security-auditor
description: Specialized in finding security vulnerabilities in code.
kind: local
tools:
  - read_file
  - grep_search
model: gemini-3-flash-preview
temperature: 0.2
max_turns: 10
---
```
You are a ruthless Security Auditor. Your job is to analyze code for potential vulnerabilities.
Focus on:
1. SQL Injection
2. XSS (Cross-Site Scripting)
3. Hardcoded credentials
4. Unsafe file operations
When you find a vulnerability, explain it clearly and suggest a fix. Do not fix it yourself; just report it.
Configuration schema
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Unique identifier (slug) used as the tool name for the agent. Only lowercase letters, numbers, hyphens, and underscores. |
| `description` | string | Yes | Short description of what the agent does. This is visible to the main agent to help it decide when to call this subagent. |
| `kind` | string | No | `local` (default) or `remote`. |
| `tools` | array | No | List of tool names this agent can use. Supports wildcards: `*` (all tools), `mcp_*` (all MCP tools), `mcp_server_*` (all tools from a server). If omitted, it inherits all tools from the parent session. |
| `mcpServers` | object | No | Configuration for inline Model Context Protocol (MCP) servers isolated to this specific agent. |
| `model` | string | No | Specific model to use (for example, `gemini-3-preview`). Defaults to inherit (uses the main session model). |
| `temperature` | number | No | Model temperature (0.0 - 2.0). Defaults to 1. |
| `max_turns` | number | No | Maximum number of conversation turns allowed for this agent before it must return. Defaults to 30. |
| `timeout_mins` | number | No | Maximum execution time in minutes. Defaults to 10. |
Tool wildcards
When defining tools for a subagent, you can use wildcards to quickly grant access to groups of tools:
- `*`: Grant access to all available built-in and discovered tools.
- `mcp_*`: Grant access to all tools from all connected MCP servers.
- `mcp_my-server_*`: Grant access to all tools from a specific MCP server named `my-server`.
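A matcher for this grammar might look like the following sketch (assumed semantics: a trailing `*` is a prefix match, everything else is exact; this is not the CLI's actual implementation):

```typescript
// Illustrative sketch of the wildcard grammar described above.
function toolMatchesPattern(toolName: string, pattern: string): boolean {
  if (pattern === '*') return true; // All tools.
  if (pattern.endsWith('*')) return toolName.startsWith(pattern.slice(0, -1));
  return toolName === pattern; // Exact match.
}

function isToolGranted(toolName: string, patterns: string[]): boolean {
  return patterns.some((p) => toolMatchesPattern(toolName, p));
}
```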
Isolation and recursion protection
Each subagent runs in its own isolated context loop. This means:
- Independent history: The subagent’s conversation history does not bloat the main agent’s context.
- Isolated tools: The subagent only has access to the tools you explicitly grant it.
- Recursion protection: To prevent infinite loops and excessive token usage, subagents cannot call other subagents. If a subagent is granted the `*` tool wildcard, it will still be unable to see or invoke other agents.
Subagent tool isolation
Subagent tool isolation moves Gemini CLI away from a single global tool registry. By providing isolated execution environments, you can ensure that subagents only interact with the parts of the system they are designed for. This prevents unintended side effects, improves reliability by avoiding state contamination, and enables fine-grained permission control.
With this feature, you can:
- Specify tool access: Define exactly which tools an agent can access using a `tools` list in the agent definition.
- Define inline MCP servers: Configure Model Context Protocol (MCP) servers (which provide a standardized way to connect AI models to external tools and data sources) directly in the subagent's markdown frontmatter, isolating them to that specific agent.
- Maintain state isolation: Ensure that subagents only interact with their own set of tools and servers, preventing side effects and state contamination.
- Apply subagent-specific policies: Enforce granular rules in your Policy Engine TOML configuration based on the executing subagent's name.
Configuring isolated tools and servers
You can configure tool isolation for a subagent by updating its markdown frontmatter. This lets you explicitly state which tools the subagent can use, rather than relying on the global registry.
Add an `mcpServers` object to define inline MCP servers that are unique to the agent.
Example:
```yaml
---
name: my-isolated-agent
tools:
  - grep_search
  - read_file
mcpServers:
  my-custom-server:
    command: 'node'
    args: ['path/to/server.js']
---
```
Subagent-specific policies
You can enforce fine-grained control over subagents using the Policy Engine's TOML configuration. This allows you to grant or restrict permissions specifically for an agent, without affecting the rest of your CLI session.
To restrict a policy rule to a specific subagent, add the subagent property to
the [[rules]] block in your policy.toml file.
Example:
```toml
[[rules]]
name = "Allow pr-creator to push code"
subagent = "pr-creator"
description = "Permit pr-creator to push branches automatically."
action = "allow"
toolName = "run_shell_command"
commandPrefix = "git push"
```
In this configuration, the policy rule only triggers if the executing
subagent’s
name matches pr-creator. Rules without the subagent property apply
universally to all agents.
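Rule matching can be pictured as follows. The field names mirror the TOML example above; the matching logic itself is an assumption for illustration, not the Policy Engine's actual code:

```typescript
// Sketch: a rule applies only when the tool name matches, the optional
// subagent matches the executing agent, and the command starts with the
// optional commandPrefix. Rules without `subagent` apply to all agents.
interface PolicyRule {
  action: 'allow' | 'deny';
  toolName: string;
  commandPrefix?: string;
  subagent?: string;
}

interface ToolCall {
  toolName: string;
  command?: string;
  subagent?: string;
}

function ruleApplies(rule: PolicyRule, call: ToolCall): boolean {
  if (rule.toolName !== call.toolName) return false;
  if (rule.subagent !== undefined && rule.subagent !== call.subagent) return false;
  if (rule.commandPrefix !== undefined && !(call.command ?? '').startsWith(rule.commandPrefix)) return false;
  return true;
}
```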
Managing subagents
You can manage subagents interactively using the `/agents` command or
persistently via settings.json.
Interactive management (/agents)
If you are in an interactive CLI session, you can use the `/agents` command to
manage subagents without editing configuration files manually. This is the
recommended way to quickly enable, disable, or re-configure agents on the
fly.
For a full list of sub-commands and usage, see the
/agents
command reference.
Persistent configuration (settings.json)
While the `/agents` command and agent definition files provide a starting point, you can use `settings.json` for global, persistent overrides. This is useful for enforcing specific models or execution limits across all sessions.
agents.overrides
Use this to enable or disable specific agents or override their run configurations.
```json
{
  "agents": {
    "overrides": {
      "security-auditor": {
        "enabled": false,
        "runConfig": {
          "maxTurns": 20,
          "maxTimeMinutes": 10
        }
      }
    }
  }
}
```
modelConfigs.overrides
You can target specific subagents with custom model settings (like system
instruction prefixes or specific safety settings) using the overrideScope
field.
```json
{
  "modelConfigs": {
    "overrides": [
      {
        "match": { "overrideScope": "security-auditor" },
        "modelConfig": {
          "generateContentConfig": { "temperature": 0.1 }
        }
      }
    ]
  }
}
```
Safety policies (TOML)
You can restrict access to specific subagents using the CLI's Policy Engine. Subagents are treated as virtual tool names for policy matching purposes.
To govern access to a subagent, create a `.toml` file in your policy directory (e.g., `~/.gemini/policies/`):
```toml
[[rule]]
toolName = "codebase_investigator"
decision = "deny"
deny_message = "Deep codebase analysis is restricted for this session."
```
For more information on setting up fine-grained safety guardrails, see the Policy Engine reference.
Optimizing your subagent
The main agent's system prompt encourages it to use an expert subagent when one is available. It decides whether an agent is a relevant expert based on the agent's description. You can improve the reliability with which an agent is used by updating the description to more clearly indicate:
- Its area of expertise.
- When it should be used.
- Some example scenarios.
For example, the following subagent description should be called fairly consistently for Git operations.
Git expert agent which should be used for all local and remote git operations. For example:
- Making commits
- Searching for regressions with bisect
- Interacting with source control and issues providers such as GitHub.
If you need to further tune your subagent, select the model to optimize for with `/model`, then ask that model why it did not call your subagent for a specific prompt and the given description.
Remote subagents (Agent2Agent)
Gemini CLI can also delegate tasks to remote subagents using the Agent-to-Agent (A2A) protocol.
See the Remote Subagents documentation for detailed configuration, authentication, and usage instructions.
Extension subagents
Extensions can bundle and distribute subagents. See the Extensions documentation for details on how to package agents within an extension.
Disabling subagents
Subagents are enabled by default. To disable them, set `enableAgents` to `false` in your `settings.json`:
```json
{
  "experimental": {
    "enableAgents": false
  }
}
```
This guide covers best practices for developing, securing, and maintaining Gemini CLI extensions.
Development
Developing extensions for Gemini CLI is a lightweight, iterative process. Use these strategies to build robust and efficient extensions.
Structure your extension
While simple extensions may contain only a few files, we recommend an organized structure for complex projects.
```
my-extension/
├── package.json
├── tsconfig.json
├── gemini-extension.json
├── src/
│   ├── index.ts
│   └── tools/
└── dist/
```
- Use TypeScript: We strongly recommend using TypeScript for type safety and improved developer experience.
- Separate source and build: Keep your source code in `src/` and output build artifacts to `dist/`.
- Bundle dependencies: If your extension has many dependencies, bundle them using a tool like `esbuild` to reduce installation time and avoid conflicts.
Iterate with link
Use the `gemini extensions link` command to develop locally without reinstalling your extension after every change.
```shell
cd my-extension
gemini extensions link .
```
Changes to your code are immediately available in the CLI after you rebuild the project and restart the session.
Use GEMINI.md effectively
Your `GEMINI.md` file provides essential context to the model.
- Focus on goals: Explain the high-level purpose of the extension and how to interact with its tools.
- Be concise: Avoid dumping exhaustive documentation into the file. Use clear, direct language.
- Provide examples: Include brief examples of how the model should use specific tools or commands.
Security
Follow the principle of least privilege and rigorous input validation when building extensions.
Minimal permissions
Only request the permissions your MCP server needs to function. Avoid giving the model broad access (such as full shell access) if restricted tools are sufficient.
If your extension uses powerful tools like run_shell_command, restrict them in
your gemini-extension.json file:
```json
{
  "name": "my-safe-extension",
  "excludeTools": ["run_shell_command(rm -rf *)"]
}
```
This ensures the CLI blocks dangerous commands even if the model attempts to execute them.
Validate inputs
Your MCP server runs on the user's machine. Always validate tool inputs to prevent arbitrary code execution or unauthorized filesystem access.
```typescript
import * as path from 'node:path';

// Example: Validating paths. Resolve the input and confirm it stays inside
// the allowed directory before touching the filesystem.
if (!path.resolve(inputPath).startsWith(path.resolve(allowedDir) + path.sep)) {
  throw new Error('Access denied');
}
```
Secure sensitive settings
If your extension requires API keys or other secrets, use the `sensitive: true` option in your manifest. This ensures keys are stored in the system keychain and obfuscated in the CLI output.
```json
"settings": [
  {
    "name": "API Key",
    "envVar": "MY_API_KEY",
    "sensitive": true
  }
]
```
Release
Follow standard versioning and release practices to ensure a smooth experience for your users.
Semantic versioning
Follow Semantic Versioning (SemVer) to communicate changes clearly.
- Major: Breaking changes (for example, renaming tools or changing arguments).
- Minor: New features (for example, adding new tools or commands).
- Patch: Bug fixes and performance improvements.
Release channels
Use Git branches to manage release channels. This lets users choose between stability and the latest features.
```shell
# Install the stable version (default branch)
gemini extensions install github.com/user/repo

# Install the development version
gemini extensions install github.com/user/repo --ref dev
```
Clean artifacts
When using GitHub Releases, ensure your archives only contain necessary files (such as `dist/`, `gemini-extension.json`, and `package.json`). Exclude `node_modules/` and `src/` to minimize download size.
Test and verify
Test your extension thoroughly before releasing it to users.
- Manual verification: Use `gemini extensions link` to test your extension in a live CLI session. Verify that tools appear in the debug console (F12) and that custom commands resolve correctly.
- Automated testing: If your extension includes an MCP server, write unit tests for your tool logic using a framework like Vitest or Jest. You can test MCP tools in isolation by mocking the transport layer.
Troubleshooting
Use these tips to diagnose and fix common extension issues.
Extension not loading
If your extension doesn't appear in `/extensions list`:
- Check the manifest: Ensure `gemini-extension.json` is in the root directory and contains valid JSON.
- Verify the name: The `name` field in the manifest must match the extension directory name exactly.
- Restart the CLI: Extensions are loaded at the start of a session. Restart Gemini CLI after making changes to the manifest or linking a new extension.
MCP server failures
If your tools aren't working as expected:
- Check the logs: View the CLI logs to see if the MCP server failed to start.
- Test the command: Run the server's `command` and `args` directly in your terminal to ensure it starts correctly outside of Gemini CLI.
- Debug console: In interactive mode, press F12 to open the debug console and inspect tool calls and responses.
Command conflicts
If a custom command isn't responding:
- Check precedence: Remember that user and project commands take precedence over extension commands. Use the prefixed name (for example, `/extension.command`) to verify the extension's version.
- Help command: Run `/help` to see a list of all available commands and their sources.
This guide covers the `gemini extensions` commands and the structure of the `gemini-extension.json` configuration file.
Manage extensions
Use the `gemini extensions` command group to manage your extensions from the terminal.
Note that commands like gemini extensions install are
not supported within the
CLI’s interactive mode. However, you can use the /extensions list command to
view installed extensions. All management operations, including updates to
slash
commands, take effect only after you restart the CLI session.
Install an extension
Install an extension by providing its GitHub repository URL or a local file path.
Gemini CLI creates a copy of the extension during installation. You must run
gemini extensions update to pull changes from the
source. To install from
GitHub, you must have git installed on your machine.
gemini extensions install <source> [--ref <ref>] [--auto-update] [--pre-release] [--consent] [--skip-settings]
- `<source>`: The GitHub URL or local path of the extension.
- `--ref`: The git ref (branch, tag, or commit) to install.
- `--auto-update`: Enable automatic updates for this extension.
- `--pre-release`: Enable installation of pre-release versions.
- `--consent`: Acknowledge security risks and skip the confirmation prompt.
- `--skip-settings`: Skip the configuration-on-install process.
Uninstall an extension
To uninstall one or more extensions, use the `uninstall` command:
gemini extensions uninstall <name...>
Disable an extension
Extensions are enabled globally by default. You can disable an extension entirely or for a specific workspace.
gemini extensions disable <name> [--scope <scope>]
- `<name>`: The name of the extension to disable.
- `--scope`: The scope to disable the extension in (`user` or `workspace`).
Enable an extension
Re-enable a disabled extension using the `enable` command:
gemini extensions enable <name> [--scope <scope>]
- `<name>`: The name of the extension to enable.
- `--scope`: The scope to enable the extension in (`user` or `workspace`).
Update an extension
Update an extension to the version specified in its `gemini-extension.json` file.
gemini extensions update <name>
To update all installed extensions at once:
gemini extensions update --all
Create an extension from a template
Create a new extension directory using a built-in template.
gemini extensions new <path> [template]
- `<path>`: The directory to create.
- `[template]`: The template to use (for example, `mcp-server`, `context`, `custom-commands`).
Link a local extension
Create a symbolic link between your development directory and the Gemini CLI extensions directory. This lets you test changes immediately without reinstalling.
gemini extensions link <path>
Extension format
Gemini CLI loads extensions from `<home>/.gemini/extensions`. Each extension must have a `gemini-extension.json` file in its root directory.
gemini-extension.json
The manifest file defines the extension’s behavior and configuration.
```json
{
  "name": "my-extension",
  "version": "1.0.0",
  "description": "My awesome extension",
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["${extensionPath}/my-server.js"],
      "cwd": "${extensionPath}"
    }
  },
  "contextFileName": "GEMINI.md",
  "excludeTools": ["run_shell_command"],
  "migratedTo": "https://github.com/new-owner/new-extension-repo",
  "plan": {
    "directory": ".gemini/plans"
  }
}
```
- `name`: The name of the extension. This is used to uniquely identify the extension and for conflict resolution when extension commands have the same name as user or project commands. Use only lowercase letters and numbers, with dashes instead of underscores or spaces. This is how users will refer to your extension in the CLI, and it is expected to match the extension directory name.
- `version`: The version of the extension.
- `description`: A short description of the extension. This is displayed on geminicli.com/extensions.
- `migratedTo`: The URL of the new repository source for the extension. If this is set, the CLI automatically checks the new source for updates and migrates the extension’s installation to the new source when an update is found.
- `mcpServers`: A map of MCP server names to server configurations. These servers are loaded on startup just like MCP servers defined in a `settings.json` file. If both an extension and a `settings.json` file define an MCP server with the same name, the server defined in the `settings.json` file takes precedence.
  - All MCP server configuration options are supported except for `trust`.
  - For portability, use `${extensionPath}` to refer to files within your extension directory.
  - Separate your executable and its arguments using `command` and `args` instead of putting both in `command`.
- `contextFileName`: The name of the file that contains the context for the extension. This is used to load the context from the extension directory. If this property is not set but a `GEMINI.md` file is present in your extension directory, that file is loaded.
- `excludeTools`: An array of tool names to exclude from the model. You can also specify command-specific restrictions for tools that support it, like the `run_shell_command` tool. For example, `"excludeTools": ["run_shell_command(rm -rf)"]` blocks the `rm -rf` command. Note that this differs from the MCP server `excludeTools` functionality, which can be listed in the MCP server config.
- `plan`: Planning features configuration.
  - `directory`: The directory where planning artifacts are stored. This serves as a fallback if the user hasn’t specified a plan directory in their settings. If neither the extension nor the user specifies one, the default is `~/.gemini/tmp/<project>/<session-id>/plans/`.
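The naming rule for `name` can be sketched as a small validation check. The exact pattern below is an assumption based on the description above, not the CLI’s own validation:

```javascript
// Illustrative check for the extension naming rule: lowercase letters
// or numbers, with dashes as separators (no underscores or spaces).
// The regex is an assumption, not the CLI's own code.
function isValidExtensionName(name) {
  return /^[a-z0-9]+(-[a-z0-9]+)*$/.test(name);
}
```
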
When Gemini CLI starts, it loads all the extensions and merges their configurations. If there are any conflicts, the workspace configuration takes precedence.
Extension settings
Extensions can define settings that users provide during installation, such as API keys or URLs. These values are stored in a `.env` file within the extension directory.
To define settings, add a `settings` array to your manifest:
```json
{
  "name": "my-api-extension",
  "version": "1.0.0",
  "settings": [
    {
      "name": "API Key",
      "description": "Your API key for the service.",
      "envVar": "MY_API_KEY",
      "sensitive": true
    }
  ]
}
```
- `name`: The setting’s display name.
- `description`: A clear explanation of the setting.
- `envVar`: The environment variable name where the value is stored.
- `sensitive`: If `true`, the value is stored in the system keychain and obfuscated in the UI.
To update an extension’s settings:
gemini extensions config <name> [setting] [--scope <scope>]
Custom commands
Provide custom commands by placing TOML files in a `commands/` subdirectory. Gemini CLI uses the directory structure to determine the command name.
For an extension named gcp:
- `commands/deploy.toml` becomes `/deploy`
- `commands/gcs/sync.toml` becomes `/gcs:sync` (namespaced with a colon)
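The mapping from file path to command name can be sketched as a small function (illustrative only, not the CLI’s own code):

```javascript
// Illustrative sketch of the documented mapping from a TOML file path
// (relative to commands/) to a slash-command name.
function commandNameFromPath(relativePath) {
  const withoutExtension = relativePath.replace(/\.toml$/, '');
  // Nested directories become colon-separated namespaces.
  return '/' + withoutExtension.split('/').join(':');
}
```
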
Hooks
Intercept and customize CLI behavior using hooks. Define hooks in a `hooks/hooks.json` file within your extension directory. Note that hooks are not defined in the `gemini-extension.json` manifest.
Agent skills
Bundle agent skills to provide specialized workflows. Place skill definitions in a `skills/` directory. For example, `skills/security-audit/SKILL.md` exposes a `security-audit` skill.
Sub-agents
Provide sub-agents that users can delegate tasks to. Add agent definition files (`.md`) to an `agents/` directory in your extension root.
Policy Engine
Extensions can contribute policy rules and safety checkers to the Gemini CLI Policy Engine. These rules are defined in `.toml` files and take effect when the extension is activated.
To add policies, create a `policies/` directory in your extension’s root and place your `.toml` policy files inside it. Gemini CLI automatically loads all `.toml` files from this directory.
Rules contributed by extensions run in their own tier (tier 2), alongside workspace-defined policies. This tier has higher priority than the default rules but lower priority than user or admin policies.
Example policies.toml
```toml
[[rule]]
mcpName = "my_server"
toolName = "dangerous_tool"
decision = "ask_user"
priority = 100

[[safety_checker]]
mcpName = "my_server"
toolName = "write_data"
priority = 200

[safety_checker.checker]
type = "in-process"
name = "allowed-path"
required_context = ["environment"]
```
Themes
Extensions can provide custom themes to personalize the CLI UI. Themes are defined in the `themes` array in `gemini-extension.json`.
Example
```json
{
  "name": "my-green-extension",
  "version": "1.0.0",
  "themes": [
    {
      "name": "shades-of-green",
      "type": "custom",
      "background": { "primary": "#1a362a" },
      "text": {
        "primary": "#a6e3a1",
        "secondary": "#6e8e7a",
        "link": "#89e689"
      },
      "status": {
        "success": "#76c076",
        "warning": "#d9e689",
        "error": "#b34e4e"
      },
      "border": { "default": "#4a6c5a" },
      "ui": { "comment": "#6e8e7a" }
    }
  ]
}
```
Custom themes provided by extensions can be selected using the `/theme` command or by setting the `ui.theme` property in your `settings.json` file. Note that when referring to a theme from an extension, the extension name is appended to the theme name in parentheses, for example, `shades-of-green (my-green-extension)`.
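For example, assuming the `shades-of-green` theme and extension name from the manifest above, a `settings.json` entry selecting it might look like this (the nested `ui` object reflects the `ui.theme` property):

```json
{
  "ui": {
    "theme": "shades-of-green (my-green-extension)"
  }
}
```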
Conflict resolution
Extension commands have the lowest precedence. If an extension command name conflicts with a user or project command, the extension command is prefixed with the extension name using a dot separator (for example, `/gcp.deploy`).
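The documented rule can be sketched as follows (illustrative only, not the CLI’s own code; `gcp` and `/deploy` come from the example above):

```javascript
// Illustrative sketch: if a user or project command already owns the
// name, the extension command is served under a dot-prefixed name.
function resolveExtensionCommand(extensionName, command, existingNames) {
  if (!existingNames.has(command)) {
    return command;
  }
  return '/' + extensionName + '.' + command.slice(1);
}
```
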
Variables
Gemini CLI supports variable substitution in `gemini-extension.json` and `hooks/hooks.json`.
| Variable | Description |
|---|---|
| `${extensionPath}` | The absolute path to the extension’s directory. |
| `${workspacePath}` | The absolute path to the current workspace. |
| `${/}` | The platform-specific path separator. |
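As an illustration of how these placeholders compose, here is a sketch of the substitution (not the CLI’s actual implementation):

```javascript
// Illustrative sketch: expand ${extensionPath}, ${workspacePath},
// and ${/} in a manifest value. Unknown variables are left untouched.
function substituteVariables(value, vars) {
  return value.replace(/\$\{([A-Za-z]+|\/)\}/g, (match, key) => {
    return key in vars ? vars[key] : match;
  });
}
```
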
Release Gemini CLI extensions to your users through a Git repository or GitHub Releases.
Git repository releases are the simplest approach and offer the most flexibility for managing development branches. GitHub Releases are more efficient for initial installations because they ship as single archives rather than requiring a full git clone. Use GitHub Releases if you need to include platform-specific binary files.
List your extension in the gallery
The Gemini CLI extension gallery automatically indexes public extensions to help users discover your work. You don’t need to submit an issue or email us to list your extension.
To have your extension automatically discovered and listed:
- Use a public repository: Ensure your extension is hosted in a public GitHub repository.
- Add the GitHub topic: Add the `gemini-cli-extension` topic to your repository’s About section. Our crawler uses this topic to find new extensions.
- Place the manifest at the root: Ensure your `gemini-extension.json` file is in the absolute root of the repository or the release archive.
Our system crawls tagged repositories daily. Once you tag your repository, your extension will appear in the gallery if it passes validation.
Release through a Git repository
Releasing through Git is the most flexible option. Create a public Git repository and provide the URL to your users. They can then install your extension using `gemini extensions install <your-repo-uri>`.
Users can optionally depend on a specific branch, tag, or commit using the
--ref argument. For example:
gemini extensions install <your-repo-uri> --ref=stable
Whenever you push commits to the referenced branch, the CLI prompts users to
update their installation. The HEAD commit is always
treated as the latest
version.
Manage release channels
You can use branches or tags to manage different release channels, such as stable, preview, or dev.
We recommend using your default branch as the stable release channel. This
ensures that the default installation command always provides the most
reliable
version of your extension. You can then use a dev
branch for active
development and merge it into the default branch when you are ready for a
release.
Release through GitHub Releases
Distributing extensions through GitHub Releases provides a faster installation experience by avoiding a repository clone.
Gemini CLI checks for updates by looking for the Latest
release on GitHub.
Users can also install specific versions using the --ref argument with a
release tag. Use the --pre-release flag to install
the latest version even if
it isn’t marked as Latest.
Custom pre-built archives
You can attach custom archives directly to your GitHub Release as assets. This is useful if your extension requires a build step or includes platform-specific binaries.
Custom archives must be fully self-contained and follow the required archive structure. If your extension is platform-independent, provide a single generic asset.
Platform-specific archives
To let Gemini CLI find the correct asset for a user’s platform, use the following naming convention:
- Platform and architecture-specific: `{platform}.{arch}.{name}.{extension}`
- Platform-specific: `{platform}.{name}.{extension}`
- Generic: A single asset is used as a fallback if no specific match is found.
Use these values for the placeholders:
- `{name}`: Your extension name.
- `{platform}`: Use `darwin` (macOS), `linux`, or `win32` (Windows).
- `{arch}`: Use `x64` or `arm64`.
- `{extension}`: Use `.tar.gz` or `.zip`.
Examples:
- `darwin.arm64.my-tool.tar.gz` (specific to Apple Silicon Macs)
- `darwin.my-tool.tar.gz` (fallback for all Macs, for example Intel)
- `linux.x64.my-tool.tar.gz`
- `win32.my-tool.zip`
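The convention composes mechanically from the placeholders above; a sketch (illustrative only, not the CLI’s installer logic):

```javascript
// Illustrative sketch of the documented asset-naming convention.
// Omitting arch (or both platform and arch) yields the less specific
// fallback names.
function releaseAssetName({ platform, arch, name, extension }) {
  const parts = [platform, arch, name].filter(Boolean);
  return parts.join('.') + extension;
}
```
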
Archive structure
Archives must be fully contained extensions. The `gemini-extension.json` file must be at the root of the archive. The rest of the layout should match a standard extension structure.
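One way to produce such an archive is to build into a staging directory and tar its contents so `gemini-extension.json` ends up at the archive root. The `dist/` directory and file names below are illustrative:

```shell
# Sketch: package a built extension so gemini-extension.json sits at
# the archive root. dist/ stands in for your real build output.
mkdir -p dist
echo '{"name":"my-tool","version":"1.0.0"}' > dist/gemini-extension.json
tar -czf darwin.arm64.my-tool.tar.gz -C dist .
tar -tzf darwin.arm64.my-tool.tar.gz
```
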
Example GitHub Actions workflow
Use this example workflow to build and release your extension for multiple platforms:
```yaml
name: Release Extension

on:
  push:
    tags:
      - 'v*'

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Build extension
        run: npm run build

      - name: Create release assets
        run: |
          npm run package -- --platform=darwin --arch=arm64
          npm run package -- --platform=linux --arch=x64
          npm run package -- --platform=win32 --arch=x64

      - name: Create GitHub Release
        uses: softprops/action-gh-release@v1
        with:
          files: |
            release/darwin.arm64.my-tool.tar.gz
            release/linux.x64.my-tool.tar.gz
            release/win32.x64.my-tool.zip
```
Migrating an Extension Repository
If you need to move your extension to a new repository (for example, from a personal account to an organization) or rename it, you can use the `migratedTo` property in your `gemini-extension.json` file to seamlessly transition your users.
- Create the new repository: Set up your extension in its new location.
- Update the old repository: In your original repository, update the `gemini-extension.json` file to include the `migratedTo` property, pointing to the new repository URL, and bump the version number. You can optionally change the `name` of your extension at this time in the new repository.

  ```json
  {
    "name": "my-extension",
    "version": "1.1.0",
    "migratedTo": "https://github.com/new-owner/new-extension-repo"
  }
  ```

- Release the update: Publish this new version in your old repository.
When users check for updates, Gemini CLI will detect the migratedTo field,
verify that the new repository contains a valid extension update, and
automatically update their local installation to track the new source and
name
moving forward. All extension settings will automatically migrate to the new
installation.
Gemini CLI extensions let you expand the capabilities of Gemini CLI by adding custom tools, commands, and context. This guide walks you through creating your first extension, from setting up a template to adding custom functionality and linking it for local development.
Prerequisites
Before you start, ensure you have Gemini CLI installed and a basic understanding of Node.js.
Extension features
Extensions offer several ways to customize Gemini CLI. Use this table to decide which features your extension needs.
| Feature | What it is | When to use it | Invoked by |
|---|---|---|---|
| MCP server | A standard way to expose new tools and data sources to the model. | Use this when you want the model to be able to do new things, like fetching data from an internal API, querying a database, or controlling a local application. We also support MCP resources (which can replace custom commands) and system instructions (which can replace custom context). | Model |
| Custom commands | A shortcut (like `/my-cmd`) that executes a pre-defined prompt or shell command. | Use this for repetitive tasks or to save long, complex prompts that you use frequently. Great for automation. | User |
| Context file (`GEMINI.md`) | A markdown file containing instructions that are loaded into the model’s context at the start of every session. | Use this to define the “personality” of your extension, set coding standards, or provide essential knowledge that the model should always have. | CLI provides to model |
| Agent skills | A specialized set of instructions and workflows that the model activates only when needed. | Use this for complex, occasional tasks (like “create a PR” or “audit security”) to avoid cluttering the main context window when the skill isn’t being used. | Model |
| Hooks | A way to intercept and customize the CLI’s behavior at specific lifecycle events (for example, before/after a tool call). | Use this when you want to automate actions based on what the model is doing, like validating tool arguments, logging activity, or modifying the model’s input/output. | CLI |
| Custom themes | A set of color definitions to personalize the CLI UI. | Use this to provide a unique visual identity for your extension or to offer specialized high-contrast or thematic color schemes. | User (via `/theme`) |
Step 1: Create a new extension
The easiest way to start is by using a built-in template. We’ll use the `mcp-server` example as our foundation.
Run the following command to create a new directory called `my-first-extension` with the template files:
gemini extensions new my-first-extension mcp-server
This creates a directory with the following structure:
```
my-first-extension/
├── example.js
├── gemini-extension.json
└── package.json
```
Step 2: Understand the extension files
Your new extension contains several key files that define its behavior.
gemini-extension.json
The manifest file tells Gemini CLI how to load and use your extension.
```json
{
  "name": "mcp-server-example",
  "version": "1.0.0",
  "mcpServers": {
    "nodeServer": {
      "command": "node",
      "args": ["${extensionPath}${/}example.js"],
      "cwd": "${extensionPath}"
    }
  }
}
```
- `name`: The unique name for your extension.
- `version`: The version of your extension.
- `mcpServers`: Defines Model Context Protocol (MCP) servers to add new tools.
- `command`, `args`, `cwd`: Specify how to start your server. The `${extensionPath}` variable is replaced with the absolute path to your extension’s directory.
example.js
This file contains the source code for your MCP server. It uses the `@modelcontextprotocol/sdk` to define tools.
```javascript
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({
  name: 'prompt-server',
  version: '1.0.0',
});

// Registers a new tool named 'fetch_posts'
server.registerTool(
  'fetch_posts',
  {
    description: 'Fetches a list of posts from a public API.',
    inputSchema: z.object({}).shape,
  },
  async () => {
    const apiResponse = await fetch(
      'https://jsonplaceholder.typicode.com/posts',
    );
    const posts = await apiResponse.json();
    const response = { posts: posts.slice(0, 5) };
    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify(response),
        },
      ],
    };
  },
);

const transport = new StdioServerTransport();
await server.connect(transport);
```
package.json
The standard configuration file for a Node.js project. It defines dependencies and scripts for your extension.
Step 3: Add extension settings
Some extensions need configuration, such as API keys or user preferences. Let’s add a setting for an API key.
- Open `gemini-extension.json`.
- Add a `settings` array to the configuration:

  ```json
  {
    "name": "mcp-server-example",
    "version": "1.0.0",
    "settings": [
      {
        "name": "API Key",
        "description": "The API key for the service.",
        "envVar": "MY_SERVICE_API_KEY",
        "sensitive": true
      }
    ],
    "mcpServers": {
      // ...
    }
  }
  ```
When a user installs this extension, Gemini CLI will prompt them to enter the
“API Key”. The value will be stored securely in the system keychain (because
sensitive is true) and injected into the MCP
server’s process as the
MY_SERVICE_API_KEY environment variable.
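Inside your server code, the injected setting is just an environment variable. A minimal sketch, assuming the `MY_SERVICE_API_KEY` name from the manifest above:

```javascript
// Sketch: read the value Gemini CLI injects into the MCP server's
// process for the "API Key" setting. Warn rather than crash if the
// setting was never configured.
const apiKey = process.env.MY_SERVICE_API_KEY ?? '';
if (!apiKey) {
  console.error('MY_SERVICE_API_KEY is not set; was the extension setting configured?');
}
```
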
Step 4: Link your extension
Link your extension to your Gemini CLI installation for local development.
- Install dependencies:

  ```shell
  cd my-first-extension
  npm install
  ```

- Link the extension: The `link` command creates a symbolic link from the Gemini CLI extensions directory to your development directory. Changes you make are reflected immediately.

  ```shell
  gemini extensions link .
  ```
Restart your Gemini CLI session to use the new fetch_posts tool. Test it by
asking: “fetch posts”.
Step 5: Add a custom command
Custom commands create shortcuts for complex prompts.
- Create a `commands` directory and a subdirectory for your command group:

  macOS/Linux:

  ```shell
  mkdir -p commands/fs
  ```

  Windows (PowerShell):

  ```shell
  New-Item -ItemType Directory -Force -Path "commands\fs"
  ```

- Create a file named `commands/fs/grep-code.toml`:

  ```toml
  prompt = """
  Please summarize the findings for the pattern `{{args}}`.

  Search Results:
  !{grep -r {{args}} .}
  """
  ```

  This command, `/fs:grep-code`, takes an argument, runs the `grep` shell command, and pipes the results into a prompt for summarization.
After saving the file, restart Gemini CLI. Run /fs:grep-code "some pattern" to
use your new command.
Step 6: Add a custom GEMINI.md
Provide persistent context to the model by adding a `GEMINI.md` file to your extension. This is useful for setting behavior or providing essential tool information.
- Create a file named `GEMINI.md` in the root of your extension directory:

  ```markdown
  # My First Extension Instructions

  You are an expert developer assistant. When the user asks you to fetch
  posts, use the `fetch_posts` tool. Be concise in your responses.
  ```

- Update your `gemini-extension.json` to load this file:

  ```json
  {
    "name": "my-first-extension",
    "version": "1.0.0",
    "contextFileName": "GEMINI.md",
    "mcpServers": {
      "nodeServer": {
        "command": "node",
        "args": ["${extensionPath}${/}example.js"],
        "cwd": "${extensionPath}"
      }
    }
  }
  ```
Restart Gemini CLI. The model now has the context from your GEMINI.md file in
every session where the extension is active.
(Optional) Step 7: Add an Agent Skill
Agent Skills bundle specialized expertise and workflows. Skills are activated only when needed, which saves context tokens.
- Create a `skills` directory and a subdirectory for your skill:

  macOS/Linux:

  ```shell
  mkdir -p skills/security-audit
  ```

  Windows (PowerShell):

  ```shell
  New-Item -ItemType Directory -Force -Path "skills\security-audit"
  ```

- Create a `skills/security-audit/SKILL.md` file:

  ```markdown
  ---
  name: security-audit
  description:
    Expertise in auditing code for security vulnerabilities. Use when the user
    asks to "check for security issues" or "audit" their changes.
  ---

  # Security Auditor

  You are an expert security researcher. When auditing code:

  1. Look for common vulnerabilities (OWASP Top 10).
  2. Check for hardcoded secrets or API keys.
  3. Suggest remediation steps for any findings.
  ```
Gemini CLI automatically discovers skills bundled with your extension. The model activates them when it identifies a relevant task.
Step 8: Release your extension
When your extension is ready, share it with others via a Git repository or GitHub Releases. Refer to the Extension Releasing Guide for detailed instructions and learn how to list your extension in the gallery.
Next steps
- Extension reference: Deeply understand the extension format, commands, and configuration.
- Best practices: Learn strategies for building great extensions.
To use Gemini CLI, you’ll need to authenticate with Google. This guide helps you quickly find the best way to sign in based on your account type and how you’re using the CLI.
For most users, we recommend starting Gemini CLI and logging in with your personal Google account.
Choose your authentication method
Select the authentication method that matches your situation in the table below:
| User Type / Scenario | Recommended Authentication Method | Google Cloud Project Required |
|---|---|---|
| Individual Google accounts | Sign in with Google | No, with exceptions |
| Organization users with a company, school, or Google Workspace account | Sign in with Google | Yes |
| AI Studio user with a Gemini API key | Use Gemini API Key | No |
| Google Cloud Vertex AI user | Vertex AI | Yes |
| Headless mode | Use Gemini API Key or Vertex AI | No (for Gemini API Key); Yes (for Vertex AI) |
What is my Google account type?
- Individual Google accounts: Includes all free tier accounts such as Gemini Code Assist for individuals, as well as paid subscriptions for Google AI Pro and Ultra.
- Organization accounts: Accounts using paid licenses through an organization such as a company, school, or Google Workspace. Includes Google AI Ultra for Business subscriptions.
(Recommended) Sign in with Google
If you run Gemini CLI on your local machine, the simplest authentication method is logging in with your Google account. This method requires a web browser on a machine that can communicate with the terminal running Gemini CLI (for example, your local machine).
If you are a Google AI Pro or Google AI Ultra subscriber, use the Google account associated with your subscription.
To authenticate and use Gemini CLI:
- Start the CLI:

  ```shell
  gemini
  ```

- Select Sign in with Google. Gemini CLI opens a sign-in prompt using your web browser. Follow the on-screen instructions. Your credentials will be cached locally for future sessions.
Do I need to set my Google Cloud project?
Most individual Google accounts (free and paid) don’t require a Google Cloud project for authentication. However, you’ll need to set a Google Cloud project when you meet at least one of the following conditions:
- You are using a company, school, or Google Workspace account.
- You are using a Gemini Code Assist license from the Google Developer Program.
- You are using a license from a Gemini Code Assist subscription.
For instructions, see Set your Google Cloud Project.
Use Gemini API key
If you don’t want to authenticate using your Google account, you can use an API key from Google AI Studio.
To authenticate and use Gemini CLI with a Gemini API key:
- Obtain your API key from Google AI Studio.
- Set the `GEMINI_API_KEY` environment variable to your key. For example:

  macOS/Linux:

  ```shell
  # Replace YOUR_GEMINI_API_KEY with the key from AI Studio
  export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
  ```

  Windows (PowerShell):

  ```shell
  # Replace YOUR_GEMINI_API_KEY with the key from AI Studio
  $env:GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
  ```

  To make this setting persistent, see Persisting Environment Variables.

- Start the CLI:

  ```shell
  gemini
  ```

- Select Use Gemini API key.
Use Vertex AI
To use Gemini CLI with Google Cloud’s Vertex AI platform, choose from the following authentication options:
- A. Application Default Credentials (ADC) using `gcloud`.
- B. Service account JSON key.
- C. Google Cloud API key.
Regardless of your authentication method for Vertex AI, you’ll need to set
GOOGLE_CLOUD_PROJECT to your Google Cloud project ID
with the Vertex AI API
enabled, and GOOGLE_CLOUD_LOCATION to the location
of your Vertex AI resources
or the location where you want to run your jobs.
For example:
macOS/Linux
```shell
# Replace with your project ID and desired location (for example, us-central1)
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"
```
Windows (PowerShell)
```shell
# Replace with your project ID and desired location (for example, us-central1)
$env:GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
$env:GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"
```
To make any Vertex AI environment variable settings persistent, see Persisting Environment Variables.
A. Vertex AI - application default credentials (ADC) using gcloud
Consider this authentication method if you have the Google Cloud CLI installed.
If you have previously set GOOGLE_API_KEY or GEMINI_API_KEY, you must unset
them to use ADC.
macOS/Linux
unset GOOGLE_API_KEY GEMINI_API_KEY
Windows (PowerShell)
Remove-Item Env:\GOOGLE_API_KEY, Env:\GEMINI_API_KEY -ErrorAction Ignore
- Verify you have a Google Cloud project and the Vertex AI API is enabled.
- Log in to Google Cloud:

  ```shell
  gcloud auth application-default login
  ```

- Start the CLI:

  ```shell
  gemini
  ```

- Select Vertex AI.
B. Vertex AI - service account JSON key
Consider this method of authentication in non-interactive environments, CI/CD pipelines, or if your organization restricts user-based ADC or API key creation.
If you have previously set GOOGLE_API_KEY or GEMINI_API_KEY, you must unset
them:
macOS/Linux
unset GOOGLE_API_KEY GEMINI_API_KEY
Windows (PowerShell)
Remove-Item Env:\GOOGLE_API_KEY, Env:\GEMINI_API_KEY -ErrorAction Ignore
- Create a service account and key and download the provided JSON file. Assign the “Vertex AI User” role to the service account.
- Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the JSON file’s absolute path. For example:

  macOS/Linux:

  ```shell
  # Replace /path/to/your/keyfile.json with the actual path
  export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/keyfile.json"
  ```

  Windows (PowerShell):

  ```shell
  # Replace C:\path\to\your\keyfile.json with the actual path
  $env:GOOGLE_APPLICATION_CREDENTIALS="C:\path\to\your\keyfile.json"
  ```

- Start the CLI:

  ```shell
  gemini
  ```

- Select Vertex AI.
C. Vertex AI - Google Cloud API key
- Obtain a Google Cloud API key: Get an API Key.
- Set the `GOOGLE_API_KEY` environment variable:

  macOS/Linux:

  ```shell
  # Replace YOUR_GOOGLE_API_KEY with your Vertex AI API key
  export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
  ```

  Windows (PowerShell):

  ```shell
  # Replace YOUR_GOOGLE_API_KEY with your Vertex AI API key
  $env:GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
  ```

  If you see errors like `"API keys are not supported by this API..."`, your organization might restrict API key usage for this service. Try the other Vertex AI authentication methods instead.

- Start the CLI:

  ```shell
  gemini
  ```

- Select Vertex AI.
Set your Google Cloud project
When you sign in using your Google account, you may need to configure a Google Cloud project for Gemini CLI to use. This applies when you meet at least one of the following conditions:
- You are using a Company, School, or Google Workspace account.
- You are using a Gemini Code Assist license from the Google Developer Program.
- You are using a license from a Gemini Code Assist subscription.
To configure Gemini CLI to use a Google Cloud project, do the following:
- Configure your environment variables. Set either the `GOOGLE_CLOUD_PROJECT` or `GOOGLE_CLOUD_PROJECT_ID` variable to the project ID to use with Gemini CLI. Gemini CLI checks for `GOOGLE_CLOUD_PROJECT` first, then falls back to `GOOGLE_CLOUD_PROJECT_ID`.

  For example, to set the `GOOGLE_CLOUD_PROJECT` variable:

  macOS/Linux:

  ```shell
  # Replace YOUR_PROJECT_ID with your actual Google Cloud project ID
  export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
  ```

  Windows (PowerShell):

  ```shell
  # Replace YOUR_PROJECT_ID with your actual Google Cloud project ID
  $env:GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
  ```

  To make this setting persistent, see Persisting Environment Variables.
Persisting environment variables
To avoid setting environment variables for every terminal session, you can persist them with the following methods:
- Add your environment variables to your shell configuration file: Append the environment variable commands to your shell’s startup file.

  macOS/Linux (for example, `~/.bashrc`, `~/.zshrc`, or `~/.profile`):

  ```shell
  echo 'export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"' >> ~/.bashrc
  source ~/.bashrc
  ```

  Windows (PowerShell) (for example, `$PROFILE`):

  ```shell
  Add-Content -Path $PROFILE -Value '$env:GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"'
  . $PROFILE
  ```
- Use a `.env` file: Create a `.gemini/.env` file in your project directory or home directory. Gemini CLI automatically loads variables from the first `.env` file it finds, searching up from the current directory, then in your home directory’s `.gemini/.env` (for example, `~/.gemini/.env` or `%USERPROFILE%\.gemini\.env`).

  Example for user-wide settings:

  macOS/Linux:

  ```shell
  mkdir -p ~/.gemini
  cat >> ~/.gemini/.env <<'EOF'
  GOOGLE_CLOUD_PROJECT="your-project-id"
  # Add other variables like GEMINI_API_KEY as needed
  EOF
  ```

  Windows (PowerShell):

  ```shell
  New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.gemini"
  @"
  GOOGLE_CLOUD_PROJECT="your-project-id"
  # Add other variables like GEMINI_API_KEY as needed
  "@ | Out-File -FilePath "$env:USERPROFILE\.gemini\.env" -Encoding utf8 -Append
  ```
Variables are loaded from the first file found, not merged.
Running in Google Cloud environments
When running Gemini CLI within certain Google Cloud environments, authentication is automatic.
In a Google Cloud Shell environment, Gemini CLI typically authenticates automatically using your Cloud Shell credentials. In Compute Engine environments, Gemini CLI automatically uses Application Default Credentials (ADC) from the environment’s metadata server.
If automatic authentication fails, use one of the interactive methods described on this page.
Running in headless mode
Headless mode uses your existing authentication method if a credential is already cached.
If you have not already signed in, you must configure authentication using environment variables, as described in the sections above.
What’s next?
Your authentication method affects your quotas, pricing, Terms of Service, and privacy notices. Review the related documentation to learn more.
Learn about how you can use Gemini 3 Pro and Gemini 3 Flash on Gemini CLI.
How to get started with Gemini 3 on Gemini CLI
Get started by upgrading Gemini CLI to the latest version:
npm install -g @google/gemini-cli
If your version is 0.21.1 or later:
- Run `/model`.
- Select Auto (Gemini 3).
For more information, see Gemini CLI model selection.
Usage limits and fallback
Gemini CLI will tell you when you reach your Gemini 3 Pro daily usage limit. When you encounter that limit, you’ll be given the option to switch to Gemini 2.5 Pro, upgrade for higher limits, or stop. You’ll also be told when your usage limit resets and Gemini 3 Pro can be used again.
Similarly, when you reach your daily usage limit for Gemini 2.5 Pro, you’ll see a message prompting fallback to Gemini 2.5 Flash.
Capacity errors
There may be times when the Gemini 3 Pro model is overloaded. When that happens, Gemini CLI asks whether you want to keep trying Gemini 3 Pro or fall back to Gemini 2.5 Pro.
Model selection and routing types
When using Gemini CLI, you may want to control how your requests are routed between models. By default, Gemini CLI uses Auto routing.
When using Gemini 3 Pro, you may want to use Auto routing or Pro routing to manage your usage limits:
- Auto routing: Determines whether a prompt involves a complex or simple operation. For simple prompts, Gemini CLI automatically uses Gemini 2.5 Flash. For complex prompts, it uses Gemini 3 Pro if enabled; otherwise, it uses Gemini 2.5 Pro.
- Pro routing: If you want to ensure your task is processed by the most capable model, use /model and select Pro. Gemini CLI will prioritize the most capable model available, including Gemini 3 Pro if it has been enabled.
To learn more about selecting a model and routing, refer to Gemini CLI Model Selection.
How to enable Gemini 3 with Gemini CLI on Gemini Code Assist
If you’re using Gemini Code Assist Standard or Gemini Code Assist Enterprise, enabling Gemini 3 Pro on Gemini CLI requires configuring your release channels. Using Gemini 3 Pro will require two steps: administrative enablement and user enablement.
To learn more about these settings, refer to Configure Gemini Code Assist release channels.
Administrator instructions
An administrator with Google Cloud Settings Admin permissions must follow these directions:
- Navigate to the Google Cloud Project you’re using with Gemini CLI for Code Assist.
- Go to Admin for Gemini > Settings.
- Under Release channels for Gemini Code Assist in local IDEs, select Preview.
- Click Save changes.
User instructions
Wait for two to three minutes after your administrator has enabled Preview, then:
- Open Gemini CLI.
- Use the /settings command.
- Set Preview Features to true.
Restart Gemini CLI and you should have access to Gemini 3.
Next steps
If you need help, we recommend searching for an existing GitHub issue. If you cannot find a GitHub issue that matches your concern, you can create a new issue. For comments and feedback, consider opening a GitHub discussion.
This document provides an overview of Gemini CLI’s system requirements, installation methods, and release types.
Recommended system specifications
- Operating System:
- macOS 15+
- Windows 11 24H2+
- Ubuntu 20.04+
- Hardware:
- “Casual” usage: 4GB+ RAM (short sessions, common tasks and edits)
- “Power” usage: 16GB+ RAM (long sessions, large codebases, deep context)
- Runtime: Node.js 20.0.0+
- Shell: Bash, Zsh, or PowerShell
- Location: Gemini Code Assist supported locations
- Internet connection required
Install Gemini CLI
We recommend most users install Gemini CLI using one of the following installation methods:
- npm
- Homebrew
- MacPorts
- Anaconda
Note that Gemini CLI comes pre-installed on Cloud Shell and Cloud Workstations.
Install globally with npm
npm install -g @google/gemini-cli
Install globally with Homebrew (macOS/Linux)
brew install gemini-cli
Install globally with MacPorts (macOS)
sudo port install gemini-cli
Install with Anaconda (for restricted environments)
# Create and activate a new environment
conda create -y -n gemini_env -c conda-forge nodejs
conda activate gemini_env
# Install Gemini CLI globally via npm (inside the environment)
npm install -g @google/gemini-cli
Run Gemini CLI
For most users, we recommend running Gemini CLI with the gemini command:
gemini
For a list of options and additional commands, see the CLI cheatsheet.
You can also run Gemini CLI using one of the following advanced methods:
- Run instantly with npx. You can run Gemini CLI without permanent installation.
- In a sandbox. This method offers increased security and isolation.
- From the source. This is recommended for contributors to the project.
Run instantly with npx
# Using npx (no installation required)
npx @google/gemini-cli
You can also execute the CLI directly from the main branch on GitHub, which is helpful for testing features still in development:
npx https://github.com/google-gemini/gemini-cli
Run in a sandbox (Docker/Podman)
For security and isolation, Gemini CLI can be run inside a container. This is the default way that the CLI executes tools that might have side effects.
- Directly from the registry: You can run the published sandbox image directly. This is useful for environments where you only have Docker and want to run the CLI.
  # Run the published sandbox image
  docker run --rm -it us-docker.pkg.dev/gemini-code-dev/gemini-cli/sandbox:0.1.1
- Using the --sandbox flag: If you have Gemini CLI installed locally (using the standard installation described above), you can instruct it to run inside the sandbox container.
  gemini --sandbox -y -p "your prompt here"
Run from source (recommended for Gemini CLI contributors)
Contributors to the project will want to run the CLI directly from the source code.
- Development mode: This method provides hot-reloading and is useful for active development.
  # From the root of the repository
  npm run start
- Production mode (React optimizations): This method runs the CLI with React production mode enabled, which is useful for testing performance without development overhead.
  # From the root of the repository
  npm run start:prod
- Production-like mode (linked package): This method simulates a global installation by linking your local package. It’s useful for testing a local build in a production workflow.
  # Link the local cli package to your global node_modules
  npm link packages/cli
  # Now you can run your local version using the `gemini` command
  gemini
Releases
Gemini CLI has three release channels: nightly, preview, and stable. For most users, we recommend the stable release, which is the default installation.
Stable
New stable releases are published each week. The stable release is the promotion of last week’s preview release along with any bug fixes. The stable release uses the latest tag, but omitting the tag also installs the latest stable release by default:
npm install -g @google/gemini-cli
Preview
New preview releases will be published each week. These releases are not fully vetted and may contain regressions or other outstanding issues. Try out the preview release by using the preview tag:
npm install -g @google/gemini-cli@preview
Nightly
Nightly releases are published every day. The nightly release includes all changes from the main branch at time of release. It should be assumed there are pending validations and issues. You can help test the latest changes by installing with the nightly tag:
npm install -g @google/gemini-cli@nightly
Welcome to Gemini CLI! This guide will help you install, configure, and start using Gemini CLI to enhance your workflow right from your terminal.
Quickstart: Install, authenticate, configure, and use Gemini CLI
Gemini CLI brings the power of advanced language models directly to your command line interface. As an AI-powered assistant, Gemini CLI can help you with a variety of tasks, from understanding and generating code to reviewing and editing documents.
Install
The standard method to install and run Gemini CLI uses npm:
npm install -g @google/gemini-cli
Once installed, run Gemini CLI from your command line:
gemini
For more installation options, see Gemini CLI Installation.
Authenticate
To begin using Gemini CLI, you must authenticate with a Google service. In most cases, you can log in with your existing Google account:
- Run Gemini CLI after installation:
  gemini
- When asked “How would you like to authenticate for this project?” select 1. Sign in with Google.
- Select your Google account.
- Click on Sign in.
Certain account types may require you to configure a Google Cloud project. For more information, including other authentication methods, see Gemini CLI Authentication Setup.
Configure
Gemini CLI offers several ways to configure its behavior, including environment variables, command-line arguments, and settings files.
To explore your configuration options, see Gemini CLI Configuration.
Once installed and authenticated, you can start using Gemini CLI by issuing commands and prompts in your terminal. Ask it to generate code, explain files, and more.
Rename your photographs based on content
You can use Gemini CLI to automate file management tasks that require visual analysis. In this example, Gemini CLI renames images based on their actual subject matter.
Scenario: You have a folder containing the following files:
photos/photo1.png
photos/photo2.png
photos/photo3.png
Give Gemini the following prompt:
Rename the photos in my "photos" directory based on their contents.
Result: Gemini asks for permission to rename your files.
Select Allow once and your files are renamed:
photos/yellow_flowers.png
photos/antique_dresser.png
photos/green_android_robot.png
Explain a repository by reading its code
Gemini CLI is effective for rapid codebase exploration. The following example shows how to ask Gemini CLI to fetch, analyze, and summarize a remote project.
Scenario: You want to understand how a popular open-source utility works by inspecting its code, not just its README.
Give Gemini CLI the following prompt:
Clone the 'chalk' repository from https://github.com/chalk/chalk, read its key source files, and explain how it works.
Result: Gemini performs a sequence of actions to answer your request.
- First, it asks for permission to run git clone to download the repository.
- Next, it finds the important source files and asks for permission to read them.
- Finally, after analyzing the code, it provides a summary.
Gemini CLI returns an explanation based on the actual source code:
The `chalk` library is a popular npm package for styling terminal output with colors. After analyzing the source code, here’s how it works:
- **Core functionality:** The main file sets up a chainable API. Each color or modifier (like `bold` or `italic`) is a getter that appends the corresponding ANSI escape code to an internal stack.
- **Chainable API:** When you call `chalk.red.bold('Hello')`, you are accessing getters. The `red` getter adds the red color code, and the `bold` getter adds the bold code.
- **Output generation:** When the chain is treated as a string (for example, in `console.log`), a final `toString()` method is called. This method joins all the stored ANSI codes, wraps them around the input string ('Hello'), and adds a reset code at the end. This produces the final, styled string that the terminal can render.
Combine two spreadsheets into one spreadsheet
Gemini CLI can process and transform data across multiple files. Use this capability to merge reports or reformat data sets without manual copying.
Scenario: You have two .csv files: Revenue - 2023.csv and Revenue - 2024.csv. Each file contains monthly revenue figures.
Give Gemini CLI the following prompt:
Combine the two .csv files into a single .csv file, with each year a different column.
Result: Gemini CLI reads each file and then asks for permission to write a new file. Provide your permission and Gemini CLI provides the combined data:
Month,2023,2024
January,0,1000
February,0,1200
March,0,2400
April,900,500
May,1000,800
June,1000,900
July,1200,1000
August,1800,400
September,2000,2000
October,2400,3400
November,3400,1800
December,2100,9000
Run unit tests
Gemini CLI can generate boilerplate code and tests based on your existing implementation. This example demonstrates how to request code coverage for a JavaScript component.
Scenario: You’ve written a simple login page and want unit tests that give it code coverage.
Give Gemini CLI the following prompt:
Write unit tests for Login.js.
Result: Gemini CLI asks for permission to write a new file and creates a test for your login page.
Check usage and quota
You can check your current token usage and quota information using the /stats model command. This command provides a snapshot of your current session’s token usage, as well as your overall quota and usage for the supported models.
For more information on the /stats command and its subcommands, see the Command Reference.
Next steps
- Follow the File management guide to start working with your codebase.
- See Shell commands to learn about terminal integration.
This document provides the technical specification for Gemini CLI hooks, including JSON schemas and API details.
Global hook mechanics
- Communication: stdin for Input (JSON), stdout for Output (JSON), and stderr for logs and feedback.
- Exit codes:
  - 0: Success. stdout is parsed as JSON. Preferred for all logic.
  - 2: System Block. The action is blocked; stderr is used as the rejection reason.
  - Other: Warning. A non-fatal failure occurred; the CLI continues with a warning.
- Silence is Mandatory: Your script must not print any plain text to stdout other than the final JSON.
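A minimal sketch of these mechanics as a shell function (the function name is illustrative): debug text goes only to stderr, and stdout carries nothing but the final JSON object.

```shell
# emit_hook_output: illustrative hook body. Everything human-readable goes
# to stderr; stdout carries only the final JSON object.
emit_hook_output() {
  echo "hook started" >&2                        # log line: stderr only
  printf '{"systemMessage": "Hook executed."}'   # final JSON: stdout only
}
```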
Configuration schema
Hooks are defined in settings.json within the hooks object. Each event (for example, BeforeTool) contains an array of hook definitions.
Hook definition
Section titled “Hook definition”| Field | Type | Required | Description |
|---|---|---|---|
| matcher | string | No | A regex (for tools) or exact string (for lifecycle) to filter when the hook runs. |
| sequential | boolean | No | If true, hooks in this group run one after another. If false, they run in parallel. |
| hooks | array | Yes | An array of hook configurations. |
Hook configuration
| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | The execution engine. Currently only "command" is supported. |
| command | string | Yes* | The shell command to execute. (Required when type is "command"). |
| name | string | No | A friendly name for identifying the hook in logs and CLI commands. |
| timeout | number | No | Execution timeout in milliseconds (default: 60000). |
| description | string | No | A brief explanation of the hook’s purpose. |
Base input schema
All hooks receive these common fields via stdin:
{ "session_id": string, // Unique ID for the current session "transcript_path": string, // Absolute path to session transcript JSON "cwd": string, // Current working directory "hook_event_name": string, // The firing event (for example "BeforeTool") "timestamp": string // ISO 8601 execution time}
Common output fields
Most hooks support these fields in their stdout JSON:
| Field | Type | Description |
|---|---|---|
| systemMessage | string | Displayed immediately to the user in the terminal. |
| suppressOutput | boolean | If true, hides internal hook metadata from logs/telemetry. |
| continue | boolean | If false, stops the entire agent loop immediately. |
| stopReason | string | Displayed to the user when continue is false. |
| decision | string | "allow" or "deny" (alias "block"). Specific impact depends on the event. |
| reason | string | The feedback/error message provided when a decision is "deny". |
Tool hooks
Matchers and tool names
For BeforeTool and AfterTool events, the matcher field in your settings is compared against the name of the tool being executed.
- Built-in Tools: You can match any built-in tool (for example, read_file, run_shell_command). See the Tools Reference for a full list of available tool names.
- MCP Tools: Tools from MCP servers follow the naming pattern mcp_<server_name>_<tool_name>.
- Regex Support: Matchers support regular expressions (for example, matcher: "read_.*" matches all file reading tools).
BeforeTool
Fires before a tool is invoked. Used for argument validation, security checks, and parameter rewriting.
- Input Fields:
- tool_name: (string) The name of the tool being called.
- tool_input: (object) The raw arguments generated by the model.
- mcp_context: (object) Optional metadata for MCP-based tools.
- original_request_name: (string) The original name of the tool being called, if this is a tail tool call.
- Relevant Output Fields:
- decision: Set to "deny" (or "block") to prevent the tool from executing.
- reason: Required if denied. This text is sent to the agent as a tool error, allowing it to respond or retry.
- hookSpecificOutput.tool_input: An object that merges with and overrides the model’s arguments before execution.
- continue: Set to false to kill the entire agent loop immediately.
- Exit Code 2 (Block Tool): Prevents execution. Uses stderr as the reason sent to the agent. The turn continues.
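As a sketch of a BeforeTool policy check (the function name and the grep-based matching are illustrative; a production hook should parse the JSON properly), the hook reads the input from stdin and prints a decision object:

```shell
# before_tool_check: reads the BeforeTool JSON from stdin and denies any
# tool call whose input mentions "rm -rf". Grep-based matching is a crude
# illustration; a real hook should use a JSON parser.
before_tool_check() {
  local input
  input=$(cat)
  if printf '%s' "$input" | grep -q 'rm -rf'; then
    printf '{"decision": "deny", "reason": "Destructive command blocked by policy."}'
  else
    printf '{"decision": "allow"}'
  fi
}
```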
AfterTool
Fires after a tool executes. Used for result auditing, context injection, or hiding sensitive output from the agent.
- Input Fields:
- tool_name: (string)
- tool_input: (object) The original arguments.
- tool_response: (object) The result containing llmContent, returnDisplay, and optional error.
- mcp_context: (object)
- original_request_name: (string) The original name of the tool being called, if this is a tail tool call.
- Relevant Output Fields:
- decision: Set to "deny" to hide the real tool output from the agent.
- reason: Required if denied. This text replaces the tool result sent back to the model.
- hookSpecificOutput.additionalContext: Text that is appended to the tool result for the agent.
- hookSpecificOutput.tailToolCallRequest: ({ name: string, args: object }) A request to execute another tool immediately after this one. The result of this “tail call” will replace the original tool’s response. Ideal for programmatic tool routing.
- continue: Set to false to kill the entire agent loop immediately.
- Exit Code 2 (Block Result): Hides the tool result. Uses stderr as the replacement content sent to the agent. The turn continues.
Agent hooks
Section titled “Agent hooks”BeforeAgent
Fires after a user submits a prompt, but before the agent begins planning. Used for prompt validation or injecting dynamic context.
- Input Fields:
prompt: (string) The original text submitted by the user.
- Relevant Output Fields:
- hookSpecificOutput.additionalContext: Text that is appended to the prompt for this turn only.
- decision: Set to "deny" to block the turn and discard the user’s message (it will not appear in history).
- continue: Set to false to block the turn but save the message to history.
- reason: Required if denied or stopped.
- Exit Code 2 (Block Turn): Aborts the turn and erases the prompt from context. Same as decision: "deny".
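For instance, a BeforeAgent hook might inject dynamic context such as the current git branch (a sketch; the function name and the choice of git metadata are illustrative):

```shell
# inject_branch_context: emits a BeforeAgent JSON payload that appends the
# current git branch to the turn's context. Falls back to "unknown" when
# not inside a git repository.
inject_branch_context() {
  local branch
  branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown")
  printf '{"hookSpecificOutput": {"additionalContext": "Current git branch: %s"}}' "$branch"
}
```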
AfterAgent
Fires once per turn after the model generates its final response. Primary use case is response validation and automatic retries.
- Input Fields:
- prompt: (string) The user’s original request.
- prompt_response: (string) The final text generated by the agent.
- stop_hook_active: (boolean) Indicates if this hook is already running as part of a retry sequence.
- Relevant Output Fields:
- decision: Set to "deny" to reject the response and force a retry.
- reason: Required if denied. This text is sent to the agent as a new prompt to request a correction.
- continue: Set to false to stop the session without retrying.
- hookSpecificOutput.clearContext: If true, clears conversation history (LLM memory) while preserving UI display.
- Exit Code 2 (Retry): Rejects the response and triggers an automatic retry turn using stderr as the feedback prompt.
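A sketch of such a validation hook (names are illustrative, and the string match stands in for real JSON parsing): it denies empty responses so the agent retries.

```shell
# validate_response: reads the AfterAgent JSON from stdin. If the agent's
# final response is empty, deny it so the CLI retries with the reason as
# the new prompt; otherwise emit an empty JSON object (no-op).
validate_response() {
  local input
  input=$(cat)
  case "$input" in
    *'"prompt_response": ""'*)
      printf '{"decision": "deny", "reason": "Empty response; please answer the question."}' ;;
    *)
      printf '{}' ;;
  esac
}
```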
Model hooks
Section titled “Model hooks”BeforeModel
Fires before sending a request to the LLM. Operates on a stable, SDK-agnostic request format.
- Input Fields:
- llm_request: (object) Contains model, messages, and config (generation params).
- Relevant Output Fields:
- hookSpecificOutput.llm_request: An object that overrides parts of the outgoing request (for example, changing models or temperature).
- hookSpecificOutput.llm_response: A Synthetic Response object. If provided, the CLI skips the LLM call entirely and uses this as the response.
- decision: Set to "deny" to block the request and abort the turn.
- Exit Code 2 (Block Turn): Aborts the turn and skips the LLM call. Uses stderr as the error message.
BeforeToolSelection
Fires before the LLM decides which tools to call. Used to filter the available toolset or force specific tool modes.
- Input Fields:
- llm_request: (object) Same format as BeforeModel.
- Relevant Output Fields:
- hookSpecificOutput.toolConfig.mode: ("AUTO" | "ANY" | "NONE")
  - "NONE": Disables all tools (wins over other hooks).
  - "ANY": Forces at least one tool call.
- hookSpecificOutput.toolConfig.allowedFunctionNames: (string[]) Whitelist of tool names.
- Union Strategy: Multiple hooks’ whitelists are combined.
- Limitations: Does not support decision, continue, or systemMessage.
AfterModel
Fires immediately after an LLM response chunk is received. Used for real-time redaction or PII filtering.
- Input Fields:
- llm_request: (object) The original request.
- llm_response: (object) The model’s response (or a single chunk during streaming).
- Relevant Output Fields:
- hookSpecificOutput.llm_response: An object that replaces the model’s response chunk.
- decision: Set to "deny" to discard the response chunk and block the turn.
- continue: Set to false to kill the entire agent loop immediately.
- Note on Streaming: Fired for every chunk generated by the model. Modifying the response only affects the current chunk.
- Exit Code 2 (Block Response): Aborts the turn and discards the model’s output. Uses stderr as the error message.
Lifecycle & system hooks
Section titled “Lifecycle & system hooks”SessionStart
Fires on application startup, resuming a session, or after a /clear command.
Used for loading initial context.
- Input fields:
source: ("startup" | "resume" | "clear")
- Relevant output fields:
- hookSpecificOutput.additionalContext: (string)
  - Interactive: Injected as the first turn in history.
  - Non-interactive: Prepended to the user’s prompt.
- systemMessage: Shown at the start of the session.
- Advisory only: continue and decision fields are ignored. Startup is never blocked.
SessionEnd
Fires when the CLI exits or a session is cleared. Used for cleanup or final telemetry.
- Input Fields:
reason: ("exit" | "clear" | "logout" | "prompt_input_exit" | "other")
- Relevant Output Fields:
systemMessage: Displayed to the user during shutdown.
- Best Effort: The CLI will not wait for this hook to complete and ignores all flow-control fields (continue, decision).
Notification
Fires when the CLI emits a system alert (for example, Tool Permissions). Used for external logging or cross-platform alerts.
- Input Fields:
- notification_type: ("ToolPermission")
- message: Summary of the alert.
- details: JSON object with alert-specific metadata (for example, tool name, file path).
- Relevant Output Fields:
systemMessage: Displayed alongside the system alert.
- Observability Only: This hook cannot block alerts or grant permissions automatically. Flow-control fields are ignored.
PreCompress
Fires before the CLI summarizes history to save tokens. Used for logging or state saving.
- Input Fields:
trigger: ("auto" | "manual")
- Relevant Output Fields:
systemMessage: Displayed to the user before compression.
- Advisory Only: Fired asynchronously. It cannot block or modify the compression process. Flow-control fields are ignored.
Stable Model API
Gemini CLI uses these structures to ensure hooks don’t break across SDK updates.
LLMRequest:
{ "model": string, "messages": Array<{ "role": "user" | "model" | "system", "content": string // Non-text parts are filtered out for hooks }>, "config": { "temperature": number, ... }, "toolConfig": { "mode": string, "allowedFunctionNames": string[] }}
LLMResponse:
{ "candidates": Array<{ "content": { "role": "model", "parts": string[] }, "finishReason": string }>, "usageMetadata": { "totalTokenCount": number }}
Hooks are scripts or programs that Gemini CLI executes at specific points in the agentic loop, allowing you to intercept and customize behavior without modifying the CLI’s source code.
What are hooks?
Hooks run synchronously as part of the agent loop: when a hook event fires, Gemini CLI waits for all matching hooks to complete before continuing.
With hooks, you can:
- Add context: Inject relevant information (like git history) before the model processes a request.
- Validate actions: Review tool arguments and block potentially dangerous operations.
- Enforce policies: Implement security scanners and compliance checks.
- Log interactions: Track tool usage and model responses for auditing.
- Optimize behavior: Dynamically filter available tools or adjust model parameters.
Getting started
- Writing hooks guide: A tutorial on creating your first hook with comprehensive examples.
- Best practices: Guidelines on security, performance, and debugging.
- Hooks reference: The definitive technical specification of I/O schemas and exit codes.
Core concepts
Hook events
Hooks are triggered by specific events in Gemini CLI’s lifecycle.
| Event | When It Fires | Impact | Common Use Cases |
|---|---|---|---|
| SessionStart | When a session begins (startup, resume, clear) | Inject Context | Initialize resources, load context |
| SessionEnd | When a session ends (exit, clear) | Advisory | Clean up, save state |
| BeforeAgent | After user submits prompt, before planning | Block Turn / Context | Add context, validate prompts, block turns |
| AfterAgent | When agent loop ends | Retry / Halt | Review output, force retry or halt execution |
| BeforeModel | Before sending request to LLM | Block Turn / Mock | Modify prompts, swap models, mock responses |
| AfterModel | After receiving LLM response | Block Turn / Redact | Filter/redact responses, log interactions |
| BeforeToolSelection | Before LLM selects tools | Filter Tools | Filter available tools, optimize selection |
| BeforeTool | Before a tool executes | Block Tool / Rewrite | Validate arguments, block dangerous ops |
| AfterTool | After a tool executes | Block Result / Context | Process results, run tests, hide results |
| PreCompress | Before context compression | Advisory | Save state, notify user |
| Notification | When a system notification occurs | Advisory | Forward to desktop alerts, logging |
Global mechanics
Understanding these core principles is essential for building robust hooks.
Strict JSON requirements (The “Golden Rule”)
Hooks communicate via stdin (Input) and stdout (Output).
- Silence is Mandatory: Your script must not print any plain text to stdout other than the final JSON object. Even a single echo or print call before the JSON will break parsing.
- Pollution = Failure: If stdout contains non-JSON text, parsing will fail. The CLI will default to “Allow” and treat the entire output as a systemMessage.
- Debug via Stderr: Use stderr for all logging and debugging (for example, echo "debug" >&2). Gemini CLI captures stderr but never attempts to parse it as JSON.
Exit codes
Gemini CLI uses exit codes to determine the high-level outcome of a hook execution:
| Exit Code | Label | Behavioral Impact |
|---|---|---|
| 0 | Success | The stdout is parsed as JSON. Preferred code for all logic, including intentional blocks (for example, {"decision": "deny"}). |
| 2 | System Block | Critical Block. The target action (tool, turn, or stop) is aborted. stderr is used as the rejection reason. High severity; used for security stops or script failures. |
| Other | Warning | Non-fatal failure. A warning is shown, but the interaction proceeds using original parameters. |
Matchers
You can filter which specific tools or triggers fire your hook using the matcher field.
- Tool events (BeforeTool, AfterTool): Matchers are Regular Expressions (for example, "write_.*").
- Lifecycle events: Matchers are Exact Strings (for example, "startup").
- Wildcards: "*" or "" (empty string) matches all occurrences.
Configuration
Hooks are configured in settings.json. Gemini CLI merges configurations from multiple layers in the following order of precedence (highest to lowest):
- Project settings: .gemini/settings.json in the current directory.
- User settings: ~/.gemini/settings.json.
- System settings: /etc/gemini-cli/settings.json.
- Extensions: Hooks defined by installed extensions.
Configuration schema
{
  "hooks": {
    "BeforeTool": [
      {
        "matcher": "write_file|replace",
        "hooks": [
          {
            "name": "security-check",
            "type": "command",
            "command": "$GEMINI_PROJECT_DIR/.gemini/hooks/security.sh",
            "timeout": 5000
          }
        ]
      }
    ]
  }
}
Hook configuration fields
| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | The execution engine. Currently only "command" is supported. |
| command | string | Yes* | The shell command to execute. (Required when type is "command"). |
| name | string | No | A friendly name for identifying the hook in logs and CLI commands. |
| timeout | number | No | Execution timeout in milliseconds (default: 60000). |
| description | string | No | A brief explanation of the hook’s purpose. |
Environment variables
Hooks are executed with a sanitized environment.
- GEMINI_PROJECT_DIR: The absolute path to the project root.
- GEMINI_SESSION_ID: The unique ID for the current session.
- GEMINI_CWD: The current working directory.
- CLAUDE_PROJECT_DIR: (Alias) Provided for compatibility.
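For example, a hook command can use these variables to write an audit trail under the project root (a sketch: the function name and log file name are illustrative, and the fallback defaults exist only so the function runs outside Gemini CLI):

```shell
# log_session_event: appends one timestamped line to an audit log in the
# project root. GEMINI_PROJECT_DIR and GEMINI_SESSION_ID are set by
# Gemini CLI; the fallbacks are for standalone testing only.
log_session_event() {
  local dir="${GEMINI_PROJECT_DIR:-.}"
  local sid="${GEMINI_SESSION_ID:-local-test}"
  echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) session=$sid event=$1" >> "$dir/hook-audit.log"
}
```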
Security and risks
Project-level hooks are particularly risky when opening untrusted projects. Gemini CLI fingerprints project hooks. If a hook’s name or command changes (for example, via git pull), it is treated as a new, untrusted hook and you will be warned before it executes.
See Security Considerations for a detailed threat model.
Managing hooks
Use the CLI commands to manage hooks without editing JSON manually:
- View hooks: /hooks panel
- Enable/Disable all: /hooks enable-all or /hooks disable-all
- Toggle individual: /hooks enable <name> or /hooks disable <name>
Gemini CLI can integrate with your IDE to provide a more seamless and context-aware experience. This integration allows the CLI to understand your workspace better and enables powerful features like native in-editor diffing.
There are two primary ways to integrate Gemini CLI with an IDE:
- VS Code companion extension: Install the “Gemini CLI Companion” extension on Antigravity, Visual Studio Code, or other VS Code compatible editors.
- Agent Client Protocol (ACP): An open protocol for interoperability between AI coding agents and IDEs. This method is used for integrations with tools like JetBrains and Zed, which leverage the ACP Agent Registry for easy discovery and installation of compatible agents like Gemini CLI.
VS Code companion extension
The Gemini CLI Companion extension grants Gemini CLI direct access to your VS Code compatible IDEs and improves your experience by providing real-time context such as open files, cursor positions, and text selection. The extension also enables a native diffing interface so you can seamlessly review and apply AI-generated code changes directly within your editor.
Features
- Workspace context: The CLI automatically gains awareness of your workspace to provide more relevant and accurate responses. This context includes:
  - The 10 most recently accessed files in your workspace.
  - Your active cursor position.
  - Any text you have selected (up to a 16KB limit; longer selections will be truncated).
- Native diffing: When Gemini suggests code modifications, you can view the changes directly within your IDE's native diff viewer. This lets you review, edit, and accept or reject the suggested changes seamlessly.
- VS Code commands: You can access Gemini CLI features directly from the VS Code Command Palette (Cmd+Shift+P or Ctrl+Shift+P):
  - Gemini CLI: Run: Starts a new Gemini CLI session in the integrated terminal.
  - Gemini CLI: Accept Diff: Accepts the changes in the active diff editor.
  - Gemini CLI: Close Diff Editor: Rejects the changes and closes the active diff editor.
  - Gemini CLI: View Third-Party Notices: Displays the third-party notices for the extension.
Installation and setup
There are three ways to set up the IDE integration:
1. Automatic nudge (recommended)
When you run Gemini CLI inside a supported editor, it will automatically detect your environment and prompt you to connect. Answering "Yes" will automatically run the necessary setup, which includes installing the companion extension and enabling the connection.
2. Manual installation from CLI
If you previously dismissed the prompt or want to install the extension manually, you can run the following command inside Gemini CLI:
/ide install
This will find the correct extension for your IDE and install it.
3. Manual installation from a marketplace
You can also install the extension directly from a marketplace.
- For Visual Studio Code: Install from the VS Code Marketplace.
- For VS Code forks: To support forks of VS Code, the extension is also published on the Open VSX Registry. Follow your editor’s instructions for installing extensions from this registry.
Enabling and disabling
You can control the IDE integration from within the CLI:
- To enable the connection to the IDE, run:
/ide enable
- To disable the connection, run:
/ide disable
When enabled, Gemini CLI will automatically attempt to connect to the IDE companion extension.
Checking the status
To check the connection status and see the context the CLI has received from the IDE, run:
/ide status
If connected, this command will show the IDE it’s connected to and a list of recently opened files it is aware of.
Working with diffs
When you ask Gemini to modify a file, it can open a diff view directly in your editor.
To accept a diff, you can perform any of the following actions:
- Click the checkmark icon in the diff editor’s title bar.
- Save the file (for example, with Cmd+S or Ctrl+S).
- Open the Command Palette and run Gemini CLI: Accept Diff.
- Respond with yes in the CLI when prompted.
To reject a diff, you can:
- Click the ‘x’ icon in the diff editor’s title bar.
- Close the diff editor tab.
- Open the Command Palette and run Gemini CLI: Close Diff Editor.
- Respond with no in the CLI when prompted.
You can also modify the suggested changes directly in the diff view before accepting them.
If you select ‘Allow for this session’ in the CLI, changes will no longer show up in the IDE as they will be auto-accepted.
Agent Client Protocol (ACP)
ACP is an open protocol that standardizes how AI coding agents communicate with code editors and IDEs. It addresses the challenge of fragmented distribution, where agents traditionally needed custom integrations for each client. With ACP, developers can implement their agent once, and it becomes compatible with any ACP-compliant editor.
For a comprehensive introduction to ACP, including its architecture and benefits, refer to the official ACP Introduction documentation.
The ACP Agent Registry
Gemini CLI is officially available in the ACP Agent Registry. This allows you to install and update Gemini CLI directly within supporting IDEs and eliminates the need for manual downloads or IDE-specific extensions.
Using the registry ensures:
- Ease of use: Discover and install agents directly within your IDE settings.
- Latest versions: Ensures users always have access to the most up-to-date agent implementations.
For more details on how the registry works, visit the official ACP Agent Registry page. You can learn about how specific IDEs leverage this integration in the following section.
IDE-specific integration
Gemini CLI is an ACP-compatible agent available in the ACP Agent Registry. Here's how different IDEs leverage the ACP and the registry:
JetBrains IDEs
JetBrains IDEs (like IntelliJ IDEA, PyCharm, or GoLand) offer built-in registry support, allowing users to find and install ACP-compatible agents directly.
For more details, refer to the official JetBrains AI Blog announcement.
Zed
Zed, a modern code editor, also integrates with the ACP Agent Registry. This allows Zed users to easily browse, install, and manage ACP agents.
Learn more about Zed’s integration with the ACP Registry in their blog post.
Other ACP-compatible IDEs
Any other IDE that supports the ACP Agent Registry can install Gemini CLI directly through its built-in registry features.
Using with sandboxing
If you are using Gemini CLI within a sandbox, be aware of the following:
- On macOS: The IDE integration requires network access to communicate with the IDE companion extension. You must use a Seatbelt profile that allows network access.
- In a Docker container: If you run Gemini CLI inside a Docker (or Podman) container, the IDE integration can still connect to the VS Code extension running on your host machine. The CLI is configured to automatically find the IDE server on host.docker.internal. No special configuration is usually required, but you may need to ensure your Docker networking setup allows connections from the container to the host.
Troubleshooting
VS Code companion extension errors
Connection errors
- Message: 🔴 Disconnected: Failed to connect to IDE companion extension in [IDE Name]. Please ensure the extension is running. To install the extension, run /ide install.
  - Cause: Gemini CLI could not find the necessary environment variables (GEMINI_CLI_IDE_WORKSPACE_PATH or GEMINI_CLI_IDE_SERVER_PORT) to connect to the IDE. This usually means the IDE companion extension is not running or did not initialize correctly.
  - Solution:
    - Make sure you have installed the Gemini CLI Companion extension in your IDE and that it is enabled.
    - Open a new terminal window in your IDE to ensure it picks up the correct environment.
- Message: 🔴 Disconnected: IDE connection error. The connection was lost unexpectedly. Please try reconnecting by running /ide enable
  - Cause: The connection to the IDE companion was lost.
  - Solution: Run /ide enable to try and reconnect. If the issue continues, open a new terminal window or restart your IDE.
Manual PID override
If automatic IDE detection fails, or if you are running Gemini CLI in a standalone terminal and want to manually associate it with a specific IDE instance, you can set the GEMINI_CLI_IDE_PID environment variable to the process ID (PID) of your IDE.
macOS/Linux
export GEMINI_CLI_IDE_PID=12345
Windows (PowerShell)
$env:GEMINI_CLI_IDE_PID=12345
When this variable is set, Gemini CLI will skip automatic detection and attempt to connect using the provided PID.
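If you do not know the PID offhand, a small helper can look it up from a running process. The set_ide_pid function and the pgrep pattern below are hypothetical; adjust the pattern to your IDE's actual process name:

```shell
# Hypothetical helper: resolve the PID of a running IDE by a command-line
# pattern and export it for Gemini CLI. The pattern argument is an
# assumption — match it to your IDE's process name.
set_ide_pid() {
  pid="$(pgrep -f "$1" | head -n 1)"
  if [ -n "$pid" ]; then
    export GEMINI_CLI_IDE_PID="$pid"
    echo "GEMINI_CLI_IDE_PID set to $pid"
  else
    echo "no process matching '$1' found" >&2
    return 1
  fi
}

# Example: set_ide_pid "Visual Studio Code"
```

If several IDE windows are running, pgrep may return more than one match; the helper simply takes the first, so verify the PID before relying on it.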
Configuration errors
- Message: 🔴 Disconnected: Directory mismatch. Gemini CLI is running in a different location than the open workspace in [IDE Name]. Please run the CLI from one of the following directories: [List of directories]
  - Cause: The CLI's current working directory is outside the workspace you have open in your IDE.
  - Solution: cd into the same directory that is open in your IDE and restart the CLI.
- Message: 🔴 Disconnected: To use this feature, please open a workspace folder in [IDE Name] and try again.
  - Cause: You have no workspace open in your IDE.
  - Solution: Open a workspace in your IDE and restart the CLI.
General errors
- Message: IDE integration is not supported in your current environment. To use this feature, run Gemini CLI in one of these supported IDEs: [List of IDEs]
  - Cause: You are running Gemini CLI in a terminal or environment that is not a supported IDE.
  - Solution: Run Gemini CLI from the integrated terminal of a supported IDE, like Antigravity or VS Code.
- Message: No installer is available for IDE. Please install Gemini CLI Companion extension manually from the marketplace.
  - Cause: You ran /ide install, but the CLI does not have an automated installer for your specific IDE.
  - Solution: Open your IDE's extension marketplace, search for "Gemini CLI Companion", and install it manually.
ACP integration errors
For issues related to ACP integration, refer to the debugging and telemetry section in the ACP Mode documentation.
This document provides an overview of Gemini CLI’s system requirements, installation methods, and release types.
Recommended system specifications
- Operating System:
- macOS 15+
- Windows 11 24H2+
- Ubuntu 20.04+
- Hardware:
- “Casual” usage: 4GB+ RAM (short sessions, common tasks and edits)
- “Power” usage: 16GB+ RAM (long sessions, large codebases, deep context)
- Runtime: Node.js 20.0.0+
- Shell: Bash, Zsh, or PowerShell
- Location: Gemini Code Assist supported locations
- Internet connection required
Install Gemini CLI
We recommend most users install Gemini CLI using one of the following installation methods:
- npm
- Homebrew
- MacPorts
- Anaconda
Note that Gemini CLI comes pre-installed on Cloud Shell and Cloud Workstations.
Install globally with npm
npm install -g @google/gemini-cli
Install globally with Homebrew (macOS/Linux)
brew install gemini-cli
Install globally with MacPorts (macOS)
sudo port install gemini-cli
Install with Anaconda (for restricted environments)
# Create and activate a new environment
conda create -y -n gemini_env -c conda-forge nodejs
conda activate gemini_env
# Install Gemini CLI globally via npm (inside the environment)
npm install -g @google/gemini-cli
Run Gemini CLI
For most users, we recommend running Gemini CLI with the gemini command:
gemini
For a list of options and additional commands, see the CLI cheatsheet.
You can also run Gemini CLI using one of the following advanced methods:
- Run instantly with npx. You can run Gemini CLI without permanent installation.
- In a sandbox. This method offers increased security and isolation.
- From the source. This is recommended for contributors to the project.
Run instantly with npx
# Using npx (no installation required)
npx @google/gemini-cli
You can also execute the CLI directly from the main branch on GitHub, which is helpful for testing features still in development:
npx https://github.com/google-gemini/gemini-cli
Run in a sandbox (Docker/Podman)
For security and isolation, Gemini CLI can be run inside a container. This is the default way that the CLI executes tools that might have side effects.
- Directly from the registry: You can run the published sandbox image directly. This is useful for environments where you only have Docker and want to run the CLI.

  # Run the published sandbox image
  docker run --rm -it us-docker.pkg.dev/gemini-code-dev/gemini-cli/sandbox:0.1.1

- Using the --sandbox flag: If you have Gemini CLI installed locally (using the standard installation described above), you can instruct it to run inside the sandbox container.

  gemini --sandbox -y -p "your prompt here"
Run from source (recommended for Gemini CLI contributors)
Contributors to the project will want to run the CLI directly from the source code.
- Development mode: This method provides hot-reloading and is useful for active development.

  # From the root of the repository
  npm run start

- Production mode (React optimizations): This method runs the CLI with React production mode enabled, which is useful for testing performance without development overhead.

  # From the root of the repository
  npm run start:prod

- Production-like mode (linked package): This method simulates a global installation by linking your local package. It's useful for testing a local build in a production workflow.

  # Link the local cli package to your global node_modules
  npm link packages/cli
  # Now you can run your local version using the `gemini` command
  gemini
Releases
Gemini CLI has three release channels: nightly, preview, and stable. For most users, we recommend the stable release, which is the default installation.
Stable
New stable releases are published each week. The stable release is the promotion of last week's preview release along with any bug fixes. The stable release uses the latest tag, but omitting the tag also installs the latest stable release by default:
npm install -g @google/gemini-cli
Preview
New preview releases will be published each week. These releases are not fully vetted and may contain regressions or other outstanding issues. Try out the preview release by using the preview tag:
npm install -g @google/gemini-cli@preview
Nightly
Nightly releases are published every day. The nightly release includes all changes from the main branch at time of release. It should be assumed there are pending validations and issues. You can help test the latest changes by installing with the nightly tag:
npm install -g @google/gemini-cli@nightly
This document provides information about the integration testing framework used in this project.
Overview
The integration tests are designed to validate the end-to-end functionality of Gemini CLI. They execute the built binary in a controlled environment and verify that it behaves as expected when interacting with the file system.
These tests are located in the integration-tests directory and are run using a custom test runner.
Building the tests
Prior to running any integration tests, you need to build the release bundle you want to test:
npm run bundle
You must re-run this command after making any changes to the CLI source code, but not after making changes to tests.
Running the tests
The integration tests are not run as part of the default npm run test command.
They must be run explicitly using the npm run test:integration:all script.
The integration tests can also be run using the following shortcut:
npm run test:e2e
Running a specific set of tests
To run a subset of test files, you can use npm run <integration test command> <file_name1> ..., where <integration test command> is either test:e2e or test:integration* and <file_name> is any of the .test.js files in the integration-tests/ directory. For example, the following command runs list_directory.test.js and write_file.test.js:
npm run test:e2e list_directory write_file
Running a single test by name
To run a single test by its name, use the --test-name-pattern flag:
npm run test:e2e -- --test-name-pattern "reads a file"
Regenerating model responses
Some integration tests use faked-out model responses, which may need to be regenerated from time to time as the implementations change.
To regenerate these golden files, set the REGENERATE_MODEL_GOLDENS environment variable to “true” when running the tests, for example:
WARNING: If running locally, you should review the regenerated responses for any information about yourself or your system that Gemini may have included.
REGENERATE_MODEL_GOLDENS="true" npm run test:e2e
WARNING: Make sure you run await rig.cleanup() at the end of your test; otherwise, the golden files will not be updated.
Deflaking a test
Before adding a new integration test, you should test it at least 5 times with the deflake script or workflow to make sure that it is not flaky.
Deflake script
npm run deflake -- --runs=5 --command="npm run test:e2e -- -- --test-name-pattern '<your-new-test-name>'"
Deflake workflow
gh workflow run deflake.yml --ref <your-branch> -f test_name_pattern="<your-test-name-pattern>"
Running all tests
To run the entire suite of integration tests, use the following command:
npm run test:integration:all
Sandbox matrix
The all command will run tests for no sandboxing, docker, and podman.
Each individual type can be run using the following commands:
npm run test:integration:sandbox:none
npm run test:integration:sandbox:docker
npm run test:integration:sandbox:podman
Memory regression tests
Memory regression tests are designed to detect heap growth and leaks across key CLI scenarios. They are located in the memory-tests directory.
These tests are distinct from standard integration tests because they measure memory usage and compare it against committed baselines.
Running memory tests
Memory tests are not run as part of the default npm run test or npm run test:e2e commands. They are run nightly in CI but can be run manually:
npm run test:memory
Updating baselines
If you intentionally change behavior that affects memory usage, you may need to update the baselines. Set the UPDATE_MEMORY_BASELINES environment variable to true:
UPDATE_MEMORY_BASELINES=true npm run test:memory
This will run the tests, take median snapshots, and overwrite memory-tests/baselines.json. You should review the changes and commit the updated baseline file.
How it works
The harness (MemoryTestHarness in packages/test-utils):
- Forces garbage collection multiple times to reduce noise.
- Takes median snapshots to filter spikes.
- Compares against baselines with a 10% tolerance.
- Can analyze sustained leaks across 3 snapshots using analyzeSnapshots().
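The 10% tolerance rule can be sketched in isolation. This is an illustrative stand-in for the harness's comparison, not the real code; within_tolerance is a hypothetical name:

```shell
# Sketch: fail when a measured heap snapshot exceeds the committed baseline
# by more than 10%, mirroring the harness's tolerance rule. The awk
# implementation here is illustrative only.
within_tolerance() {
  awk -v measured="$1" -v baseline="$2" \
    'BEGIN { exit !(measured <= baseline * 1.10) }'
}

within_tolerance 105 100 && echo "ok: within 10% tolerance"
within_tolerance 120 100 || echo "regression: exceeds 10% tolerance"
```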
Performance regression tests
Performance regression tests are designed to detect wall-clock time, CPU usage, and event loop delay regressions across key CLI scenarios. They are located in the perf-tests directory.
These tests are distinct from standard integration tests because they measure performance metrics and compare them against committed baselines.
Running performance tests
Performance tests are not run as part of the default npm run test or npm run test:e2e commands. They are run nightly in CI but can be run manually:
npm run test:perf
Updating baselines
If you intentionally change behavior that affects performance, you may need to update the baselines. Set the UPDATE_PERF_BASELINES environment variable to true:
UPDATE_PERF_BASELINES=true npm run test:perf
This will run the tests multiple times (with warmup), apply IQR outlier filtering, and overwrite perf-tests/baselines.json. You should review the changes and commit the updated baseline file.
How it works
The harness (PerfTestHarness in packages/test-utils):
- Measures wall-clock time using performance.now().
- Measures CPU usage using process.cpuUsage().
- Monitors event loop delay using perf_hooks.monitorEventLoopDelay().
- Applies IQR (Interquartile Range) filtering to remove outlier samples.
- Compares against baselines with a 15% tolerance.
Diagnostics
The integration test runner provides several options for diagnostics to help track down test failures.
Keeping test output
You can preserve the temporary files created during a test run for inspection. This is useful for debugging issues with file system operations.
To keep the test output, set the KEEP_OUTPUT environment variable to true.
KEEP_OUTPUT=true npm run test:integration:sandbox:none
When output is kept, the test runner will print the path to the unique directory for the test run.
Verbose output
For more detailed debugging, set the VERBOSE environment variable to true.
VERBOSE=true npm run test:integration:sandbox:none
When using VERBOSE=true and KEEP_OUTPUT=true in the same command, the output is streamed to the console and also saved to a log file within the test's temporary directory.
The verbose output is formatted to clearly identify the source of the logs:
--- TEST: <log dir>:<test-name> ---
... output from the gemini command ...
--- END TEST: <log dir>:<test-name> ---
Linting and formatting
To ensure code quality and consistency, the integration test files are linted as part of the main build process. You can also manually run the linter and auto-fixer.
Running the linter
To check for linting errors, run the following command:
npm run lint
You can include the :fix flag in the command to automatically fix any fixable linting errors:
npm run lint:fix
Directory structure
The integration tests create a unique directory for each test run inside the .integration-tests directory. Within this directory, a subdirectory is created for each test file, and within that, a subdirectory is created for each individual test case.
This structure makes it easy to locate the artifacts for a specific test run, file, or case.
.integration-tests/
└── <run-id>/
    └── <test-file-name>.test.js/
        └── <test-case-name>/
            ├── output.log
            └── ...other test artifacts...
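As a convenience, a helper along these lines can surface the newest run's logs. The latest_run_logs function is hypothetical and assumes the .integration-tests/<run-id>/ layout described above:

```shell
# Hypothetical helper: list the output.log files from the most recent
# integration run, picking the newest entry under .integration-tests.
latest_run_logs() {
  run="$(ls -t .integration-tests 2>/dev/null | head -n 1)"
  [ -n "$run" ] && find ".integration-tests/$run" -name output.log
}
```

Run it from the repository root after a KEEP_OUTPUT=true test run to jump straight to the preserved artifacts.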
Continuous integration
To ensure the integration tests are always run, a GitHub Actions workflow is defined in .github/workflows/chained_e2e.yml. This workflow automatically runs the integration tests for pull requests against the main branch, or when a pull request is added to a merge queue.
The workflow runs the tests in different sandboxing environments to ensure Gemini CLI is tested across each:
- sandbox:none: Runs the tests without any sandboxing.
- sandbox:docker: Runs the tests in a Docker container.
- sandbox:podman: Runs the tests in a Podman container.
This document provides a detailed overview of the automated processes we use to manage and triage issues and pull requests. Our goal is to provide prompt feedback and ensure that contributions are reviewed and integrated efficiently. Understanding this automation will help you as a contributor know what to expect and how to best interact with our repository bots.
Guiding principle: Issues and pull requests
First and foremost, almost every Pull Request (PR) should be linked to a corresponding Issue. The issue describes the "what" and the "why" (the bug or feature), while the PR is the "how" (the implementation). This separation helps us track work, prioritize features, and maintain clear historical context. Our automation is built around this principle.
Detailed automation workflows
Here is a breakdown of the specific automation workflows that run in our repository.
1. When you open an issue: Automated Issue Triage
This is the first bot you will interact with when you create an issue. Its job is to perform an initial analysis and apply the correct labels.
- Workflow File: .github/workflows/gemini-automated-issue-triage.yml
- When it runs: Immediately after an issue is created or reopened.
- What it does:
  - It uses a Gemini model to analyze the issue's title and body against a detailed set of guidelines.
  - Applies one area/* label: Categorizes the issue into a functional area of the project (for example, area/ux, area/models, area/platform).
  - Applies one kind/* label: Identifies the type of issue (for example, kind/bug, kind/enhancement, kind/question).
  - Applies one priority/* label: Assigns a priority from P0 (critical) to P3 (low) based on the described impact.
  - May apply status/need-information: If the issue lacks critical details (like logs or reproduction steps), it will be flagged for more information.
  - May apply status/need-retesting: If the issue references a CLI version that is more than six versions old, it will be flagged for retesting on a current version.
- What you should do:
  - Fill out the issue template as completely as possible. The more detail you provide, the more accurate the triage will be.
  - If the status/need-information label is added, provide the requested details in a comment.
2. When you open a pull request: Continuous Integration (CI)
This workflow ensures that all changes meet our quality standards before they can be merged.
- Workflow File: .github/workflows/ci.yml
- When it runs: On every push to a pull request.
- What it does:
- Lint: Checks that your code adheres to our project’s formatting and style rules.
- Test: Runs our full suite of automated tests across macOS, Windows, and Linux, and on multiple Node.js versions. This is the most time-consuming part of the CI process.
- Post Coverage Comment: After all tests have successfully passed, a bot will post a comment on your PR. This comment provides a summary of how well your changes are covered by tests.
- What you should do:
- Ensure all CI checks pass. A green checkmark ✅ will appear next to your commit when everything is successful.
- If a check fails (a red “X” ❌), click the “Details” link next to the failed check to view the logs, identify the problem, and push a fix.
3. Ongoing triage for pull requests: PR Auditing and Label Sync
This workflow runs periodically to ensure all open PRs are correctly linked to issues and have consistent labels.
- Workflow File: .github/workflows/gemini-scheduled-pr-triage.yml
- When it runs: Every 15 minutes on all open pull requests.
- What it does:
  - Checks for a linked issue: The bot scans your PR description for a keyword that links it to an issue (for example, Fixes #123, Closes #456).
  - Adds status/need-issue: If no linked issue is found, the bot will add the status/need-issue label to your PR. This is a clear signal that an issue needs to be created and linked.
  - Synchronizes labels: If an issue is linked, the bot ensures the PR's labels perfectly match the issue's labels. It will add any missing labels and remove any that don't belong, and it will remove the status/need-issue label if it was present.
- What you should do:
  - Always link your PR to an issue. This is the most important step. Add a line like Resolves #<issue-number> to your PR description.
  - This will ensure your PR is correctly categorized and moves through the review process smoothly.
4. Ongoing triage for issues: Scheduled Issue Triage
This is a fallback workflow to ensure that no issue gets missed by the triage process.
- Workflow File: .github/workflows/gemini-scheduled-issue-triage.yml
- When it runs: Every hour on all open issues.
- What it does:
  - It actively seeks out issues that either have no labels at all or still have the status/need-triage label.
  - It then triggers the same powerful Gemini-based analysis as the initial triage bot to apply the correct labels.
- What you should do:
  - You typically don't need to do anything. This workflow is a safety net to ensure every issue is eventually categorized, even if the initial triage fails.
5. Automatic unassignment of inactive contributors: Unassign Inactive Issue Assignees
To keep the list of open help wanted issues accessible to all contributors, this workflow automatically removes external contributors who have not opened a linked pull request within 7 days of being assigned. Maintainers, org members, and repo collaborators with write access or above are always exempt and will never be auto-unassigned.
- Workflow File: .github/workflows/unassign-inactive-assignees.yml
- When it runs: Every day at 09:00 UTC, and can be triggered manually with an optional dry_run mode.
- What it does:
  - Finds every open issue labeled help wanted that has at least one assignee.
  - Identifies privileged users (team members, repo collaborators with write+ access, maintainers) and skips them entirely.
  - For each remaining (external) assignee it reads the issue's timeline to determine:
    - The exact date they were assigned (using assigned timeline events).
    - Whether they have opened a PR that is already linked/cross-referenced to the issue.
  - Each cross-referenced PR is fetched to verify it is ready for review: open and non-draft, or already merged. Draft PRs do not count.
  - If an assignee has been assigned for more than 7 days and no qualifying PR is found, they are automatically unassigned and a comment is posted explaining the reason and how to re-claim the issue.
  - Assignees who have a non-draft, open or merged PR linked to the issue are never unassigned by this workflow.
- What you should do:
  - Open a real PR, not a draft: Within 7 days of being assigned, open a PR that is ready for review and include Fixes #<issue-number> in the description. Draft PRs do not satisfy the requirement and will not prevent auto-unassignment.
  - Re-assign if unassigned by mistake: Comment /assign on the issue to assign yourself again.
  - Unassign yourself if you can no longer work on the issue by commenting /unassign, so other contributors can pick it up right away.
6. Release automation
This workflow handles the process of packaging and publishing new versions of Gemini CLI.
- Workflow File: .github/workflows/release-manual.yml
- When it runs: On a daily schedule for “nightly” releases, and manually for official patch/minor releases.
- What it does:
- Automatically builds the project, bumps the version numbers, and publishes the packages to npm.
- Creates a corresponding release on GitHub with generated release notes.
- What you should do:
- As a contributor, you don’t need to do anything for this process. You can be confident that once your PR is merged into the main branch, your changes will be included in the very next nightly release.
We hope this detailed overview is helpful. If you have any questions about our automation or processes, don’t hesitate to ask!
This guide provides instructions for setting up and using local development features for Gemini CLI.
Tracing
Gemini CLI uses OpenTelemetry (OTel) to record traces that help you debug agent behavior. Traces instrument key events like model calls, tool scheduler operations, and tool calls.
Traces provide deep visibility into agent behavior and help you debug complex issues. They are captured automatically when you enable telemetry.
View traces
You can view traces using Genkit Developer UI, Jaeger, or Google Cloud.
Use Genkit
Genkit provides a web-based UI for viewing traces and other telemetry data.
1. Start the Genkit telemetry server: Run the following command to start the Genkit server:

   npm run telemetry -- --target=genkit

   The script will output the URL for the Genkit Developer UI. For example:

   Genkit Developer UI: http://localhost:4000

2. Run Gemini CLI: In a separate terminal, run your Gemini CLI command:

   gemini

3. View the traces: Open the Genkit Developer UI URL in your browser and navigate to the Traces tab to view the traces.
Use Jaeger
You can view traces in the Jaeger UI for local development.
1. Start the telemetry collector: Run the following command in your terminal to download and start Jaeger and an OTel collector:

   npm run telemetry -- --target=local

   This command configures your workspace for local telemetry and provides a link to the Jaeger UI (usually http://localhost:16686).
   - Collector logs: ~/.gemini/tmp/<projectHash>/otel/collector.log

2. Run Gemini CLI: In a separate terminal, run your Gemini CLI command:

   gemini

3. View the traces: After running your command, open the Jaeger UI link in your browser to view the traces.
Use Google Cloud
You can use an OpenTelemetry collector to forward telemetry data to Google Cloud Trace for custom processing or routing.
1. Configure .gemini/settings.json:

   {
     "telemetry": {
       "enabled": true,
       "target": "gcp",
       "useCollector": true
     }
   }

2. Start the telemetry collector: Run the following command to start a local OTel collector that forwards to Google Cloud:

   npm run telemetry -- --target=gcp

   The script outputs links to view traces, metrics, and logs in the Google Cloud Console.
   - Collector logs: ~/.gemini/tmp/<projectHash>/otel/collector-gcp.log

3. Run Gemini CLI: In a separate terminal, run your Gemini CLI command:

   gemini

4. View logs, metrics, and traces: After sending prompts, view your data in the Google Cloud Console. See the telemetry documentation for links to Logs, Metrics, and Trace explorers.
For more detailed information on telemetry, see the telemetry documentation.
Instrument code with traces
You can add traces to your own code for more detailed instrumentation. Adding traces helps you debug and understand the flow of execution. Use the runInDevTraceSpan function to wrap any section of code in a trace span.
import { runInDevTraceSpan } from '@google/gemini-cli-core';
import { GeminiCliOperation } from '@google/gemini-cli-core/lib/telemetry/constants.js';

await runInDevTraceSpan(
  {
    operation: GeminiCliOperation.ToolCall,
    attributes: {
      [GEN_AI_AGENT_NAME]: 'gemini-cli',
    },
  },
  async ({ metadata }) => {
    // metadata allows you to record the input and output of the
    // operation as well as other attributes.
    metadata.input = { key: 'value' };
    // Set custom attributes.
    metadata.attributes['custom.attribute'] = 'custom.value';

    // Your code to be traced goes here.
    try {
      const output = await somethingRisky();
      metadata.output = output;
      return output;
    } catch (e) {
      metadata.error = e;
      throw e;
    }
  },
);
In this example:
- operation: The operation type of the span, represented by the GeminiCliOperation enum.
- metadata.input: (Optional) An object containing the input data for the traced operation.
- metadata.output: (Optional) An object containing the output data from the traced operation.
- metadata.attributes: (Optional) A record of custom attributes to add to the span.
- metadata.error: (Optional) An error object to record if the operation fails.
This monorepo contains two main packages: @google/gemini-cli and @google/gemini-cli-core.
@google/gemini-cli
This is the main package for Gemini CLI. It is responsible for the user interface, command parsing, and all other user-facing functionality.
When this package is published, it is bundled into a single executable file. This bundle includes all of the package’s dependencies, including @google/gemini-cli-core. This means that whether a user installs the package with npm install -g @google/gemini-cli or runs it directly with npx @google/gemini-cli, they are using this single, self-contained executable.
@google/gemini-cli-core
This package contains the core logic for interacting with the Gemini API. It is responsible for making API requests, handling authentication, and managing the local cache.
This package is not bundled. When it is published, it is published as a standard Node.js package with its own dependencies. This allows it to be used as a standalone package in other projects, if needed. All transpiled JS code in the dist folder is included in the package.
NPM workspaces
This project uses NPM Workspaces to manage the packages within this monorepo. This simplifies development by allowing us to manage dependencies and run scripts across multiple packages from the root of the project.
How it works
The root package.json file defines the workspaces for this project:

{
  "workspaces": ["packages/*"]
}

This tells NPM that any folder inside the packages directory is a separate package that should be managed as part of the workspace.
Benefits of workspaces
- Simplified dependency management: Running npm install from the root of the project will install all dependencies for all packages in the workspace and link them together. This means you don’t need to run npm install in each package’s directory.
- Automatic linking: Packages within the workspace can depend on each other. When you run npm install, NPM will automatically create symlinks between the packages. This means that when you make changes to one package, the changes are immediately available to other packages that depend on it.
- Simplified script execution: You can run scripts in any package from the root of the project using the --workspace flag. For example, to run the build script in the cli package, you can run npm run build --workspace @google/gemini-cli.
Gemini CLI offers a generous free tier that covers many individual developers’ use cases. For enterprise or professional usage, or if you need increased quota, several options are available depending on your authentication account type.
For a high-level comparison of available subscriptions and to select the right quota for your needs, see the Plans page.
Overview
This article outlines the specific quotas and pricing applicable to Gemini CLI when using different authentication methods.
The following table summarizes the available quotas and their respective limits:
| Authentication method | Tier / Subscription | Maximum requests per user per day |
|---|---|---|
| Google account | Gemini Code Assist (Individual) | 1,000 requests |
| Google AI Pro | 1,500 requests | |
| Google AI Ultra | 2,000 requests | |
| Gemini API key | Free tier (Unpaid) | 250 requests |
| Pay-as-you-go (Paid) | Varies | |
| Vertex AI | Express mode (Free) | Varies |
| Pay-as-you-go (Paid) | Varies | |
| Google Workspace | Code Assist Standard | 1,500 requests |
| Code Assist Enterprise | 2,000 requests | |
| Workspace AI Ultra | 2,000 requests |
Generally, there are three categories to choose from:
- Free Usage: Ideal for experimentation and light use.
- Paid Tier (fixed price): For individual developers or enterprises who need more generous daily quotas and predictable costs.
- Pay-As-You-Go: The most flexible option for professional use, long-running tasks, or when you need full control over your usage.
Requests are limited per user per minute and are subject to the availability of the service in times of high demand.
Free usage
Access to Gemini CLI begins with a generous free tier, perfect for experimentation and light use.
Your free usage is governed by the following limits, which depend on your authorization type.
Log in with Google (Gemini Code Assist for individuals)
For users who authenticate by using their Google account to access Gemini Code Assist for individuals. This includes:
- 1,000 maximum model requests / user / day
- Model requests will be made across the Gemini model family as determined by Gemini CLI.
Learn more at Gemini Code Assist for Individuals Limits.
Log in with Gemini API Key (unpaid)
If you are using a Gemini API key, you can also benefit from a free tier. This includes:
- 250 maximum model requests / user / day
- Model requests to Flash model only.
Learn more at Gemini API Rate Limits.
Log in with Vertex AI (Express Mode)
Vertex AI offers an Express Mode without the need to enable billing. This includes:
- 90 days before you need to enable billing.
- Quotas and models are specific to your account and their limits vary.
Learn more at Vertex AI Express Mode Limits.
Paid tier: Higher limits for a fixed cost
If you use up your initial number of requests, you can continue to benefit from Gemini CLI by upgrading to one of the following subscriptions:
Individuals
These tiers apply when you sign in with a personal account. To verify whether you’re on a personal account, visit Google One:
- If you are on a personal account, you will see your personal dashboard.
- If you are not on a personal account, you will see: “You’re currently signed in to your Google Workspace Account.”
Supported tiers:
- Google AI Pro and AI Ultra. This is recommended for individual developers. Quotas and pricing are based on a fixed-price subscription. For predictable costs, you can log in with Google.
- Tiers not listed above, including Google AI Plus, are not supported.

Learn more at Gemini Code Assist Quotas and Limits.
Through your organization
These tiers are applicable when you are signing in with a Google Workspace account.
- To verify your account type, visit the Google One page.
- You are on a workspace account if you see the message “You’re currently signed in to your Google Workspace Account”.
Supported tiers:
- Purchase a Gemini Code Assist Subscription through Google Cloud. Quotas and pricing are based on a fixed-price subscription with assigned license seats. For predictable costs, you can sign in with Google. This includes the following request limits:
  - Gemini Code Assist Standard edition: 1,500 maximum model requests / user / day
  - Gemini Code Assist Enterprise edition: 2,000 maximum model requests / user / day
  - Model requests will be made across the Gemini model family as determined by Gemini CLI.
- Tiers not listed above, including Workspace AI Standard/Plus and AI Expanded, are not supported.
Pay as you go
If you hit your daily request limits or exhaust your Gemini Pro quota even after upgrading, the most flexible solution is to switch to a pay-as-you-go model, where you pay for the specific amount of processing you use. This is the recommended path for uninterrupted access.
To do this, log in using a Gemini API key or Vertex AI.
Vertex AI (regular mode)
An enterprise-grade platform for building, deploying, and managing AI models, including Gemini. It offers enhanced security, data governance, and integration with other Google Cloud services.
- Quota: Governed by a dynamic shared quota system or pre-purchased provisioned throughput.
- Cost: Based on model and token usage.
Learn more at Vertex AI Dynamic Shared Quota and Vertex AI Pricing.
Gemini API key
Ideal for developers who want to quickly build applications with the Gemini models. This is the most direct way to use the models.
- Quota: Varies by pricing tier.
- Cost: Varies by pricing tier and model/token usage.
Learn more at Gemini API Rate Limits, Gemini API Pricing
It’s important to highlight that when using an API key, you pay per token/call. This can be more expensive for many small calls with few tokens, but it’s the only way to ensure your workflow isn’t interrupted by reaching a limit on your quota.
Gemini for workspace plans
These plans currently apply only to Gemini web-based products provided by Google (for example, the Gemini web app or the Flow video editor). These plans do not apply to the API usage that powers Gemini CLI. Support for these plans is under active consideration.
Check usage and limits
You can check your current token usage and applicable limits using the /stats model command. This command provides a snapshot of your current session’s token usage, as well as information about the limits associated with your current quota.
For more information on the /stats command and its subcommands, see the Command Reference.
A summary of model usage is also presented on exit at the end of a session.
Tips to avoid high costs
When using a pay-as-you-go plan, be mindful of your usage to avoid unexpected costs.
- Be selective with suggestions: Before accepting a suggestion, especially for a computationally intensive task like refactoring a large codebase, consider if it’s the most cost-effective approach.
- Use precise prompts: You are paying per call, so think about the most efficient way to get your desired result. A well-crafted prompt can often get you the answer you need in a single call, rather than multiple back-and-forth interactions.
- Monitor your usage: Use the /stats model command to track your token usage during a session. This can help you stay aware of your spending in real time.
Gemini CLI offers several ways to configure its behavior, including environment variables, command-line arguments, and settings files. This document outlines the different configuration methods and available settings.
Configuration layers
Configuration is applied in the following order of precedence (lower numbers are overridden by higher numbers):
- Default values: Hardcoded defaults within the application.
- System defaults file: System-wide default settings that can be overridden by other settings files.
- User settings file: Global settings for the current user.
- Project settings file: Project-specific settings.
- System settings file: System-wide settings that override all other settings files.
- Environment variables: System-wide or session-specific variables, potentially loaded from .env files.
- Command-line arguments: Values passed when launching the CLI.
Settings files
Gemini CLI uses JSON settings files for persistent configuration. There are four locations for these files:
- System defaults file:
  - Location: /etc/gemini-cli/system-defaults.json (Linux), C:\ProgramData\gemini-cli\system-defaults.json (Windows), or /Library/Application Support/GeminiCli/system-defaults.json (macOS). The path can be overridden using the GEMINI_CLI_SYSTEM_DEFAULTS_PATH environment variable.
  - Scope: Provides a base layer of system-wide default settings. These settings have the lowest precedence and are intended to be overridden by user, project, or system override settings.
- User settings file:
  - Location: ~/.gemini/settings.json (where ~ is your home directory).
  - Scope: Applies to all Gemini CLI sessions for the current user. User settings override system defaults.
- Project settings file:
  - Location: .gemini/settings.json within your project’s root directory.
  - Scope: Applies only when running Gemini CLI from that specific project. Project settings override user settings and system defaults.
- System settings file:
  - Location: /etc/gemini-cli/settings.json (Linux), C:\ProgramData\gemini-cli\settings.json (Windows), or /Library/Application Support/GeminiCli/settings.json (macOS). The path can be overridden using the GEMINI_CLI_SYSTEM_SETTINGS_PATH environment variable.
  - Scope: Applies to all Gemini CLI sessions on the system, for all users. System settings act as overrides, taking precedence over all other settings files. This may be useful for enterprise system administrators who need control over users’ Gemini CLI setups.
Note on environment variables in settings: String values within your settings.json and gemini-extension.json files can reference environment variables using $VAR_NAME, ${VAR_NAME}, or ${VAR_NAME:-DEFAULT_VALUE} syntax. These variables will be automatically resolved when the settings are loaded. For example, if you have an environment variable MY_API_TOKEN, you could use it in settings.json like this: "apiKey": "$MY_API_TOKEN". If you want to provide a fallback value, use ${MY_API_TOKEN:-default-token}.
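For instance, a sketch of a settings fragment using each variable-reference form (MY_API_TOKEN and MY_ENDPOINT are hypothetical environment variables, and the keys shown are placeholders for whichever string-valued settings you actually use):

```json
{
  "apiKey": "$MY_API_TOKEN",
  "endpoint": "${MY_ENDPOINT}",
  "fallbackToken": "${MY_API_TOKEN:-default-token}"
}
```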
Additionally, each extension can have its own .env file in its directory, which will be loaded automatically.
Note for Enterprise Users: For guidance on deploying and managing Gemini CLI in a corporate environment, see the Enterprise Configuration documentation.
The .gemini directory in your project
In addition to a project settings file, a project’s .gemini directory can contain other project-specific files related to Gemini CLI’s operation, such as:
- Custom sandbox profiles (for example, .gemini/sandbox-macos-custom.sb, .gemini/sandbox.Dockerfile).
Available settings in settings.json
Settings are organized into categories. All settings should be placed within their corresponding top-level category object in your settings.json file.
policyPaths
- policyPaths (array):
  - Description: Additional policy files or directories to load.
  - Default: []
  - Requires restart: Yes
adminPolicyPaths
- adminPolicyPaths (array):
  - Description: Additional admin policy files or directories to load.
  - Default: []
  - Requires restart: Yes
general
- general.preferredEditor (string):
  - Description: The preferred editor to open files in.
  - Default: undefined
- general.vimMode (boolean):
  - Description: Enable Vim keybindings.
  - Default: false
- general.defaultApprovalMode (enum):
  - Description: The default approval mode for tool execution. ‘default’ prompts for approval, ‘auto_edit’ auto-approves edit tools, and ‘plan’ is read-only mode. YOLO mode (auto-approve all actions) can only be enabled via the command line (--yolo or --approval-mode=yolo).
  - Default: "default"
  - Values: "default", "auto_edit", "plan"
- general.devtools (boolean):
  - Description: Enable DevTools inspector on launch.
  - Default: false
- general.enableAutoUpdate (boolean):
  - Description: Enable automatic updates.
  - Default: true
- general.enableAutoUpdateNotification (boolean):
  - Description: Enable update notification prompts.
  - Default: true
- general.enableNotifications (boolean):
  - Description: Enable run-event notifications for action-required prompts and session completion.
  - Default: false
- general.checkpointing.enabled (boolean):
  - Description: Enable session checkpointing for recovery.
  - Default: false
  - Requires restart: Yes
- general.plan.enabled (boolean):
  - Description: Enable Plan Mode for read-only safety during planning.
  - Default: true
  - Requires restart: Yes
- general.plan.directory (string):
  - Description: The directory where planning artifacts are stored. If not specified, defaults to the system temporary directory. A custom directory requires a policy to allow write access in Plan Mode.
  - Default: undefined
  - Requires restart: Yes
- general.plan.modelRouting (boolean):
  - Description: Automatically switch between Pro and Flash models based on Plan Mode status. Uses Pro for the planning phase and Flash for the implementation phase.
  - Default: true
- general.retryFetchErrors (boolean):
  - Description: Retry on “exception TypeError: fetch failed sending request” errors.
  - Default: true
- general.maxAttempts (number):
  - Description: Maximum number of attempts for requests to the main chat model. Cannot exceed 10.
  - Default: 10
- general.debugKeystrokeLogging (boolean):
  - Description: Enable debug logging of keystrokes to the console.
  - Default: false
- general.sessionRetention.enabled (boolean):
  - Description: Enable automatic session cleanup.
  - Default: true
- general.sessionRetention.maxAge (string):
  - Description: Automatically delete chats older than this time period (e.g., “30d”, “7d”, “24h”, “1w”).
  - Default: "30d"
- general.sessionRetention.maxCount (number):
  - Description: Alternative: maximum number of sessions to keep (most recent).
  - Default: undefined
- general.sessionRetention.minRetention (string):
  - Description: Minimum retention period (safety limit, defaults to “1d”).
  - Default: "1d"
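Pulling a few of the general settings together, a hypothetical settings.json might look like this (the values are purely illustrative, not recommendations):

```json
{
  "general": {
    "vimMode": true,
    "defaultApprovalMode": "auto_edit",
    "sessionRetention": {
      "enabled": true,
      "maxAge": "7d"
    }
  }
}
```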
output
- output.format (enum):
  - Description: The format of the CLI output. Can be text or json.
  - Default: "text"
  - Values: "text", "json"
- ui.debugRainbow (boolean):
  - Description: Enable debug rainbow rendering. Only useful for debugging rendering bugs and performance issues.
  - Default: false
  - Requires restart: Yes
- ui.theme (string):
  - Description: The color theme for the UI. See the CLI themes guide for available options.
  - Default: undefined
- ui.autoThemeSwitching (boolean):
  - Description: Automatically switch between default light and dark themes based on terminal background color.
  - Default: true
- ui.terminalBackgroundPollingInterval (number):
  - Description: Interval in seconds to poll the terminal background color.
  - Default: 60
- ui.customThemes (object):
  - Description: Custom theme definitions.
  - Default: {}
- ui.hideWindowTitle (boolean):
  - Description: Hide the window title bar.
  - Default: false
  - Requires restart: Yes
- ui.inlineThinkingMode (enum):
  - Description: Display model thinking inline: off or full.
  - Default: "off"
  - Values: "off", "full"
- ui.showStatusInTitle (boolean):
  - Description: Show Gemini CLI model thoughts in the terminal window title during the working phase.
  - Default: false
- ui.dynamicWindowTitle (boolean):
  - Description: Update the terminal window title with current status icons (Ready: ◇, Action Required: ✋, Working: ✦).
  - Default: true
- ui.showHomeDirectoryWarning (boolean):
  - Description: Show a warning when running Gemini CLI in the home directory.
  - Default: true
  - Requires restart: Yes
- ui.showCompatibilityWarnings (boolean):
  - Description: Show warnings about terminal or OS compatibility issues.
  - Default: true
  - Requires restart: Yes
- ui.hideTips (boolean):
  - Description: Hide helpful tips in the UI.
  - Default: false
- ui.escapePastedAtSymbols (boolean):
  - Description: When enabled, @ symbols in pasted text are escaped to prevent unintended @path expansion.
  - Default: false
- ui.showShortcutsHint (boolean):
  - Description: Show the “? for shortcuts” hint above the input.
  - Default: true
- ui.compactToolOutput (boolean):
  - Description: Display tool outputs (like directory listings and file reads) in a compact, structured format.
  - Default: true
- ui.hideBanner (boolean):
  - Description: Hide the application banner.
  - Default: false
- ui.hideContextSummary (boolean):
  - Description: Hide the context summary (GEMINI.md, MCP servers) above the input.
  - Default: false
- ui.footer.items (array):
  - Description: List of item IDs to display in the footer, rendered in order.
  - Default: undefined
- ui.footer.showLabels (boolean):
  - Description: Display a second line above the footer items with descriptive headers (e.g., /model).
  - Default: true
- ui.footer.hideCWD (boolean):
  - Description: Hide the current working directory in the footer.
  - Default: false
- ui.footer.hideSandboxStatus (boolean):
  - Description: Hide the sandbox status indicator in the footer.
  - Default: false
- ui.footer.hideModelInfo (boolean):
  - Description: Hide the model name and context usage in the footer.
  - Default: false
- ui.footer.hideContextPercentage (boolean):
  - Description: Hide the context window usage percentage.
  - Default: true
- ui.hideFooter (boolean):
  - Description: Hide the footer from the UI.
  - Default: false
- ui.collapseDrawerDuringApproval (boolean):
  - Description: Whether to collapse the UI drawer when a tool is awaiting confirmation.
  - Default: true
- ui.showMemoryUsage (boolean):
  - Description: Display memory usage information in the UI.
  - Default: false
- ui.showLineNumbers (boolean):
  - Description: Show line numbers in the chat.
  - Default: true
- ui.showCitations (boolean):
  - Description: Show citations for generated text in the chat.
  - Default: false
- ui.showModelInfoInChat (boolean):
  - Description: Show the model name in the chat for each model turn.
  - Default: false
- ui.showUserIdentity (boolean):
  - Description: Show the signed-in user’s identity (e.g., email) in the UI.
  - Default: true
- ui.useAlternateBuffer (boolean):
  - Description: Use an alternate screen buffer for the UI, preserving shell history.
  - Default: false
  - Requires restart: Yes
- ui.renderProcess (boolean):
  - Description: Enable the Ink render process for the UI.
  - Default: true
  - Requires restart: Yes
- ui.terminalBuffer (boolean):
  - Description: Use the new terminal buffer architecture for rendering.
  - Default: false
  - Requires restart: Yes
- ui.useBackgroundColor (boolean):
  - Description: Whether to use background colors in the UI.
  - Default: true
- ui.incrementalRendering (boolean):
  - Description: Enable incremental rendering for the UI. This option reduces flickering but may cause rendering artifacts. Only supported when useAlternateBuffer is enabled.
  - Default: true
  - Requires restart: Yes
- ui.showSpinner (boolean):
  - Description: Show the spinner during operations.
  - Default: true
- ui.loadingPhrases (enum):
  - Description: What to show while the model is working: tips, witty comments, all, or off.
  - Default: "off"
  - Values: "tips", "witty", "all", "off"
- ui.errorVerbosity (enum):
  - Description: Controls whether recoverable errors are hidden (low) or fully shown (full).
  - Default: "low"
  - Values: "low", "full"
- ui.customWittyPhrases (array):
  - Description: Custom witty phrases to display during loading. When provided, the CLI cycles through these instead of the defaults.
  - Default: []
- ui.accessibility.enableLoadingPhrases (boolean):
  - Description: @deprecated Use ui.loadingPhrases instead. Enable loading phrases during operations.
  - Default: true
  - Requires restart: Yes
- ui.accessibility.screenReader (boolean):
  - Description: Render output in plain text to be more screen-reader accessible.
  - Default: false
  - Requires restart: Yes
- ide.enabled (boolean):
  - Description: Enable IDE integration mode.
  - Default: false
  - Requires restart: Yes
- ide.hasSeenNudge (boolean):
  - Description: Whether the user has seen the IDE integration nudge.
  - Default: false
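As a sketch of how several UI settings combine in one settings.json (the values are illustrative only):

```json
{
  "ui": {
    "hideBanner": true,
    "showCitations": true,
    "footer": {
      "hideCWD": true,
      "showLabels": false
    }
  }
}
```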
privacy
- privacy.usageStatisticsEnabled (boolean):
  - Description: Enable collection of usage statistics.
  - Default: true
  - Requires restart: Yes
billing
- billing.overageStrategy (enum):
  - Description: How to handle quota exhaustion when AI credits are available. ‘ask’ prompts each time, ‘always’ automatically uses credits, ‘never’ disables credit usage.
  - Default: "ask"
  - Values: "ask", "always", "never"
- model.name (string):
  - Description: The Gemini model to use for conversations.
  - Default: undefined
- model.maxSessionTurns (number):
  - Description: Maximum number of user/model/tool turns to keep in a session. -1 means unlimited.
  - Default: -1
- model.summarizeToolOutput (object):
  - Description: Enables or disables summarization of tool output. Configure per-tool token budgets (for example, {“run_shell_command”: {“tokenBudget”: 2000}}). Currently only the run_shell_command tool supports summarization.
  - Default: undefined
- model.compressionThreshold (number):
  - Description: The fraction of context usage at which to trigger context compression (e.g., 0.2, 0.3).
  - Default: 0.5
  - Requires restart: Yes
- model.disableLoopDetection (boolean):
  - Description: Disable automatic detection and prevention of infinite loops.
  - Default: false
  - Requires restart: Yes
- model.skipNextSpeakerCheck (boolean):
  - Description: Skip the next speaker check.
  - Default: true
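A hypothetical model configuration combining several of these settings (the values are illustrative; the token-budget shape follows the summarizeToolOutput example in its description):

```json
{
  "model": {
    "maxSessionTurns": 50,
    "compressionThreshold": 0.7,
    "summarizeToolOutput": {
      "run_shell_command": { "tokenBudget": 2000 }
    }
  }
}
```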
modelConfigs
- modelConfigs.aliases (object):
  - Description: Named presets for model configs. Can be used in place of a model name and can inherit from other aliases using an extends property.
  - Default:
{"base": {"modelConfig": {"generateContentConfig": {"temperature": 0,"topP": 1}}},"chat-base": {"extends": "base","modelConfig": {"generateContentConfig": {"thinkingConfig": {"includeThoughts": true},"temperature": 1,"topP": 0.95,"topK": 64}}},"chat-base-2.5": {"extends": "chat-base","modelConfig": {"generateContentConfig": {"thinkingConfig": {"thinkingBudget": 8192}}}},"chat-base-3": {"extends": "chat-base","modelConfig": {"generateContentConfig": {"thinkingConfig": {"thinkingLevel": "HIGH"}}}},"gemini-3-pro-preview": {"extends": "chat-base-3","modelConfig": {"model": "gemini-3-pro-preview"}},"gemini-3-flash-preview": {"extends": "chat-base-3","modelConfig": {"model": "gemini-3-flash-preview"}},"gemini-2.5-pro": {"extends": "chat-base-2.5","modelConfig": {"model": "gemini-2.5-pro"}},"gemini-2.5-flash": {"extends": "chat-base-2.5","modelConfig": {"model": "gemini-2.5-flash"}},"gemini-2.5-flash-lite": {"extends": "chat-base-2.5","modelConfig": {"model": "gemini-2.5-flash-lite"}},"gemini-2.5-flash-base": {"extends": "base","modelConfig": {"model": "gemini-2.5-flash"}},"gemini-3-flash-base": {"extends": "base","modelConfig": {"model": "gemini-3-flash-preview"}},"classifier": {"extends": "base","modelConfig": {"model": "gemini-2.5-flash-lite","generateContentConfig": {"maxOutputTokens": 1024,"thinkingConfig": {"thinkingBudget": 512}}}},"prompt-completion": {"extends": "base","modelConfig": {"model": "gemini-2.5-flash-lite","generateContentConfig": {"temperature": 0.3,"maxOutputTokens": 16000,"thinkingConfig": {"thinkingBudget": 0}}}},"fast-ack-helper": {"extends": "base","modelConfig": {"model": "gemini-2.5-flash-lite","generateContentConfig": {"temperature": 0.2,"maxOutputTokens": 120,"thinkingConfig": {"thinkingBudget": 0}}}},"edit-corrector": {"extends": "base","modelConfig": {"model": "gemini-2.5-flash-lite","generateContentConfig": {"thinkingConfig": {"thinkingBudget": 0}}}},"summarizer-default": {"extends": "base","modelConfig": {"model": 
"gemini-2.5-flash-lite","generateContentConfig": {"maxOutputTokens": 2000}}},"summarizer-shell": {"extends": "base","modelConfig": {"model": "gemini-2.5-flash-lite","generateContentConfig": {"maxOutputTokens": 2000}}},"web-search": {"extends": "gemini-3-flash-base","modelConfig": {"generateContentConfig": {"tools": [{"googleSearch": {}}]}}},"web-fetch": {"extends": "gemini-3-flash-base","modelConfig": {"generateContentConfig": {"tools": [{"urlContext": {}}]}}},"web-fetch-fallback": {"extends": "gemini-3-flash-base","modelConfig": {}},"loop-detection": {"extends": "gemini-3-flash-base","modelConfig": {}},"loop-detection-double-check": {"extends": "base","modelConfig": {"model": "gemini-3-pro-preview"}},"llm-edit-fixer": {"extends": "gemini-3-flash-base","modelConfig": {}},"next-speaker-checker": {"extends": "gemini-3-flash-base","modelConfig": {}},"chat-compression-3-pro": {"modelConfig": {"model": "gemini-3-pro-preview"}},"chat-compression-3-flash": {"modelConfig": {"model": "gemini-3-flash-preview"}},"chat-compression-3.1-flash-lite": {"modelConfig": {"model": "gemini-3.1-flash-lite-preview"}},"chat-compression-2.5-pro": {"modelConfig": {"model": "gemini-2.5-pro"}},"chat-compression-2.5-flash": {"modelConfig": {"model": "gemini-2.5-flash"}},"chat-compression-2.5-flash-lite": {"modelConfig": {"model": "gemini-2.5-flash-lite"}},"chat-compression-default": {"modelConfig": {"model": "gemini-3-pro-preview"}},"agent-history-provider-summarizer": {"modelConfig": {"model": "gemini-3-flash-preview"}}}
-
-
modelConfigs.customAliases(object):- Description: Custom named presets for model configs. These are merged with (and override) the built-in aliases.
- Default:
{}
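As a sketch of how a custom alias can build on a built-in one via extends (the alias name fast-chat and the temperature value here are illustrative, not defaults shipped with the CLI):

```json
{
  "modelConfigs": {
    "customAliases": {
      "fast-chat": {
        "extends": "chat-base",
        "modelConfig": {
          "model": "gemini-2.5-flash",
          "generateContentConfig": { "temperature": 0.4 }
        }
      }
    }
  }
}
```

Because custom aliases are merged with the built-in aliases, redefining an existing name (such as chat-base) overrides the built-in preset.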
-
modelConfigs.customOverrides(array):- Description: Custom model config overrides. These are merged with (and added to) the built-in overrides.
- Default:
[]
-
modelConfigs.overrides(array):- Description: Apply specific configuration overrides based on matches, with a primary key of model (or alias). The most specific match will be used.
- Default:
[]
-
modelConfigs.modelDefinitions(object):-
Description: Registry of model metadata, including tier, family, and features.
-
Default:
{"gemini-3.1-flash-lite-preview": {"tier": "flash-lite","family": "gemini-3","isPreview": true,"isVisible": true,"features": {"thinking": false,"multimodalToolUse": true}},"gemini-3.1-pro-preview": {"tier": "pro","family": "gemini-3","isPreview": true,"isVisible": true,"features": {"thinking": true,"multimodalToolUse": true}},"gemini-3.1-pro-preview-customtools": {"tier": "pro","family": "gemini-3","isPreview": true,"isVisible": false,"features": {"thinking": true,"multimodalToolUse": true}},"gemini-3-pro-preview": {"tier": "pro","family": "gemini-3","isPreview": true,"isVisible": true,"features": {"thinking": true,"multimodalToolUse": true}},"gemini-3-flash-preview": {"tier": "flash","family": "gemini-3","isPreview": true,"isVisible": true,"features": {"thinking": false,"multimodalToolUse": true}},"gemini-2.5-pro": {"tier": "pro","family": "gemini-2.5","isPreview": false,"isVisible": true,"features": {"thinking": false,"multimodalToolUse": false}},"gemini-2.5-flash": {"tier": "flash","family": "gemini-2.5","isPreview": false,"isVisible": true,"features": {"thinking": false,"multimodalToolUse": false}},"gemini-2.5-flash-lite": {"tier": "flash-lite","family": "gemini-2.5","isPreview": false,"isVisible": true,"features": {"thinking": false,"multimodalToolUse": false}},"auto": {"tier": "auto","isPreview": true,"isVisible": false,"features": {"thinking": true,"multimodalToolUse": false}},"pro": {"tier": "pro","isPreview": false,"isVisible": false,"features": {"thinking": true,"multimodalToolUse": false}},"flash": {"tier": "flash","isPreview": false,"isVisible": false,"features": {"thinking": false,"multimodalToolUse": false}},"flash-lite": {"tier": "flash-lite","isPreview": false,"isVisible": false,"features": {"thinking": false,"multimodalToolUse": false}},"auto-gemini-3": {"displayName": "Auto (Gemini 3)","tier": "auto","isPreview": true,"isVisible": true,"dialogDescription": "Let Gemini CLI decide the best model for the task: gemini-3-pro, 
gemini-3-flash","features": {"thinking": true,"multimodalToolUse": false}},"auto-gemini-2.5": {"displayName": "Auto (Gemini 2.5)","tier": "auto","isPreview": false,"isVisible": true,"dialogDescription": "Let Gemini CLI decide the best model for the task: gemini-2.5-pro, gemini-2.5-flash","features": {"thinking": false,"multimodalToolUse": false}}} -
Requires restart: Yes
-
-
modelConfigs.modelIdResolutions(object):-
Description: Rules for resolving requested model names to concrete model IDs based on context.
-
Default:
{"gemini-3.1-pro-preview": {"default": "gemini-3.1-pro-preview","contexts": [{"condition": {"hasAccessToPreview": false},"target": "gemini-2.5-pro"},{"condition": {"useCustomTools": true},"target": "gemini-3.1-pro-preview-customtools"}]},"gemini-3.1-pro-preview-customtools": {"default": "gemini-3.1-pro-preview-customtools","contexts": [{"condition": {"hasAccessToPreview": false},"target": "gemini-2.5-pro"}]},"gemini-3-flash-preview": {"default": "gemini-3-flash-preview","contexts": [{"condition": {"hasAccessToPreview": false},"target": "gemini-2.5-flash"}]},"gemini-3-pro-preview": {"default": "gemini-3-pro-preview","contexts": [{"condition": {"hasAccessToPreview": false},"target": "gemini-2.5-pro"},{"condition": {"useGemini3_1": true,"useCustomTools": true},"target": "gemini-3.1-pro-preview-customtools"},{"condition": {"useGemini3_1": true},"target": "gemini-3.1-pro-preview"}]},"auto-gemini-3": {"default": "gemini-3-pro-preview","contexts": [{"condition": {"hasAccessToPreview": false},"target": "gemini-2.5-pro"},{"condition": {"useGemini3_1": true,"useCustomTools": true},"target": "gemini-3.1-pro-preview-customtools"},{"condition": {"useGemini3_1": true},"target": "gemini-3.1-pro-preview"}]},"auto": {"default": "gemini-3-pro-preview","contexts": [{"condition": {"hasAccessToPreview": false},"target": "gemini-2.5-pro"},{"condition": {"useGemini3_1": true,"useCustomTools": true},"target": "gemini-3.1-pro-preview-customtools"},{"condition": {"useGemini3_1": true},"target": "gemini-3.1-pro-preview"}]},"pro": {"default": "gemini-3-pro-preview","contexts": [{"condition": {"hasAccessToPreview": false},"target": "gemini-2.5-pro"},{"condition": {"useGemini3_1": true,"useCustomTools": true},"target": "gemini-3.1-pro-preview-customtools"},{"condition": {"useGemini3_1": true},"target": "gemini-3.1-pro-preview"}]},"auto-gemini-2.5": {"default": "gemini-2.5-pro"},"gemini-3.1-flash-lite-preview": {"default": "gemini-3.1-flash-lite-preview","contexts": [{"condition": 
{"useGemini3_1FlashLite": false},"target": "gemini-2.5-flash-lite"}]},"flash": {"default": "gemini-3-flash-preview","contexts": [{"condition": {"hasAccessToPreview": false},"target": "gemini-2.5-flash"}]},"flash-lite": {"default": "gemini-2.5-flash-lite","contexts": [{"condition": {"useGemini3_1FlashLite": true},"target": "gemini-3.1-flash-lite-preview"}]}} -
Requires restart: Yes
-
-
modelConfigs.classifierIdResolutions(object):-
Description: Rules for resolving classifier tiers (flash, pro) to concrete model IDs.
-
Default:
{"flash": {"default": "gemini-3-flash-preview","contexts": [{"condition": {"requestedModels": ["auto-gemini-2.5", "gemini-2.5-pro"]},"target": "gemini-2.5-flash"},{"condition": {"requestedModels": ["auto-gemini-3", "gemini-3-pro-preview"]},"target": "gemini-3-flash-preview"}]},"pro": {"default": "gemini-3-pro-preview","contexts": [{"condition": {"requestedModels": ["auto-gemini-2.5", "gemini-2.5-pro"]},"target": "gemini-2.5-pro"},{"condition": {"useGemini3_1": true,"useCustomTools": true},"target": "gemini-3.1-pro-preview-customtools"},{"condition": {"useGemini3_1": true},"target": "gemini-3.1-pro-preview"}]}} -
Requires restart: Yes
-
-
modelConfigs.modelChains(object):-
Description: Availability policy chains defining fallback behavior for models.
-
Default:
{"preview": [{"model": "gemini-3-pro-preview","actions": {"terminal": "prompt","transient": "prompt","not_found": "prompt","unknown": "prompt"},"stateTransitions": {"terminal": "terminal","transient": "terminal","not_found": "terminal","unknown": "terminal"}},{"model": "gemini-3-flash-preview","isLastResort": true,"actions": {"terminal": "prompt","transient": "prompt","not_found": "prompt","unknown": "prompt"},"stateTransitions": {"terminal": "terminal","transient": "terminal","not_found": "terminal","unknown": "terminal"}}],"default": [{"model": "gemini-2.5-pro","actions": {"terminal": "prompt","transient": "prompt","not_found": "prompt","unknown": "prompt"},"stateTransitions": {"terminal": "terminal","transient": "terminal","not_found": "terminal","unknown": "terminal"}},{"model": "gemini-2.5-flash","isLastResort": true,"actions": {"terminal": "prompt","transient": "prompt","not_found": "prompt","unknown": "prompt"},"stateTransitions": {"terminal": "terminal","transient": "terminal","not_found": "terminal","unknown": "terminal"}}],"lite": [{"model": "gemini-2.5-flash-lite","actions": {"terminal": "silent","transient": "silent","not_found": "silent","unknown": "silent"},"stateTransitions": {"terminal": "terminal","transient": "terminal","not_found": "terminal","unknown": "terminal"}},{"model": "gemini-2.5-flash","actions": {"terminal": "silent","transient": "silent","not_found": "silent","unknown": "silent"},"stateTransitions": {"terminal": "terminal","transient": "terminal","not_found": "terminal","unknown": "terminal"}},{"model": "gemini-2.5-pro","isLastResort": true,"actions": {"terminal": "silent","transient": "silent","not_found": "silent","unknown": "silent"},"stateTransitions": {"terminal": "terminal","transient": "terminal","not_found": "terminal","unknown": "terminal"}}]} -
Requires restart: Yes
-
agents
-
agents.overrides(object):- Description: Override settings for specific agents, e.g., to disable an agent, set a custom model config, or set a custom run config.
- Default:
{} - Requires restart: Yes
-
agents.browser.sessionMode(enum):- Description: Session mode: ‘persistent’, ‘isolated’, or ‘existing’.
- Default:
"persistent" - Values:
"persistent","isolated","existing" - Requires restart: Yes
-
agents.browser.headless(boolean):- Description: Run browser in headless mode.
- Default:
false - Requires restart: Yes
-
agents.browser.profilePath(string):- Description: Path to browser profile directory for session persistence.
- Default:
undefined - Requires restart: Yes
-
agents.browser.visualModel(string):- Description: Model for the visual agent’s analyze_screenshot tool. When set, enables the tool.
- Default:
undefined - Requires restart: Yes
-
agents.browser.allowedDomains(array):-
Description: A list of allowed domains for the browser agent (e.g., ["github.com", "*.google.com"]).
-
Default:
["github.com", "*.google.com", "localhost"] -
Requires restart: Yes
-
-
agents.browser.disableUserInput(boolean):- Description: Disable user input on browser window during automation.
- Default:
true
-
agents.browser.maxActionsPerTask(number):- Description: The maximum number of tool calls allowed per browser task. Enforcement is hard: the agent will be terminated when the limit is reached.
- Default:
100
-
agents.browser.confirmSensitiveActions(boolean):- Description: Require manual confirmation for sensitive browser actions (e.g., fill_form, evaluate_script).
- Default:
false - Requires restart: Yes
-
agents.browser.blockFileUploads(boolean):- Description: Hard-block file upload requests from the browser agent.
- Default:
false - Requires restart: Yes
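Putting several of the browser agent settings together, a workspace settings.json might look like the following sketch (the specific values are illustrative):

```json
{
  "agents": {
    "browser": {
      "sessionMode": "isolated",
      "headless": true,
      "allowedDomains": ["github.com", "*.google.com", "localhost"],
      "maxActionsPerTask": 50,
      "confirmSensitiveActions": true
    }
  }
}
```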
context
-
context.fileName(string | string[]):- Description: The name of the context file or files to load into memory. Accepts either a single string or an array of strings.
- Default:
undefined
-
context.importFormat(string):- Description: The format to use when importing memory.
- Default:
undefined
-
context.includeDirectoryTree(boolean):- Description: Whether to include the directory tree of the current working directory in the initial request to the model.
- Default:
true
-
context.discoveryMaxDirs(number):- Description: Maximum number of directories to search for memory.
- Default:
200
-
context.memoryBoundaryMarkers(array):-
Description: File or directory names that mark the boundary for GEMINI.md discovery. The upward traversal stops at the first directory containing any of these markers. An empty array disables parent traversal.
-
Default:
[".git"] -
Requires restart: Yes
-
-
context.includeDirectories(array):- Description: Additional directories to include in the workspace context. Missing directories will be skipped with a warning.
- Default:
[]
-
context.loadMemoryFromIncludeDirectories(boolean):- Description: Controls how /memory reload loads GEMINI.md files. When true, include directories are scanned; when false, only the current directory is used.
- Default:
false
-
context.fileFiltering.respectGitIgnore(boolean):- Description: Respect .gitignore files when searching.
- Default:
true - Requires restart: Yes
-
context.fileFiltering.respectGeminiIgnore(boolean):- Description: Respect .geminiignore files when searching.
- Default:
true - Requires restart: Yes
-
context.fileFiltering.enableRecursiveFileSearch(boolean):- Description: Enable recursive file search functionality when completing @ references in the prompt.
- Default:
true - Requires restart: Yes
-
context.fileFiltering.enableFuzzySearch(boolean):- Description: Enable fuzzy search when searching for files.
- Default:
true - Requires restart: Yes
-
context.fileFiltering.customIgnoreFilePaths(array):- Description: Additional ignore file paths to respect. These files take precedence over .geminiignore and .gitignore. Files earlier in the array take precedence over files later in the array, e.g. the first file takes precedence over the second one.
- Default:
[] - Requires restart: Yes
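A combined context configuration might look like this sketch (the file names AGENTS.md and .aiignore and the directory path are illustrative assumptions, not defaults):

```json
{
  "context": {
    "fileName": ["GEMINI.md", "AGENTS.md"],
    "includeDirectories": ["../shared-lib"],
    "memoryBoundaryMarkers": [".git"],
    "fileFiltering": {
      "respectGitIgnore": true,
      "customIgnoreFilePaths": [".aiignore"]
    }
  }
}
```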
-
tools.sandbox(string):- Description: Legacy full-process sandbox execution environment. Set to a boolean to enable or disable the sandbox, provide a string path to a sandbox profile, or specify an explicit sandbox command (e.g., “docker”, “podman”, “lxc”, “windows-native”).
- Default:
undefined - Requires restart: Yes
-
tools.sandboxAllowedPaths(array):- Description: List of additional paths that the sandbox is allowed to access.
- Default:
[] - Requires restart: Yes
-
tools.sandboxNetworkAccess(boolean):- Description: Whether the sandbox is allowed to access the network.
- Default:
false - Requires restart: Yes
-
tools.shell.enableInteractiveShell(boolean):- Description: Use node-pty for an interactive shell experience. Fallback to child_process still applies.
- Default:
true - Requires restart: Yes
-
tools.shell.backgroundCompletionBehavior(enum):- Description: Controls what happens when a background shell command finishes. ‘silent’ (default): quietly exits in background. ‘inject’: automatically returns output to agent. ‘notify’: shows brief message in chat.
- Default:
"silent" - Values:
"silent","inject","notify"
-
tools.shell.pager(string):- Description: The pager command to use for shell output. Defaults to cat.
- Default:
"cat"
-
tools.shell.showColor(boolean):- Description: Show color in shell output.
- Default:
true
-
tools.shell.inactivityTimeout(number):- Description: The maximum time in seconds allowed without output from the shell command. Defaults to 5 minutes.
- Default:
300
-
tools.shell.enableShellOutputEfficiency(boolean):- Description: Enable shell output efficiency optimizations for better performance.
- Default:
true
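The shell-related settings above nest under tools.shell, as in this sketch (the pager and timeout values are illustrative, not the defaults):

```json
{
  "tools": {
    "shell": {
      "enableInteractiveShell": true,
      "backgroundCompletionBehavior": "notify",
      "pager": "less",
      "inactivityTimeout": 600
    }
  }
}
```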
-
tools.core(array):- Description: Restrict the set of built-in tools with an allowlist. Match semantics mirror tools.allowed; see the built-in tools documentation for available names.
- Default:
undefined - Requires restart: Yes
-
tools.allowed(array):- Description: Tool names that bypass the confirmation dialog. Useful for trusted commands (for example ["run_shell_command(git)", "run_shell_command(npm test)"]). See shell tool command restrictions for matching details.
- Default:
undefined - Requires restart: Yes
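For example, tools.core and tools.allowed can be combined to both restrict the available built-in tools and pre-approve specific commands (a sketch; the specific tool names assume the built-in read_file and run_shell_command tools):

```json
{
  "tools": {
    "core": ["read_file", "run_shell_command"],
    "allowed": ["run_shell_command(git)", "run_shell_command(npm test)"]
  }
}
```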
-
tools.exclude(array):- Description: Tool names to exclude from discovery.
- Default:
undefined - Requires restart: Yes
-
tools.discoveryCommand(string):- Description: Command to run for tool discovery.
- Default:
undefined - Requires restart: Yes
-
tools.callCommand(string):- Description: Defines a custom shell command for invoking discovered tools. The command must take the tool name as the first argument, read JSON arguments from stdin, and emit JSON results on stdout.
- Default:
undefined - Requires restart: Yes
-
tools.useRipgrep(boolean):- Description: Use ripgrep for file content search instead of the fallback implementation. Provides faster search performance.
- Default:
true
-
tools.truncateToolOutputThreshold(number):- Description: Maximum characters to show when truncating large tool outputs. Set to 0 or negative to disable truncation.
- Default:
40000 - Requires restart: Yes
-
tools.disableLLMCorrection(boolean):- Description: Disable LLM-based error correction for edit tools. When enabled, tools will fail immediately if exact string matches are not found, instead of attempting to self-correct.
- Default:
true - Requires restart: Yes
-
mcp.serverCommand(string):- Description: Command to start an MCP server.
- Default:
undefined - Requires restart: Yes
-
mcp.allowed(array):- Description: A list of MCP servers to allow.
- Default:
undefined - Requires restart: Yes
-
mcp.excluded(array):- Description: A list of MCP servers to exclude.
- Default:
undefined - Requires restart: Yes
useWriteTodos
useWriteTodos(boolean):- Description: Enable the write_todos tool.
- Default:
true
security
-
security.toolSandboxing(boolean):- Description: Tool-level sandboxing. Isolates individual tools instead of the entire CLI process.
- Default:
false - Requires restart: Yes
-
security.disableYoloMode(boolean):- Description: Disable YOLO mode, even if enabled by a flag.
- Default:
false - Requires restart: Yes
-
security.disableAlwaysAllow(boolean):- Description: Disable “Always allow” options in tool confirmation dialogs.
- Default:
false - Requires restart: Yes
-
security.enablePermanentToolApproval(boolean):- Description: Enable the “Allow for all future sessions” option in tool confirmation dialogs.
- Default:
false
-
security.autoAddToPolicyByDefault(boolean):- Description: When enabled, the “Allow for all future sessions” option becomes the default choice for low-risk tools in trusted workspaces.
- Default:
false
-
security.blockGitExtensions(boolean):- Description: Blocks installing and loading extensions from Git.
- Default:
false - Requires restart: Yes
-
security.allowedExtensions(array):- Description: List of Regex patterns for allowed extensions. If nonempty, only extensions that match the patterns in this list are allowed. Overrides the blockGitExtensions setting.
- Default:
[] - Requires restart: Yes
-
security.folderTrust.enabled(boolean):- Description: Setting to track whether Folder trust is enabled.
- Default:
true - Requires restart: Yes
-
security.environmentVariableRedaction.allowed(array):- Description: Environment variables to always allow (bypass redaction).
- Default:
[] - Requires restart: Yes
-
security.environmentVariableRedaction.blocked(array):- Description: Environment variables to always redact.
- Default:
[] - Requires restart: Yes
-
security.environmentVariableRedaction.enabled(boolean):- Description: Enable redaction of environment variables that may contain secrets.
- Default:
false - Requires restart: Yes
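A sketch of enabling environment-variable redaction with explicit allow and block lists (the variable names are illustrative):

```json
{
  "security": {
    "environmentVariableRedaction": {
      "enabled": true,
      "allowed": ["PATH", "HOME"],
      "blocked": ["MY_SERVICE_TOKEN"]
    }
  }
}
```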
-
security.auth.selectedType(string):- Description: The currently selected authentication type.
- Default:
undefined - Requires restart: Yes
-
security.auth.enforcedType(string):- Description: The required auth type. If this does not match the selected auth type, the user will be prompted to re-authenticate.
- Default:
undefined - Requires restart: Yes
-
security.auth.useExternal(boolean):- Description: Whether to use an external authentication flow.
- Default:
undefined - Requires restart: Yes
-
security.enableConseca(boolean):- Description: Enable the context-aware security checker. This feature uses an LLM to dynamically generate and enforce security policies for tool use based on your prompt, providing an additional layer of protection against unintended actions.
- Default:
false - Requires restart: Yes
advanced
-
advanced.autoConfigureMemory(boolean):- Description: Automatically configure Node.js memory limits. Note: Because memory is allocated during the initial process boot, this setting is only read from the global user settings file and ignores workspace-level overrides.
- Default:
true - Requires restart: Yes
-
advanced.dnsResolutionOrder(string):- Description: The DNS resolution order.
- Default:
undefined - Requires restart: Yes
-
advanced.excludedEnvVars(array):-
Description: Environment variables to exclude from project context.
-
Default:
["DEBUG", "DEBUG_MODE"]
-
-
advanced.bugCommand(object):- Description: Configuration for the bug report command.
- Default:
undefined
experimental
-
experimental.adk.agentSessionNoninteractiveEnabled(boolean):- Description: Enable non-interactive agent sessions.
- Default:
false - Requires restart: Yes
-
experimental.adk.agentSessionInteractiveEnabled(boolean):- Description: Enable the agent session implementation for the interactive CLI.
- Default:
false - Requires restart: Yes
-
experimental.enableAgents(boolean):- Description: Enable local and remote subagents.
- Default:
true - Requires restart: Yes
-
experimental.worktrees(boolean):- Description: Enable automated Git worktree management for parallel work.
- Default:
false - Requires restart: Yes
-
experimental.extensionManagement(boolean):- Description: Enable extension management features.
- Default:
true - Requires restart: Yes
-
experimental.extensionConfig(boolean):- Description: Enable requesting and fetching of extension settings.
- Default:
true - Requires restart: Yes
-
experimental.extensionRegistry(boolean):- Description: Enable extension registry explore UI.
- Default:
false - Requires restart: Yes
-
experimental.extensionRegistryURI(string):- Description: The URI (web URL or local file path) of the extension registry.
- Default:
"https://geminicli.com/extensions.json" - Requires restart: Yes
-
experimental.extensionReloading(boolean):- Description: Enables extension loading/unloading within the CLI session.
- Default:
false - Requires restart: Yes
-
experimental.jitContext(boolean):- Description: Enable Just-In-Time (JIT) context loading.
- Default:
false - Requires restart: Yes
-
experimental.useOSC52Paste(boolean):- Description: Use OSC 52 for pasting. This may be more robust than the default system when using remote terminal sessions (if your terminal is configured to allow it).
- Default:
false
-
experimental.useOSC52Copy(boolean):- Description: Use OSC 52 for copying. This may be more robust than the default system when using remote terminal sessions (if your terminal is configured to allow it).
- Default:
false
-
experimental.taskTracker(boolean):- Description: Enable task tracker tools.
- Default:
false - Requires restart: Yes
-
experimental.modelSteering(boolean):- Description: Enable model steering (user hints) to guide the model during tool execution.
- Default:
false
-
experimental.directWebFetch(boolean):- Description: Enable web fetch behavior that bypasses LLM summarization.
- Default:
false - Requires restart: Yes
-
experimental.dynamicModelConfiguration(boolean):- Description: Enable dynamic model configuration (definitions, resolutions, and chains) via settings.
- Default:
false - Requires restart: Yes
-
experimental.gemmaModelRouter.enabled(boolean):- Description: Enable the Gemma Model Router (experimental). Requires a local endpoint serving Gemma via the Gemini API using LiteRT-LM shim.
- Default:
false - Requires restart: Yes
-
experimental.gemmaModelRouter.classifier.host(string):- Description: The host of the classifier.
- Default:
"http://localhost:9379" - Requires restart: Yes
-
experimental.gemmaModelRouter.classifier.model(string):- Description: The model to use for the classifier. Only tested on gemma3-1b-gpu-custom.
- Default:
"gemma3-1b-gpu-custom" - Requires restart: Yes
-
experimental.memoryManager(boolean):- Description: Replace the built-in save_memory tool with a memory manager subagent that supports adding, removing, de-duplicating, and organizing memories.
- Default:
false - Requires restart: Yes
-
experimental.generalistProfile(boolean):- Description: Suitable for general coding and software development tasks.
- Default:
false - Requires restart: Yes
-
experimental.contextManagement(boolean):- Description: Enable logic for context management.
- Default:
false - Requires restart: Yes
-
experimental.topicUpdateNarration(boolean):- Description: Enable the experimental Topic & Update communication model for reduced chattiness and structured progress reporting.
- Default:
false
skills
-
skills.enabled(boolean):- Description: Enable Agent Skills.
- Default:
true - Requires restart: Yes
-
skills.disabled(array):- Description: List of disabled skills.
- Default:
[] - Requires restart: Yes
hooksConfig
-
hooksConfig.enabled(boolean):- Description: Canonical toggle for the hooks system. When disabled, no hooks will be executed.
- Default:
true - Requires restart: Yes
-
hooksConfig.disabled(array):- Description: List of hook names (commands) that should be disabled. Hooks in this list will not execute even if configured.
- Default:
[]
-
hooksConfig.notifications(boolean):- Description: Show visual indicators when hooks are executing.
- Default:
true
-
hooks.BeforeTool(array):- Description: Hooks that execute before tool execution. Can intercept, validate, or modify tool calls.
- Default:
[]
-
hooks.AfterTool(array):- Description: Hooks that execute after tool execution. Can process results, log outputs, or trigger follow-up actions.
- Default:
[]
-
hooks.BeforeAgent(array):- Description: Hooks that execute before agent loop starts. Can set up context or initialize resources.
- Default:
[]
-
hooks.AfterAgent(array):- Description: Hooks that execute after agent loop completes. Can perform cleanup or summarize results.
- Default:
[]
-
hooks.Notification(array):- Description: Hooks that execute on notification events (errors, warnings, info). Can log or alert on specific conditions.
- Default:
[]
-
hooks.SessionStart(array):- Description: Hooks that execute when a session starts. Can initialize session-specific resources or state.
- Default:
[]
-
hooks.SessionEnd(array):- Description: Hooks that execute when a session ends. Can perform cleanup or persist session data.
- Default:
[]
-
hooks.PreCompress(array):- Description: Hooks that execute before chat history compression. Can back up or analyze conversation before compression.
- Default:
[]
-
hooks.BeforeModel(array):- Description: Hooks that execute before LLM requests. Can modify prompts, inject context, or control model parameters.
- Default:
[]
-
hooks.AfterModel(array):- Description: Hooks that execute after LLM responses. Can process outputs, extract information, or log interactions.
- Default:
[]
-
hooks.BeforeToolSelection(array):- Description: Hooks that execute before tool selection. Can filter or prioritize available tools dynamically.
- Default:
[]
contextManagement
-
contextManagement.historyWindow.maxTokens(number):- Description: The number of tokens to allow before triggering compression.
- Default:
150000 - Requires restart: Yes
-
contextManagement.historyWindow.retainedTokens(number):- Description: The number of tokens to always retain.
- Default:
40000 - Requires restart: Yes
-
contextManagement.messageLimits.normalMaxTokens(number):- Description: The target number of tokens to budget for a normal conversation turn.
- Default:
2500 - Requires restart: Yes
-
contextManagement.messageLimits.retainedMaxTokens(number):- Description: The maximum number of tokens a single conversation turn can consume before truncation.
- Default:
12000 - Requires restart: Yes
-
contextManagement.messageLimits.normalizationHeadRatio(number):- Description: The ratio of tokens to retain from the beginning of a truncated message (0.0 to 1.0).
- Default:
0.25 - Requires restart: Yes
-
contextManagement.tools.distillation.maxOutputTokens(number):- Description: Maximum tokens to show to the model when truncating large tool outputs.
- Default:
10000 - Requires restart: Yes
-
contextManagement.tools.distillation.summarizationThresholdTokens(number):- Description: Threshold above which truncated tool outputs will be summarized by an LLM.
- Default:
20000 - Requires restart: Yes
-
contextManagement.tools.outputMasking.protectionThresholdTokens(number):- Description: Minimum number of tokens to protect from masking (most recent tool outputs).
- Default:
50000 - Requires restart: Yes
-
contextManagement.tools.outputMasking.minPrunableThresholdTokens(number):- Description: Minimum prunable tokens required to trigger a masking pass.
- Default:
30000 - Requires restart: Yes
-
contextManagement.tools.outputMasking.protectLatestTurn(boolean):- Description: Ensures the absolute latest turn is never masked, regardless of token count.
- Default:
true - Requires restart: Yes
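Since the contextManagement logic appears to be gated behind the experimental.contextManagement flag documented above, a sketch of tuning the history window might pair the two (the token values are illustrative, not the defaults):

```json
{
  "experimental": { "contextManagement": true },
  "contextManagement": {
    "historyWindow": { "maxTokens": 120000, "retainedTokens": 40000 },
    "tools": {
      "distillation": { "maxOutputTokens": 8000 }
    }
  }
}
```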
-
admin.secureModeEnabled(boolean):- Description: If true, disallows YOLO mode and “Always allow” options from being used.
- Default:
false
-
admin.extensions.enabled(boolean):- Description: If false, disallows extensions from being installed or used.
- Default:
true
-
admin.mcp.enabled(boolean):- Description: If false, disallows MCP servers from being used.
- Default:
true
-
admin.mcp.config(object):- Description: Admin-configured MCP servers (allowlist).
- Default:
{}
-
admin.mcp.requiredConfig(object):- Description: Admin-required MCP servers that are always injected.
- Default:
{}
-
admin.skills.enabled(boolean):- Description: If false, disallows agent skills from being used.
- Default:
true
mcpServers
Configures connections to one or more Model Context Protocol (MCP) servers for discovering and using custom tools. Gemini CLI attempts to connect to each configured MCP server to discover available tools. Every discovered tool is prefixed with mcp_ and its server alias to form a fully qualified name (FQN) (for example, mcp_serverAlias_actualToolName) to avoid conflicts. Note that the system might strip certain schema properties from MCP tool definitions for compatibility. At least one of command, url, or httpUrl must be provided. If multiple are specified, the order of precedence is httpUrl, then url, then command.
mcpServers.<SERVER_NAME>(object): The server parameters for the named server.
- command(string, optional): The command to execute to start the MCP server via standard I/O.
- args(array of strings, optional): Arguments to pass to the command.
- env(object, optional): Environment variables to set for the server process.
- cwd(string, optional): The working directory in which to start the server.
- url(string, optional): The URL of an MCP server that uses Server-Sent Events (SSE) for communication.
- httpUrl(string, optional): The URL of an MCP server that uses streamable HTTP for communication.
- headers(object, optional): A map of HTTP headers to send with requests to url or httpUrl.
- timeout(number, optional): Timeout in milliseconds for requests to this MCP server.
- trust(boolean, optional): Trust this server and bypass all tool call confirmations.
- description(string, optional): A brief description of the server, which may be used for display purposes.
- includeTools(array of strings, optional): List of tool names to include from this MCP server. When specified, only the tools listed here will be available from this server (allowlist behavior). If not specified, all tools from the server are enabled by default.
- excludeTools(array of strings, optional): List of tool names to exclude from this MCP server. Tools listed here will not be available to the model, even if they are exposed by the server. Note: excludeTools takes precedence over includeTools; if a tool is in both lists, it will be excluded.
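A sketch combining a streamable-HTTP server and a stdio server (the server names, URL, tool names, and $-style token placeholder are all illustrative assumptions):

```json
{
  "mcpServers": {
    "issueTracker": {
      "httpUrl": "http://localhost:3000/mcp",
      "headers": { "Authorization": "Bearer $MY_TOKEN" },
      "timeout": 30000,
      "includeTools": ["list_issues", "create_issue"]
    },
    "localTools": {
      "command": "python",
      "args": ["mcp_server.py"],
      "cwd": "./tools"
    }
  }
}
```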
telemetry
Configures logging and metrics collection for Gemini CLI. For more information, see Telemetry.
- Properties:
  - `enabled` (boolean): Whether or not telemetry is enabled.
  - `target` (string): The destination for collected telemetry. Supported values are `local` and `gcp`.
  - `otlpEndpoint` (string): The endpoint for the OTLP Exporter.
  - `otlpProtocol` (string): The protocol for the OTLP Exporter (`grpc` or `http`).
  - `logPrompts` (boolean): Whether or not to include the content of user prompts in the logs.
  - `outfile` (string): The file to write telemetry to when `target` is `local`.
  - `useCollector` (boolean): Whether to use an external OTLP collector.
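Putting these properties together, a sketch of a telemetry block that collects locally without logging prompt content might look like this (the output path is illustrative):

```json
{
  "telemetry": {
    "enabled": true,
    "target": "local",
    "otlpProtocol": "http",
    "logPrompts": false,
    "outfile": ".gemini/telemetry.log"
  }
}
```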
Example settings.json
Here is an example of a `settings.json` file using the nested structure introduced in v0.3.0:
```json
{
  "general": {
    "vimMode": true,
    "preferredEditor": "code",
    "sessionRetention": { "enabled": true, "maxAge": "30d", "maxCount": 100 }
  },
  "ui": {
    "theme": "GitHub",
    "hideBanner": true,
    "hideTips": false,
    "customWittyPhrases": [
      "You forget a thousand things every day. Make sure this is one of ’em",
      "Connecting to AGI"
    ]
  },
  "tools": {
    "sandbox": "docker",
    "discoveryCommand": "bin/get_tools",
    "callCommand": "bin/call_tool",
    "exclude": ["write_file"]
  },
  "mcpServers": {
    "mainServer": { "command": "bin/mcp_server.py" },
    "anotherServer": { "command": "node", "args": ["mcp_server.js", "--verbose"] }
  },
  "telemetry": {
    "enabled": true,
    "target": "local",
    "otlpEndpoint": "http://localhost:4317",
    "logPrompts": true
  },
  "privacy": { "usageStatisticsEnabled": true },
  "model": {
    "name": "gemini-1.5-pro-latest",
    "maxSessionTurns": 10,
    "summarizeToolOutput": { "run_shell_command": { "tokenBudget": 100 } }
  },
  "context": {
    "fileName": ["CONTEXT.md", "GEMINI.md"],
    "includeDirectories": ["path/to/dir1", "~/path/to/dir2", "../path/to/dir3"],
    "loadFromIncludeDirectories": true,
    "fileFiltering": { "respectGitIgnore": false }
  },
  "advanced": {
    "excludedEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"]
  }
}
```
Shell history
The CLI keeps a history of shell commands you run. To avoid conflicts between different projects, this history is stored in a project-specific directory within your user’s home folder.

- Location: `~/.gemini/tmp/<project_hash>/shell_history`
  - `<project_hash>` is a unique identifier generated from your project’s root path.
  - The history is stored in a file named `shell_history`.
Environment variables and .env files
Environment variables are a common way to configure applications, especially for sensitive information like API keys or for settings that might change between environments. For authentication setup, see the Authentication documentation which covers all available authentication methods.
The CLI automatically loads environment variables from an `.env` file. The loading order is:

1. `.env` file in the current working directory.
2. If not found, it searches upward through parent directories until it finds an `.env` file or reaches the project root (identified by a `.git` folder) or the home directory.
3. If still not found, it looks for `~/.env` (in the user’s home directory).
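The search order above can be sketched in a few lines of Python. This is a simplified illustration of the documented behavior, not the CLI's actual implementation:

```python
from pathlib import Path
from typing import Optional

def find_env_file(start: Path, home: Path) -> Optional[Path]:
    """Walk upward from `start`, mimicking the documented .env search order."""
    current = start.resolve()
    while True:
        candidate = current / ".env"
        if candidate.is_file():
            return candidate
        # Stop climbing at the project root (a directory containing .git),
        # at the home directory, or at the filesystem root.
        if (current / ".git").is_dir() or current == home or current.parent == current:
            break
        current = current.parent
    fallback = home / ".env"  # final fallback: ~/.env
    return fallback if fallback.is_file() else None
```

For example, running from a subdirectory of a git repository would pick up the `.env` at the repository root, while running outside any project would fall back to `~/.env`.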
Environment variable exclusion: Some environment variables
(like DEBUG and
DEBUG_MODE) are automatically excluded from being
loaded from project .env
files to prevent interference with gemini-cli behavior. Variables from
.gemini/.env files are never excluded. You can
customize this behavior using
the advanced.excludedEnvVars setting in your settings.json file.
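For instance, assuming you also wanted to keep `NODE_ENV` out of project `.env` files, the setting might look like this (the variable list is illustrative):

```json
{
  "advanced": {
    "excludedEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"]
  }
}
```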
- `GEMINI_API_KEY`:
  - Your API key for the Gemini API.
  - One of several available authentication methods.
  - Set this in your shell profile (for example, `~/.bashrc`, `~/.zshrc`) or an `.env` file.
- `GEMINI_MODEL`:
  - Specifies the default Gemini model to use.
  - Overrides the hardcoded default.
  - Example: `export GEMINI_MODEL="gemini-3-flash-preview"` (Windows PowerShell: `$env:GEMINI_MODEL="gemini-3-flash-preview"`)
- `GEMINI_CLI_IDE_PID`:
  - Manually specifies the PID of the IDE process to use for integration. This is useful when running Gemini CLI in a standalone terminal while still wanting to associate it with a specific IDE instance.
  - Overrides the automatic IDE detection logic.
- `GEMINI_CLI_HOME`:
  - Specifies the root directory for Gemini CLI’s user-level configuration and storage.
  - By default, this is the user’s system home directory. The CLI will create a `.gemini` folder inside this directory.
  - Useful for shared compute environments or keeping CLI state isolated.
  - Example: `export GEMINI_CLI_HOME="/path/to/user/config"` (Windows PowerShell: `$env:GEMINI_CLI_HOME="C:\path\to\user\config"`)
- `GEMINI_CLI_SURFACE`:
  - Specifies a custom label to include in the `User-Agent` header for API traffic reporting.
  - This is useful for tracking specific internal tools or distribution channels.
  - Example: `export GEMINI_CLI_SURFACE="my-custom-tool"` (Windows PowerShell: `$env:GEMINI_CLI_SURFACE="my-custom-tool"`)
- `GOOGLE_API_KEY`:
  - Your Google Cloud API key.
  - Required for using Vertex AI in express mode.
  - Ensure you have the necessary permissions.
  - Example: `export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"` (Windows PowerShell: `$env:GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"`)
- `GOOGLE_CLOUD_PROJECT`:
  - Your Google Cloud Project ID.
  - Required for using Code Assist or Vertex AI.
  - If using Vertex AI, ensure you have the necessary permissions in this project.
  - Cloud Shell note: When running in a Cloud Shell environment, this variable defaults to a special project allocated for Cloud Shell users. If you have `GOOGLE_CLOUD_PROJECT` set in your global environment in Cloud Shell, it will be overridden by this default. To use a different project in Cloud Shell, you must define `GOOGLE_CLOUD_PROJECT` in a `.env` file.
  - Example: `export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"` (Windows PowerShell: `$env:GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"`)
- `GOOGLE_APPLICATION_CREDENTIALS` (string):
  - Description: The path to your Google Application Credentials JSON file.
  - Example: `export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/credentials.json"` (Windows PowerShell: `$env:GOOGLE_APPLICATION_CREDENTIALS="C:\path\to\your\credentials.json"`)
- `GOOGLE_GENAI_API_VERSION`:
  - Specifies the API version to use for Gemini API requests.
  - When set, overrides the default API version used by the SDK.
  - Example: `export GOOGLE_GENAI_API_VERSION="v1"` (Windows PowerShell: `$env:GOOGLE_GENAI_API_VERSION="v1"`)
- `OTLP_GOOGLE_CLOUD_PROJECT`:
  - Your Google Cloud Project ID for telemetry in Google Cloud.
  - Example: `export OTLP_GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"` (Windows PowerShell: `$env:OTLP_GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"`)
- `GEMINI_TELEMETRY_ENABLED`:
  - Set to `true` or `1` to enable telemetry. Any other value is treated as disabling it.
  - Overrides the `telemetry.enabled` setting.
- `GEMINI_TELEMETRY_TARGET`:
  - Sets the telemetry target (`local` or `gcp`).
  - Overrides the `telemetry.target` setting.
- `GEMINI_TELEMETRY_OTLP_ENDPOINT`:
  - Sets the OTLP endpoint for telemetry.
  - Overrides the `telemetry.otlpEndpoint` setting.
- `GEMINI_TELEMETRY_OTLP_PROTOCOL`:
  - Sets the OTLP protocol (`grpc` or `http`).
  - Overrides the `telemetry.otlpProtocol` setting.
- `GEMINI_TELEMETRY_LOG_PROMPTS`:
  - Set to `true` or `1` to enable logging of user prompts. Any other value is treated as disabling it.
  - Overrides the `telemetry.logPrompts` setting.
- `GEMINI_TELEMETRY_OUTFILE`:
  - Sets the file path to write telemetry to when the target is `local`.
  - Overrides the `telemetry.outfile` setting.
- `GEMINI_TELEMETRY_USE_COLLECTOR`:
  - Set to `true` or `1` to enable the use of an external OTLP collector. Any other value is treated as disabling it.
  - Overrides the `telemetry.useCollector` setting.
- `GOOGLE_CLOUD_LOCATION`:
  - Your Google Cloud Project Location (for example, us-central1).
  - Required for using Vertex AI in non-express mode.
  - Example: `export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"` (Windows PowerShell: `$env:GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"`)
- `GEMINI_SANDBOX`:
  - Alternative to the `sandbox` setting in `settings.json`.
  - Accepts `true`, `false`, `docker`, `podman`, or a custom command string.
- `GEMINI_SYSTEM_MD`:
  - Replaces the built-in system prompt with content from a Markdown file.
  - `true`/`1`: Use the project default path `./.gemini/system.md`.
  - Any other string: Treat as a path (relative and absolute paths are supported, and `~` expands).
  - `false`/`0` or unset: Use the built-in prompt. See System Prompt Override.
- `GEMINI_WRITE_SYSTEM_MD`:
  - Writes the current built-in system prompt to a file for review.
  - `true`/`1`: Write to `./.gemini/system.md`. Otherwise, treat the value as a path.
  - Run the CLI once with this set to generate the file.
- `SEATBELT_PROFILE` (macOS specific):
  - Switches the Seatbelt (`sandbox-exec`) profile on macOS.
  - `permissive-open`: (Default) Restricts writes to the project folder (and a few other folders; see `packages/cli/src/utils/sandbox-macos-permissive-open.sb`) but allows other operations.
  - `restrictive-open`: Declines operations by default, allows network.
  - `strict-open`: Restricts both reads and writes to the working directory, allows network.
  - `strict-proxied`: Same as `strict-open` but routes network through a proxy.
  - `<profile_name>`: Uses a custom profile. To define a custom profile, create a file named `sandbox-macos-<profile_name>.sb` in your project’s `.gemini/` directory (for example, `my-project/.gemini/sandbox-macos-custom.sb`).
- `DEBUG` or `DEBUG_MODE` (often used by underlying libraries or the CLI itself):
  - Set to `true` or `1` to enable verbose debug logging, which can be helpful for troubleshooting.
  - Note: These variables are automatically excluded from project `.env` files by default to prevent interference with gemini-cli behavior. Use `.gemini/.env` files if you need to set these for gemini-cli specifically.
- `NO_COLOR`:
  - Set to any value to disable all color output in the CLI.
- `CLI_TITLE`:
  - Set to a string to customize the title of the CLI.
- `CODE_ASSIST_ENDPOINT`:
  - Specifies the endpoint for the code assist server.
  - This is useful for development and testing.
Environment variable redaction
To prevent accidental leakage of sensitive information, Gemini CLI automatically redacts potential secrets from environment variables when executing tools (such as shell commands). This “best effort” redaction applies to variables inherited from the system or loaded from `.env` files.
Default Redaction Rules:

- By Name: Variables are redacted if their names contain sensitive terms like `TOKEN`, `SECRET`, `PASSWORD`, `KEY`, `AUTH`, `CREDENTIAL`, `PRIVATE`, or `CERT`.
- By Value: Variables are redacted if their values match known secret patterns, such as:
  - Private keys (RSA, OpenSSH, PGP, etc.)
  - Certificates
  - URLs containing credentials
  - API keys and tokens (GitHub, Google, AWS, Stripe, Slack, etc.)
- Specific Blocklist: Certain variables like `CLIENT_ID`, `DB_URI`, `DATABASE_URL`, and `CONNECTION_STRING` are always redacted by default.
Allowlist (Never Redacted):

- Common system variables (for example, `PATH`, `HOME`, `USER`, `SHELL`, `TERM`, `LANG`).
- Variables starting with `GEMINI_CLI_`.
- GitHub Action specific variables.
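These rules can be illustrated with a simplified sketch. The term lists below come from this page, but the matching logic, precedence, and the single value pattern shown are illustrative only, not the CLI's actual implementation:

```python
import re

SENSITIVE_NAME_TERMS = ("TOKEN", "SECRET", "PASSWORD", "KEY", "AUTH",
                        "CREDENTIAL", "PRIVATE", "CERT")
ALWAYS_BLOCKED = {"CLIENT_ID", "DB_URI", "DATABASE_URL", "CONNECTION_STRING"}
SYSTEM_ALLOWLIST = {"PATH", "HOME", "USER", "SHELL", "TERM", "LANG"}
# One of the value patterns mentioned above: PEM private-key headers.
PRIVATE_KEY_RE = re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----")

def should_redact(name: str, value: str) -> bool:
    """Illustrative redaction check following the documented rules."""
    # Allowlist: common system variables and GEMINI_CLI_* are never redacted.
    if name in SYSTEM_ALLOWLIST or name.startswith("GEMINI_CLI_"):
        return False
    # Specific blocklist: always redacted by default.
    if name in ALWAYS_BLOCKED:
        return True
    # By name: redact if the name contains a sensitive term.
    if any(term in name.upper() for term in SENSITIVE_NAME_TERMS):
        return True
    # By value: redact if the value matches a known secret pattern.
    return bool(PRIVATE_KEY_RE.search(value))
```

Under this sketch, `MY_API_TOKEN` would be redacted by name, `DATABASE_URL` by the blocklist, and an innocuously named variable holding a PEM private key by its value.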
Configuration:

You can customize this behavior in your `settings.json` file:

- `security.allowedEnvironmentVariables`: A list of variable names to never redact, even if they match sensitive patterns.
- `security.blockedEnvironmentVariables`: A list of variable names to always redact, even if they don’t match sensitive patterns.
```json
{
  "security": {
    "allowedEnvironmentVariables": ["MY_PUBLIC_KEY", "NOT_A_SECRET_TOKEN"],
    "blockedEnvironmentVariables": ["INTERNAL_IP_ADDRESS"]
  }
}
```
Command-line arguments
Arguments passed directly when running the CLI can override other configurations for that specific session.
- `--acp`: Starts the agent in Agent Communication Protocol (ACP) mode.
- `--allowed-mcp-server-names`: A comma-separated list of MCP server names to allow for the session.
- `--allowed-tools <tool1,tool2,...>`:
  - A comma-separated list of tool names that will bypass the confirmation dialog.
  - Example: `gemini --allowed-tools "ShellTool(git status)"`
- `--approval-mode <mode>`:
  - Sets the approval mode for tool calls. Available modes:
    - `default`: Prompt for approval on each tool call (default behavior).
    - `auto_edit`: Automatically approve edit tools (`replace`, `write_file`) while prompting for others.
    - `yolo`: Automatically approve all tool calls (equivalent to `--yolo`).
    - `plan`: Read-only mode for tool calls (requires experimental planning to be enabled). Note: This mode is currently under development and not yet fully functional.
  - Cannot be used together with `--yolo`. Use `--approval-mode=yolo` instead of `--yolo` for the new unified approach.
  - Example: `gemini --approval-mode auto_edit`
- `--debug` (`-d`): Enables debug mode for this session, providing more verbose output. Open the debug console with F12 to see the additional logging.
- `--delete-session <identifier>`:
  - Deletes a specific chat session by its index number or full session UUID.
  - Use `--list-sessions` first to see available sessions, their indices, and UUIDs.
  - Example: `gemini --delete-session 3` or `gemini --delete-session a1b2c3d4-e5f6-7890-abcd-ef1234567890`
- `--extensions <extension_name ...>` (`-e <extension_name ...>`):
  - Specifies a list of extensions to use for the session. If not provided, all available extensions are used.
  - Use the special term `none` to disable all extensions: `gemini -e none`.
  - Example: `gemini -e my-extension -e my-other-extension`
- `--fake-responses`: Path to a file with fake model responses for testing.
- `--help` (or `-h`): Displays help information about command-line arguments.
- `--include-directories <dir1,dir2,...>`:
  - Includes additional directories in the workspace for multi-directory support.
  - Can be specified multiple times or as comma-separated values.
  - A maximum of 5 directories can be added.
  - Example: `--include-directories /path/to/project1,/path/to/project2` or `--include-directories /path/to/project1 --include-directories /path/to/project2`
- `--list-extensions` (`-l`): Lists all available extensions and exits.
- `--list-sessions`:
  - Lists all available chat sessions for the current project and exits.
  - Shows session indices, dates, message counts, and a preview of the first user message.
  - Example: `gemini --list-sessions`
- `--model <model_name>` (`-m <model_name>`):
  - Specifies the Gemini model to use for this session.
  - Example: `npm start -- --model gemini-3-pro-preview`
- `--output-format <format>`:
  - Description: Specifies the format of the CLI output for non-interactive mode.
  - Values:
    - `text`: (Default) The standard human-readable output.
    - `json`: A machine-readable JSON output.
    - `stream-json`: A streaming JSON output that emits real-time events.
  - Note: For structured output and scripting, use the `--output-format json` or `--output-format stream-json` flag.
- `--prompt <your_prompt>` (`-p <your_prompt>`):
  - Deprecated: Use positional arguments instead.
  - Used to pass a prompt directly to the command. This invokes Gemini CLI in a non-interactive mode.
- `--prompt-interactive <your_prompt>` (`-i <your_prompt>`):
  - Starts an interactive session with the provided prompt as the initial input.
  - The prompt is processed within the interactive session, not before it.
  - Cannot be used when piping input from stdin.
  - Example: `gemini -i "explain this code"`
- `--record-responses`: Path to a file to record model responses for testing.
- `--resume [session_id]` (`-r [session_id]`):
  - Resumes a previous chat session. Use `latest` for the most recent session, provide a session index number, or provide a full session UUID.
  - If no `session_id` is provided, defaults to `latest`.
  - Example: `gemini --resume 5`, `gemini --resume latest`, `gemini --resume a1b2c3d4-e5f6-7890-abcd-ef1234567890`, or `gemini --resume`
  - See Session Management for more details.
- `--sandbox` (`-s`): Enables sandbox mode for this session.
- `--screen-reader`: Enables screen reader mode, which adjusts the TUI for better compatibility with screen readers.
- `--version`: Displays the version of the CLI.
- `--yolo`: Enables YOLO mode, which automatically approves all tool calls.
Context files (hierarchical instructional context)
While not strictly configuration for the CLI’s behavior, context files (defaulting to `GEMINI.md` but configurable via the `context.fileName` setting) are crucial for configuring the instructional context (also referred to as “memory”) provided to the Gemini model. This powerful feature lets you give project-specific instructions, coding style guides, or any relevant background information to the AI, making its responses more tailored and accurate to your needs. The CLI includes UI elements, such as an indicator in the footer showing the number of loaded context files, to keep you informed about the active context.
- Purpose: These Markdown files contain instructions, guidelines, or context that you want the Gemini model to be aware of during your interactions. The system is designed to manage this instructional context hierarchically.
Example context file content (for example, GEMINI.md)
Here’s a conceptual example of what a context file at the root of a TypeScript project might contain:
```markdown
# Project: My Awesome TypeScript Library

## General Instructions:

- When generating new TypeScript code, follow the existing coding style.
- Ensure all new functions and classes have JSDoc comments.
- Prefer functional programming paradigms where appropriate.
- All code should be compatible with TypeScript 5.0 and Node.js 20+.

## Coding Style:

- Use 2 spaces for indentation.
- Interface names should be prefixed with `I` (for example, `IUserService`).
- Private class members should be prefixed with an underscore (`_`).
- Always use strict equality (`===` and `!==`).

## Specific Component: `src/api/client.ts`

- This file handles all outbound API requests.
- When adding new API call functions, ensure they include robust error handling and logging.
- Use the existing `fetchWithRetry` utility for all GET requests.

## Regarding Dependencies:

- Avoid introducing new external dependencies unless absolutely necessary.
- If a new dependency is required, state the reason.
```
This example demonstrates how you can provide general project context, specific coding conventions, and even notes about particular files or components. The more relevant and precise your context files are, the better the AI can assist you. Project-specific context files are highly encouraged to establish conventions and context.
- Hierarchical loading and precedence: The CLI implements a sophisticated hierarchical memory system by loading context files (for example, `GEMINI.md`) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the `/memory show` command. The typical loading order is:
  1. Global context file:
     - Location: `~/.gemini/<configured-context-filename>` (for example, `~/.gemini/GEMINI.md` in your user home directory).
     - Scope: Provides default instructions for all your projects.
  2. Project root and ancestor context files:
     - Location: The CLI searches for the configured context file in the current working directory and then in each parent directory up to either the project root (identified by a `.git` folder) or your home directory.
     - Scope: Provides context relevant to the entire project or a significant portion of it.
  3. Sub-directory context files (contextual/local):
     - Location: The CLI also scans for the configured context file in subdirectories below the current working directory (respecting common ignore patterns like `node_modules`, `.git`, etc.). The breadth of this search is limited to 200 directories by default, but can be configured with the `context.discoveryMaxDirs` setting in your `settings.json` file.
     - Scope: Allows for highly specific instructions relevant to a particular component, module, or subsection of your project.
- Concatenation and UI indication: The contents of all found context files are concatenated (with separators indicating their origin and path) and provided as part of the system prompt to the Gemini model. The CLI footer displays the count of loaded context files, giving you a quick visual cue about the active instructional context.
- Importing content: You can modularize your context files by importing other Markdown files using the `@path/to/file.md` syntax. For more details, see the Memory Import Processor documentation.
- Commands for memory management:
  - Use `/memory refresh` to force a re-scan and reload of all context files from all configured locations. This updates the AI’s instructional context.
  - Use `/memory show` to display the combined instructional context currently loaded, allowing you to verify the hierarchy and content being used by the AI.
  - See the Commands documentation for full details on the `/memory` command and its sub-commands (`show` and `refresh`).
By understanding and utilizing these configuration layers and the hierarchical nature of context files, you can effectively manage the AI’s memory and tailor Gemini CLI’s responses to your specific needs and projects.
Sandboxing
Gemini CLI can execute potentially unsafe operations (like shell commands and file modifications) within a sandboxed environment to protect your system.

Sandboxing is disabled by default, but you can enable it in a few ways:

- Using the `--sandbox` or `-s` flag.
- Setting the `GEMINI_SANDBOX` environment variable.
- Sandboxing is enabled by default when using `--yolo` or `--approval-mode=yolo`.
By default, it uses a pre-built gemini-cli-sandbox
Docker image.
For project-specific sandboxing needs, you can create a custom Dockerfile at
.gemini/sandbox.Dockerfile in your project’s root
directory. This Dockerfile
can be based on the base sandbox image:
```dockerfile
FROM gemini-cli-sandbox

# Add your custom dependencies or configurations here.
# Note: The base image runs as the non-root 'node' user.
# You must switch to 'root' to install system packages.
# For example:
# USER root
# RUN apt-get update && apt-get install -y some-package
# USER node
# COPY ./my-config /app/my-config
```
When `.gemini/sandbox.Dockerfile` exists, you can set the `BUILD_SANDBOX` environment variable when running Gemini CLI to automatically build the custom sandbox image:

```shell
BUILD_SANDBOX=1 gemini -s
```
Building a custom sandbox with BUILD_SANDBOX is only
supported when running
Gemini CLI from source. If you installed the CLI with npm, build the Docker
image separately and reference that image in your sandbox configuration.
Usage statistics
To help us improve Gemini CLI, we collect anonymized usage statistics. This data helps us understand how the CLI is used, identify common issues, and prioritize new features.
What we collect:
- Tool calls: We log the names of the tools that are called, whether they succeed or fail, and how long they take to execute. We do not collect the arguments passed to the tools or any data returned by them.
- API requests: We log the Gemini model used for each request, the duration of the request, and whether it was successful. We do not collect the content of the prompts or responses.
- Session information: We collect information about the configuration of the CLI, such as the enabled tools and the approval mode.
What we DON’T collect:
- Personally identifiable information (PII): We do not collect any personal information, such as your name, email address, or API keys.
- Prompt and response content: We do not log the content of your prompts or the responses from the Gemini model.
- File content: We do not log the content of any files that are read or written by the CLI.
How to opt out:
You can opt out of usage statistics collection at any time by setting the
usageStatisticsEnabled property to false under the privacy
category in
your settings.json file:
```json
{
  "privacy": {
    "usageStatisticsEnabled": false
  }
}
```
Gemini CLI ships with a set of default keyboard shortcuts for editing input, navigating history, and controlling the UI. Use this reference to learn the available combinations.
Basic Controls

| Command | Action | Keys |
|---|---|---|
| `basic.confirm` | Confirm the current selection or choice. | Enter |
| `basic.cancel` | Dismiss dialogs or cancel the current focus. | Esc, Ctrl+[ |
| `basic.quit` | Cancel the current request or quit the CLI when input is empty. | Ctrl+C |
| `basic.exit` | Exit the CLI when the input buffer is empty. | Ctrl+D |
Cursor Movement

| Command | Action | Keys |
|---|---|---|
| `cursor.home` | Move the cursor to the start of the line. | Ctrl+A, Home |
| `cursor.end` | Move the cursor to the end of the line. | Ctrl+E, End |
| `cursor.up` | Move the cursor up one line. | Up |
| `cursor.down` | Move the cursor down one line. | Down |
| `cursor.left` | Move the cursor one character to the left. | Left |
| `cursor.right` | Move the cursor one character to the right. | Right, Ctrl+F |
| `cursor.wordLeft` | Move the cursor one word to the left. | Ctrl+Left, Alt+Left, Alt+B |
| `cursor.wordRight` | Move the cursor one word to the right. | Ctrl+Right, Alt+Right, Alt+F |
Editing

| Command | Action | Keys |
|---|---|---|
| `edit.deleteRightAll` | Delete from the cursor to the end of the line. | Ctrl+K |
| `edit.deleteLeftAll` | Delete from the cursor to the start of the line. | Ctrl+U |
| `edit.clear` | Clear all text in the input field. | Ctrl+C |
| `edit.deleteWordLeft` | Delete the previous word. | Ctrl+Backspace, Alt+Backspace, Ctrl+W |
| `edit.deleteWordRight` | Delete the next word. | Ctrl+Delete, Alt+Delete, Alt+D |
| `edit.deleteLeft` | Delete the character to the left. | Backspace, Ctrl+H |
| `edit.deleteRight` | Delete the character to the right. | Delete, Ctrl+D |
| `edit.undo` | Undo the most recent text edit. | Cmd/Win+Z, Alt+Z |
| `edit.redo` | Redo the most recent undone text edit. | Ctrl+Shift+Z, Shift+Cmd/Win+Z, Alt+Shift+Z |
Scrolling

| Command | Action | Keys |
|---|---|---|
| `scroll.up` | Scroll content up. | Shift+Up |
| `scroll.down` | Scroll content down. | Shift+Down |
| `scroll.home` | Scroll to the top. | Ctrl+Home, Shift+Home |
| `scroll.end` | Scroll to the bottom. | Ctrl+End, Shift+End |
| `scroll.pageUp` | Scroll up by one page. | Page Up |
| `scroll.pageDown` | Scroll down by one page. | Page Down |
History & Search

| Command | Action | Keys |
|---|---|---|
| `history.previous` | Show the previous entry in history. | Ctrl+P |
| `history.next` | Show the next entry in history. | Ctrl+N |
| `history.search.start` | Start reverse search through history. | Ctrl+R |
| `history.search.submit` | Submit the selected reverse-search match. | Enter |
| `history.search.accept` | Accept a suggestion while reverse searching. | Tab |
Navigation

| Command | Action | Keys |
|---|---|---|
| `nav.up` | Move selection up in lists. | Up |
| `nav.down` | Move selection down in lists. | Down |
| `nav.dialog.up` | Move up within dialog options. | Up, K |
| `nav.dialog.down` | Move down within dialog options. | Down, J |
| `nav.dialog.next` | Move to the next item or question in a dialog. | Tab |
| `nav.dialog.previous` | Move to the previous item or question in a dialog. | Shift+Tab |
Suggestions & Completions

| Command | Action | Keys |
|---|---|---|
| `suggest.accept` | Accept the inline suggestion. | Tab, Enter |
| `suggest.focusPrevious` | Move to the previous completion option. | Up, Ctrl+P |
| `suggest.focusNext` | Move to the next completion option. | Down, Ctrl+N |
| `suggest.expand` | Expand an inline suggestion. | Right |
| `suggest.collapse` | Collapse an inline suggestion. | Left |
Text Input

| Command | Action | Keys |
|---|---|---|
| `input.submit` | Submit the current prompt. | Enter |
| `input.queueMessage` | Queue the current prompt to be processed after the current task finishes. | Tab |
| `input.newline` | Insert a newline without submitting. | Ctrl+Enter, Cmd/Win+Enter, Alt+Enter, Shift+Enter, Ctrl+J |
| `input.openExternalEditor` | Open the current prompt or the plan in an external editor. | Ctrl+G, Ctrl+Shift+G |
| `input.deprecatedOpenExternalEditor` | Deprecated command to open an external editor. | Ctrl+X |
| `input.paste` | Paste from the clipboard. | Ctrl+V, Cmd/Win+V, Alt+V |
App Controls

| Command | Action | Keys |
|---|---|---|
| `app.showErrorDetails` | Toggle the debug console for detailed error information. | F12 |
| `app.showFullTodos` | Toggle the full TODO list. | Ctrl+T |
| `app.showIdeContextDetail` | Show IDE context details. | F4 |
| `app.toggleMarkdown` | Toggle Markdown rendering. | Alt+M |
| `app.toggleCopyMode` | Toggle copy mode when in alternate buffer mode. | F9 |
| `app.toggleMouseMode` | Toggle mouse mode (scrolling and clicking). | Ctrl+S |
| `app.toggleYolo` | Toggle YOLO (auto-approval) mode for tool calls. | Ctrl+Y |
| `app.cycleApprovalMode` | Cycle through approval modes: default (prompt), auto_edit (auto-approve edits), and plan (read-only). Plan mode is skipped when the agent is busy. | Shift+Tab |
| `app.showMoreLines` | Expand and collapse blocks of content when not in alternate buffer mode. | Ctrl+O |
| `app.expandPaste` | Expand or collapse a paste placeholder when the cursor is over the placeholder. | Ctrl+O |
| `app.focusShellInput` | Move focus from Gemini to the active shell. | Tab |
| `app.unfocusShellInput` | Move focus from the shell back to Gemini. | Shift+Tab |
| `app.clearScreen` | Clear the terminal screen and redraw the UI. | Ctrl+L |
| `app.restart` | Restart the application. | R, Shift+R |
| `app.suspend` | Suspend the CLI and move it to the background. | Ctrl+Z |
| `app.showShellUnfocusWarning` | Show a warning when trying to move focus away from shell input. | Tab |
Background Shell Controls

| Command | Action | Keys |
|---|---|---|
| `background.escape` | Dismiss the background shell list. | Esc |
| `background.select` | Confirm a selection in the background shell list. | Enter |
| `background.toggle` | Toggle the current background shell's visibility. | Ctrl+B |
| `background.toggleList` | Toggle the background shell list. | Ctrl+L |
| `background.kill` | Kill the active background shell. | Ctrl+K |
| `background.unfocus` | Move focus from the background shell to Gemini. | Shift+Tab |
| `background.unfocusList` | Move focus from the background shell list to Gemini. | Tab |
| `background.unfocusWarning` | Show a warning when trying to move focus away from the background shell. | Tab |
| `app.dumpFrame` | Dump the current frame as a snapshot. | F8 |
| `app.startRecording` | Start recording the session. | F6 |
| `app.stopRecording` | Stop recording the session. | F7 |
Extension Controls

| Command | Action | Keys |
|---|---|---|
| `extension.update` | Update the current extension if available. | I |
| `extension.link` | Link the current extension to a local path. | L |
Customizing Keybindings
You can add alternative keybindings or remove default keybindings by creating a `keybindings.json` file in your home gemini directory (typically `~/.gemini/keybindings.json`).
Configuration Format
The configuration uses a JSON array of objects, similar to VS Code’s keybinding schema. Each object must specify a command from the reference tables above and a key combination.
```jsonc
[
  { "command": "edit.clear", "key": "cmd+l" },
  {
    // prefix "-" to unbind a key
    "command": "-app.toggleYolo",
    "key": "ctrl+y"
  },
  { "command": "input.submit", "key": "ctrl+y" },
  {
    // multiple modifiers
    "command": "cursor.right",
    "key": "shift+alt+a"
  },
  {
    // Some mac keyboards send "Å" instead of "shift+option+a"
    "command": "cursor.right",
    "key": "Å"
  },
  {
    // some base keys have special multi-char names
    "command": "cursor.right",
    "key": "shift+pageup"
  }
]
```
- **Unbinding**: To remove an existing or default keybinding, prefix a minus sign (`-`) to the `command` name.
- **No Auto-unbinding**: The same key can be bound to multiple commands in different contexts at the same time. Therefore, creating a binding does not automatically unbind the key from other commands.
- **Explicit Modifiers**: Key matching is explicit. For example, a binding for `ctrl+f` will only trigger on exactly `ctrl+f`, not `ctrl+shift+f` or `alt+ctrl+f`.
- **Literal Characters**: Terminals often translate complex key combinations (especially on macOS with the `Option` key) into special characters, losing modifier and keystroke information along the way. For example, `shift+5` might be sent as `%`. In these cases, you must bind to the literal character `%`, as bindings to `shift+5` will never fire. To see precisely what is being sent, enable Debug Keystroke Logging and press F12 to open the debug log console.
- **Key Modifiers**: The supported key modifiers are `ctrl`, `shift`, `alt` (synonyms: `opt`, `option`), and `cmd` (synonym: `meta`).
- **Base Key**: The base key can be any single Unicode code point or any of the following special keys:
  - Navigation: `up`, `down`, `left`, `right`, `home`, `end`, `pageup`, `pagedown`
  - Actions: `enter`, `escape`, `tab`, `space`, `backspace`, `delete`, `clear`, `insert`, `printscreen`
  - Toggles: `capslock`, `numlock`, `scrolllock`, `pausebreak`
  - Function Keys: `f1` through `f35`
  - Numpad: `numpad0` through `numpad9`, `numpad_add`, `numpad_subtract`, `numpad_multiply`, `numpad_divide`, `numpad_decimal`, `numpad_separator`
Additional context-specific shortcuts

- `Option+B/F/M` (macOS only): Interpreted as `Cmd+B/F/M` even if your terminal isn’t configured to send Meta with Option.
- `!` on an empty prompt: Enter or exit shell mode.
- `?` on an empty prompt: Toggle the shortcuts panel above the input. Press `Esc`, `Backspace`, any printable key, or a registered app hotkey to close it. The panel also auto-hides while the agent is running/streaming or when action-required dialogs are shown. Press `?` again to close the panel and insert a `?` into the prompt.
- `Tab+Tab` (while typing in the prompt): Toggle between minimal and full UI details when no completion/search interaction is active. The selected mode is remembered for future sessions. Full UI remains the default on first run, and a single `Tab` keeps its existing completion/focus behavior.
- `Shift+Tab` (while typing in the prompt): Cycle approval modes: default, auto-edit, and plan (skipped when the agent is busy).
- `\` (at end of a line) + `Enter`: Insert a newline without leaving single-line mode.
- `Esc` pressed twice quickly: Clear the input prompt if it is not empty; otherwise browse and rewind previous interactions.
- `Up Arrow` / `Down Arrow`: When the cursor is at the top or bottom of a single-line input, navigate backward or forward through prompt history.
- Number keys (1-9, multi-digit) inside selection dialogs: Jump directly to the numbered radio option and confirm when the full number is entered.
- `Ctrl+O`: Expand or collapse paste placeholders (`[Pasted Text: X lines]`) inline when the cursor is over the placeholder.
- `Ctrl+X` (while a plan is presented): Open the plan in an external editor to collaboratively edit or comment on the implementation strategy.
- Double-click on a paste placeholder (alternate buffer mode only): Expand to view full content inline. Double-click again to collapse.
Vi mode shortcuts
When vim mode is enabled with `/vim` or `general.vimMode: true`, Gemini CLI supports NORMAL and INSERT modes.
Mode switching
| Action | Keys |
|---|---|
| Enter NORMAL mode from INSERT mode | Esc |
| Enter INSERT mode at the cursor | i |
| Enter INSERT mode after the cursor | a |
| Enter INSERT mode at the start of the line | I |
| Enter INSERT mode at the end of the line | A |
| Insert a new line below and switch to INSERT | o |
| Insert a new line above and switch to INSERT | O |
| Clear input in NORMAL mode | Esc Esc |
Navigation in NORMAL mode
| Action | Keys |
|---|---|
| Move left | h |
| Move down | j |
| Move up | k |
| Move right | l |
| Move to start of line | 0 |
| Move to first non-whitespace char | ^ |
| Move to end of line | $ |
| Move forward by word | w |
| Move backward by word | b |
| Move to end of word | e |
| Move forward by WORD | W |
| Move backward by WORD | B |
| Move to end of WORD | E |
| Go to first line | gg |
| Go to last line | G |
| Go to line N | N G or N gg |
Counts are supported for navigation commands. For example, `5j` moves down five lines and `3w` moves forward three words.
Editing in NORMAL mode
| Action | Keys |
|---|---|
| Delete character under cursor | x |
| Delete to end of line | D |
| Delete line | dd |
| Change to end of line | C |
| Change line | cc |
| Delete forward word | dw |
| Delete backward word | db |
| Delete to end of word | de |
| Delete forward WORD | dW |
| Delete backward WORD | dB |
| Delete to end of WORD | dE |
| Change forward word | cw |
| Change backward word | cb |
| Change to end of word | ce |
| Change forward WORD | cW |
| Change backward WORD | cB |
| Change to end of WORD | cE |
| Delete to start of line | d0 |
| Delete to first non-whitespace | d^ |
| Change to start of line | c0 |
| Change to first non-whitespace | c^ |
| Delete from first line to here | dgg |
| Delete from here to last line | dG |
| Change from first line to here | cgg |
| Change from here to last line | cG |
| Undo last change | u |
| Repeat last command | . |
Counts are also supported for editing commands. For example, `3dd` deletes three lines and `2cw` changes two words.
Limitations

- On Windows Terminal: `shift+enter` is only supported in version 1.25 and higher. `shift+tab` is not supported on Node 20 and earlier versions of Node 22.
- On macOS’s Terminal: `shift+enter` is not supported.
The Memory Import Processor is a feature that lets you modularize your GEMINI.md files by importing content from other files using the `@file.md` syntax.
Overview
This feature enables you to break down large GEMINI.md files into smaller, more manageable components that can be reused across different contexts. The import processor supports both relative and absolute paths, with built-in safety features to prevent circular imports and ensure file access security.
Syntax
Use the `@` symbol followed by the path to the file you want to import:

```markdown
# Main GEMINI.md file

This is the main content.

@./components/instructions.md

More content here.

@./shared/configuration.md
```
Supported path formats
Relative paths

- `@./file.md` - Import from the same directory
- `@../file.md` - Import from the parent directory
- `@./components/file.md` - Import from a subdirectory

Absolute paths

- `@/absolute/path/to/file.md` - Import using an absolute path
Examples
Basic import

```markdown
# My GEMINI.md

Welcome to my project!

@./get-started.md

## Features

@./features/overview.md
```
Nested imports
The imported files can themselves contain imports, creating a nested structure:

```markdown
@./header.md
@./content.md
@./footer.md
```

An imported file such as `header.md` can itself contain further imports:

```markdown
# Project Header

@./shared/title.md
```
Safety features
Section titled “Safety features”Circular import detection
The processor automatically detects and prevents circular imports:

```markdown
<!-- file-a.md -->
@./file-b.md
```

```markdown
<!-- file-b.md -->
@./file-a.md <!-- This will be detected and prevented -->
```
File access security
The `validateImportPath` function ensures that imports are only allowed from specified directories, preventing access to sensitive files outside the allowed scope.
Maximum import depth
To prevent infinite recursion, there’s a configurable maximum import depth (default: 5 levels).
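These two safeguards, cycle detection and a depth limit, can be sketched together. This is an illustrative sketch only, not the actual implementation: `resolveImports`, the injected `read` callback, and the `@path` regex are all assumptions.

```typescript
// Hypothetical sketch: resolve @path imports with cycle and depth protection.
type Reader = (path: string) => string;

function resolveImports(
  path: string,
  read: Reader,
  visited: Set<string> = new Set(),
  depth = 0,
  maxDepth = 5, // the configurable maximum import depth (default: 5)
): string {
  if (depth > maxDepth) return `<!-- max import depth reached: ${path} -->`;
  if (visited.has(path)) return `<!-- circular import detected: ${path} -->`;
  visited.add(path);
  // Replace each line of the form "@some/file.md" with that file's
  // recursively resolved content.
  return read(path).replace(/^@(\S+\.md)$/gm, (_, target) =>
    resolveImports(target, read, visited, depth + 1, maxDepth),
  );
}
```

A cycle simply degrades into an explanatory comment instead of recursing forever, which mirrors the "fail gracefully with an error comment" behavior described below.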
Error handling
Section titled “Error handling”Missing files
If a referenced file doesn’t exist, the import will fail gracefully with an error comment in the output.
File access errors
Permission issues or other file system errors are handled gracefully with appropriate error messages.
Code region detection
The import processor uses the `marked` library to detect code blocks and inline code spans, ensuring that `@` imports inside these regions are properly ignored. This provides robust handling of nested code blocks and complex Markdown structures.
Import tree structure
The processor returns an import tree that shows the hierarchy of imported files, similar to Claude’s `/memory` feature. This helps users debug problems with their GEMINI.md files by showing which files were read and their import relationships.
Example tree structure:
```
Memory Files
 L project: GEMINI.md
   L a.md
     L b.md
       L c.md
     L d.md
       L e.md
   L f.md
 L included.md
```
The tree preserves the order that files were imported and shows the complete import chain for debugging purposes.
Comparison to Claude Code’s /memory (claude.md) approach

Claude Code’s /memory feature (as seen in claude.md) produces a flat, linear document by concatenating all included files, always marking file boundaries with clear comments and path names. It does not explicitly present the import hierarchy, but the LLM receives all file contents and paths, which is sufficient for reconstructing the hierarchy if needed.
API reference
Section titled “API reference”processImports(content, basePath, debugMode?, importState?)
Section titled
“processImports(content, basePath, debugMode?,
importState?)”
Processes import statements in GEMINI.md content.
Parameters:
content(string): The content to process for importsbasePath(string): The directory path where the current file is locateddebugMode(boolean, optional): Whether to enable debug logging (default: false)importState(ImportState, optional): State tracking for circular import prevention
Returns: Promise<ProcessImportsResult> - Object containing processed content and import tree
ProcessImportsResult

```typescript
interface ProcessImportsResult {
  content: string; // The processed content with imports resolved
  importTree: MemoryFile; // Tree structure showing the import hierarchy
}
```
MemoryFile

```typescript
interface MemoryFile {
  path: string; // The file path
  imports?: MemoryFile[]; // Direct imports, in the order they were imported
}
```
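As an illustration of how such a tree could be rendered in the indented style of the example tree earlier, here is a sketch; `renderTree` is not part of the actual API, and the `MemoryFile` shape is repeated so the snippet stands alone.

```typescript
interface MemoryFile {
  path: string;
  imports?: MemoryFile[];
}

// Hypothetical helper: walk the import tree depth-first, indenting each level.
function renderTree(node: MemoryFile, depth = 0): string {
  const line = " ".repeat(depth * 2) + "L " + node.path;
  const children = (node.imports ?? []).map((c) => renderTree(c, depth + 1));
  return [line, ...children].join("\n");
}
```

Because `imports` preserves the order files were read, the rendered tree reproduces the complete import chain for debugging.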
validateImportPath(importPath, basePath, allowedDirectories)

Validates import paths to ensure they are safe and within allowed directories.

Parameters:

- `importPath` (string): The import path to validate
- `basePath` (string): The base directory for resolving relative paths
- `allowedDirectories` (string[]): Array of allowed directory paths

Returns: `boolean` - Whether the import path is valid
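A minimal sketch of what such a check can look like (assumed behavior, not the actual source): resolve the import against the base path, then require the result to sit under one of the allowed directories.

```typescript
import * as path from "node:path";

// Sketch: does importPath, resolved against basePath, land inside an
// allowed directory? This rejects "../" escapes to sensitive locations.
function validateImportPath(
  importPath: string,
  basePath: string,
  allowedDirectories: string[],
): boolean {
  const resolved = path.resolve(basePath, importPath);
  return allowedDirectories.some((dir) => {
    const root = path.resolve(dir);
    return resolved === root || resolved.startsWith(root + path.sep);
  });
}
```

The `root + path.sep` check matters: a plain prefix test would wrongly accept `/home/u/proj-secrets` as being inside `/home/u/proj`.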
findProjectRoot(startDir)

Finds the project root by searching upwards from the given start directory for a `.git` directory. Implemented as an async function using non-blocking file system APIs to avoid blocking the Node.js event loop.

Parameters:

- `startDir` (string): The directory to start searching from

Returns: `Promise<string>` - The project root directory (or the start directory if no `.git` is found)
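The described contract can be sketched with non-blocking `node:fs/promises` calls; this is a sketch of the behavior, not the actual source.

```typescript
import * as fsp from "node:fs/promises";
import * as path from "node:path";

// Walk upward from startDir until a .git directory is found; fall back to
// the start directory if the filesystem root is reached without finding one.
async function findProjectRoot(startDir: string): Promise<string> {
  let dir = path.resolve(startDir);
  while (true) {
    try {
      const st = await fsp.stat(path.join(dir, ".git"));
      if (st.isDirectory()) return dir;
    } catch {
      // No .git here; keep climbing.
    }
    const parent = path.dirname(dir);
    if (parent === dir) return path.resolve(startDir); // hit the root
    dir = parent;
  }
}
```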
Best Practices
- Use descriptive file names for imported components
- Keep imports shallow - avoid deeply nested import chains
- Document your structure - maintain a clear hierarchy of imported files
- Test your imports - ensure all referenced files exist and are accessible
- Use relative paths when possible for better portability
Troubleshooting
Section titled “Troubleshooting”Common issues
- Import not working: Check that the file exists and the path is correct
- Circular import warnings: Review your import structure for circular references
- Permission errors: Ensure the files are readable and within allowed directories
- Path resolution issues: Use absolute paths if relative paths aren’t resolving correctly
Debug mode
Enable debug mode to see detailed logging of the import process:

```typescript
const result = await processImports(content, basePath, true);
```
Gemini CLI includes a powerful policy engine that provides fine-grained control over tool execution. It allows users and administrators to define rules that determine whether a tool call should be allowed, denied, or require user confirmation.
Quick start
To create your first policy:

1. Create the policy directory if it doesn’t exist:

   macOS/Linux:

   ```shell
   mkdir -p ~/.gemini/policies
   ```

   Windows (PowerShell):

   ```powershell
   New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.gemini\policies"
   ```

2. Create a new policy file (for example, `~/.gemini/policies/my-rules.toml`). You can use any filename ending in `.toml`; all such files in this directory will be loaded and combined:

   ```toml
   [[rule]]
   toolName = "run_shell_command"
   commandPrefix = "rm -rf"
   decision = "deny"
   priority = 100
   ```

3. Run a command that triggers the policy (for example, ask Gemini CLI to `rm -rf /`). The tool will now be blocked automatically.
Core concepts
The policy engine operates on a set of rules. Each rule is a combination of conditions and a resulting decision. When a large language model wants to execute a tool, the policy engine evaluates all rules to find the highest-priority rule that matches the tool call.
A rule consists of the following main components:
- Conditions: Criteria that a tool call must meet for the rule to apply. This can include the tool’s name, the arguments provided to it, or the current approval mode.
- Decision: The action to take if the rule matches (`allow`, `deny`, or `ask_user`).
- Priority: A number that determines the rule’s precedence. Higher numbers win.

For example, this rule will ask for user confirmation before executing any git command:

```toml
[[rule]]
toolName = "run_shell_command"
commandPrefix = "git"
decision = "ask_user"
priority = 100
```
Conditions
Conditions are the criteria that a tool call must meet for a rule to apply. The primary conditions are the tool’s name and its arguments.
Tool Name
Section titled “Tool Name”The toolName in the rule must match the name of the
tool being called.
- Wildcards: You can use wildcards to match multiple
tools.
*: Matches any tool (built-in or MCP).mcp_server_*: Matches any tool from a specific MCP server.mcp_*_toolName: Matches a specific tool name across all MCP servers.mcp_*: Matches any tool from any MCP server.
Recommendation: While FQN wildcards are supported, the recommended approach for MCP tools is to use the
mcpNamefield in your TOML rules. See Special syntax for MCP tools.
Arguments pattern
Section titled “Arguments pattern”If argsPattern is specified, the tool’s arguments are
converted to a stable
JSON string, which is then tested against the provided regular expression.
If
the arguments don’t match the pattern, the rule does not apply.
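To make that matching step concrete, here is a sketch of the conversion; the exact serialization used by the CLI is an assumption, so test real rules against the CLI itself.

```typescript
// Sketch: serialize arguments to a stable JSON string (sorted keys), then
// test the rule's regular expression against that string.
function argsMatch(args: Record<string, unknown>, argsPattern: string): boolean {
  const stable = JSON.stringify(args, Object.keys(args).sort());
  return new RegExp(argsPattern).test(stable);
}
```

Under this sketch, `{ command: "git status" }` serializes to `{"command":"git status"}`, so a pattern like `"command":"git` matches it.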
Execution environment
Section titled “Execution environment”If interactive is specified, the rule will only apply
if the CLI’s execution
environment matches the specified boolean value:
true: The rule applies only in interactive mode.false: The rule applies only in non-interactive (headless) mode.
If omitted, the rule applies to both interactive and non-interactive environments.
Decisions
Section titled “Decisions”There are three possible decisions a rule can enforce:
allow: The tool call is executed automatically without user interaction.deny: The tool call is blocked and is not executed. For global rules (those without anargsPattern), tools that are denied are completely excluded from the model’s memory. This means the model will not even see the tool as an option, which is more secure and saves context window space.ask_user: The user is prompted to approve or deny the tool call. (In non-interactive mode, this is treated asdeny.)
Priority system and tiers
The policy engine uses a sophisticated priority system to resolve conflicts when multiple rules match a single tool call. The core principle is simple: the rule with the highest priority wins.
To provide a clear hierarchy, policies are organized into five tiers. Each tier has a designated number that forms the base of the final priority calculation.
| Tier | Base | Description |
|---|---|---|
| Default | 1 | Built-in policies that ship with Gemini CLI. |
| Extension | 2 | Policies defined in extensions. |
| Workspace | 3 | Policies defined in the current workspace’s configuration directory. |
| User | 4 | Custom policies defined by the user. |
| Admin | 5 | Policies managed by an administrator (for example, in an enterprise environment). |
Within a TOML policy file, you assign a priority value from 0 to 999. The engine transforms this into a final priority using the following formula:
final_priority = tier_base + (toml_priority / 1000)
This system guarantees that:
- Admin policies always override User, Workspace, and Default policies (defined in policy TOML files).
- User policies override Workspace and Default policies.
- Workspace policies override Default policies.
- You can still order rules within a single tier with fine-grained control.
For example:

- A `priority: 50` rule in a Default policy TOML becomes `1.050`.
- A `priority: 10` rule in a Workspace policy TOML becomes `3.010`.
- A `priority: 100` rule in a User policy TOML becomes `4.100`.
- A `priority: 20` rule in an Admin policy TOML becomes `5.020`.
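The formula reads as "tier first, TOML priority as a tie-breaker", and is straightforward to compute. A sketch (the function name is illustrative, not part of the CLI):

```typescript
// final_priority = tier_base + (toml_priority / 1000)
function finalPriority(tierBase: number, tomlPriority: number): number {
  if (tomlPriority < 0 || tomlPriority > 999) {
    throw new RangeError("TOML priority must be in the range 0-999");
  }
  return tierBase + tomlPriority / 1000;
}
```

Because the TOML priority is divided by 1000, even a `priority = 999` rule can never outrank a rule from a higher tier.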
Approval modes
Approval modes allow the policy engine to apply different sets of rules based on the CLI’s operational mode. A rule in a TOML policy file can be associated with one or more modes (for example, `yolo`, `autoEdit`, `plan`). The rule will only be active if the CLI is running in one of its specified modes. If a rule has no modes specified, it is always active.

- `default`: The standard interactive mode where most write tools require confirmation.
- `autoEdit`: Optimized for automated code editing; some write tools may be auto-approved.
- `plan`: A strict, read-only mode for research and design. See Customizing Plan Mode Policies.
- `yolo`: A mode where all tools are auto-approved (use with extreme caution).
To maintain the integrity of Plan Mode as a safe research environment, persistent tool approvals are context-aware. When you select “Allow for all future sessions”, the policy engine explicitly includes the current mode and all more permissive modes in the hierarchy (`plan` < `default` < `autoEdit` < `yolo`).

- Approvals in `plan` mode: These represent an intentional choice to trust a tool globally. The resulting rule explicitly includes all modes (`plan`, `default`, `autoEdit`, and `yolo`).
- Approvals in other modes: These only apply to the current mode and those more permissive. For example:
  - An approval granted in `default` mode applies to `default`, `autoEdit`, and `yolo`.
  - An approval granted in `autoEdit` mode applies to `autoEdit` and `yolo`.
  - An approval granted in `yolo` mode applies only to `yolo`.

This ensures that trust flows correctly to more permissive environments while maintaining the safety of more restricted modes like `plan`.
Rule matching
When a tool call is made, the engine checks it against all active rules, starting from the highest priority. The first rule that matches determines the outcome.
A rule matches a tool call if all of its conditions are met:
- Tool name: The `toolName` in the TOML rule must match the name of the tool being called.
  - Wildcards: You can use wildcards like `*`, `mcp_server_*`, or `mcp_*_toolName` to match multiple tools. See Tool Name for details.
- Arguments pattern: If `argsPattern` is specified, the tool’s arguments are converted to a stable JSON string, which is then tested against the provided regular expression. If the arguments don’t match the pattern, the rule does not apply.
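Putting the two conditions together, the evaluation described above can be sketched like this; it illustrates the matching order only, not the engine’s real code, and the `ask_user` fallback for unmatched calls is an assumption.

```typescript
type Decision = "allow" | "deny" | "ask_user";

interface Rule {
  toolName: string; // "*" matches any tool in this sketch
  argsPattern?: string;
  decision: Decision;
  priority: number;
}

// Check rules from highest priority down; the first match wins.
function decide(rules: Rule[], toolName: string, argsJson: string): Decision {
  const match = [...rules]
    .sort((a, b) => b.priority - a.priority)
    .find(
      (r) =>
        (r.toolName === "*" || r.toolName === toolName) &&
        (!r.argsPattern || new RegExp(r.argsPattern).test(argsJson)),
    );
  return match ? match.decision : "ask_user";
}
```

A broad low-priority `*` rule can then coexist with a narrow high-priority rule for one tool, and the narrow rule wins whenever both match.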
Configuration
Policies are defined in `.toml` files. The CLI loads these files from Default, User, and (if configured) Admin directories.

Policy locations

| Tier | Type | Location |
|---|---|---|
| User | Custom | `~/.gemini/policies/*.toml` |
| Workspace | Custom | `$WORKSPACE_ROOT/.gemini/policies/*.toml` |
| Admin | System | See below (OS specific) |
System-wide policies (Admin)
Administrators can enforce system-wide Admin-tier policies that override all user and default settings. These policies can be loaded from standard system locations or supplemental paths.
Standard Locations
These are the default paths the CLI searches for admin policies:

| OS | Policy Directory Path |
|---|---|
| Linux | `/etc/gemini-cli/policies` |
| macOS | `/Library/Application Support/GeminiCli/policies` |
| Windows | `C:\ProgramData\gemini-cli\policies` |
Supplemental Admin Policies
Administrators can also specify supplemental policy paths using:

- The `--admin-policy` command-line flag.
- The `adminPolicyPaths` setting in a system settings file.

These supplemental policies are assigned the same Admin tier as policies in standard locations.

Security Guard: Supplemental admin policies are ignored if any `.toml` policy files are found in the standard system location. This prevents flag-based overrides when a central system policy has already been established.
Security Requirements
To prevent privilege escalation, the CLI enforces strict security checks on the standard system policy directory. If checks fail, the policies in that directory are ignored.

- Linux / macOS: Must be owned by `root` (UID 0) and NOT writable by group or others (for example, `chmod 755`).
- Windows: Must be in `C:\ProgramData`. Standard users (`Users`, `Everyone`) must NOT have `Write`, `Modify`, or `Full Control` permissions. If you see a security warning, use the folder properties to remove write permissions for non-admin groups. You may need to “Disable inheritance” in Advanced Security Settings.
TOML rule schema
Here is a breakdown of the fields available in a TOML policy rule:

```toml
[[rule]]
# A unique name for the tool, or an array of names.
toolName = "run_shell_command"

# (Optional) The name of a subagent. If provided, the rule only applies to tool
# calls made by this specific subagent.
subagent = "codebase_investigator"

# (Optional) The name of an MCP server. Can be combined with toolName
# to form a composite FQN internally like "mcp_mcpName_toolName".
mcpName = "my-custom-server"

# (Optional) Metadata hints provided by the tool. A rule matches if all
# key-value pairs provided here are present in the tool's annotations.
toolAnnotations = { readOnlyHint = true }

# (Optional) A regex to match against the tool's arguments.
argsPattern = '"command":"(git|npm)'

# (Optional) A string or array of strings that a shell command must start with.
# This is syntactic sugar for `toolName = "run_shell_command"` and an
# `argsPattern`.
commandPrefix = "git"

# (Optional) A regex to match against the entire shell command.
# This is also syntactic sugar for `toolName = "run_shell_command"`.
# Note: This pattern is tested against the JSON representation of the arguments
# (e.g., `{"command":"<your_command>"}`). Because it prepends `"command":"`,
# it effectively matches from the start of the command.
# Anchors like `^` or `$` apply to the full JSON string,
# so `^` should usually be avoided here.
# You cannot use commandPrefix and commandRegex in the same rule.
commandRegex = "git (commit|push)"

# The decision to take. Must be "allow", "deny", or "ask_user".
decision = "ask_user"

# The priority of the rule, from 0 to 999.
priority = 10

# (Optional) A custom message to display when a tool call is denied by this
# rule. This message is returned to the model and user,
# useful for explaining *why* it was denied.
denyMessage = "Deletion is permanent"

# (Optional) An array of approval modes where this rule is active.
# If omitted or empty, the rule applies to all modes.
modes = ["default", "autoEdit", "yolo"]

# (Optional) A boolean to restrict the rule to interactive (true) or
# non-interactive (false) environments.
# If omitted, the rule applies to both.
interactive = true

# (Optional) If true, lets shell commands use redirection operators
# (>, >>, <, <<, <<<). By default, the policy engine asks for confirmation
# when redirection is detected, even if a rule matches the command.
# This permission is granular; it only applies to the specific rule it's
# defined in. In chained commands (e.g., cmd1 > file && cmd2), each
# individual command rule must permit redirection if it's used.
allowRedirection = true
```
Using arrays (lists)
To apply the same rule to multiple tools or command prefixes, you can provide an array of strings for the `toolName` and `commandPrefix` fields.

Example: this single rule will apply to both the `write_file` and `replace` tools.

```toml
[[rule]]
toolName = ["write_file", "replace"]
decision = "ask_user"
priority = 10
```
Special syntax for run_shell_command
To simplify writing policies for `run_shell_command`, you can use `commandPrefix` or `commandRegex` instead of the more complex `argsPattern`.

- `commandPrefix`: Matches if the `command` argument starts with the given string.
- `commandRegex`: Matches if the `command` argument matches the given regular expression.

Example: this rule will ask for user confirmation before executing any git command.

```toml
[[rule]]
toolName = "run_shell_command"
commandPrefix = "git"
decision = "ask_user"
priority = 100
```
Special syntax for MCP tools
You can create rules that target tools from Model Context Protocol (MCP) servers using the `mcpName` field. This is the recommended approach for defining MCP policies, as it is much more robust than manually writing Fully Qualified Names (FQNs) or string wildcards.

1. Targeting a specific tool on a server

Combine `mcpName` and `toolName` to target a single operation. When using `mcpName`, the `toolName` field should strictly be the simple name of the tool (for example, `search`), not the Fully Qualified Name (for example, `mcp_server_search`).

```toml
# Allows the `search` tool on the `my-jira-server` MCP
[[rule]]
mcpName = "my-jira-server"
toolName = "search"
decision = "allow"
priority = 200
```

2. Targeting all tools on a specific server

Specify only the `mcpName` to apply a rule to every tool provided by that server.

Note: This applies to all decision types (`allow`, `deny`, `ask_user`).

```toml
# Denies all tools from the `untrusted-server` MCP
[[rule]]
mcpName = "untrusted-server"
decision = "deny"
priority = 500
denyMessage = "This server is not trusted by the admin."
```

3. Targeting all MCP servers

Use `mcpName = "*"` to create a rule that applies to all tools from any registered MCP server. This is useful for setting category-wide defaults.

```toml
# Ask user for any tool call from any MCP server
[[rule]]
toolName = "*"
mcpName = "*"
decision = "ask_user"
priority = 10
```
Special syntax for subagents
You can secure and govern subagents using standard policy rules by treating the subagent’s name as the `toolName`.

When the main agent invokes a subagent (e.g., using the unified `invoke_agent` tool), the Policy Engine automatically treats the target `agent_name` as a virtual tool alias for rule matching.

Example: this rule denies access to the `codebase_investigator` subagent.

```toml
[[rule]]
toolName = "codebase_investigator"
decision = "deny"
priority = 500
denyMessage = "Deep codebase analysis is restricted for this session."
```

- Backward Compatibility: Any rules written targeting historical 1:1 subagent tool names will continue to match transparently.
- Context differentiation: To create rules based on who is calling a tool, use the `subagent` field instead. See TOML rule schema.
Default policies
Gemini CLI ships with a set of default policies to provide a safe out-of-the-box experience.

- Read-only tools (like `read_file`, `glob`) are generally allowed.
- Agent delegation defaults to `ask_user` to ensure remote agents can prompt for confirmation, but local sub-agent actions are executed silently and checked individually.
- Write tools (like `write_file`, `run_shell_command`) default to `ask_user`.
- In `yolo` mode, a high-priority rule allows all tools.
- In `autoEdit` mode, rules allow certain write operations to happen without prompting.
Gemini CLI uses tools to interact with your local environment, access information, and perform actions on your behalf. These tools extend the model’s capabilities beyond text generation, letting it read files, execute commands, and search the web.
How to use Gemini CLI’s tools
Tools are generally invoked automatically by Gemini CLI when it needs to perform an action. However, you can also trigger specific tools manually using shorthand syntax.
Automatic execution and security
When the model wants to use a tool, Gemini CLI evaluates the request against its security policies.
- User confirmation: You must manually approve tools that modify files or execute shell commands (mutators). The CLI shows you a diff or the exact command before you confirm.
- Sandboxing: You can run tool executions in secure, containerized environments to isolate changes from your host system. For more details, see the Sandboxing guide.
- Trusted folders: You can configure which directories allow the model to use system tools. For more details, see the Trusted folders guide.
Review confirmation prompts carefully before allowing a tool to execute.
How to use manually-triggered tools
You can directly trigger key tools using special syntax in your prompt:

- File access (`@`): Use the `@` symbol followed by a file or directory path to include its content in your prompt. This triggers the `read_many_files` tool.
- Shell commands (`!`): Use the `!` symbol followed by a system command to execute it directly. This triggers the `run_shell_command` tool.
How to manage tools
Using built-in commands, you can inspect available tools and configure how they behave.
Tool discovery
Use the `/tools` command to see what tools are currently active in your session.

- `/tools`: Lists all registered tools with their display names.
- `/tools desc`: Lists all tools with their full descriptions.
This is especially useful for verifying that MCP servers or custom tools are loaded correctly.
Tool configuration
You can enable, disable, or configure specific tools in your settings. For example, you can set a specific pager for shell commands or configure the browser used for web searches. See the Settings guide for details.
Available tools
The following sections list all available tools, categorized by their primary function. For detailed parameter information, see the linked documentation for each tool.
Execution
| Tool | Kind | Description |
|---|---|---|
| `run_shell_command` | Execute | Executes arbitrary shell commands. Supports interactive sessions and background processes. Requires manual confirmation. |
File System
| Tool | Kind | Description |
|---|---|---|
| `glob` | Search | Finds files matching specific glob patterns across the workspace. |
| `grep_search` | Search | Searches for a regular expression pattern within file contents. Legacy alias: `search_file_content`. |
| `list_directory` | Read | Lists the names of files and subdirectories within a specified path. |
| `read_file` | Read | Reads the content of a specific file. Supports text, images, audio, and PDF. |
| `read_many_files` | Read | Reads and concatenates content from multiple files. Often triggered by the `@` symbol in your prompt. |
| `replace` | Edit | Performs precise text replacement within a file. Requires manual confirmation. |
| `write_file` | Edit | Creates or overwrites a file with new content. Requires manual confirmation. |
Interaction
| Tool | Kind | Description |
|---|---|---|
| ask_user | Communicate | Requests clarification or missing information via an interactive dialog. |
| write_todos | Other | Maintains an internal list of subtasks. The model uses this to track its own progress. |
Memory
| Tool | Kind | Description |
|---|---|---|
| activate_skill | Other | Loads specialized procedural expertise from the .gemini/skills directory. |
| get_internal_docs | Think | Accesses Gemini CLI’s own documentation for accurate answers about its capabilities. |
| save_memory | Think | Persists specific facts and project details to your GEMINI.md file. |
Planning
| Tool | Kind | Description |
|---|---|---|
| enter_plan_mode | Plan | Switches the CLI to a safe, read-only “Plan Mode” for researching complex changes. |
| exit_plan_mode | Plan | Finalizes a plan, presents it for review, and requests approval to start implementation. |
System
| Tool | Kind | Description |
|---|---|---|
| complete_task | Other | Finalizes a subagent’s mission and returns the result to the parent agent. This tool is not available to the user. |
Task Tracking
| Tool | Kind | Description |
|---|---|---|
| tracker_add_dependency | Think | Adds a dependency between two existing tasks in the tracker. |
| tracker_create_task | Think | Creates a new task in the internal tracker to monitor progress. |
| tracker_get_task | Think | Retrieves the details and current status of a specific tracked task. |
| tracker_list_tasks | Think | Lists all tasks currently being tracked. |
| tracker_update_task | Think | Updates the status or details of an existing task. |
| tracker_visualize | Think | Generates a visual representation of the current task dependency graph. |
| update_topic | Think | Updates the current topic and status to keep the user informed of progress. |
Web
| Tool | Kind | Description |
|---|---|---|
| google_web_search | Search | Performs a Google Search to find up-to-date information. |
| web_fetch | Fetch | Retrieves and processes content from specific URLs. Warning: This tool can access local and private network addresses (for example, localhost), which may pose a security risk if used with untrusted prompts. In Plan Mode, this tool requires explicit user confirmation. |
Under the hood
For developers, the tool system is designed to be extensible and robust. The ToolRegistry class manages all available tools. You can extend Gemini CLI with custom tools by configuring tools.discoveryCommand in your settings or by connecting to MCP servers.
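As a sketch, the discovery hook might be wired up like this in settings.json (the script path is hypothetical; the configured command is expected to print the declarations of the tools it offers):

```json
{
  "tools": {
    "discoveryCommand": "./scripts/list-tools.sh"
  }
}
```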
Next steps
- Learn how to Set up an MCP server.
- Explore Agent Skills for specialized expertise.
- See the Command reference for slash commands.
This page provides answers to common questions and solutions to frequent problems encountered while using Gemini CLI.
General issues
This section addresses common questions about Gemini CLI usage, security, and troubleshooting general errors.
Why can’t I use third-party software like Claude Code, OpenClaw, or OpenCode with Gemini CLI?
Using third-party software, tools, or services to harvest or piggyback on Gemini CLI’s OAuth authentication to access our backend services is a direct violation of our applicable terms and policies. Doing so bypasses our intended authentication and security structures, and such actions may be grounds for immediate suspension or termination of your account. If you would like to use a third-party coding agent with Gemini, the supported and secure method is to use a Vertex AI or Google AI Studio API key.
Why am I getting an API error: 429 - Resource exhausted?
This error indicates that you have exceeded your API request limit. The Gemini API has rate limits to prevent abuse and ensure fair usage.
To resolve this, you can:
- Check your usage: Review your API usage in the Google AI Studio or your Google Cloud project dashboard.
- Optimize your prompts: If you are making many requests in a short period, try to batch your prompts or introduce delays between requests.
- Request a quota increase: If you consistently need a higher limit, you can request a quota increase from Google.
Why am I getting an ERR_REQUIRE_ESM error when running npm run start?
This error typically occurs in Node.js projects when there is a mismatch between CommonJS and ES Modules.
This is often due to a misconfiguration in your package.json or tsconfig.json. Ensure that:
- Your package.json has "type": "module".
- Your tsconfig.json has "module": "NodeNext" or a compatible setting in the compilerOptions.
If the problem persists, try deleting your node_modules directory and package-lock.json file, and then run npm install again.
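In context, the two settings look like this (minimal sketches; moduleResolution is included because TypeScript typically requires it to match a NodeNext module setting):

```jsonc
// package.json (excerpt)
{
  "type": "module"
}

// tsconfig.json (excerpt)
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext"
  }
}
```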
Why don’t I see cached token counts in my stats output?
Cached token information is only displayed when cached tokens are being used. This feature is available for API key users (Gemini API key or Google Cloud Vertex AI) but not for OAuth users (such as Google personal or enterprise accounts, like Gmail or Google Workspace, respectively). This is because the Gemini Code Assist API does not support cached content creation. You can still view your total token usage using the /stats command in Gemini CLI.
Installation and updates
How do I check which version of Gemini CLI I’m currently running?
You can check your current Gemini CLI version using one of these methods:
- Run gemini --version or gemini -v from your terminal.
- Check the globally installed version using your package manager:
  - npm: npm list -g @google/gemini-cli
  - pnpm: pnpm list -g @google/gemini-cli
  - yarn: yarn global list @google/gemini-cli
  - bun: bun pm ls -g @google/gemini-cli
  - Homebrew: brew list --versions gemini-cli
- Inside an active Gemini CLI session, use the /about command.
How do I update Gemini CLI to the latest version?
If you installed it globally via npm, update it using the command npm install -g @google/gemini-cli. If you compiled it from source, pull the latest changes from the repository, and then rebuild using the command npm run build.
Platform-specific issues
Why does the CLI crash on Windows when I run a command like chmod +x?
Commands like chmod are specific to Unix-like operating systems (Linux, macOS). They are not available on Windows by default.
To resolve this, you can:
- Use Windows-equivalent commands: Instead of chmod, you can use icacls to modify file permissions on Windows.
- Use a compatibility layer: Tools like Git Bash or Windows Subsystem for Linux (WSL) provide a Unix-like environment on Windows where these commands will work.
Configuration
How do I configure my GOOGLE_CLOUD_PROJECT?
You can configure your Google Cloud project ID using an environment variable. Set the GOOGLE_CLOUD_PROJECT environment variable in your shell:
macOS/Linux
export GOOGLE_CLOUD_PROJECT="your-project-id"
Windows (PowerShell)
$env:GOOGLE_CLOUD_PROJECT="your-project-id"
To make this setting permanent, add this line to your shell’s startup file (for example, ~/.bashrc or ~/.zshrc).
What is the best way to store my API keys securely?
Exposing API keys in scripts or checking them into source control is a security risk.
To store your API keys securely, you can:
- Use a .env file: Create a .env file in your project’s .gemini directory (.gemini/.env) and store your keys there. Gemini CLI will automatically load these variables.
- Use your system’s keyring: For the most secure storage, use your operating system’s secret management tool (such as macOS Keychain, Windows Credential Manager, or a secret manager on Linux). You can then have your scripts or environment load the key from the secure storage at runtime.
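A minimal sketch of such a file (GEMINI_API_KEY is the variable Gemini CLI reads for a Gemini Developer API key; the values shown are placeholders):

```shell
# .gemini/.env — loaded automatically by Gemini CLI.
# Keep this file out of source control (add .gemini/.env to your .gitignore).
GEMINI_API_KEY=your-api-key-here
GOOGLE_CLOUD_PROJECT=your-project-id
```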
Where are Gemini CLI configuration and settings files stored?
Gemini CLI configuration is stored in two settings.json files:
- In your home directory: ~/.gemini/settings.json
- In your project’s root directory: ./.gemini/settings.json
Refer to Gemini CLI Configuration for more details.
Google AI Pro/Ultra and subscription FAQs
Where can I learn more about my Google AI Pro or Google AI Ultra subscription?
To learn more about your Google AI Pro or Google AI Ultra subscription, visit Manage subscription in your subscription settings.
How do I know if I have higher limits for Google AI Pro or Ultra?
If you’re subscribed to Google AI Pro or Ultra, you automatically have higher limits for Gemini Code Assist and Gemini CLI. These are shared across Gemini CLI and agent mode in the IDE. You can confirm you have higher limits by checking that you are still subscribed to Google AI Pro or Ultra in your subscription settings.
What is the privacy policy for using Gemini Code Assist or Gemini CLI if I’ve subscribed to Google AI Pro or Ultra?
To learn more about the privacy policy and terms of service governed by your subscription, visit Gemini Code Assist: Terms of Service and Privacy Policies.
I’ve upgraded to Google AI Pro or Ultra but it still says I am hitting quota limits. Is this a bug?
The higher limits in your Google AI Pro or Ultra subscription are for Gemini 2.5, across both Gemini 2.5 Pro and Flash. They are shared quota across Gemini CLI and agent mode in the Gemini Code Assist IDE extensions. You can learn more about quota limits for Gemini CLI, Gemini Code Assist, and agent mode at Quotas and limits.
If I upgrade to higher limits for Gemini CLI and Gemini Code Assist by purchasing a Google AI Pro or Ultra subscription, will Gemini start using my data to improve its machine learning models?
Google does not use your data to improve Google’s machine learning models if you purchase a paid plan. Note: If you decide to remain on the free version of Gemini Code Assist, Gemini Code Assist for individuals, you can also opt out of having your data used to improve Google’s machine learning models. See the Gemini Code Assist for individuals privacy notice for more information.
Not seeing your question?
Search the Gemini CLI Q&A discussions on GitHub, or start a new discussion on GitHub.
Gemini CLI is an open-source tool that lets you interact with Google’s powerful AI services directly from your command-line interface. Gemini CLI software is licensed under the Apache 2.0 license. When you use Gemini CLI to access or use Google’s services, the Terms of Service and Privacy Notices applicable to those services apply to such access and use.
Directly accessing the services powering Gemini CLI (for example, the Gemini Code Assist service) using third-party software, tools, or services (for example, using OpenClaw with Gemini CLI OAuth) is a violation of applicable terms and policies. Such actions may be grounds for suspension or termination of your account.
Your Gemini CLI Usage Statistics are handled in accordance with Google’s Privacy Policy.
Supported authentication methods
Your authentication method refers to the method you use to log into and access Google’s services with Gemini CLI. Supported authentication methods include:
- Logging in with your Google account to Gemini Code Assist.
- Using an API key with Gemini Developer API.
- Using an API key with Vertex AI GenAI API.
The Terms of Service and Privacy Notices applicable to the aforementioned Google services are set forth in the table below.
If you log in with your Google account and you do not already have a Gemini Code Assist account associated with your Google account, you will be directed to the sign up flow for Gemini Code Assist for individuals. If your Google account is managed by your organization, your administrator may not permit access to Gemini Code Assist for individuals. See the Gemini Code Assist for individuals FAQs for further information.
| Authentication Method | Service(s) | Terms of Service | Privacy Notice |
|---|---|---|---|
| Google Account | Gemini Code Assist services | Terms of Service | Privacy Notices |
| Gemini Developer API Key | Gemini API - Unpaid Services | Gemini API Terms of Service - Unpaid Services | Google Privacy Policy |
| Gemini Developer API Key | Gemini API - Paid Services | Gemini API Terms of Service - Paid Services | Google Privacy Policy |
| Vertex AI GenAI API Key | Vertex AI GenAI API | Google Cloud Platform Terms of Service | Google Cloud Privacy Notice |
1. If you have signed in with your Google account to Gemini Code Assist
For users who use their Google account to access Gemini Code Assist, these Terms of Service and Privacy Notice documents apply:
- Gemini Code Assist for individuals: Google Terms of Service and Gemini Code Assist for individuals Privacy Notice.
- Gemini Code Assist with Google AI Pro or Ultra subscription: Google Terms of Service, Google One Additional Terms of Service and Google Privacy Policy*.
- Gemini Code Assist Standard and Enterprise editions: Google Cloud Platform Terms of Service and Google Cloud Privacy Notice.
* If your account is also associated with an active subscription to Gemini Code Assist Standard or Enterprise edition, the terms and privacy policy of Gemini Code Assist Standard or Enterprise edition will apply to all your use of Gemini Code Assist.
2. If you have signed in with a Gemini API key to the Gemini Developer API
If you are using a Gemini API key for authentication with the Gemini Developer API, these Terms of Service and Privacy Notice documents apply:
- Terms of Service: Your use of Gemini CLI is governed by the Gemini API Terms of Service. These terms may differ depending on whether you are using an unpaid or paid service:
  - For unpaid services, refer to the Gemini API Terms of Service - Unpaid Services.
  - For paid services, refer to the Gemini API Terms of Service - Paid Services.
- Privacy Notice: The collection and use of your data is described in the Google Privacy Policy.
3. If you have signed in with a Gemini API key to the Vertex AI GenAI API
If you are using a Gemini API key for authentication with a Vertex AI GenAI API backend, these Terms of Service and Privacy Notice documents apply:
- Terms of Service: Your use of Gemini CLI is governed by the Google Cloud Platform Service Terms.
- Privacy Notice: The collection and use of your data is described in the Google Cloud Privacy Notice.
Usage statistics opt-out
You may opt out of sending Gemini CLI Usage Statistics to Google by following the instructions in Usage Statistics Configuration.
This guide provides solutions to common issues and debugging tips, including topics on:
- Authentication or login errors
- Frequently asked questions (FAQs)
- Debugging tips
- Existing GitHub Issues similar to yours or creating new Issues
Authentication or login errors
- Error: You must be a named user on your organization's Gemini Code Assist Standard edition subscription to use this service. Please contact your administrator to request an entitlement to Gemini Code Assist Standard edition.
  - Cause: This error might occur if Gemini CLI detects the GOOGLE_CLOUD_PROJECT or GOOGLE_CLOUD_PROJECT_ID environment variable is defined. Setting these variables forces an organization subscription check, which can be a problem if you are using an individual Google account not linked to an organizational subscription.
  - Solution:
    - Individual users: Unset the GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_PROJECT_ID environment variables. Check and remove these variables from your shell configuration files (for example, .bashrc, .zshrc) and any .env files. If this doesn’t resolve the issue, try using a different Google account.
    - Organizational users: Contact your Google Cloud administrator to be added to your organization’s Gemini Code Assist subscription.
- Error: Failed to sign in. Message: Your current account is not eligible... because it is not currently available in your location.
  - Cause: Gemini CLI does not currently support your location. For a full list of supported locations, see the following pages:
    - Gemini Code Assist for individuals: Available locations
- Error: Failed to sign in. Message: Request contains an invalid argument
  - Cause: Users with Google Workspace accounts or Google Cloud accounts associated with their Gmail accounts may not be able to activate the free tier of the Google Code Assist plan.
  - Solution: For Google Cloud accounts, you can work around this by setting GOOGLE_CLOUD_PROJECT to your project ID. Alternatively, you can obtain a Gemini API key from Google AI Studio, which also includes a separate free tier.
- Error: UNABLE_TO_GET_ISSUER_CERT_LOCALLY or unable to get local issuer certificate
  - Cause: You may be on a corporate network with a firewall that intercepts and inspects SSL/TLS traffic. This often requires a custom root CA certificate to be trusted by Node.js.
  - Solution: First try setting NODE_USE_SYSTEM_CA; if that does not resolve the issue, set NODE_EXTRA_CA_CERTS.
    - Set the NODE_USE_SYSTEM_CA=1 environment variable to tell Node.js to use the operating system’s native certificate store (where corporate certificates are typically already installed).
      - Example: export NODE_USE_SYSTEM_CA=1 (Windows PowerShell: $env:NODE_USE_SYSTEM_CA=1)
    - Set the NODE_EXTRA_CA_CERTS environment variable to the absolute path of your corporate root CA certificate file.
      - Example: export NODE_EXTRA_CA_CERTS=/path/to/your/corporate-ca.crt (Windows PowerShell: $env:NODE_EXTRA_CA_CERTS="C:\path\to\your\corporate-ca.crt")
Common error messages and solutions
- Error: EADDRINUSE (Address already in use) when starting an MCP server.
  - Cause: Another process is already using the port that the MCP server is trying to bind to.
  - Solution: Either stop the other process that is using the port or configure the MCP server to use a different port.
- Error: Command not found (when attempting to run Gemini CLI with gemini).
  - Cause: Gemini CLI is not correctly installed or it is not in your system’s PATH.
  - Solution: The fix depends on how you installed Gemini CLI:
    - If you installed gemini globally, check that your npm global binary directory is in your PATH. You can update Gemini CLI using the command npm install -g @google/gemini-cli.
    - If you are running gemini from source, ensure you are using the correct command to invoke it (for example, node packages/cli/dist/index.js ...). To update Gemini CLI, pull the latest changes from the repository, and then rebuild using the command npm run build.
- Error: MODULE_NOT_FOUND or import errors.
  - Cause: Dependencies are not installed correctly, or the project hasn’t been built.
  - Solution:
    - Run npm install to ensure all dependencies are present.
    - Run npm run build to compile the project.
    - Verify that the build completed successfully with npm run start.
- Error: “Operation not permitted”, “Permission denied”, or similar.
  - Cause: When sandboxing is enabled, Gemini CLI may attempt operations that are restricted by your sandbox configuration, such as writing outside the project directory or system temp directory.
  - Solution: Refer to the Configuration: Sandboxing documentation for more information, including how to customize your sandbox configuration.
- Gemini CLI is not running in interactive mode in “CI” environments
  - Issue: Gemini CLI does not enter interactive mode (no prompt appears) if an environment variable starting with CI_ (for example, CI_TOKEN) is set. This is because the is-in-ci package, used by the underlying UI framework, detects these variables and assumes a non-interactive CI environment.
  - Cause: The is-in-ci package checks for the presence of CI, CONTINUOUS_INTEGRATION, or any environment variable with a CI_ prefix. When any of these are found, it signals that the environment is non-interactive, which prevents Gemini CLI from starting in its interactive mode.
  - Solution: If the CI_-prefixed variable is not needed for the CLI to function, you can temporarily unset it for the command. For example: env -u CI_TOKEN gemini
- DEBUG mode not working from project .env file
  - Issue: Setting DEBUG=true in a project’s .env file doesn’t enable debug mode for Gemini CLI.
  - Cause: The DEBUG and DEBUG_MODE variables are automatically excluded from project .env files to prevent interference with Gemini CLI behavior.
  - Solution: Use a .gemini/.env file instead, or configure the advanced.excludedEnvVars setting in your settings.json to exclude fewer variables.
- Warning: npm WARN deprecated node-domexception@1.0.0 or npm WARN deprecated glob during install/update
  - Issue: When installing or updating Gemini CLI globally via npm install -g @google/gemini-cli or npm update -g @google/gemini-cli, you might see deprecation warnings regarding node-domexception or old versions of glob.
  - Cause: These warnings occur because some dependencies (or their sub-dependencies, like google-auth-library) rely on older package versions. Since Gemini CLI requires Node.js 20 or higher, the platform’s native features (such as the native DOMException) are used, making these warnings purely informational.
  - Solution: These warnings are harmless and can be safely ignored. Your installation or update will complete successfully and function properly without any action required.
Exit codes
Gemini CLI uses specific exit codes to indicate the reason for termination. This is especially useful for scripting and automation.
| Exit Code | Error Type | Description |
|---|---|---|
| 41 | FatalAuthenticationError | An error occurred during the authentication process. |
| 42 | FatalInputError | Invalid or missing input was provided to the CLI. (non-interactive mode only) |
| 44 | FatalSandboxError | An error occurred with the sandboxing environment (for example, Docker, Podman, or Seatbelt). |
| 52 | FatalConfigError | A configuration file (settings.json) is invalid or contains errors. |
| 53 | FatalTurnLimitedError | The maximum number of conversational turns for the session was reached. (non-interactive mode only) |
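A script can branch on these codes. Below is a sketch; `run_gemini` is a stand-in function so the example is self-contained — a real script would instead call something like `gemini -p "your prompt"` in headless mode:

```shell
# Sketch: dispatching on Gemini CLI exit codes (from the table above).
run_gemini() { return 42; }  # simulate FatalInputError for illustration

run_gemini
code=$?
case "$code" in
  0)  msg="success" ;;
  41) msg="authentication error" ;;
  42) msg="invalid or missing input" ;;
  44) msg="sandbox error" ;;
  52) msg="invalid settings.json" ;;
  53) msg="turn limit reached" ;;
  *)  msg="unknown failure ($code)" ;;
esac
echo "$msg"
```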
Debugging tips
- CLI debugging:
  - Use the --debug flag for more detailed output. In interactive mode, press F12 to view the debug console.
  - Check the CLI logs, often found in a user-specific configuration or cache directory.
- Core debugging:
  - Check the server console output for error messages or stack traces.
  - Increase log verbosity if configurable. For example, set the DEBUG_MODE environment variable to true or 1.
  - Use Node.js debugging tools (for example, node --inspect) if you need to step through server-side code.
- Tool issues:
  - If a specific tool is failing, try to isolate the issue by running the simplest possible version of the command or operation the tool performs.
  - For run_shell_command, check that the command works directly in your shell first.
  - For file system tools, verify that paths are correct and check the permissions.
- Pre-flight checks:
  - Always run npm run preflight before committing code. This can catch many common issues related to formatting, linting, and type errors.
Existing GitHub issues similar to yours or creating new issues
If you encounter an issue that was not covered in this troubleshooting guide, consider searching the Gemini CLI issue tracker on GitHub. If you can’t find an issue similar to yours, consider creating a new GitHub issue with a detailed description. Pull requests are also welcome!
Your uninstall method depends on how you ran the CLI. Follow the instructions for either npx or a global npm installation.
Method 1: Using npx
npx runs packages from a temporary cache without a permanent installation. To “uninstall” the CLI, you must clear this cache, which will remove gemini-cli and any other packages previously executed with npx.
The npx cache is a directory named _npx inside your main npm cache folder. You can find your npm cache path by running npm config get cache.
For macOS / Linux
# The path is typically ~/.npm/_npx
rm -rf "$(npm config get cache)/_npx"
For Windows (PowerShell)
# The path is typically $env:LocalAppData\npm-cache\_npx
Remove-Item -Path (Join-Path $env:LocalAppData "npm-cache\_npx") -Recurse -Force
Method 2: Using npm (global install)
If you installed the CLI globally (for example, npm install -g @google/gemini-cli), use the npm uninstall command with the -g flag to remove it.
npm uninstall -g @google/gemini-cli
This command completely removes the package from your system.
Method 3: Homebrew
If you installed the CLI globally using Homebrew (for example, brew install gemini-cli), use the brew uninstall command to remove it.
brew uninstall gemini-cli
Method 4: MacPorts
If you installed the CLI globally using MacPorts (for example, sudo port install gemini-cli), use the port uninstall command to remove it.
sudo port uninstall gemini-cli
This document provides a guide to configuring and using Model Context Protocol (MCP) servers with Gemini CLI.
What is an MCP server?
An MCP server is an application that exposes tools and resources to the Gemini CLI through the Model Context Protocol, allowing it to interact with external systems and data sources. MCP servers act as a bridge between the Gemini model and your local environment or other services like APIs.
An MCP server enables Gemini CLI to:
- Discover tools: List available tools, their descriptions, and parameters through standardized schema definitions.
- Execute tools: Call specific tools with defined arguments and receive structured responses.
- Access resources: Read data from specific resources that the server exposes (files, API payloads, reports, etc.).
With an MCP server, you can extend Gemini CLI’s capabilities to perform actions beyond its built-in features, such as interacting with databases, APIs, custom scripts, or specialized workflows.
Core integration architecture
Gemini CLI integrates with MCP servers through a sophisticated discovery and execution system built into the core package (packages/core/src/tools/):
Discovery Layer (mcp-client.ts)
The discovery process is orchestrated by discoverMcpTools(), which:
- Iterates through configured servers from your settings.json mcpServers configuration
- Establishes connections using appropriate transport mechanisms (Stdio, SSE, or Streamable HTTP)
- Fetches tool definitions from each server using the MCP protocol
- Sanitizes and validates tool schemas for compatibility with the Gemini API
- Registers tools in the global tool registry with conflict resolution
- Fetches and registers resources if the server exposes any
Execution layer (mcp-tool.ts)
Each discovered MCP tool is wrapped in a DiscoveredMCPTool instance that:
- Handles confirmation logic based on server trust settings and user preferences
- Manages tool execution by calling the MCP server with proper parameters
- Processes responses for both the LLM context and user display
- Maintains connection state and handles timeouts
Transport mechanisms
Gemini CLI supports three MCP transport types:
- Stdio Transport: Spawns a subprocess and communicates via stdin/stdout
- SSE Transport: Connects to Server-Sent Events endpoints
- Streamable HTTP Transport: Uses HTTP streaming for communication
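As a sketch, the three transports correspond to the three mutually exclusive connection keys covered later in this guide (server names, paths, and URLs here are illustrative):

```json
{
  "mcpServers": {
    "stdio-server": { "command": "node", "args": ["./mcp-server.js"] },
    "sse-server": { "url": "http://localhost:8080/sse" },
    "http-server": { "httpUrl": "http://localhost:8080/mcp" }
  }
}
```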
Working with MCP resources
Some MCP servers expose contextual “resources” in addition to tools and prompts. Gemini CLI discovers these automatically and lets you reference them in the chat.
Discovery and listing
- When discovery runs, the CLI fetches each server’s resources/list results.
- The /mcp command displays a Resources section alongside Tools and Prompts for every connected server.
This returns a concise, plain-text list of URIs plus metadata.
Referencing resources in a conversation
You can use the same @ syntax already known for referencing local files:
@server://resource/path
Resource URIs appear in the completion menu together with filesystem paths. When you submit the message, the CLI calls resources/read and injects the content into the conversation.
How to set up your MCP server
Gemini CLI uses the mcpServers configuration in your settings.json file to locate and connect to MCP servers. This configuration supports multiple servers with different transport mechanisms.
Configure the MCP server in settings.json
You can configure MCP servers in your settings.json file in two main ways: through the top-level mcpServers object for specific server definitions, and through the mcp object for global settings that control server discovery and execution.
Global MCP settings (mcp)
The mcp object in your settings.json lets you define global rules for all MCP servers.
- mcp.serverCommand (string): A global command to start an MCP server.
- mcp.allowed (array of strings): A list of MCP server names to allow. If this is set, only servers from this list (matching the keys in the mcpServers object) will be connected to.
- mcp.excluded (array of strings): A list of MCP server names to exclude. Servers in this list will not be connected to.
Example:
{
  "mcp": {
    "allowed": ["my-trusted-server"],
    "excluded": ["experimental-server"]
  }
}
Server-specific configuration (mcpServers)
The mcpServers object is where you define each individual MCP server you want the CLI to connect to.
Configuration structure
Add an mcpServers object to your settings.json file:
{
  ...file contains other config objects
  "mcpServers": {
    "serverName": {
      "command": "path/to/server",
      "args": ["--arg1", "value1"],
      "env": { "API_KEY": "$MY_API_TOKEN" },
      "cwd": "./server-directory",
      "timeout": 30000,
      "trust": false
    }
  }
}
Configuration properties
Each server configuration supports the following properties:
Required (one of the following)
- command (string): Path to the executable for stdio transport
- url (string): SSE endpoint URL (for example, "http://localhost:8080/sse")
- httpUrl (string): HTTP streaming endpoint URL
Optional
- args (string[]): Command-line arguments for stdio transport
- headers (object): Custom HTTP headers when using url or httpUrl
- env (object): Environment variables for the server process. Values can reference environment variables using $VAR_NAME or ${VAR_NAME} syntax (all platforms), or %VAR_NAME% (Windows only).
- cwd (string): Working directory for stdio transport
- timeout (number): Request timeout in milliseconds (default: 600,000 ms = 10 minutes)
- trust (boolean): When true, bypasses all tool call confirmations for this server (default: false)
- includeTools (string[]): List of tool names to include from this MCP server. When specified, only the tools listed here will be available from this server (allowlist behavior). If not specified, all tools from the server are enabled by default.
- excludeTools (string[]): List of tool names to exclude from this MCP server. Tools listed here will not be available to the model, even if they are exposed by the server. excludeTools takes precedence over includeTools: if a tool is in both lists, it will be excluded.
- targetAudience (string): The OAuth Client ID allowlisted on the IAP-protected application you are trying to access. Used with authProviderType: 'service_account_impersonation'.
- targetServiceAccount (string): The email address of the Google Cloud Service Account to impersonate. Used with authProviderType: 'service_account_impersonation'.
Environment variable expansion
Gemini CLI automatically expands environment variables in the env block of your MCP server configuration. This lets you securely reference variables defined in your shell or environment without hardcoding sensitive information directly in your settings.json file.
The expansion utility supports:
- POSIX/Bash syntax: $VARIABLE_NAME or ${VARIABLE_NAME} (supported on all platforms)
- Windows syntax: %VARIABLE_NAME% (supported only when running on Windows)
If a variable is not defined in the current environment, it resolves to an empty string.
Example:
"env": {
  "API_KEY": "$MY_EXTERNAL_TOKEN",
  "LOG_LEVEL": "$LOG_LEVEL",
  "TEMP_DIR": "%TEMP%"
}
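The expansion rules above can be sketched as a small helper. This is an illustrative reimplementation, not the CLI's actual code; the function name `expandEnv` is an assumption for the example.

```typescript
// Illustrative sketch of the expansion rules described above; the CLI's
// actual implementation may differ. Undefined variables resolve to "".
function expandEnv(value: string, env: Record<string, string>): string {
  return value
    // POSIX braced form: ${VAR}
    .replace(/\$\{([A-Za-z_][A-Za-z0-9_]*)\}/g, (_, name) => env[name] ?? '')
    // POSIX bare form: $VAR
    .replace(/\$([A-Za-z_][A-Za-z0-9_]*)/g, (_, name) => env[name] ?? '')
    // Windows form: %VAR%
    .replace(/%([A-Za-z_][A-Za-z0-9_]*)%/g, (_, name) => env[name] ?? '');
}
```

For example, with `{ A: 'x', B: 'y' }` in the environment, `'$A/${B}/%C%'` expands to `'x/y/'` — the undefined `%C%` becomes an empty string.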
Security and environment sanitization
To protect your credentials, Gemini CLI performs environment sanitization when spawning MCP server processes.
Automatic redaction
By default, the CLI redacts sensitive environment variables from the base environment (inherited from the host process) to prevent unintended exposure to third-party MCP servers. This includes:
- Core project keys: GEMINI_API_KEY, GOOGLE_API_KEY, etc.
- Variables matching sensitive patterns: *TOKEN*, *SECRET*, *PASSWORD*, *KEY*, *AUTH*, *CREDENTIAL*.
- Certificates and private key patterns.
Explicit overrides
If an environment variable must be passed to an MCP server, you must explicitly state it in the env property of the server configuration in settings.json. Explicitly defined variables (including those from extensions) are trusted and are not subject to automatic redaction.
This follows the security principle that if a variable is explicitly configured by the user for a specific server, it constitutes informed consent to share that specific data with that server.
OAuth support for remote MCP servers
Gemini CLI supports OAuth 2.0 authentication for remote MCP servers using SSE or HTTP transports. This enables secure access to MCP servers that require authentication.
Automatic OAuth discovery
For servers that support OAuth discovery, you can omit the OAuth configuration and let the CLI discover it automatically:
{
  "mcpServers": {
    "discoveredServer": {
      "url": "https://api.example.com/sse"
    }
  }
}
The CLI will automatically:
- Detect when a server requires OAuth authentication (401 responses)
- Discover OAuth endpoints from server metadata
- Perform dynamic client registration if supported
- Handle the OAuth flow and token management
Authentication flow
When connecting to an OAuth-enabled server:
- Initial connection attempt fails with 401 Unauthorized
- OAuth discovery finds authorization and token endpoints
- Browser opens for user authentication (requires local browser access)
- Authorization code is exchanged for access tokens
- Tokens are stored securely for future use
- Connection retry succeeds with valid tokens
Browser redirect requirements
This feature will not work in:
- Headless environments without browser access
- Remote SSH sessions without X11 forwarding
- Containerized environments without browser support
Managing OAuth authentication
Use the /mcp auth command to manage OAuth authentication:
# List servers requiring authentication
/mcp auth

# Authenticate with a specific server
/mcp auth serverName

# Re-authenticate if tokens expire
/mcp auth serverName
OAuth configuration properties
- enabled (boolean): Enable OAuth for this server
- clientId (string): OAuth client identifier (optional with dynamic registration)
- clientSecret (string): OAuth client secret (optional for public clients)
- authorizationUrl (string): OAuth authorization endpoint (auto-discovered if omitted)
- tokenUrl (string): OAuth token endpoint (auto-discovered if omitted)
- scopes (string[]): Required OAuth scopes
- redirectUri (string): Custom redirect URI (defaults to an OS-assigned random port, e.g., http://localhost:<random-port>/oauth/callback)
- tokenParamName (string): Query parameter name for tokens in SSE URLs
- audiences (string[]): Audiences the token is valid for
Token management
OAuth tokens are automatically:
- Stored securely in ~/.gemini/mcp-oauth-tokens.json
- Refreshed when expired (if refresh tokens are available)
- Validated before each connection attempt
- Cleaned up when invalid or expired
Authentication provider type
Section titled “Authentication provider type”You can specify the authentication provider type using the authProviderType
property:
- authProviderType (string): Specifies the authentication provider. Can be one of the following:
  - dynamic_discovery (default): The CLI automatically discovers the OAuth configuration from the server.
  - google_credentials: The CLI uses Google Application Default Credentials (ADC) to authenticate with the server. When using this provider, you must specify the required scopes.
  - service_account_impersonation: The CLI impersonates a Google Cloud Service Account to authenticate with the server. This is useful for accessing IAP-protected services (it was designed specifically for Cloud Run services).
Google credentials
Section titled “Google credentials”{ "mcpServers": { "googleCloudServer": { "httpUrl": "https://my-gcp-service.run.app/mcp", "authProviderType": "google_credentials", "oauth": { "scopes": ["https://www.googleapis.com/auth/userinfo.email"] } } }}
Service account impersonation
To authenticate with a server using Service Account Impersonation, you must set the authProviderType to service_account_impersonation and provide the following properties:
- targetAudience (string): The OAuth Client ID allowlisted on the IAP-protected application you are trying to access.
- targetServiceAccount (string): The email address of the Google Cloud Service Account to impersonate.
The CLI will use your local Application Default Credentials (ADC) to generate an OIDC ID token for the specified service account and audience. This token will then be used to authenticate with the MCP server.
Setup instructions
- Create or use an existing OAuth 2.0 client ID. To use an existing OAuth 2.0 client ID, follow the steps in How to share OAuth Clients.
- Add the OAuth ID to the allowlist for programmatic access for the application. Since Cloud Run is not yet a supported resource type in gcloud iap, you must allowlist the Client ID on the project.
- Create a service account. Documentation, Cloud Console Link
- Add both the service account and users to the IAP Policy in the “Security” tab of the Cloud Run service itself or via gcloud.
- Grant all users and groups who will access the MCP server the necessary permissions to impersonate the service account (for example, roles/iam.serviceAccountTokenCreator).
- Enable the IAM Credentials API for your project.
Example configurations
Python MCP server (stdio)
{
  "mcpServers": {
    "pythonTools": {
      "command": "python",
      "args": ["-m", "my_mcp_server", "--port", "8080"],
      "cwd": "./mcp-servers/python",
      "env": {
        "DATABASE_URL": "$DB_CONNECTION_STRING",
        "API_KEY": "${EXTERNAL_API_KEY}"
      },
      "timeout": 15000
    }
  }
}
Node.js MCP server (stdio)
{
  "mcpServers": {
    "nodeServer": {
      "command": "node",
      "args": ["dist/server.js", "--verbose"],
      "cwd": "./mcp-servers/node",
      "trust": true
    }
  }
}
Docker-based MCP server
{
  "mcpServers": {
    "dockerizedServer": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "API_KEY",
        "-v", "${PWD}:/workspace",
        "my-mcp-server:latest"
      ],
      "env": {
        "API_KEY": "$EXTERNAL_SERVICE_TOKEN"
      }
    }
  }
}
HTTP-based MCP server
{
  "mcpServers": {
    "httpServer": {
      "httpUrl": "http://localhost:3000/mcp",
      "timeout": 5000
    }
  }
}
HTTP-based MCP Server with custom headers
{
  "mcpServers": {
    "httpServerWithAuth": {
      "httpUrl": "http://localhost:3000/mcp",
      "headers": {
        "Authorization": "Bearer your-api-token",
        "X-Custom-Header": "custom-value",
        "Content-Type": "application/json"
      },
      "timeout": 5000
    }
  }
}
MCP server with tool filtering
{
  "mcpServers": {
    "filteredServer": {
      "command": "python",
      "args": ["-m", "my_mcp_server"],
      "includeTools": ["safe_tool", "file_reader", "data_processor"],
      // "excludeTools": ["dangerous_tool", "file_deleter"],
      "timeout": 30000
    }
  }
}
SSE MCP server with SA impersonation
{
  "mcpServers": {
    "myIapProtectedServer": {
      "url": "https://my-iap-service.run.app/sse",
      "authProviderType": "service_account_impersonation",
      "targetAudience": "YOUR_IAP_CLIENT_ID.apps.googleusercontent.com",
      "targetServiceAccount": "your-sa@your-project.iam.gserviceaccount.com"
    }
  }
}
Discovery process deep dive
When Gemini CLI starts, it performs MCP server discovery through the following detailed process:
1. Server iteration and connection
For each configured server in mcpServers:
- Status tracking begins: Server status is set to CONNECTING
- Transport selection: Based on configuration properties:
  - httpUrl → StreamableHTTPClientTransport
  - url → SSEClientTransport
  - command → StdioClientTransport
- Connection establishment: The MCP client attempts to connect with the configured timeout
- Error handling: Connection failures are logged and the server status is set to DISCONNECTED
2. Tool discovery
Upon successful connection:
- Tool listing: The client calls the MCP server’s tool listing endpoint
- Schema validation: Each tool’s function declaration is validated
- Tool filtering: Tools are filtered based on includeTools and excludeTools configuration
- Name sanitization: Tool names are cleaned to meet Gemini API requirements:
  - Characters other than letters, numbers, underscore (_), hyphen (-), dot (.), and colon (:) are replaced with underscores
  - Names longer than 63 characters are truncated, with the middle replaced by ...
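The sanitization rules can be illustrated with a short sketch. This is not the CLI's actual implementation — the function name and the exact head/tail split on truncation are assumptions for the example; only the character set and the 63-character limit come from the rules above.

```typescript
// Illustrative sketch of the tool-name sanitization rules described above.
function sanitizeToolName(name: string): string {
  // Replace any character outside letters, digits, _, -, ., : with an underscore.
  let sanitized = name.replace(/[^A-Za-z0-9_.:-]/g, '_');
  // Truncate names longer than 63 characters, replacing the middle with "...".
  if (sanitized.length > 63) {
    const keep = 63 - 3; // characters kept around the "..." marker
    const head = Math.ceil(keep / 2);
    const tail = Math.floor(keep / 2);
    sanitized =
      sanitized.slice(0, head) + '...' + sanitized.slice(sanitized.length - tail);
  }
  return sanitized;
}
```

For example, `sanitizeToolName('my tool!')` yields `'my_tool_'`, while a 100-character name comes back exactly 63 characters long.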
3. Tool naming and namespaces
To prevent collisions across multiple servers or with built-in tools, every discovered MCP tool is assigned a strict namespace.
- Automatic FQN: All MCP tools are unconditionally assigned a fully qualified name (FQN) using the format mcp_{serverName}_{toolName}.
- Registry tracking: The tool registry maintains metadata mappings between these FQNs and their original server identities.
- Overwrites: If two servers share the exact same alias in your configuration and provide tools with the exact same name, the last registered tool overwrites the previous one.
- Policies: To configure permissions (like auto-approval or denial) for MCP tools, see Special syntax for MCP tools in the Policy Engine documentation.
4. Schema processing
Tool parameter schemas undergo sanitization for Gemini API compatibility:
- $schema properties are removed
- additionalProperties are stripped
- anyOf with default have their default values removed (Vertex AI compatibility)
- Recursive processing applies to nested schemas
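A minimal sketch of these four steps, assuming a plain-object JSON Schema representation (this is an illustration, not the CLI's actual sanitizer):

```typescript
// Illustrative schema sanitizer implementing the steps listed above.
function sanitizeSchema(schema: unknown): unknown {
  if (Array.isArray(schema)) return schema.map(sanitizeSchema); // recurse into arrays
  if (schema === null || typeof schema !== 'object') return schema;
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(schema as Record<string, unknown>)) {
    // Strip $schema and additionalProperties.
    if (key === '$schema' || key === 'additionalProperties') continue;
    out[key] = sanitizeSchema(value); // recurse into nested schemas
  }
  // Vertex AI compatibility: drop `default` when `anyOf` is present.
  if ('anyOf' in out) delete out['default'];
  return out;
}
```

Running it on a schema containing `$schema`, `additionalProperties`, and an `anyOf` with a `default` removes all three offending properties while leaving the rest intact.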
5. Connection management
After discovery:
- Persistent connections: Servers that successfully register tools maintain their connections
- Cleanup: Servers that provide no usable tools have their connections closed
- Status updates: Final server statuses are set to CONNECTED or DISCONNECTED
Tool execution flow
When the Gemini model decides to use an MCP tool, the following execution flow occurs:
1. Tool invocation
The model generates a FunctionCall with:
- Tool name: The registered name (potentially prefixed)
- Arguments: JSON object matching the tool’s parameter schema
2. Confirmation process
Each DiscoveredMCPTool implements sophisticated confirmation logic:
Trust-based bypass
if (this.trust) {
  return false; // No confirmation needed
}
Dynamic allow-listing
The system maintains internal allow-lists for:
- Server-level: serverName → All tools from this server are trusted
- Tool-level: serverName.toolName → This specific tool is trusted
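The decision implied by the trust bypass and the two allow-lists can be sketched as follows. The helper name and signature are hypothetical; the actual logic lives inside DiscoveredMCPTool.

```typescript
// Illustrative confirmation decision combining trust and allow-lists.
function needsConfirmation(
  trust: boolean,
  serverAllowList: Set<string>, // serverName entries
  toolAllowList: Set<string>,   // "serverName.toolName" entries
  serverName: string,
  toolName: string,
): boolean {
  if (trust) return false; // trusted servers bypass confirmation entirely
  if (serverAllowList.has(serverName)) return false; // server-level trust
  if (toolAllowList.has(`${serverName}.${toolName}`)) return false; // tool-level trust
  return true; // otherwise, prompt the user
}
```

A tool is only confirmed interactively when none of the three checks grant it a pass.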
User choice handling
When confirmation is required, users can choose:
- Proceed once: Execute this time only
- Always allow this tool: Add to tool-level allow-list
- Always allow this server: Add to server-level allow-list
- Cancel: Abort execution
3. Execution
Upon confirmation (or trust bypass):
- Parameter preparation: Arguments are validated against the tool’s schema
- MCP call: The underlying CallableTool invokes the server with:
  const functionCalls = [
    {
      name: this.serverToolName, // Original server tool name
      args: params,
    },
  ];
- Response processing: Results are formatted for both LLM context and user display
4. Response handling
The execution result contains:
- llmContent: Raw response parts for the language model’s context
- returnDisplay: Formatted output for user display (often JSON in markdown code blocks)
How to interact with your MCP server
Using the /mcp command
The /mcp command provides comprehensive information about your MCP server setup:
/mcp
This displays:
- Server list: All configured MCP servers
- Connection status: CONNECTED, CONNECTING, or DISCONNECTED
- Server details: Configuration summary (excluding sensitive data)
- Available tools: List of tools from each server with descriptions
- Discovery state: Overall discovery process status
Example /mcp output
MCP Servers Status:
📡 pythonTools (CONNECTED)
  Command: python -m my_mcp_server --port 8080
  Working Directory: ./mcp-servers/python
  Timeout: 15000ms
  Tools: calculate_sum, file_analyzer, data_processor

🔌 nodeServer (DISCONNECTED)
  Command: node dist/server.js --verbose
  Error: Connection refused

🐳 dockerizedServer (CONNECTED)
  Command: docker run -i --rm -e API_KEY my-mcp-server:latest
  Tools: mcp_dockerizedServer_docker_deploy, mcp_dockerizedServer_docker_status
Discovery State: COMPLETED
Tool usage
Once discovered, MCP tools are available to the Gemini model like built-in tools. The model will automatically:
- Select appropriate tools based on your requests
- Present confirmation dialogs (unless the server is trusted)
- Execute tools with proper parameters
- Display results in a user-friendly format
Status monitoring and troubleshooting
Connection states
The MCP integration tracks several states:
Overriding extension configurations
If an MCP server is provided by an extension (for example, the google-workspace extension), you can still override its settings in your local settings.json. Gemini CLI merges your local configuration with the extension’s defaults:
- Tool lists: Tool lists are merged securely to ensure the most restrictive policy wins:
  - Exclusions (excludeTools): Arrays are combined (unioned). If either source blocks a tool, it remains disabled.
  - Inclusions (includeTools): Arrays are intersected. If both sources provide an allowlist, only tools present in both lists are enabled. If only one source provides an allowlist, that list is respected.
  - Precedence: excludeTools always takes precedence over includeTools.
  This ensures you always have veto power over tools provided by an extension and that an extension cannot re-enable tools you have omitted from your personal allowlist.
- Environment variables: The env objects are merged. If the same variable is defined in both places, your local value takes precedence.
- Scalar properties: Properties like command, url, and timeout are replaced by your local values if provided.
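The tool-list merge semantics above can be sketched as a small function. The helper and its types are illustrative assumptions, not the CLI's actual code.

```typescript
// Illustrative merge of extension-provided and local tool lists:
// exclusions are unioned, inclusions are intersected.
interface ToolLists {
  includeTools?: string[];
  excludeTools?: string[];
}

function mergeToolLists(extension: ToolLists, local: ToolLists): ToolLists {
  // Exclusions are unioned: either source can block a tool.
  const excludeTools = [
    ...new Set([...(extension.excludeTools ?? []), ...(local.excludeTools ?? [])]),
  ];
  // Inclusions are intersected when both sides provide an allowlist;
  // otherwise, whichever allowlist exists is respected.
  let includeTools: string[] | undefined;
  if (extension.includeTools && local.includeTools) {
    const localSet = new Set(local.includeTools);
    includeTools = extension.includeTools.filter((t) => localSet.has(t));
  } else {
    includeTools = extension.includeTools ?? local.includeTools;
  }
  return { includeTools, excludeTools };
}
```

For example, merging `{ includeTools: ['a', 'b'], excludeTools: ['x'] }` with `{ includeTools: ['b', 'c'], excludeTools: ['y'] }` yields an allowlist of `['b']` and a blocklist covering both `'x'` and `'y'`.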
Example override:
{
  "mcpServers": {
    "google-workspace": {
      "excludeTools": ["gmail.send"]
    }
  }
}
Server status (MCPServerStatus)
- DISCONNECTED: Server is not connected or has errors
- CONNECTING: Connection attempt in progress
- CONNECTED: Server is connected and ready
Discovery state (MCPDiscoveryState)
- NOT_STARTED: Discovery hasn’t begun
- IN_PROGRESS: Currently discovering servers
- COMPLETED: Discovery finished (with or without errors)
Common issues and solutions
Server won’t connect
Symptoms: Server shows DISCONNECTED status
Troubleshooting:
- Check configuration: Verify command, args, and cwd are correct
- Test manually: Run the server command directly to ensure it works
- Check dependencies: Ensure all required packages are installed
- Review logs: Look for error messages in the CLI output
- Verify permissions: Ensure the CLI can execute the server command
No tools discovered
Symptoms: Server connects but no tools are available
Troubleshooting:
- Verify tool registration: Ensure your server actually registers tools
- Check MCP protocol: Confirm your server implements the MCP tool listing correctly
- Review server logs: Check stderr output for server-side errors
- Test tool listing: Manually test your server’s tool discovery endpoint
Tools not executing
Symptoms: Tools are discovered but fail during execution
Troubleshooting:
- Parameter validation: Ensure your tool accepts the expected parameters
- Schema compatibility: Verify your input schemas are valid JSON Schema
- Error handling: Check if your tool is throwing unhandled exceptions
- Timeout issues: Consider increasing the timeout setting
Sandbox compatibility
Symptoms: MCP servers fail when sandboxing is enabled
Solutions:
- Docker-based servers: Use Docker containers that include all dependencies
- Path accessibility: Ensure server executables are available in the sandbox
- Network access: Configure sandbox to allow necessary network connections
- Environment variables: Verify required environment variables are passed through
Debugging tips
Section titled “Debugging tips”- Enable debug mode: Run the CLI with
--debugfor verbose output (use F12 to open debug console in interactive mode) - Check stderr: MCP server stderr is captured and logged (INFO messages filtered)
- Test isolation: Test your MCP server independently before integrating
- Incremental setup: Start with simple tools before adding complex functionality
- Use /mcp frequently: Monitor server status during development
Important notes
Security considerations
- Trust settings: The trust option bypasses all confirmation dialogs. Use it cautiously and only for servers you completely control
- Access tokens: Be security-aware when configuring environment variables containing API keys or tokens. See Security and environment sanitization for details on how Gemini CLI protects your credentials.
- Sandbox compatibility: When using sandboxing, ensure MCP servers are available within the sandbox environment
- Private data: Using broadly scoped personal access tokens can lead to information leakage between repositories.
Performance and resource management
- Connection persistence: The CLI maintains persistent connections to servers that successfully register tools
- Automatic cleanup: Connections to servers providing no tools are automatically closed
- Timeout management: Configure appropriate timeouts based on your server’s response characteristics
- Resource monitoring: MCP servers run as separate processes and consume system resources
Schema compatibility
- Property stripping: The system automatically removes certain schema properties ($schema, additionalProperties) for Gemini API compatibility
- Name sanitization: Tool names are automatically sanitized to meet API requirements
- Conflict resolution: Tool name conflicts between servers are resolved through automatic prefixing
This comprehensive integration makes MCP servers a powerful way to extend the Gemini CLI’s capabilities while maintaining security, reliability, and ease of use.
Returning rich content from tools
MCP tools are not limited to returning simple text. You can return rich, multi-part content, including text, images, audio, and other binary data in a single tool response. This lets you build powerful tools that can provide diverse information to the model in a single turn.
All data returned from the tool is processed and sent to the model as context for its next generation, enabling it to reason about or summarize the provided information.
How it works
To return rich content, your tool’s response must adhere to the MCP specification for a CallToolResult. The content field of the result should be an array of ContentBlock objects. Gemini CLI will correctly process this array, separating text from binary data and packaging it for the model.
You can mix and match different content block types in the content array. The supported block types include:
- text
- image
- audio
- resource (embedded content)
- resource_link
Example: Returning text and an image
Here is an example of a valid JSON response from an MCP tool that returns both a text description and an image:
{
  "content": [
    { "type": "text", "text": "Here is the logo you requested." },
    {
      "type": "image",
      "data": "BASE64_ENCODED_IMAGE_DATA_HERE",
      "mimeType": "image/png"
    },
    { "type": "text", "text": "The logo was created in 2025." }
  ]
}
When Gemini CLI receives this response, it will:
- Extract all the text and combine it into a single functionResponse part for the model.
- Present the image data as a separate inlineData part.
- Provide a clean, user-friendly summary in the CLI, indicating that both text and an image were received.
This enables you to build sophisticated tools that can provide rich, multi-modal context to the Gemini model.
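The splitting behavior described above can be sketched as follows. The types and the helper name are illustrative assumptions — they are not the CLI's internal types — but the text/binary separation mirrors the processing just described.

```typescript
// Illustrative split of a rich tool response: text blocks are combined
// into one function-response string; binary blocks become inlineData parts.
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'image' | 'audio'; data: string; mimeType: string };

function splitContent(blocks: ContentBlock[]) {
  const textParts: string[] = [];
  const inlineData: { data: string; mimeType: string }[] = [];
  for (const block of blocks) {
    if (block.type === 'text') {
      textParts.push(block.text); // accumulate all text blocks
    } else {
      inlineData.push({ data: block.data, mimeType: block.mimeType }); // binary parts
    }
  }
  return { functionResponseText: textParts.join('\n'), inlineData };
}
```

Applied to the example response above, this yields one combined text string containing both sentences and a single inlineData entry for the PNG.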
MCP prompts as slash commands
In addition to tools, MCP servers can expose predefined prompts that can be executed as slash commands within Gemini CLI. This lets you create shortcuts for common or complex queries that can be easily invoked by name.
Defining prompts on the server
Here’s a small example of a stdio MCP server that defines prompts:
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({
  name: 'prompt-server',
  version: '1.0.0',
});

server.registerPrompt(
  'poem-writer',
  {
    title: 'Poem Writer',
    description: 'Write a nice haiku',
    argsSchema: { title: z.string(), mood: z.string().optional() },
  },
  ({ title, mood }) => ({
    messages: [
      {
        role: 'user',
        content: {
          type: 'text',
          text: `Write a haiku${mood ? ` with the mood ${mood}` : ''} called ${title}. Note that a haiku is 5 syllables followed by 7 syllables followed by 5 syllables.`,
        },
      },
    ],
  }),
);

const transport = new StdioServerTransport();
await server.connect(transport);
This can be included in settings.json under mcpServers with:
{
  "mcpServers": {
    "nodeServer": {
      "command": "node",
      "args": ["filename.ts"]
    }
  }
}
Invoking prompts
Once a prompt is discovered, you can invoke it using its name as a slash command. The CLI automatically handles argument parsing.
/poem-writer --title="Gemini CLI" --mood="reverent"
or, using positional arguments:
/poem-writer "Gemini CLI" reverent
When you run this command, Gemini CLI executes the prompts/get method on the
MCP server with the provided arguments. The server is responsible for
substituting the arguments into the prompt template and returning the final
prompt text. The CLI then sends this prompt to the model for execution. This
provides a convenient way to automate and share common workflows.
Managing MCP servers with gemini mcp
While you can always configure MCP servers by manually editing your settings.json file, Gemini CLI provides a convenient set of commands to manage your server configurations programmatically. These commands streamline adding, listing, and removing MCP servers without directly editing JSON files.
Adding a server (gemini mcp add)
The add command configures a new MCP server in your settings.json. Based on the scope (-s, --scope), it will be added to either the user config ~/.gemini/settings.json or the project config .gemini/settings.json file.
Command:
gemini mcp add [options] <name> <commandOrUrl> [args...]
- <name>: A unique name for the server.
- <commandOrUrl>: The command to execute (for stdio) or the URL (for http/sse).
- [args...]: Optional arguments for a stdio command.
Options (flags):
- -s, --scope: Configuration scope (user or project). [default: "project"]
- -t, --transport: Transport type (stdio, sse, http). [default: "stdio"]
- -e, --env: Set environment variables (for example, -e KEY=value).
- -H, --header: Set HTTP headers for SSE and HTTP transports (for example, -H "X-Api-Key: abc123" -H "Authorization: Bearer abc123").
- --timeout: Set connection timeout in milliseconds.
- --trust: Trust the server (bypass all tool call confirmation prompts).
- --description: Set the description for the server.
- --include-tools: A comma-separated list of tools to include.
- --exclude-tools: A comma-separated list of tools to exclude.
Adding an stdio server
This is the default transport for running local servers.
# Basic syntax
gemini mcp add [options] <name> <command> [args...]

# Example: Adding a local server
gemini mcp add -e API_KEY=123 -e DEBUG=true my-stdio-server /path/to/server arg1 arg2 arg3

# Example: Adding a local python server
gemini mcp add python-server python server.py -- --server-arg my-value
Adding an HTTP server
This is for servers that use the streamable HTTP transport.
# Basic syntax
gemini mcp add --transport http <name> <url>

# Example: Adding an HTTP server
gemini mcp add --transport http http-server https://api.example.com/mcp/

# Example: Adding an HTTP server with an authentication header
gemini mcp add --transport http --header "Authorization: Bearer abc123" secure-http https://api.example.com/mcp/
Adding an SSE server
This transport is for servers that use Server-Sent Events (SSE).
# Basic syntax
gemini mcp add --transport sse <name> <url>

# Example: Adding an SSE server
gemini mcp add --transport sse sse-server https://api.example.com/sse/

# Example: Adding an SSE server with an authentication header
gemini mcp add --transport sse --header "Authorization: Bearer abc123" secure-sse https://api.example.com/sse/
Listing servers (gemini mcp list)
To view all MCP servers currently configured, use the list command. It displays each server’s name, configuration details, and connection status. This command has no flags.
Command:
gemini mcp list
Example output:
✓ stdio-server: command: python3 server.py (stdio) - Connected
✓ http-server: https://api.example.com/mcp (http) - Connected
✗ sse-server: https://api.example.com/sse (sse) - Disconnected
Troubleshooting and Diagnostics
To minimize noise during startup, MCP connection errors for background servers are silent by default. If issues are detected during startup, a single informational hint is shown: “MCP issues detected. Run /mcp list for status.”
Detailed, actionable diagnostics for a specific server are automatically re-enabled when:
- You run an interactive command like /mcp list, /mcp auth, etc.
- The model attempts to execute a tool from that server.
- You invoke an MCP prompt from that server.
You can also use gemini mcp list from your shell to
see connection errors for
all configured servers.
Removing a server (gemini mcp remove)
To delete a server from your configuration, use the remove command with the server’s name.
Command:
gemini mcp remove <name>
Options (flags):
- -s, --scope: Configuration scope (user or project). [default: "project"]
Example:
gemini mcp remove my-server
This will find and delete the “my-server” entry from the mcpServers object in
the appropriate settings.json file based on the
scope (-s, --scope).
Enabling/disabling a server (gemini mcp enable, gemini mcp disable)
Temporarily disable an MCP server without removing its configuration, or re-enable a previously disabled server.
Commands:
gemini mcp enable <name> [--session]
gemini mcp disable <name> [--session]
Options (flags):
--session: Apply change only for this session (not persisted to file).
Disabled servers appear in /mcp status as “Disabled” but won’t connect or provide tools. Enablement state is stored in ~/.gemini/mcp-server-enablement.json.
The same commands are available as slash commands during an active session:
/mcp enable <name> and /mcp disable <name>.
Instructions
Gemini CLI supports MCP server instructions, which are appended to the system instructions.