# Google Gemini MCP Integration for LangGraph

Connect Google Gemini to LangGraph through the local KosmoKrator MCP gateway with scoped tools, credentials, and write policy.

## Connect Google Gemini to LangGraph

Run KosmoKrator integration calls from LangGraph nodes while preserving local credentials and permissions.
Use a graph node that calls the KosmoKrator CLI for deterministic steps, or an MCP client for dynamic tool selection. The gateway is local, scoped to this integration, and starts with `--write=deny`, so LangGraph can inspect read-capable tools without receiving write access by default.
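The deterministic CLI-backed style can be sketched as a plain Python node function; this page does not document the `kosmo` subcommand for invoking individual tools, so the command string below is a stand-in and `cli_node` is a hypothetical helper, not part of LangGraph or KosmoKrator:

```python
import shlex
import subprocess

def cli_node(state: dict) -> dict:
    """A deterministic LangGraph-style node that shells out to a pinned CLI command.

    The command lives in graph state so the step is repeatable; a real graph
    would register this with graph.add_node("cli_step", cli_node).
    """
    cmd = shlex.split(state["command"])  # e.g. a fixed kosmo invocation
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return {**state, "output": proc.stdout.strip()}

# Stand-in command for illustration only.
print(cli_node({"command": "echo hello"})["output"])  # → hello
```

An MCP client node, by contrast, would hand the gateway's tool list to the agent and let it pick tools at runtime.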
## Google Gemini MCP Config for LangGraph

Headless CLI calls fit repeatable graph edges; MCP fits exploratory agent nodes.
```json
{
  "mcpServers": {
    "kosmokrator-google-gemini": {
      "type": "stdio",
      "command": "kosmo",
      "args": [
        "mcp:serve",
        "--integration=google-gemini",
        "--write=deny"
      ]
    }
  }
}
```

## Run the Gateway Manually
```shell
kosmokrator mcp:serve --integration=google-gemini --write=deny
```

## Why Use KosmoKrator Here
- Expose only Google Gemini instead of a broad multi-service tool list.
- Reuse credentials already configured for the KosmoKrator CLI and Lua runtime.
- Start read-only, then opt into `ask` or `allow` for trusted workspaces.
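Before wiring the config into an MCP client, it can be parsed and checked so a graph never launches the gateway with write access by accident. A minimal stdlib sketch; the `load_gateway_entry` helper is hypothetical, and the embedded JSON mirrors the config shown earlier:

```python
import json

# Same shape as the mcpServers config above.
CONFIG = """
{
  "mcpServers": {
    "kosmokrator-google-gemini": {
      "type": "stdio",
      "command": "kosmo",
      "args": ["mcp:serve", "--integration=google-gemini", "--write=deny"]
    }
  }
}
"""

def load_gateway_entry(raw: str) -> dict:
    """Parse the config and refuse any gateway that is not read-only."""
    entry = json.loads(raw)["mcpServers"]["kosmokrator-google-gemini"]
    if "--write=deny" not in entry["args"]:
        raise ValueError("gateway must start with --write=deny")
    return entry

entry = load_gateway_entry(CONFIG)
print(entry["command"], *entry["args"])
# → kosmo mcp:serve --integration=google-gemini --write=deny
```

Relaxing the check to accept `--write=ask` is the natural next step for trusted, interactive workspaces.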
## Google Gemini Tools Visible to LangGraph

LangGraph sees stable MCP tool names generated from the Google Gemini integration catalog.
| MCP tool | Source function | Type | Description |
|---|---|---|---|
| `integration__google_gemini__gemini_list_models` | `google-gemini.gemini_list_models` | Read | List available Gemini AI models. Returns model names, display names, supported generation methods, and other metadata. |
| `integration__google_gemini__gemini_get_model` | `google-gemini.gemini_get_model` | Read | Get detailed information about a specific Gemini model, including supported generation methods, input/output token limits, and capabilities. |
| `integration__google_gemini__gemini_generate_content` | `google-gemini.gemini_generate_content` | Write | Generate content using a Gemini model. Send text prompts and receive AI-generated responses. Supports configurable generation parameters like temperature, topP, and maxOutputTokens. |
| `integration__google_gemini__gemini_list_files` | `google-gemini.gemini_list_files` | Read | List files uploaded to the Gemini File API. Returns file names, MIME types, sizes, and states. |
| `integration__google_gemini__gemini_get_file` | `google-gemini.gemini_get_file` | Read | Get metadata for an uploaded file in the Gemini File API, including its name, display name, MIME type, size, and processing state. |
| `integration__google_gemini__gemini_list_tuned_models` | `google-gemini.gemini_list_tuned_models` | Read | List tuned (fine-tuned) Gemini models in your project. Returns model names, base models, tuning tasks, and display names. |
| `integration__google_gemini__gemini_get_current_user` | `google-gemini.gemini_get_current_user` | Read | Get information about the currently authenticated Google user, including permissions and account details. |
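Under `--write=deny`, only the read tools in this catalog should reach the agent. How the gateway enforces that is not specified here, but the read/write split above can be mirrored on the LangGraph side as a defensive filter; `visible_tools` is a hypothetical helper and the mapping below is transcribed from the table:

```python
# Tool names and read/write types from the catalog table above.
TOOLS = {
    "integration__google_gemini__gemini_list_models": "read",
    "integration__google_gemini__gemini_get_model": "read",
    "integration__google_gemini__gemini_generate_content": "write",
    "integration__google_gemini__gemini_list_files": "read",
    "integration__google_gemini__gemini_get_file": "read",
    "integration__google_gemini__gemini_list_tuned_models": "read",
    "integration__google_gemini__gemini_get_current_user": "read",
}

def visible_tools(write_policy: str) -> list[str]:
    """Drop write-capable tools unless the workspace opted into writes."""
    if write_policy == "deny":
        return [name for name, kind in TOOLS.items() if kind == "read"]
    return list(TOOLS)

print(len(visible_tools("deny")))  # → 6
```

With the default policy, `gemini_generate_content` is the only tool withheld; switching the gateway to `ask` or `allow` makes all seven available.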