Firecrawl CLI for Shell Scripts
Use the Firecrawl CLI for shell scripts with headless JSON commands, schema discovery, credentials, and permission controls.
6 functions · 6 read · 0 write · API key auth
Call integration functions from shell scripts with stable JSON input and output.
Use shell scripts for small local automations that need one or more integration calls. The Firecrawl CLI uses the same integration registry as the TUI, Lua runtime, and MCP gateway, but returns predictable command output for automation.
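Scripts that assemble these calls should build the JSON argument with a tool rather than string interpolation, so quotes or backslashes in input can't break the payload. A minimal sketch, assuming `jq` is installed; the payload keys mirror `firecrawl_scrape`'s parameters, and the actual CLI invocation is shown commented since it requires a configured install:

```shell
#!/bin/sh
set -eu

# Build the JSON argument with jq instead of string interpolation, so a URL
# containing quotes cannot corrupt the payload.
url='https://example.com/a "quoted" path'
payload=$(jq -cn --arg url "$url" '{url: $url, onlyMainContent: true}')
printf '%s\n' "$payload"

# Pass it to the CLI (commented out; needs a configured kosmo install):
# kosmo integrations:call firecrawl.firecrawl_scrape "$payload" --json
```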
Command Shape
```shell
# Firecrawl CLI for Shell Scripts
kosmokrator integrations:configure firecrawl --set api_key="$FIRECRAWL_API_KEY" --enable --read allow --write ask --json
kosmo integrations:call firecrawl.firecrawl_scrape '{"url":"example_url","formats":"example_formats","onlyMainContent":true,"includeTags":"example_includeTags","excludeTags":"example_excludeTags","waitFor":1,"timeout":1,"actions":"example_actions"}' --json
```

Discovery Before Execution
Agents and scripts can inspect Firecrawl docs and schemas before choosing a function.
```shell
kosmo integrations:docs firecrawl --json
kosmo integrations:docs firecrawl.firecrawl_scrape --json
kosmo integrations:schema firecrawl.firecrawl_scrape --json
kosmo integrations:search "Firecrawl" --json
kosmo integrations:list --json
```

Useful Firecrawl CLI Functions
| Function | Type | Parameters | Description |
|---|---|---|---|
| `firecrawl.firecrawl_scrape` | Read | url, formats, onlyMainContent, includeTags, excludeTags, waitFor, timeout, actions | Scrape a single URL and extract its content. Returns the page content in the requested format (markdown by default). Supports actions like waiting for JavaScript, taking screenshots, and extracting specific elements. |
| `firecrawl.firecrawl_crawl` | Read | url, limit, maxDepth, formats, excludePaths, includePaths, allowBackwardLinks, allowExternalLinks, onlyMainContent | Start a crawl job to scrape all pages from a website starting at the given URL. Returns a crawl job ID — use `firecrawl_get_crawl_status` to check progress and retrieve results. |
| `firecrawl.firecrawl_get_crawl_status` | Read | id | Check the status and retrieve results of a crawl job. Returns the current status (scraping, completed, failed, cancelled) and all scraped data once complete. |
| `firecrawl.firecrawl_map` | Read | url, limit, includeSubdomains, search, ignoreSitemap, includePaths, excludePaths | Map a website to discover all linked URLs. Returns a list of all URLs found on the site without scraping full content. Useful for understanding site structure before crawling. |
| `firecrawl.firecrawl_extract` | Read | urls, prompt, schema, systemPrompt, allowExternalLinks, enableWebSearch, includeSubdomains | Extract structured data from one or more URLs using AI. Provide a prompt describing what to extract, or a JSON schema for the expected output format. Ideal for pulling specific data points from web pages. |
| `firecrawl.firecrawl_get_current_user` | Read | none | Get the authenticated user's account information, including plan details and usage statistics. Useful for verifying API key validity and checking remaining credits. |
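The crawl functions pair naturally in scripts: start a job, then poll until it completes. A sketch of that loop, assuming `jq`; the field names (`id`, `status`) follow the table above but should be confirmed against real output, and a stub stands in for `kosmo` so the control flow runs as-is — delete the stub to call the real CLI.

```shell
#!/bin/sh
set -eu

# Stub for illustration only -- delete this function to call the real CLI.
kosmo() { echo '{"id":"job-1","status":"completed","data":[]}'; }

# Start the crawl and remember the job ID.
id=$(kosmo integrations:call firecrawl.firecrawl_crawl \
      '{"url":"https://example.com","limit":10,"maxDepth":2}' --json | jq -r '.id')

# Poll until the job leaves the "scraping" state.
while :; do
  status=$(kosmo integrations:call firecrawl.firecrawl_get_crawl_status \
            "{\"id\":\"$id\"}" --json | jq -r '.status')
  [ "$status" = "scraping" ] || break
  sleep 5
done
echo "crawl $id ended with status: $status"
```

Note the `|| break` form for the loop test: under `set -e`, a bare failing `[ … ] && break` would abort the script instead of continuing to poll.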
Automation Notes
- Use `--json` for machine-readable output.
- Keep credentials out of argv by using environment variables or stored KosmoKrator configuration.
- Configure read/write policy before unattended runs; use `--force` only for trusted automation.
- Use the MCP gateway instead when the agent needs dynamic tool discovery inside a conversation.
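For unattended runs it is also worth verifying credentials before doing any work; `firecrawl_get_current_user` makes a cheap preflight check. A sketch: the `'{}'` empty-parameter payload and the account JSON fields are assumptions (check `integrations:docs firecrawl.firecrawl_get_current_user`), and a stub again stands in for `kosmo` so the sketch runs standalone.

```shell
#!/bin/sh
set -eu

# Stub for illustration only -- delete this function to call the real CLI.
kosmo() { echo '{"plan":"hobby","credits":412}'; }

# Abort early if the API key is invalid or the call fails.
if ! user=$(kosmo integrations:call firecrawl.firecrawl_get_current_user '{}' --json); then
  echo "Firecrawl credential check failed; aborting unattended run" >&2
  exit 1
fi
echo "authenticated; account payload: $user"
```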