
Eden AI Lua API for KosmoKrator Agents

Agent-facing Lua documentation and function reference for the Eden AI KosmoKrator integration.

6 functions · 4 read · 2 write · API key auth

Lua Namespace

Agents call this integration through app.integrations.eden_ai.*. Use lua_read_doc("integrations.eden-ai") inside KosmoKrator to discover the same reference at runtime.

Agent-Facing Lua Docs

This is the rendered version of the full Lua documentation exposed to agents when they inspect the integration namespace.

Eden AI — Lua API Reference

generate_text

Generate text using AI models through Eden AI.

Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `providers` | string | yes | Comma-separated providers (e.g., `"openai"`, `"openai,anthropic"`) |
| `text` | string | no* | Prompt text for single-turn generation |
| `conversation` | array | no* | Multi-turn conversation with `{role, message}` objects |
| `temperature` | number | no | Sampling temperature (0.0–1.0), default 0.0 |
| `max_tokens` | integer | no | Maximum tokens in response |
| `fallback_providers` | string | no | Comma-separated fallback providers |

*Either text or conversation is required.

Available Providers

openai, anthropic, google, mistral, cohere, meta

Examples

Simple text generation

```lua
local result = app.integrations["eden-ai"].generate_text({
  providers = "openai",
  text = "Explain quantum computing in one paragraph.",
  temperature = 0.7,
  max_tokens = 256
})

for _, r in ipairs(result.results) do
  print(r.provider .. ": " .. r.text)
end
```

Multi-provider generation

```lua
local result = app.integrations["eden-ai"].generate_text({
  providers = "openai,anthropic",
  text = "Write a haiku about programming.",
  temperature = 0.8
})
```

Multi-turn conversation

```lua
local result = app.integrations["eden-ai"].generate_text({
  providers = "openai",
  conversation = {
    { role = "system", message = "You are a helpful coding assistant." },
    { role = "user", message = "How do I sort a table in Lua?" }
  }
})
```

analyze_image

Analyze images for content, objects, and features.

Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `providers` | string | yes | Comma-separated providers (e.g., `"google"`) |
| `image_url` | string | no* | URL of the image to analyze |
| `image_base64` | string | no* | Base64-encoded image data |
| `features` | array | no | Analysis features to request |
| `fallback_providers` | string | no | Comma-separated fallback providers |

*Either image_url or image_base64 is required.

Example

```lua
local result = app.integrations["eden-ai"].analyze_image({
  providers = "google",
  image_url = "https://example.com/photo.jpg",
  features = { "explicit_content", "object_detection" }
})
```

translate_text

Translate text between languages.

Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `providers` | string | yes | Comma-separated providers (e.g., `"google"`, `"deepl"`) |
| `text` | string | yes | Text to translate |
| `target_language` | string | yes | Target language code (e.g., `"fr"`, `"de"`, `"ja"`) |
| `source_language` | string | no | Source language code. Omit to auto-detect. |
| `fallback_providers` | string | no | Comma-separated fallback providers |

Example

```lua
local result = app.integrations["eden-ai"].translate_text({
  providers = "google",
  text = "Hello, world!",
  target_language = "fr"
})

for _, r in ipairs(result.results) do
  print(r.provider .. ": " .. r.translation)
end
```

transcribe_audio

Convert audio or video to text.

Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `providers` | string | yes | Comma-separated providers (e.g., `"openai"`) |
| `audio_url` | string | no* | URL of the audio file |
| `audio_base64` | string | no* | Base64-encoded audio data |
| `language` | string | no | Language code. Omit for auto-detection. |
| `speakers` | integer | no | Number of speakers for diarization |
| `fallback_providers` | string | no | Comma-separated fallback providers |

*Either audio_url or audio_base64 is required.

Example

```lua
local result = app.integrations["eden-ai"].transcribe_audio({
  providers = "openai",
  audio_url = "https://example.com/recording.mp3",
  language = "en"
})

for _, r in ipairs(result.results) do
  print(r.provider .. ": " .. r.transcription)
end
```

ocr

Extract text from images and documents. This is an asynchronous operation.

Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `providers` | string | yes | Comma-separated providers (e.g., `"google"`) |
| `document_url` | string | no* | URL of the document to process |
| `document_base64` | string | no* | Base64-encoded document data |
| `language` | string | no | Language hint for better accuracy |
| `fallback_providers` | string | no | Comma-separated fallback providers |

*Either document_url or document_base64 is required.

Example

```lua
local result = app.integrations["eden-ai"].ocr({
  providers = "google",
  document_url = "https://example.com/invoice.pdf"
})

-- Async: returns a job ID
print("Job ID: " .. result.jobId)
print("Status: " .. result.status)
```

get_current_user

Get the authenticated user’s account information.

Parameters

None.

Example

```lua
local result = app.integrations["eden-ai"].get_current_user({})
print("Email: " .. result.email)
print("Plan: " .. (result.plan or "unknown"))
```

Multi-Account Usage

If you have multiple Eden AI accounts configured, use account-specific namespaces:

```lua
-- Default account (always works)
app.integrations["eden-ai"].generate_text({...})

-- Explicit default (portable across setups)
app.integrations["eden-ai"].default.generate_text({...})

-- Named accounts
app.integrations["eden-ai"].work.generate_text({...})
app.integrations["eden-ai"].personal.generate_text({...})
```

All functions are identical across accounts — only the credentials differ.
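Integration calls can fail at runtime (bad credentials, provider outages, rate limits). A minimal defensive sketch using Lua's standard `pcall`; whether the runtime raises Lua errors or returns error values is an assumption here, so verify against your environment:

```lua
-- Hypothetical defensive wrapper: assumes a failed call raises a Lua
-- error, which pcall converts into an ok/err pair.
local ok, result = pcall(function()
  return app.integrations["eden-ai"].generate_text({
    providers = "openai",
    text = "Summarize this in one sentence.",
    fallback_providers = "anthropic"
  })
end)

if ok then
  for _, r in ipairs(result.results) do
    print(r.provider .. ": " .. r.text)
  end
else
  print("Eden AI call failed: " .. tostring(result))
end
```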


Metadata-Derived Lua Example

```lua
local result = app.integrations.eden_ai.edenai_generate_text({
  providers = "example_providers",
  text = "example_text",
  temperature = 1,
  max_tokens = 1,
  fallback_providers = "example_fallback_providers"
})
print(result)
```

Note: `text` and `conversation` are alternatives; pass one or the other. `conversation` must be an array of `{role, message}` tables, not a string.

Functions

edenai_generate_text

Generate text using AI models via Eden AI. Supports providers like OpenAI (GPT-4), Anthropic (Claude), Google (Gemini), Mistral, Cohere, and more. You can send a single prompt or a conversation history.

Operation: write
Full name: `eden-ai.edenai_generate_text`

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `providers` | string | yes | Comma-separated list of AI providers (e.g., `"openai"`, `"openai,anthropic"`, `"google"`). Use `"openai"` for GPT-4, `"anthropic"` for Claude, `"google"` for Gemini, `"mistral"` for Mistral, `"cohere"` for Cohere. |
| `text` | string | no | The prompt text to send to the AI. Use this for simple single-turn generation. |
| `conversation` | array | no | Conversation history as an array of message objects with `"role"` (system, user, assistant) and `"message"` keys. Use this for multi-turn conversations. |
| `temperature` | number | no | Sampling temperature (0.0–1.0). Higher values increase randomness. Default: 0.0. |
| `max_tokens` | integer | no | Maximum number of tokens to generate in the response. |
| `fallback_providers` | string | no | Comma-separated list of fallback providers if the primary provider fails. |

edenai_analyze_image

Analyze images using AI models via Eden AI. Supports object detection, explicit content detection, scene description, and more. Provide an image as a URL or base64-encoded string.

Operation: read
Full name: `eden-ai.edenai_analyze_image`

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `providers` | string | yes | Comma-separated list of AI providers (e.g., `"google"`, `"amazon"`, `"microsoft"`). |
| `image_url` | string | no | URL of the image to analyze. Use this OR `image_base64`, not both. |
| `image_base64` | string | no | Base64-encoded image data. Use this OR `image_url`, not both. |
| `features` | array | no | Analysis features to request (e.g., `["explicit_content", "object_detection", "scene_classification"]`). |
| `fallback_providers` | string | no | Comma-separated list of fallback providers if the primary fails. |

edenai_translate_text

Translate text between languages using AI models via Eden AI. Supports providers like Google Translate, DeepL, Amazon Translate, Microsoft Translator, and more. Detects the source language automatically if not specified.

Operation: write
Full name: `eden-ai.edenai_translate_text`

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `providers` | string | yes | Comma-separated list of translation providers (e.g., `"google"`, `"deepl"`, `"amazon"`, `"microsoft"`). |
| `text` | string | yes | The text to translate. |
| `source_language` | string | no | Source language code (e.g., `"en"`, `"fr"`, `"de"`). Omit to auto-detect. |
| `target_language` | string | yes | Target language code (e.g., `"en"`, `"fr"`, `"de"`, `"es"`, `"ja"`, `"zh"`). |
| `fallback_providers` | string | no | Comma-separated list of fallback providers if the primary fails. |
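None of the earlier examples pass `source_language`. A sketch that sets it explicitly to skip auto-detection; the `eden_ai.edenai_*` call style follows the metadata-derived example above and the values are illustrative:

```lua
-- Explicit source language (skips auto-detection); namespace per the
-- metadata-derived example, which is an assumption here.
local result = app.integrations.eden_ai.edenai_translate_text({
  providers = "deepl",
  text = "Guten Morgen!",
  source_language = "de",
  target_language = "en"
})

for _, r in ipairs(result.results) do
  print(r.provider .. ": " .. r.translation)
end
```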

edenai_transcribe_audio

Transcribe audio or video to text using AI models via Eden AI. Supports providers like OpenAI (Whisper), Google Speech-to-Text, Amazon Transcribe, Microsoft Azure, and more. Provide audio as a URL or base64-encoded string.

Operation: read
Full name: `eden-ai.edenai_transcribe_audio`

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `providers` | string | yes | Comma-separated list of transcription providers (e.g., `"openai"`, `"google"`, `"amazon"`). |
| `audio_url` | string | no | URL of the audio file to transcribe. Use this OR `audio_base64`, not both. |
| `audio_base64` | string | no | Base64-encoded audio data. Use this OR `audio_url`, not both. |
| `language` | string | no | Language code for the audio (e.g., `"en"`, `"fr"`, `"de"`). Omit for auto-detection. |
| `speakers` | integer | no | Number of speakers in the audio for speaker diarization. |
| `fallback_providers` | string | no | Comma-separated list of fallback providers if the primary fails. |
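The `speakers` diarization hint is not shown in any earlier example. A sketch with a two-speaker recording; field names mirror the parameter table, the namespace follows the metadata-derived example (an assumption), and the URL is illustrative:

```lua
-- Transcription with a two-speaker diarization hint.
local result = app.integrations.eden_ai.edenai_transcribe_audio({
  providers = "google",
  audio_url = "https://example.com/interview.mp3",
  language = "en",
  speakers = 2
})

for _, r in ipairs(result.results) do
  print(r.provider .. ": " .. r.transcription)
end
```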

edenai_ocr

Extract text from images and documents using OCR via Eden AI. Supports providers like Google Cloud Vision, Amazon Textract, Microsoft Azure, and more. This is an async operation — the response may contain a public_job_id for tracking. Provide the document as a URL or base64-encoded string.

Operation: read
Full name: `eden-ai.edenai_ocr`

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `providers` | string | yes | Comma-separated list of OCR providers (e.g., `"google"`, `"amazon"`, `"microsoft"`). |
| `document_url` | string | no | URL of the image or document to process. Use this OR `document_base64`, not both. |
| `document_base64` | string | no | Base64-encoded document data. Use this OR `document_url`, not both. |
| `language` | string | no | Language hint for OCR (e.g., `"en"`, `"fr"`, `"de"`). Improves accuracy for specific languages. |
| `fallback_providers` | string | no | Comma-separated list of fallback providers if the primary fails. |
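All earlier OCR examples use `document_url`; a sketch of the base64 variant instead. The `public_job_id` field name is taken from the description above (the earlier example prints `result.jobId`), so verify the actual field against a real response:

```lua
-- Base64 input instead of a URL; pass exactly one of
-- document_url / document_base64. encoded_pdf is a base64 string
-- produced elsewhere (hypothetical variable).
local result = app.integrations.eden_ai.edenai_ocr({
  providers = "amazon",
  document_base64 = encoded_pdf,
  language = "en"
})

-- Async: the response may carry a job identifier for tracking.
print("Job: " .. tostring(result.public_job_id))
```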

edenai_get_current_user

Get the current Eden AI user's account information, including email, plan, and usage details. Useful for verifying API connectivity and checking account status.

Operation: read
Full name: `eden-ai.edenai_get_current_user`

No parameters.
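As the description notes, this call is useful as a connectivity and credentials check. A minimal sketch; the `eden_ai.edenai_*` call style follows the metadata-derived example above (an assumption), and the `email`/`plan` fields mirror the earlier `get_current_user` example:

```lua
-- Quick connectivity/credentials check.
local me = app.integrations.eden_ai.edenai_get_current_user({})
print("Email: " .. me.email)
print("Plan: " .. (me.plan or "unknown"))
```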