Agents

Forage builds AI agents on top of LangChain4j, with configurable chat models, memory providers, and guardrails, for use from Camel routes.

Quick Start

Configure the agent via properties:

```properties
forage.myAgent.agent.model.kind=ollama
forage.myAgent.agent.model.name=granite4:3b
forage.myAgent.agent.base.url=http://localhost:11434
forage.myAgent.agent.features=memory
forage.myAgent.agent.memory.kind=message-window
forage.myAgent.agent.memory.max.messages=20
```

Then call it from a Camel YAML route:

```yaml
- to:
    uri: langchain4j-agent:myAgent
    parameters:
      agent: "#myAgent"
```
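A fuller route sketch, assuming a one-shot timer source and that the message body carries the user prompt (the route id, timer name, and prompt text are illustrative):

```yaml
- route:
    id: agent-demo
    from:
      uri: timer:ask
      parameters:
        repeatCount: 1
      steps:
        - setBody:
            constant: "Summarize the latest build failures."
        - to:
            uri: langchain4j-agent:myAgent
            parameters:
              agent: "#myAgent"
        - log: "${body}"
```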

Properties

| Property | Description | Type | Default | Required |
| --- | --- | --- | --- | --- |
| forage.multi.agent.names | Comma-separated list of named agent prefixes for multi-agent setup | string | | |
| forage.multi.agent.id.source | Source for extracting the agent ID (route-id, header, property, variable) | string | | |
| forage.provider.features | Comma-separated list of agent features to enable (e.g. memory) | string | | |
| forage.agent.model.kind | The model provider kind (e.g. ollama, openai, google-gemini, azure-openai, anthropic) | bean-name | | Yes |
| forage.agent.features | Comma-separated list of enabled features (e.g. memory) | string | | |
| forage.agent.memory.kind | The memory provider kind (e.g. message-window, redis, infinispan) | bean-name | | |
| forage.agent.base.url | Base URL for the model provider API | string | | |
| forage.agent.model.name | The specific model name to use | string | | |
| forage.agent.temperature | Temperature for response randomness (0.0-2.0) | double | | |
| forage.agent.endpoint | Azure OpenAI resource endpoint URL | string | | |
| forage.agent.deployment.name | Azure OpenAI deployment name | string | | |
| forage.agent.timeout | Request timeout duration in ISO-8601 format (e.g. PT120S for 120 seconds) | string | | |
| forage.agent.memory.max.messages | Maximum number of messages to retain in memory | integer | 20 | |
| forage.agent.memory.redis.host | Redis server hostname | string | localhost | |
| forage.agent.memory.redis.port | Redis server port | integer | 6379 | |
| forage.agent.memory.infinispan.server-list | Comma-separated list of Infinispan servers | string | localhost:11222 | |
| forage.agent.memory.infinispan.cache-name | Infinispan cache name for storing messages | string | chat-memory | |
| forage.agent.in.memory.store.file.source | Path to a file to be loaded into the in-memory store | string | | Yes |
| forage.agent.in.memory.store.max.size | Maximum size of each segment, in characters | int | | |
| forage.agent.in.memory.store.overlap.size | Maximum size of the overlap between segments, in characters | int | | |
| forage.agent.embedding.model.name | The specific embedding model name to use | string | | |
| forage.agent.embedding.model.timeout | Timeout for the HTTP client used to communicate with Ollama | Duration | | |
| forage.agent.embedding.model.max.retries | Maximum number of retries for the HTTP client used to communicate with Ollama | int | | |
| forage.agent.rag.max.results | The maximum number of Contents to retrieve | int | | |
| forage.agent.rag.min.score | The minimum relevance score for the returned Contents | double | | |
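Several of the store, embedding, and RAG properties above are typically used together. A sketch of a RAG-flavoured configuration follows; the file path, segment sizes, and embedding model name are illustrative assumptions, not defaults:

```properties
forage.myAgent.agent.in.memory.store.file.source=/data/docs/handbook.txt
forage.myAgent.agent.in.memory.store.max.size=1000
forage.myAgent.agent.in.memory.store.overlap.size=100
forage.myAgent.agent.embedding.model.name=nomic-embed-text
forage.myAgent.agent.embedding.model.timeout=PT30S
forage.myAgent.agent.embedding.model.max.retries=3
forage.myAgent.agent.rag.max.results=5
forage.myAgent.agent.rag.min.score=0.6
```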

Security

| Property | Description | Type | Default | Required |
| --- | --- | --- | --- | --- |
| forage.agent.api.key | API key for authentication with the model provider | password | | |
| forage.agent.memory.redis.password | Redis authentication password | password | | |
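Secrets like these are best injected from the environment rather than committed in plain text. The sketch below assumes a property-placeholder mechanism (as provided by Camel and most runtimes) and environment variable names of your choosing:

```properties
forage.myAgent.agent.api.key=${OPENAI_API_KEY}
forage.myAgent.agent.memory.redis.password=${REDIS_PASSWORD}
```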

Advanced

| Property | Description | Type | Default | Required |
| --- | --- | --- | --- | --- |
| forage.multi.agent.id.source.header | Exchange header name to extract the agent ID from | string | | |
| forage.multi.agent.id.source.property | Exchange property name to extract the agent ID from | string | | |
| forage.multi.agent.id.source.variable | Exchange variable name to extract the agent ID from | string | | |
| forage.provider.model.factory.class | Fully qualified class name of the model provider factory | string | | |
| forage.provider.features.memory.factory.class | Fully qualified class name of the chat memory factory | string | | |
| forage.provider.agent.class | Fully qualified class name of the agent factory implementation | string | | |
| forage.guardrails.input.classes | Comma-separated list of input guardrail class names | string | | |
| forage.guardrails.output.classes | Comma-separated list of output guardrail class names | string | | |
| forage.agent.max.tokens | Maximum number of tokens in the response | integer | | |
| forage.agent.top.p | Top-P (nucleus) sampling parameter (0.0-1.0) | double | | |
| forage.agent.top.k | Top-K sampling parameter | integer | | |
| forage.agent.log.requests | Enable request logging | boolean | | |
| forage.agent.log.responses | Enable response logging | boolean | | |
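Combining the multi-agent properties with the per-agent prefixes from the Quick Start, a two-agent setup selected by an exchange header might look like this (the agent names, header name, and model choices are illustrative assumptions):

```properties
forage.multi.agent.names=support,billing
forage.multi.agent.id.source=header
forage.multi.agent.id.source.header=agentId

forage.support.agent.model.kind=ollama
forage.support.agent.model.name=granite4:3b
forage.support.agent.base.url=http://localhost:11434

forage.billing.agent.model.kind=openai
forage.billing.agent.model.name=gpt-4o-mini
forage.billing.agent.api.key=${OPENAI_API_KEY}
```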

Available Chat Models

| Name | Description |
| --- | --- |
| azure-openai | OpenAI models hosted on Microsoft Azure |
| openai | OpenAI API-compatible models |
| ollama | Locally-hosted models via Ollama (Llama, Mistral, etc.) |
| google-gemini | Google Gemini models |
| anthropic | Anthropic Claude models |
| bedrock | Amazon Bedrock multi-model provider supporting Claude, Llama, Titan, Cohere, and Mistral |
| dashscope | Alibaba Cloud Qwen models via DashScope (placeholder) |
| hugging-face | Open-source models via HuggingFace Inference API |
| local-ai | Self-hosted models via LocalAI (OpenAI-compatible) |
| mistral-ai | Mistral AI models |
| watsonx-ai | IBM Watsonx.ai models |
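Switching providers is a matter of changing the model kind and its provider-specific properties. For example, a sketch of an azure-openai configuration using the endpoint and deployment properties from the tables above (the endpoint URL and deployment name are placeholders for your own Azure resource):

```properties
forage.myAgent.agent.model.kind=azure-openai
forage.myAgent.agent.endpoint=https://my-resource.openai.azure.com
forage.myAgent.agent.deployment.name=gpt-4o
forage.myAgent.agent.api.key=${AZURE_OPENAI_API_KEY}
forage.myAgent.agent.temperature=0.2
```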

Available Memory Providers

| Name | Description |
| --- | --- |
| infinispan | Distributed storage using Infinispan |
| redis | Persistent storage using Redis |
| message-window | In-memory storage with configurable message window size |
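To move chat memory out of process, swap the memory kind and point it at the backing store. A sketch for Redis, using the memory properties listed earlier (the hostname is an illustrative assumption):

```properties
forage.myAgent.agent.features=memory
forage.myAgent.agent.memory.kind=redis
forage.myAgent.agent.memory.redis.host=redis.internal
forage.myAgent.agent.memory.redis.port=6379
forage.myAgent.agent.memory.redis.password=${REDIS_PASSWORD}
```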

Available Input Guardrails

| Name | Description |
| --- | --- |
| code-injection | Detects code injection attacks (SQL, shell, XSS, path traversal, etc.) |
| pii-detector | Detects PII (email, phone, SSN, credit card, IP address) in input messages |
| input-length | Validates input message length (min/max characters) |
| prompt-injection | Detects prompt injection attacks (role manipulation, jailbreak, etc.) |
| keyword-filter | Blocks messages containing specific keywords or phrases |

Available Output Guardrails

| Name | Description |
| --- | --- |
| json-format | Validates output is valid JSON format with optional required fields |
| sensitive-data | Detects and optionally redacts sensitive data (API keys, secrets, PII) in output |
| output-length | Validates output message length (min/max characters, optional truncation) |
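Guardrails are wired in by fully qualified class name via the forage.guardrails.* properties from the Advanced table. The package and class names below are hypothetical placeholders; substitute the actual class names of the guardrail implementations shipped with your Forage version:

```properties
forage.guardrails.input.classes=com.example.guards.PromptInjectionGuard,com.example.guards.InputLengthGuard
forage.guardrails.output.classes=com.example.guards.SensitiveDataGuard
```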