Aigentic helps you build AI agents faster

Streamline your LLM development journey with a powerful Kotlin DSL for building and integrating AI agents into applications

Intuitive Kotlin DSL

Building and integrating AI agents into applications can be complex and challenging. Aigentic streamlines this process with a powerful Kotlin DSL that helps you:

  • Rapidly create and deploy LLM agents
  • Seamlessly integrate with multiple LLM providers
  • Easily define and manage agent tools and capabilities
  • Efficiently handle message processing and context management
  • Transition smoothly from proof of concept to production
  • Iterate quickly based on real-world feedback
InvoiceExtractor.kt

@AigenticParameter
data class InvoiceLine(
    val description: String,
    val quantity: Int,
    val lineTotal: Double
)

@AigenticParameter
data class InvoiceComponents(
    val invoiceNumber: String,
    val lines: List<InvoiceLine>,
    @Description("Total amount before tax")
    val totalAmount: Double,
    @Description("Tax amount applied to the invoice")
    val tax: Double
)

@AigenticParameter
data class SaveResult(
    val message: String
)

// Configure the agent
val invoiceExtractorAgent =
    agent<Unit, InvoiceComponents> {
        // Configure the model for the agent; other models are also available
        geminiModel {
            apiKey("YOUR_API_KEY")
            modelIdentifier(GeminiModelIdentifier.Gemini2_0Flash)
        }

        // Configure the task for the agent
        task("Extract structured invoice data from images") {
            addInstruction("Extract invoice number, line items, amounts, and tax information")
            addInstruction("Save each extracted invoice")
        }

        // Add a tool to save invoice data
        addTool("saveInvoiceData", "Save extracted invoice data to system") { input: InvoiceComponents ->
            SaveResult(message = "Saved invoice ${input.invoiceNumber} with ${input.lines.size} line items successfully")
        }
    }

// Start the agent with a PDF attachment containing an invoice
val run = invoiceExtractorAgent.start(Attachment.Base64.pdf(invoiceBase64))

// Print the result
when (val outcome = run.outcome) {
    is Outcome.Finished -> println("Extracted invoice ${outcome.response}")
    is Outcome.Stuck -> println("Agent is stuck: ${outcome.reason}")
    is Outcome.Fatal -> println("Error: ${outcome.message}")
}

// Print token usage summary to monitor resource consumption
println(run.getTokenUsageSummary())

Why Aigentic

Aigentic is a Kotlin Multiplatform library that provides a powerful DSL for building and integrating AI agents into applications. It streamlines the process of creating, deploying, and managing LLM agents within your ecosystem.

By offering a model-agnostic approach, Aigentic supports multiple LLM providers including OpenAI, Gemini, Ollama, VertexAI, and more. This flexibility allows you to choose the best model for your specific use case without being locked into a single provider.

Additionally, Aigentic bridges the gap between proof of concept and production-ready applications, enabling rapid iteration and improvement based on real-world feedback. In short, Aigentic accelerates AI development, reduces complexity, and enables integration with existing systems.

Create and deploy AI agents with intuitive Kotlin DSL

Integrate with multiple LLM providers seamlessly

Accelerate from proof of concept to production

How

Define

Aigentic provides an intuitive Kotlin DSL for defining AI agents with clear, concise syntax. This approach allows you to specify agent behavior, tools, and capabilities in a type-safe manner. The DSL abstracts away the complexity of working with different LLM providers while giving you full control over your agent's functionality.

Explore the DSL
WeatherAgent.kt

// Describe the agent's response type
@AigenticParameter
data class WeatherResponse(
    @Description("Current temperature in degrees Celsius")
    val temperature: Double,
    @Description("Current weather conditions description")
    val conditions: String,
    @Description("Name of the location for the weather data")
    val location: String
)

// Describe the getWeather tool request type
@AigenticParameter
data class WeatherRequest(
    @Description("Name of the location to get weather information for")
    val location: String
)

// Configure the agent
val weatherAgent =
    agent<String, WeatherResponse> {
        // Configure the model for the agent; other models are also available
        geminiModel {
            apiKey("YOUR_API_KEY")
            modelIdentifier(GeminiModelIdentifier.Gemini2_5Flash)
        }

        // Configure the task for the agent
        task("Provide weather information") {
            addInstruction("Respond to user queries about weather")
        }

        // Add a weather tool to give the agent live weather information capabilities
        addTool("getWeather", "Get the current weather for a location") { req: WeatherRequest ->
            WeatherClient.requestWeather(req.location)
        }
    }

// Start the agent and get a run
val run = weatherAgent.start("What's the weather like in Amsterdam?")

// Print the result
when (val outcome = run.outcome) {
    is Outcome.Finished -> println("Weather in ${outcome.response?.location}: ${outcome.response?.temperature}°C, ${outcome.response?.conditions}")
    is Outcome.Stuck -> println("Agent is stuck: ${outcome.reason}")
    is Outcome.Fatal -> println("Error: ${outcome.message}")
}

// Print token usage summary to monitor resource consumption
println(run.getTokenUsageSummary())
// Switching providers only requires a different model block
val model = openAIModel {
    apiKey("YOUR_API_KEY")
    modelIdentifier(OpenAIModelIdentifier.GPT4Turbo)
}

Providers

Aigentic is designed to be model-agnostic, allowing you to seamlessly integrate with various LLM providers including OpenAI, Gemini, Ollama, VertexAI, or your own. This flexibility enables you to choose the best model for your specific use case without being locked into a single provider. You can switch between providers with minimal code changes.
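As a sketch of what this looks like in practice, the weather agent shown above could target OpenAI instead of Gemini by swapping only the model block, using the openAIModel builder and model identifier from the snippet above; everything else in the DSL stays unchanged:

```kotlin
// Same agent definition as before; only the model block differs
val weatherAgentOnOpenAI =
    agent<String, WeatherResponse> {
        // openAIModel replaces geminiModel; task and tools are untouched
        openAIModel {
            apiKey("YOUR_API_KEY")
            modelIdentifier(OpenAIModelIdentifier.GPT4Turbo)
        }

        task("Provide weather information") {
            addInstruction("Respond to user queries about weather")
        }

        addTool("getWeather", "Get the current weather for a location") { req: WeatherRequest ->
            WeatherClient.requestWeather(req.location)
        }
    }
```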

Explore Providers

Validate

Aigentic bridges the gap between proof of concept and production-ready applications. With its Kotlin Multiplatform foundation, you can validate your AI agents across different platforms and environments. Aigentic provides tools for monitoring, logging, and managing your agents in production, enabling you to iterate quickly based on real-world feedback.

DocumentWorkflow.kt

@AigenticParameter
data class Document(val content: String)

@AigenticParameter
data class ProcessedDoc(val cleanedContent: String)

@AigenticParameter
data class AnalyzedDoc(
    val content: String,
    val keyTopics: List<String>,
    val sentiment: String
)

@AigenticParameter
data class Summary(val summary: String)

// Create specialized agents
val cleaner = agent<Document, ProcessedDoc> {
    geminiModel {
        apiKey("YOUR_API_KEY")
        modelIdentifier(GeminiModelIdentifier.Gemini2_5Flash)
    }
    task("Clean and format document") {
        addInstruction("Remove formatting artifacts")
        addInstruction("Fix grammar and spelling")
    }
}

val analyzer = agent<ProcessedDoc, AnalyzedDoc> {
    geminiModel {
        apiKey("YOUR_API_KEY")
        modelIdentifier(GeminiModelIdentifier.Gemini2_5Flash)
    }
    task("Analyze document content") {
        addInstruction("Identify key topics and themes")
        addInstruction("Determine overall sentiment")
    }
}

val summarizer = agent<AnalyzedDoc, Summary> {
    geminiModel {
        apiKey("YOUR_API_KEY")
        modelIdentifier(GeminiModelIdentifier.Gemini2_5Flash)
    }
    task("Create concise summary") {
        addInstruction("Highlight key points and topics")
        addInstruction("Keep under 100 words")
    }
}

// Chain agents into workflow
val workflow = cleaner thenProcess analyzer thenProcess summarizer

// Execute workflow
val run = workflow.start(Document("Raw document text..."))

when (val outcome = run.outcome) {
    is Outcome.Finished -> println(outcome.response?.summary)
    is Outcome.Stuck -> println("Workflow stuck: ${outcome.reason}")
    is Outcome.Fatal -> println("Error: ${outcome.message}")
}

Workflows

For complex tasks that require multiple processing steps, Aigentic workflows help you chain agents together. Each agent processes the output from the previous agent, creating seamless workflows of AI-powered agents.

Workflows are fully type-safe, ensuring that agent outputs match the inputs of the next agent in the chain. This approach helps you break down complex problems into focused, manageable steps.
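Concretely, type safety means a mismatched chain is rejected at compile time rather than failing at runtime. With the agents from the DocumentWorkflow example above:

```kotlin
// Compiles: cleaner produces ProcessedDoc, which is exactly what analyzer consumes
val ok = cleaner thenProcess analyzer

// Does not compile: summarizer expects AnalyzedDoc, but cleaner produces ProcessedDoc
// val broken = cleaner thenProcess summarizer
```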

Explore Workflows

Key capabilities

LLM Providers

Aigentic supports multiple LLM providers, giving you flexibility and choice.

Supported: OpenAI, Gemini, Ollama, VertexAI, or your own

Multiplatform

Built with Kotlin Multiplatform for cross-platform compatibility.

Platforms: JVM, JavaScript, Node, Native

Integrations

Aigentic includes a variety of built-in tools for common agent integrations.

Included: HTTP tools, OpenAPI tools
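The bundled HTTP and OpenAPI tools are covered in the documentation. As a rough sketch of what an HTTP-backed custom tool looks like with the addTool DSL shown earlier on this page (fetchStatus and the request/response types here are hypothetical placeholders for your own HTTP client code, not Aigentic APIs):

```kotlin
@AigenticParameter
data class StatusRequest(val serviceName: String)

@AigenticParameter
data class StatusResponse(val status: String)

// Inside an agent { ... } block; fetchStatus is a hypothetical HTTP helper
addTool("getServiceStatus", "Check the status of a service over HTTP") { req: StatusRequest ->
    StatusResponse(status = fetchStatus(req.serviceName))
}
```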

Context

Efficient handling of conversation context and message history.

Features: Memory management, Context caching

Validation

Comprehensive testing utilities for AI agent development.

Features: Mock LLM responses, Test fixtures, Scenario testing

Workflow Chaining

Chain multiple agents together to create powerful multi-step processing pipelines.

Features: Type-safe chaining, Multi-agent orchestration, Complex task decomposition

Why Aigentic

Choosing Aigentic means selecting a powerful, flexible library designed for developers who want to build production-ready AI applications with Kotlin's type safety and expressiveness.

Our commitment to ongoing innovation and quality ensures that Aigentic not only meets current demands but also evolves with emerging AI technologies and best practices.

By actively engaging in the open-source community and maintaining transparent development processes, we ensure that Aigentic remains at the forefront of AI agent development. Our focus on developer experience, performance, and reliability provides you with a framework that elevates your AI projects to new heights.