Live on AWS Bedrock

Agent Oriented Architecture

A modular AI framework built from Agentic Units — self-contained microservices that discover, collaborate, and adapt.

Launch Intent Studio

How It Works

Three core services orchestrate a network of specialized AI agents

Planner

Translates natural language intent into a directed acyclic graph (DAG) of agent tasks. Uses few-shot prompting with Claude Sonnet on Bedrock.

Intent → DAG → Steps
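The planner's output can be pictured as a small dependency graph. A minimal sketch, assuming hypothetical `Step` and `Plan` shapes (the real AOA types may differ):

```python
from dataclasses import dataclass, field

# Illustrative only: an intent decomposed into a DAG of agent tasks.
@dataclass
class Step:
    id: str
    capability: str                # capability the Registry will match against
    depends_on: list[str] = field(default_factory=list)

@dataclass
class Plan:
    intent: str
    steps: list[Step]

    def ready(self, done: set[str]) -> list[Step]:
        """Steps whose dependencies have all completed."""
        return [s for s in self.steps
                if s.id not in done and all(d in done for d in s.depends_on)]

plan = Plan(
    intent="Summarize this scanned invoice and validate the VAT number",
    steps=[
        Step("ocr", "vision-ocr"),
        Step("parse", "document-parser", depends_on=["ocr"]),
        Step("vat", "vat-checker", depends_on=["parse"]),
        Step("report", "report-generator", depends_on=["parse", "vat"]),
    ],
)
print([s.id for s in plan.ready(done={"ocr"})])  # → ['parse']
```

Each pass over `ready()` yields the steps whose dependencies are satisfied, so independent branches of the DAG can execute in parallel.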

Registry

Discovers the best agent for each task using semantic search over capability embeddings stored in PostgreSQL with pgvector.

Capability → Embedding → Match
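Stripped of the database, the matching step reduces to nearest-neighbour search over embeddings. An illustrative sketch with toy 3-dimensional vectors; in AOA the embeddings live in PostgreSQL with pgvector, and the equivalent lookup would be a vector-distance query:

```python
import math

# Toy stand-in for the Registry's semantic match: cosine similarity
# over made-up capability embeddings. Agent names mirror the network above.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

capabilities = {
    "vision-ocr":     [0.9, 0.1, 0.0],
    "vat-checker":    [0.0, 0.2, 0.9],
    "web-researcher": [0.3, 0.8, 0.1],
}

def best_agent(task_embedding):
    # Highest similarity wins; pgvector would return this via ORDER BY distance.
    return max(capabilities, key=lambda name: cosine(capabilities[name], task_embedding))

print(best_agent([0.8, 0.2, 0.1]))  # → vision-ocr
```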

Orchestrator

Executes the plan by dispatching tasks to agents via Redis queues, tracking progress, and aggregating results in real time.

Task → Queue → Result
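The dispatch loop itself is simple. A toy simulation, with an in-process deque standing in for the Redis-backed queues and lambdas standing in for real agents:

```python
from collections import deque

# Stand-in agents; in AOA each would be a separate microservice
# consuming its own Redis queue (Redis + RQ per the tech stack).
agents = {
    "vision-ocr": lambda payload: f"text extracted from {payload}",
    "report-generator": lambda payload: f"report: {payload}",
}

def orchestrate(tasks):
    queue = deque(tasks)                  # Task → Queue
    results = {}
    while queue:
        task_id, agent, payload = queue.popleft()
        results[task_id] = agents[agent](payload)   # Queue → Result
    return results

out = orchestrate([
    ("t1", "vision-ocr", "invoice.pdf"),
    ("t2", "report-generator", "extracted text"),
])
```

The real orchestrator adds what the sketch omits: progress tracking, retries, and real-time aggregation across distributed workers.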

Agent Network

Each Agentic Unit is a self-contained microservice with its own model, manifest, and task queue

👁

Vision OCR

GPU-accelerated document OCR using DeepSeek vision models. Extracts text from scanned PDFs and images.

GPU · Vision

📄

Document Parser

Multi-format document parsing for PDF, DOCX, and plain text. Structures content for downstream agents.

PDF · DOCX

Quality Evaluator

LLM-based evaluation and fraud detection. Scores document quality and flags anomalies.

LLM · Scoring
📊

Report Generator

Generates structured Markdown reports from processed data. Summarizes findings across agent outputs.

Markdown · LLM
🔊

Audio Generator

Text-to-speech audio report generation. Creates audio summaries using Amazon Polly on AWS.

TTS · Polly
🎨

Infographic Generator

Visual summary generation using Bedrock image models. Creates infographic-style representations.

Image · Bedrock
🗃

Structured Storage

PostgreSQL with text-to-SQL. Stores structured data and enables natural language queries over results.

SQL · Postgres
🔍

VAT Checker

EU VAT number validation via the VIES API. Verifies business registration and tax compliance.

API · EU VIES
🌐

Web Researcher

Web search and content extraction. Enriches workflows with external data from the web.

Search · Scraping

Architecture Principles

Design decisions that make AOA production-ready

AU Agentic Units

Each AU encapsulates a focused capability with its own model, manifest, and task queue. Agent Cards capture non-functional requirement (NFR) metadata — performance, reliability, cost — enabling fitness-based discovery, not just keyword matching.
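As a sketch, an Agent Card might carry fields like these (names are illustrative, not AOA's actual schema), with a filter-then-rank fitness function on top:

```python
# Hypothetical Agent Card: capability description plus NFR metadata,
# so discovery can rank agents by fitness rather than keyword overlap.
card = {
    "name": "vision-ocr",
    "capability": "Extract text from scanned PDFs and images",
    "nfr": {
        "p95_latency_ms": 900,
        "success_rate": 0.995,
        "cost_per_call_usd": 0.004,
    },
}

def fitness(card, max_latency_ms=2000, budget_usd=0.01):
    """Reject agents that violate hard constraints, rank the rest by reliability."""
    nfr = card["nfr"]
    if nfr["p95_latency_ms"] > max_latency_ms or nfr["cost_per_call_usd"] > budget_usd:
        return 0.0
    return nfr["success_rate"]

print(fitness(card))  # → 0.995
```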

Right-Sized Intelligence

Not every task needs a frontier model. AOA uses the smallest model that gets the job done — cloud LLMs for planning, specialized models for OCR, simple APIs for validation. Cost-efficient without sacrificing quality.

Memory as System Property

Three layers of memory emerge from the architecture: planner DAGs as episodic memory, registry embeddings as semantic memory, and orchestrator logs as working memory. The system learns from what it has solved before.

Model-Agent Separation

Agent logic is decoupled from model inference. The same agent code runs on a DGX Spark with local GPU models or on AWS with Bedrock APIs — just change an environment variable. No code changes needed.
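A minimal sketch of that switch, assuming a hypothetical MODEL_BACKEND variable and client classes (not AOA's actual configuration keys):

```python
import os

# Stub backends; real implementations would wrap local GPU inference
# and the Bedrock API respectively.
class LocalGPUClient:
    def complete(self, prompt: str) -> str:
        return f"[local-gpu] {prompt}"

class BedrockClient:
    def complete(self, prompt: str) -> str:
        return f"[bedrock] {prompt}"

def make_client():
    # MODEL_BACKEND is an assumed env var name for illustration.
    backend = os.environ.get("MODEL_BACKEND", "bedrock")
    return LocalGPUClient() if backend == "local" else BedrockClient()

client = make_client()  # backend chosen by environment, not by code
```

Agent code only ever calls `make_client().complete(...)`, so moving between a DGX Spark and AWS changes the deployment environment, never the agent logic.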

Tech Stack

Production-grade infrastructure, all containerized

AWS Bedrock · Claude Sonnet 4 · PostgreSQL + pgvector · Apache AGE · Redis + RQ · Docker Compose · FastAPI · React + TypeScript · MinIO · Amazon Polly · Nginx · Cognito + Google OAuth · Let's Encrypt · Python 3.12