Governed Agentic AI · Reference Architecture
Governance was always an enforcement problem. AI just removed the friction that hid it.
A multi-agent financial crime investigation system built on Neo4j, Anthropic Claude, and Kong, demonstrating that a gateway control plane is the only way to make AI governance real at the point of execution.
Most AI governance frameworks describe what should happen. This system enforces what can happen. Entity Risk AI is a fully governed multi-agent investigation platform. Every tool call and every LLM invocation routes through Kong's dual gateway layer before reaching any upstream service. Denied actions are blocked, logged, and traceable. No agent — and no human — bypasses the control plane. The audit trail lives in Neo4j as a queryable subgraph, not a flat log file.
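The claim that the audit trail is a queryable subgraph, not a flat log, can be illustrated with a small sketch. The Cypher below is hypothetical: the labels (`ToolCall`, `Investigation`), relationship, and properties are assumptions for illustration, not the system's actual schema, which is described in the Graph Data Model section.

```python
# Illustrative Cypher for treating the audit trail as a graph you can query.
# Labels, relationships, and property names here are assumptions, not the
# system's real schema.
TRACE_QUERY = """
MATCH (t:ToolCall)-[:PART_OF]->(i:Investigation {id: $investigation_id})
WHERE t.decision = 'deny'
RETURN t.tool, t.actor_role, t.timestamp
ORDER BY t.timestamp
"""

def denied_actions(session, investigation_id: str) -> list[dict]:
    """Return every denied action in an investigation, straight from the graph.

    `session` is assumed to be a Neo4j driver session; because the audit trail
    is graph data, denials are retrieved with the same query language as the
    business data, not by grepping a log file.
    """
    result = session.run(TRACE_QUERY, investigation_id=investigation_id)
    return [dict(record) for record in result]
```

The point of the sketch is the shape of the question: "show me every blocked action in this investigation, in order" becomes one graph query rather than a log-parsing job.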
Governance Lives Outside the System
Organisations have policies, roles, and approval workflows. What most don't have is a single point where those policies are enforced at the moment a decision is made. Governance describes intent. Execution determines reality.
AI agents make this gap impossible to ignore. They don't hesitate. They don't ask for exceptions. They execute every permitted path, consistently, at machine speed. What was occasional becomes guaranteed.
The Fix Is Architectural, Not Cultural
The answer is not more policy documentation. It is a control plane — a single enforcement layer that sits between intent and execution for every action, human or agent, before it happens.
That layer must be centralised (no bypass paths), fail-closed (denied actions blocked and logged), fully traceable, and role-consistent — the same gate applies to humans and agents equally.
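Those four properties can be sketched as a single gate function. This is a minimal illustration of the pattern, not the Kong configuration: the `POLICY` table, role names, and tool names below are hypothetical.

```python
# Minimal sketch of a fail-closed, role-consistent control-plane gate.
# POLICY, the roles, and the tool names are hypothetical examples.
from dataclasses import dataclass, field

POLICY: dict[str, set[str]] = {
    "analyst": {"search_entities", "get_ubo_graph"},
    "planner_agent": {"search_entities"},
}

@dataclass
class Gate:
    audit: list[dict] = field(default_factory=list)

    def enforce(self, actor_role: str, action: str) -> int:
        """One gate for humans and agents alike: allow only what policy names."""
        # Unknown roles get an empty permission set, so the default is deny.
        allowed = action in POLICY.get(actor_role, set())
        # Every decision is recorded at execution time, allow or deny.
        self.audit.append(
            {"role": actor_role, "action": action,
             "decision": "allow" if allowed else "deny"}
        )
        return 200 if allowed else 403  # fail closed: blocked AND logged

gate = Gate()
```

The design choice worth noting is the default: a role or action the policy does not name is denied, which is what "fail-closed" means in practice.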
Tool gateway: all 15 investigation tools route through Kong's /mcp route with key-auth validation. Consumer groups enforce per-role restrictions; denials return HTTP 403 and are traced.
AI gateway: all Anthropic calls route through Kong's AI Gateway. The planner uses /ai/sonnet; agents use /ai. In Kong mode, the app never holds the Anthropic API key.
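The routing split between planner and agents can be sketched from the client's side. The gateway base URL, the key value, and the exact header name are assumptions for illustration; the point is that the app addresses Kong routes and sends only its Kong credential, never an Anthropic key.

```python
# Sketch of how a client might address the AI gateway routes in Kong mode.
# KONG_BASE, the key value, and the header name are hypothetical.
KONG_BASE = "https://gateway.example.com"

def build_llm_request(caller: str, payload: dict) -> dict:
    """Planner traffic goes to /ai/sonnet; all other agents go to /ai.

    The request carries only the app's Kong key. The Anthropic API key
    stays inside the gateway, so the app cannot bypass the control plane
    by calling Anthropic directly.
    """
    path = "/ai/sonnet" if caller == "planner" else "/ai"
    return {
        "url": KONG_BASE + path,
        "headers": {"X-Kong-API-Key": "app-key"},  # no Anthropic `x-api-key`
        "json": payload,
    }
```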
1. Agents don't introduce new risk. They make existing system permissiveness executable at scale. This system addresses the root cause: the absence of an enforcement layer.
2. Kong becomes the control plane for agentic AI: the single enforcement point between intent and execution, for both tool calls and model invocations.
3. The audit trail is proof, not documentation. Every action is captured at execution time in a queryable graph, so compliance is demonstrable after the fact.
4. The same rules apply to everyone. Humans and agents are governed identically: same gate, same policy, every time, with no carve-outs.
5. Production-deployable, not a proof of concept. Open-sourced under Apache 2.0, hosted on Railway and Neo4j AuraDB, and built on public Companies House UBO data.
What Enforcement Actually Requires
A single control plane, enforced across multiple execution points. Every action, human or agent, evaluated before it reaches any system.
Solution Architecture
System Control Flow
Graph Data Model
Business graph (left) and Trace subgraph (right) shown together.
Control Plane Configuration using Kong
(/kongai routes excluded)
- entity-risk-ai-kong-ai-gateway
- entity-risk-ai-app (X-Kong-API-Key auth)
- mcp-upstream-service
- entity-risk-ai-production.up.railway.app