KB AI is a knowledge-based hybrid AI that uses precise, fact-based knowledge to help LLMs and other applications with decision-making and planning.
It addresses the problems of AI accuracy and explainability.
It can be taught from a single example and managed by non-technical users.
| | Pay as You Go (free until 1 June 2025, Try Now) | Enterprise (from $495/seat/year, Book a Demo) |
| --- | --- | --- |
| **Pricing** | | |
| Knowledge base edits | 100 credits for knowledge base creation or modification provided for free; additional usage credits can be purchased as needed | Unmetered (subject to AUP) |
| Inference API calls | Unmetered, subject to concurrency limit | Unmetered, subject to concurrency limit |
| **Fact-based knowledge base editor** | | |
| Access for a single user | + | + |
| Team access with configurable permissions | - | + |
| **Code generation** | | |
| Generate inference engine and use inference API | + | + |
| Export function definitions for use with LLM function calling | + | + |
| Export generated code (JavaScript natively; can be translated to other languages) | - | + |
| On-premise knowledge base editor deployment | - | Per agreement |
| **Access to the inference API** | | |
| Maximum parallel requests | 10 | 1,000 |
| Token-protected access to API | - | + |
| Configure CORS settings for public web API | - | + |
Follow these steps to create, deploy, and start using your hybrid AI agent with fact-based reasoning.
Use a simple HTTP request to query your knowledge base. Example using curl:
```bash
curl -X POST https://<your-knowledge-base-url> \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-token>" \
  -d '{"fact": "screenshot.applicationCompliant"}'
```
Response:
```json
{
  "stopReason": "FACT_NEEDED",
  "facts": {},
  "log": [
    {
      "code": "RULE_STARTED",
      "message": "Inferring screenshot.applicationCompliant using rule 'Is the system compliant based on the visibility and approval status of applications?'",
      "fact": "screenshot.applicationCompliant",
      "dependencies": {
        "type": "object",
        "properties": {
          "application.isApproved": {
            "type": "boolean",
            "parameters": {
              "type": "object",
              "required": ["application"],
              "properties": {
                "application": { "type": "string" }
              }
            }
          },
          "screenshot.visibleApplications": {
            "type": "array",
            "items": { "type": "string" }
          }
        }
      }
    },
    {
      "code": "FACT_NEEDED",
      "message": "Inference stopped due to unknown fact, re-run with fact screenshot.visibleApplications provided",
      "fact": "screenshot.visibleApplications"
    }
  ]
}
```
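The re-run-with-fact pattern suggested by the log entry can be handled programmatically. Below is a minimal TypeScript sketch: the endpoint URL, token, and response shape follow the curl example above, while the `facts` request field (for passing known facts back to the engine) and the helper name are assumptions, not a documented KBAI request format.

```typescript
// Sketch of querying the KBAI inference API and supplying facts when
// inference stops with FACT_NEEDED. Endpoint and token placeholders
// match the curl example above; the "facts" request field is an
// assumption about how known facts are passed back.
type InferenceResponse = {
  stopReason: string;
  facts: Record<string, unknown>;
  log: { code: string; message: string; fact?: string }[];
};

async function infer(
  fact: string,
  facts: Record<string, unknown> = {}
): Promise<InferenceResponse> {
  const res = await fetch("https://<your-knowledge-base-url>", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer <your-token>",
    },
    body: JSON.stringify({ fact, facts }),
  });
  return (await res.json()) as InferenceResponse;
}

// First run stops with FACT_NEEDED; re-run with the missing fact provided.
const first = await infer("screenshot.applicationCompliant");
if (first.stopReason === "FACT_NEEDED") {
  const missing = first.log.find((e) => e.code === "FACT_NEEDED")?.fact;
  console.log(`Missing fact: ${missing}`);
  const second = await infer("screenshot.applicationCompliant", {
    "screenshot.visibleApplications": ["Slack", "Chrome"],
  });
  console.log(second.stopReason, second.facts);
}
```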
KBAI can be interfaced with LLMs in two directions.
First, KBAI can provide a set of facts and a completed reasoning chain to an LLM. This can be done either in the initial prompt, by adding the facts output of KBAI to it, or by offering KBAI inference as a function to the LLM.
Usually it is sufficient to provide the facts alone, since LLMs can infer their meaning from the fact names; in some cases, however, you may wish to add a description of the reasoning steps and the rule definitions that were used to reach the conclusion.
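As a sketch of the prompt-based option, the snippet below folds a KBAI facts object into prompt context for an LLM. The prompt wording and helper name are illustrative assumptions, not part of the KBAI API:

```typescript
// Illustrative only: turn a KBAI facts object into prompt context
// for an LLM. The wording here is an assumption, not a KBAI API.
function factsToPrompt(facts: Record<string, unknown>): string {
  const lines = Object.entries(facts).map(
    ([name, value]) => `- ${name} = ${JSON.stringify(value)}`
  );
  return [
    "The following facts were established by a fact-based reasoning engine.",
    "Treat them as ground truth when answering:",
    ...lines,
  ].join("\n");
}

// Example: facts returned by a completed KBAI inference run.
console.log(
  factsToPrompt({
    "screenshot.applicationCompliant": true,
    "screenshot.visibleApplications": ["Slack", "Chrome"],
  })
);
```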
Another way to offer KBAI output to an LLM is to expose KBAI rules to the LLM as functions. For this purpose, KBAI exports ready-to-use JSON Schemas for the function parameters.
Simply click the "Export parameters" link next to the rule you want to provide to the LLM, and use the exported parameters to tell the LLM what information the function expects.
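For instance, an exported parameters schema can be dropped into an OpenAI-style tool definition. The sketch below reuses the parameters schema from the `application.isApproved` dependency in the inference log above; the function name and description are illustrative assumptions:

```typescript
// Illustrative sketch: wrap an exported KBAI parameters schema in an
// OpenAI-style tool definition. The schema is copied from the
// inference log above; the name and description are assumptions.
const exportedParameters = {
  type: "object",
  required: ["application"],
  properties: {
    application: { type: "string" },
  },
};

const tools = [
  {
    type: "function",
    function: {
      name: "application_isApproved",
      description: "Checks via KBAI whether an application is approved for use.",
      parameters: exportedParameters,
    },
  },
];
```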
Second, in many practical applications, executing the rules may require answers that an LLM can obtain from unstructured source data (texts, files, document images, etc.). In such cases it makes sense to configure the LLM to provide answers whenever KBAI inference stops with the FACT_NEEDED code.
Combining KBAI and LLMs in this way enables an agentic approach: KBAI reasoning guides the LLM to make step-by-step decisions based on the source data, while the more complex reasoning stays in explainable KBAI rules. The result is a hybrid AI agent that is more powerful and predictable than the traditional all-LLM approach.
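A minimal sketch of such a loop follows: whenever inference stops with FACT_NEEDED, the missing fact is obtained from an LLM and fed back into the next run. `infer` is the helper from the earlier example, and `askLLM` stands in for whatever LLM client you use; both are declared here only for illustration, not prescribed by KBAI.

```typescript
// Hypothetical agent loop: KBAI reasoning drives the steps, while the
// LLM fills in facts it can extract from unstructured source data.
// infer() is the helper from the earlier example; askLLM() stands in
// for your LLM client.
declare function infer(
  fact: string,
  facts: Record<string, unknown>
): Promise<{
  stopReason: string;
  facts: Record<string, unknown>;
  log: { code: string; message: string; fact?: string }[];
}>;
declare function askLLM(question: string, sourceData: string): Promise<unknown>;

async function runAgent(goalFact: string, sourceData: string) {
  const knownFacts: Record<string, unknown> = {};
  for (let step = 0; step < 10; step++) {
    const result = await infer(goalFact, knownFacts);
    if (result.stopReason !== "FACT_NEEDED") {
      return result.facts; // inference completed (or stopped for another reason)
    }
    const missing = result.log.find((e) => e.code === "FACT_NEEDED")?.fact;
    if (!missing) break;
    // Ask the LLM to extract the missing fact from the source data,
    // then re-run inference with it provided.
    knownFacts[missing] = await askLLM(
      `Determine the value of "${missing}" from the data provided.`,
      sourceData
    );
  }
  throw new Error("Inference did not complete within the step limit");
}
```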
Send us an email or book a call below: