KB AI is a knowledge-based hybrid AI that uses precise, fact-based knowledge to help LLMs and other applications with decision making and planning.

It addresses two core problems of AI: accuracy and explainability.

It can be taught from a single example and managed by non-technical users.

Pricing

Pay as You Go: free until 1 June 2025 (Try Now)
Enterprise: from $495/seat/year (Book a Demo)

Knowledge base edits
  • Pay as You Go: 100 credits for knowledge base creation or modification provided for free; additional usage credits can be purchased as needed
  • Enterprise: unmetered (subject to AUP)
Inference API calls
  • Both plans: unmetered, subject to concurrency limit

Fact-based knowledge base editor
  • Access for a single user: included in both plans
  • Team access with configurable permissions: Enterprise only

Code generation
  • Generate inference engine and use inference API: included in both plans
  • Export function definitions for use with LLM function calling: included in both plans
  • Export generated code (in JavaScript natively; can be translated to other languages): Enterprise only
  • On-premise knowledge base editor deployment: Enterprise only, per agreement

Access to the inference API
  • Maximum parallel requests: 10 on Pay as You Go; 1,000 on Enterprise
  • Token-protected access to API: Enterprise only
  • Configure CORS settings for public web API: Enterprise only

Quick start

Follow these steps to create, deploy, and start using your hybrid AI agent with fact-based reasoning.

Step 1: Create Your Knowledge Base

  1. Sign Up or Log In
    • Visit /users/sign_up and create an account (free for single users) or log in with your credentials.
    • For Enterprise users, contact sales to set up team access.
  2. Access the Knowledge Base Editor
    • From the dashboard, press "Create New".
    • Name your knowledge base (e.g., "Sales Process" or "Compliance Rules").
  3. Add Facts
    • Press "Modify" and use the editor to type or paste rules.
    • KB AI learns best from formalized rules and workflows, such as policies, process descriptions, or job descriptions (see the sample rules after this list).
  4. Save Your Knowledge Base
    • Click "Save changes" to store your initial version. Note that processing may take a few minutes; do not refresh the page until it completes. Free accounts get 100 credits for creation/modification (additional credits available for purchase). Enterprise users enjoy unmetered edits.
  5. Test Your Knowledge Base
    • Press the "Evaluate" button next to a rule to see how it executes and what parameters it takes.
    • If you want to change rules, press "Modify" and enter additional rules to be incorporated into the knowledge base (or mention what should be removed).
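
As an illustration, the rules you paste into the editor could look like the following. These hypothetical rules match the compliance example used in Step 3; the exact phrasing is up to you:

    An application is approved if it appears on the list of approved applications.
    A screenshot is compliant with application policy if every application visible in the screenshot is approved.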

Step 2: Deploy Your Knowledge Base

  1. Choose Deployment Option
    • API Mode: press "Deploy" to deploy as an inference API for real-time use (see Step 3). When your API endpoint URL is displayed, save it for future use.
    • Code Export (Enterprise Only): click "Export Code" to download JavaScript function definitions. Use these with your LLM, or translate them to Python, Java, etc. via the built-in converter.

Step 3: Run the Inference API

  1. Make an API Call

    Use a simple HTTP request to query your knowledge base. Example using curl:

    curl -X POST https://<your-knowledge-base-url> \
          -H "Content-Type: application/json" \
          -H "Authorization: Bearer <your-token>" \
          -d '{"fact": "screenshot.applicationCompliant"}'

    Response:

    {
        "stopReason": "FACT_NEEDED",
        "facts": {},
        "log": [
            {
                "code": "RULE_STARTED",
                "message": "Inferring screenshot.applicationCompliant using rule 'Is the system compliant based on the visibility and approval status of applications?'",
                "fact": "screenshot.applicationCompliant",
                "dependencies": {
                    "type": "object",
                    "properties": {
                        "application.isApproved": {
                            "type": "boolean",
                            "parameters": {
                                "type": "object",
                                "required": [
                                    "application"
                                ],
                                "properties": {
                                    "application": {
                                        "type": "string"
                                    }
                                }
                            }
                        },
                        "screenshot.visibleApplications": {
                            "type": "array",
                            "items": {
                                "type": "string"
                            }
                        }
                    }
                }
            },
            {
                "code": "FACT_NEEDED",
                "message": "Inference stopped due to unknown fact, re-run with fact screenshot.visibleApplications provided",
                "fact": "screenshot.visibleApplications"
            }
        ]
    }
  2. Request parameters:
    • fact – name of the fact to infer, required
    • facts – initial set of facts to use for inference, optional
  3. Response fields:
    • stopReason – COMPLETED if the answer was obtained (it will be contained in facts, along with the intermediate reasoning outcomes), or FACT_NEEDED if inference stopped because a fact is required. To re-run inference, call the endpoint again, providing the facts from the output as well as the missing fact (see the sketch after this list).
    • facts – all facts that were provided or inferred
    • log – step-by-step reasoning log, listing rules in order of execution. If inference stopped with the FACT_NEEDED code, the last log entry of type FACT_NEEDED contains the name of the missing fact. Each rule in the log includes a description and a dependencies JSON schema defining the expected data types for the input facts.
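
To illustrate the re-run flow, here is a minimal TypeScript sketch. The endpoint URL, token, and the supplied fact value are placeholders, and the helper name infer is our own; error handling is omitted:

    // Minimal sketch: query the inference API, then re-run once with the missing fact.
    // (Top-level await requires an ES module context; Node 18+ provides fetch.)
    const ENDPOINT = "https://<your-knowledge-base-url>"; // placeholder
    const TOKEN = "<your-token>";                         // placeholder

    async function infer(fact: string, facts: Record<string, unknown> = {}) {
      const res = await fetch(ENDPOINT, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${TOKEN}`,
        },
        body: JSON.stringify({ fact, facts }),
      });
      return res.json();
    }

    let result = await infer("screenshot.applicationCompliant");
    if (result.stopReason === "FACT_NEEDED") {
      // The last log entry of type FACT_NEEDED names the missing fact.
      const missing = result.log[result.log.length - 1].fact;
      result = await infer("screenshot.applicationCompliant", {
        ...result.facts,
        [missing]: ["Slack", "Excel"], // placeholder value supplied by your application
      });
    }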

Interfacing with LLMs

KBAI can be interfaced with LLMs in two directions.

Firstly, KBAI can be used to provide a set of facts and a completed reasoning chain to an LLM. This can be done either in the initial prompt, by adding the facts output of KBAI to it, or by offering KBAI inference as a function to the LLM.

Usually it is sufficient to provide the facts alone, as LLMs can infer their meaning from their names; in some cases, however, you may wish to add a description of the reasoning steps and the rule definitions that were used to reach the conclusion.
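
For example, a completed facts object could be embedded in a prompt along the following lines (a sketch only; result is the response from the inference sketch in Step 3, and the prompt wording is illustrative):

    // Embed KBAI's inferred facts into an LLM prompt.
    const prompt = [
      "Use the following verified facts when answering:",
      JSON.stringify(result.facts, null, 2),
      "Question: Is the system compliant, and why?",
    ].join("\n\n");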

Another way to offer KBAI output to an LLM is to expose KBAI rules to the LLM as functions. For this purpose, KBAI exports ready-to-use function-parameter JSON schemas.

Simply click the "Export parameters" link next to the rule you want to offer to the LLM and use the parameters to let the LLM know what information the function expects.
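
For instance, assuming an OpenAI-style function-calling format (adapt to your LLM provider; the function name and description below are made up for illustration), the exported schema slots into the parameters field of a tool definition:

    // Wrap an exported rule as an LLM tool definition.
    const exportedParameters = {
      // paste the JSON schema from "Export parameters" here
    };

    const tools = [
      {
        type: "function",
        function: {
          name: "screenshot_applicationCompliant", // illustrative name
          description:
            "Checks compliance based on the visibility and approval status of applications.",
          parameters: exportedParameters,
        },
      },
    ];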

Secondly, in many practical applications, executing the rules may require answers that an LLM can obtain from unstructured source data (texts, files, document images, etc.). In such cases it makes sense to configure the LLM to provide answers whenever KBAI inference stops with the FACT_NEEDED code.

Combining KBAI and LLMs in this way enables an agentic approach: KBAI reasoning guides the LLM to make step-by-step decisions based on the source data, while the more complex reasoning stays defined by explainable KBAI rules. This allows for the creation of hybrid AI agents that are much more powerful and predictable than the traditional all-LLM approach.
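
A minimal sketch of such a loop, reusing the infer helper from Step 3 (the askLLM parameter is a stand-in for your own LLM call that extracts a fact value from the source data; the iteration cap is an arbitrary safeguard):

    // Hybrid agent loop: KBAI drives the reasoning; the LLM fills in missing
    // facts extracted from unstructured source data.
    async function inferWithLLM(
      fact: string,
      source: string,
      askLLM: (factName: string, source: string) => Promise<unknown>,
    ) {
      let facts: Record<string, unknown> = {};
      for (let i = 0; i < 20; i++) { // guard against endless loops
        const result = await infer(fact, facts);
        if (result.stopReason === "COMPLETED") return result.facts;
        // FACT_NEEDED: the last log entry names the missing fact.
        const missing = result.log[result.log.length - 1].fact;
        facts = { ...result.facts, [missing]: await askLLM(missing, source) };
      }
      throw new Error("Inference did not complete");
    }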

Book a demo

Send us an email or book a call below: