Ontic (Beta)

Application layer

Meet Goober

A governance-grounded conversational agent built on Phi-4 14B.

Goober answers governance questions using your risk profile and 54 verified oracle sources covering GDPR, HIPAA, PCI DSS, the EU AI Act, ISO 42001, and more. It doesn’t guess. It retrieves, cites, and stays in its lane.

Without this

Without a governance-grounded agent, compliance questions get answered by general-purpose chatbots that confabulate regulatory details. Professionals rely on unverified responses for decisions that carry legal liability.

Profile-grounded, not prompt-stuffed

Goober isn’t a generic chatbot with a compliance system prompt. It loads your governance profile — industry, segment, risk category, EAD scores — from the 4-question wizard you already completed. Every answer is scoped to your context before the model sees a single token.

On each message, Goober runs a hybrid vector + full-text search across 54 oracle documents to retrieve the specific regulatory or framework guidance relevant to your question. What it finds shapes the answer. When it doesn't find relevant guidance, it says so.
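The hybrid retrieval idea can be sketched in a few lines: blend a vector-similarity score with a keyword-overlap score and rank by the combination. This is a toy illustration, not Ontic's pipeline; the real system uses pgvector and Postgres full-text search, and the embeddings and `alpha` weight here are made up.

```python
# Toy sketch of hybrid retrieval: combine a vector-similarity score
# with a full-text (keyword-overlap) score. The real pipeline uses
# pgvector + Postgres full-text search; these embeddings are toy values.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query words that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_rank(query, query_vec, docs, alpha=0.6):
    """Score each (text, vec) doc by alpha*vector + (1-alpha)*keyword."""
    scored = [
        (alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text), text)
        for text, vec in docs
    ]
    return sorted(scored, reverse=True)

docs = [
    ("GDPR Article 35 requires a data protection impact assessment", [0.9, 0.1]),
    ("PCI DSS covers cardholder data environments", [0.2, 0.8]),
]
best = hybrid_rank("gdpr impact assessment", [0.85, 0.2], docs)[0][1]
print(best)  # the GDPR document ranks first
```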

After retrieval, every claim in the response passes through an entailment verification pipeline — embedding-based similarity scoring against the retrieved oracle evidence. Claims that can’t be grounded are flagged, not silently passed through.
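The verification step can be illustrated as a similarity threshold: each claim embedding is compared against the retrieved evidence, and claims whose best support falls below a cutoff are flagged. The embeddings and the 0.8 threshold below are illustrative assumptions, not Ontic's actual values.

```python
# Sketch of embedding-based entailment checking: each claim is scored
# against retrieved evidence; claims below a support threshold are
# flagged rather than silently passed through. Toy vectors; the 0.8
# threshold is an illustrative value, not Ontic's.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def verify_claims(claims, evidence_vecs, threshold=0.8):
    """Return (claim, supported?) pairs based on best evidence similarity."""
    results = []
    for text, vec in claims:
        support = max((cosine(vec, ev) for ev in evidence_vecs), default=0.0)
        results.append((text, support >= threshold))
    return results

evidence = [[1.0, 0.0], [0.0, 1.0]]
claims = [
    ("DPIAs are required for high-risk processing", [0.95, 0.05]),  # close to evidence
    ("Fines are capped at $50", [0.5, 0.5]),                        # weakly supported
]
results = verify_claims(claims, evidence)
for claim, ok in results:
    print(claim, "->", "supported" if ok else "flagged")
```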

Your governance profile

Industry, segment, deployment tier, EAD risk axes, and Ontic-needed signal — all injected into the system prompt so Goober never asks you to repeat yourself.
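A minimal sketch of what profile injection might look like, assuming a simple dictionary profile; the field names (`industry`, `ead_scores`, and so on) are illustrative, not Ontic's actual schema.

```python
# Hypothetical sketch: rendering a governance profile into the system
# prompt so the model starts with full context. Field names are
# illustrative, not Ontic's actual schema.

def build_system_prompt(profile: dict) -> str:
    """Render a governance profile into system-prompt context."""
    ead = ", ".join(f"{axis}={score}" for axis, score in profile["ead_scores"].items())
    return (
        "You are Goober, a governance-grounded assistant.\n"
        f"User industry: {profile['industry']} ({profile['segment']})\n"
        f"Risk category: {profile['risk_category']}\n"
        f"EAD scores: {ead}\n"
        "Scope every answer to this context. Cite oracle sources."
    )

profile = {
    "industry": "Healthcare",
    "segment": "Provider",
    "risk_category": "High",
    "ead_scores": {"exposure": 4, "autonomy": 2, "decision_impact": 5},
}
prompt = build_system_prompt(profile)
print(prompt)
```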

54 oracle sources

Regulatory frameworks (GDPR, HIPAA, EU AI Act, DOJ ECCP), industry standards (ISO 27001, ISO 42001, PCI DSS, SOC 2), and 30 industry-specific oracles — all embedded with pgvector and retrieved by relevance.

Boundary awareness

18 boundary topics (legal advice, medical practice, tax guidance, etc.) are detected at retrieval time. When Goober hits a boundary, it redirects to a qualified professional instead of guessing.
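One way boundary detection can work is a lightweight topic match run before answering; when a message trips a boundary, the agent redirects instead of generating advice. The topic keywords below are hypothetical examples, not Ontic's taxonomy, and a production system would use a classifier rather than keyword overlap.

```python
# Illustrative boundary check: match a message against boundary topics
# at retrieval time and redirect instead of answering. Topic keywords
# are hypothetical examples, not Ontic's actual taxonomy; only
# single-word keywords are matched in this simplified sketch.

BOUNDARY_TOPICS = {
    "legal advice": {"lawsuit", "sue", "liability"},
    "medical diagnosis": {"diagnose", "symptoms", "treatment"},
    "tax guidance": {"deduction", "irs", "withholding"},
}

def detect_boundary(message: str):
    """Return the first boundary topic the message touches, else None."""
    words = set(message.lower().split())
    for topic, keywords in BOUNDARY_TOPICS.items():
        if words & keywords:
            return topic
    return None

topic = detect_boundary("Can I sue my vendor over this breach?")
print(topic)  # -> legal advice
```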

Entailment verification

Every factual claim is scored against retrieved oracle evidence using embedding-based entailment. Claims below the support threshold are flagged — the model’s output is verified, not trusted at face value.

The model is the brain. The oracles are the library. Your profile is the scope. Together, the answer is grounded.

How it works

Four questions → governance profile → grounded chat.

1. Take the wizard

Four questions about your AI deployment. Takes about 90 seconds. Produces a governance profile with industry, segment, risk category, and EAD scores.
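The wizard's output can be pictured as a small data structure: four answers distilled into a profile the chat then loads. The field names and the risk rule below are illustrative assumptions, not Ontic's actual scoring.

```python
# Hypothetical sketch of the wizard output: answers distilled into a
# governance profile. Field names and the risk rule are illustrative.
from dataclasses import dataclass

@dataclass
class GovernanceProfile:
    industry: str
    segment: str
    risk_category: str
    ead_scores: dict  # axis name -> 1-5 score

def profile_from_wizard(answers: dict) -> GovernanceProfile:
    # Toy rule: any EAD axis at 4 or above puts the profile in "High".
    ead = answers["ead_scores"]
    risk = "High" if max(ead.values()) >= 4 else "Standard"
    return GovernanceProfile(answers["industry"], answers["segment"], risk, ead)

p = profile_from_wizard({
    "industry": "Financial services",
    "segment": "Fintech",
    "ead_scores": {"exposure": 4, "autonomy": 2, "decision_impact": 3},
})
print(p.risk_category)  # -> High
```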

2. Chat with Goober

Your profile is loaded automatically. Ask about frameworks, compliance requirements, implementation costs, or whether you need Ontic at all.

3. Get grounded answers

When your governance profile exists, oracle grounding is automatic. Answers are backed by verified sources with provenance badges. For general topics outside your governance scope, Goober answers naturally.

4. See the sources

Every grounded answer shows which oracles were consulted, what tier they are, and how relevant the match was. No black boxes.

What Goober knows

54 oracle documents. 19 frameworks, 30 industries.

Regulatory frameworks

GDPR · HIPAA · EU AI Act · DOJ ECCP

Industry standards

ISO 27001 · ISO 42001 · ISO 23894 · PCI DSS · SOC 2 · NIST CSF 2.0 · NIST AI RMF · NIST AI 600-1

Industry oracles

Financial services · Healthcare · Software · Legal · Education · Government · Energy · Cybersecurity · and 22 more

Boundaries

Legal advice · Medical diagnosis · Tax guidance · Investment advice · Consumer lending · Employment discrimination · and 12 more

Sources are curated and verified by Ontic governance pipelines.

Why this model size

Not a limitation. A design decision.

Goober runs on Phi-4 14B and fits in practical single-node deployments. The model handles orchestration — conversation, structured output, routing. The oracles handle truth. You don’t need frontier-scale parameters when the knowledge is in the retrieval layer.

The governed workload decomposes into tasks compact models handle well: instruction following, classification, and conversational flow. The hard part — knowing what GDPR Article 35 actually says — is in the oracle, not the weights.

Cost

Everyone else is building bigger. We went the other direction.

Training
  Frontier model: tens of thousands of GPUs, months, $100M+
  Goober (Phi-4 14B): single consumer GPU, 120 steps, under an hour, <$1
Inference
  Frontier model: multi-GPU clusters, $0.01–$0.06 per 1K tokens
  Goober: single GPU via Ollama, orders of magnitude cheaper
Deployment
  Frontier model: GPU cluster, specialized cooling, dedicated ops team
  Goober: EC2 instance with Ollama; HTTPS via Caddy
Power
  Frontier model: megawatts; literal power-plant negotiations
  Goober: one EC2 instance; standard compute

If your governance stack requires a data center, you’ve built a dependency, not a solution.

Tradeoffs

You gain

  • +Answers grounded in verified regulatory and framework sources
  • +Oracle provenance on every response — see which sources were consulted
  • +Boundary detection — redirects to professionals instead of guessing
  • +Profile-scoped context — no cold start, no “tell me about your business”
  • +Transparent confidence — tells you when it doesn’t have verified data
  • +Runs on a single GPU, dramatically lower cost than frontier models

You lose

  • Not a general-purpose chatbot — optimized for governance and compliance
  • Requires the wizard for oracle grounding — answers general questions without it
  • Compact-model reasoning ceiling — complex multi-step analysis may need human review

We think that’s a good trade.

Try it now

Take the 4-question wizard, then chat with Goober grounded in your governance profile and verified oracle sources.

Who uses this

Operator

Compliance teams

Governance practitioners who use Goober for regulatory research, framework comparison, and implementation guidance.

Consumer

Anyone asking compliance questions

Risk officers, legal teams, product managers, and auditors who need grounded answers backed by evidence.