EARLY ACCESS β€” API NOT YET AVAILABLE

The Human.Exe governance SDK and API are not yet publicly available. This page is a reference preview for the Phase 3 developer release. API access opens in Phase 3 — Developer tier ($30/mo) and above. Current live tiers (Observer, Citizen, Scholar) provide content access only, not API access.

API DOCUMENTATION · PHASE 3 PREVIEW

Human🍁Exe API

The Human-Link.

Turbocharge your AI workflow. Already have an AI API key? You're in the right place.

The Governance Layer for AI That Thinks — structural, provider-agnostic, session-aware.

Human.Exe provides a structural AI governance layer you deploy between your application and any AI provider. Your model. Our governance. Every decision traceable.

Most AI tools give you a response. Human.Exe governs a decision. The difference is architectural, not cosmetic.

What Human.Exe is

  • A governance intelligence layer — structural, provider-agnostic, session-aware
  • The layer that makes AI decisions traceable, coherent, and auditable at the architecture level
  • Sits between your app and your AI provider — every inference governed, not just logged

What Human.Exe is not

  • Not a model. Not an AI assistant. Not a chatbot wrapper.
  • Not a compliance tool that monitors outputs after the fact
  • Not another OpenAI wrapper

FIRST OF ITS KIND

This is not a wrapper. This is the first public implementation of structurally governed AI.

The Human.Exe API is a working proof of concept and a real product — the first public example of an AI governance layer that structurally controls how inference behaves, not just what it outputs. Every request is governed at the architecture level: coherency-scored, session-tracked, drift-detected, and fully auditable. This is fundamentally different from how AI is deployed today.

The API is permanent infrastructure for those it serves — but it's not even the tip of the iceberg. What you see here is the surface layer of something significantly larger. The governance architecture underneath this product was designed for a class of intelligence that doesn't exist yet in public. It will.

Quickstart · PREVIEW

The SDK and API endpoint below are not yet released. This walkthrough previews the integration experience for Phase 3.

When the API launches, get your first governed inference in under five minutes.

1

Choose your AI provider

Bring your own API key — OpenAI, Anthropic, Google, or self-hosted. Human.Exe wraps any supported provider. You are not locked in.

2

Select your tier

API access starts at Developer ($30/mo, Phase 3). The Observer free tier includes the governance surface viewer but does not include API or SDK access.
3

Install the SDK

npm install @human-exe/governance
# or
pnpm add @human-exe/governance

4

Configure governance

import { GovernanceLayer } from '@human-exe/governance';

const gov = new GovernanceLayer({
  apiKey: process.env.HUMAN_EXE_API_KEY,
  provider: 'anthropic',           // your provider
  providerKey: process.env.ANTHROPIC_API_KEY,
  tier: 'developer',               // API access starts at the Developer tier (Phase 3)
  audit: true,                      // structured audit trail
  coherency: true,                  // session coherency tracking
});

5

Make your first governed call

const result = await gov.infer({
  sessionId: 'gs-demo-001',
  prompt: 'Analyse this quarterly report for risk indicators.',
  context: { domain: 'financial-analysis' },
});

console.log(result.response);          // governed model response
console.log(result.governance);         // audit trail object
console.log(result.governance.session_coherency);  // 0.0 – 1.0

Authentication · PREVIEW

API keys and the endpoint below are not yet active. This section documents the auth model for Phase 3.

Every request to the Human.Exe API will require two credentials:

Human.Exe API Key

Identifies your account and tier. Obtained from your dashboard after signup. Passed as Authorization: Bearer hxe_...

Provider API Key

Your own key for the upstream AI provider (OpenAI, Anthropic, etc.). Never stored — passed through the governance layer per-request. You retain full provider ownership.

Request Headers
POST https://api.human-exe.ca/v1/govern
Authorization: Bearer hxe_your_api_key
Content-Type: application/json
X-Provider-Key: sk-your_provider_key
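Assembled in code, the same request looks like the sketch below. The endpoint URL and header names are taken directly from the preview above; nothing is live yet, and the keys shown are placeholders.

```javascript
// Minimal sketch: assemble a /v1/govern request carrying both credentials.
// Endpoint and header names come from the preview docs above; the API is not yet active.
function buildGovernRequest(hxeKey, providerKey, payload) {
  return {
    url: 'https://api.human-exe.ca/v1/govern',
    options: {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${hxeKey}`,   // your Human.Exe key (hxe_...)
        'Content-Type': 'application/json',
        'X-Provider-Key': providerKey,         // your upstream provider key, passed through per request
      },
      body: JSON.stringify(payload),
    },
  };
}

// Usage with placeholder keys; pass the result to fetch(url, options) once the API is live.
const req = buildGovernRequest('hxe_demo', 'sk-demo', {
  sessionId: 'gs-demo-001',
  prompt: 'Hello.',
});
```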

The Governance Layer

Existing governance tools audit what AI does after deployment. Human.Exe embeds governance into the reasoning process before the response is generated.

Your App → Human.Exe Governance Layer → Your AI Provider → Governed Response + Audit Trail

The governance layer intercepts every inference request, applies structural governance — scope validation, drift detection, complexity routing — and returns the model response alongside a complete audit trail object. The developer sees inputs and governed outputs. The internals are opaque by design.
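The intercept-govern-forward flow can be pictured as a pipeline. The sketch below is purely conceptual: the stage names come from the documentation above, but the real internals are opaque by design, so every heuristic here (including the length-based routing stub) is an invented illustration, not the actual mechanism.

```javascript
// Conceptual illustration of the intercept flow; not the real internals, which are opaque.
// Each stage records what it applied, so the audit trail is assembled on the request path.
function governRequest(request, callProvider) {
  const applied = [];

  applied.push('scope_validation');   // is the prompt within the session's declared domain?
  applied.push('drift_detection');    // compare this turn against session history (stubbed)

  // Complexity routing, stubbed here as a trivial length heuristic for illustration only.
  const tier = request.prompt.length > 200 ? 'scholar' : 'observer';
  applied.push('complexity_routing');

  // Forward to the provider, then return response and audit trail together.
  const response = callProvider(request, tier);
  applied.push('audit_logging');
  return { response, governance: { tier_routed: tier, governance_applied: applied } };
}
```

The point of the shape: the governance object is built during the request, not reconstructed from logs afterwards.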

What the developer gets

  • Governed inference routing — routes requests to the appropriate model tier based on complexity
  • Structured audit trail — per request: decision context, model used, confidence indicators, governance applied
  • Session coherency tracking — flags drift, contradiction, scope violation across multi-turn interactions
  • Provider-agnostic integration — one SDK, any provider

Governed Sessions

A single AI response can look great. A session — 20 turns in, with context accumulated, contradictions built up, scope drifting — is where ungoverned AI falls apart. Coherence isn't a feature. It's an architectural property.

A governed session is a sequence of AI interactions managed by the governance layer. Coherency is tracked, drift is flagged, and the audit trail is maintained across every turn β€” not reconstructed after the fact.

Multi-turn governed session
// Turn 1
const t1 = await gov.infer({
  sessionId: 'gs-risk-analysis-001',
  prompt: 'Identify the top 3 risk factors in this dataset.',
  context: { domain: 'risk-assessment' },
});

// Turn 2 — governance tracks coherency across turns
const t2 = await gov.infer({
  sessionId: 'gs-risk-analysis-001',
  prompt: 'Now cross-reference those risks against the Q3 report.',
});

// Coherency monitored across the session
console.log(t2.governance.session_coherency);  // 0.94
console.log(t2.governance.coherency_delta);    // -0.02 (slight drift)
console.log(t2.governance.drift_flags);        // []

Session Best Practice

Always pass a sessionId for multi-turn interactions. A coherency score below 0.75 means the session is drifting — consider re-anchoring or resetting. Rate limits are per-governed-session: a 10-turn governed session counts differently from 10 independent calls.

Audit Trail

Every critical system produces an audit trail. Medical devices. Financial transactions. Legal records. AI systems making decisions that affect people should produce one too. Human.Exe makes the audit trail a structural output of the governance process — not a logging add-on.

Governance response object
{
  "response": "Based on the dataset analysis, the three primary risk...",
  "governance": {
    "session_id": "gs-risk-analysis-001",
    "request_id": "req-a7f3c901",
    "model_used": "claude-3-5-sonnet",
    "tier_routed": "citizen",
    "coherency_delta": +0.03,
    "session_coherency": 0.91,
    "audit_hash": "sha256:e3b0c44298fc1c149afbf4c8996fb924...",
    "governance_applied": [
      "scope_validation",
      "drift_detection",
      "complexity_routing",
      "audit_logging"
    ],
    "confidence": 0.87,
    "timestamp_utc": "2026-03-27T14:32:01.441Z"
  }
}

Retained by Human.Exe

Audit trails retained for 90 days on standard tiers. Enterprise plans extend retention with configurable policies. All trails are SHA-256 hashed and tamper-evident.

Your compliance records

Store the audit_hash locally for your own compliance. The hash is cryptographically tied to the full governance record — independently verifiable.

Coherency Tracking

A session-level metric that measures how structurally consistent responses are across turns. Flags drift, contradiction, scope violation, and sycophantic pattern collapse.

Score Range     Assessment               Action
0.90 – 1.00     Structurally coherent    Continue — session integrity is high
0.75 – 0.89     Minor drift detected     Monitor — consider re-anchoring context
Below 0.75      Coherency degradation    Reset session or re-establish scope

Coherency tracking is structural — it operates on the governance architecture itself, not as a heuristic bolted onto outputs. The difference between measuring coherency after inference and embedding it in the governance process is the difference between a dashboard and an architecture.
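The score bands map directly to a guard you can run after each turn. The thresholds below are the documented ones; the function itself is an illustrative client-side helper, not part of the SDK.

```javascript
// Map a session_coherency score to the documented action bands.
// Thresholds come from the table above; the helper itself is illustrative.
function coherencyAction(score) {
  if (score >= 0.90) return 'continue';  // structurally coherent
  if (score >= 0.75) return 'monitor';   // minor drift: consider re-anchoring context
  return 'reset';                        // coherency degradation: reset or re-scope
}
```

Run it against `result.governance.session_coherency` after each turn and branch on the returned action.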

Governed Sparsity Routing

Most AI applications route every request to the most expensive model available. That's architecturally naive. Governed sparsity routing sends 95%+ of requests to appropriately scoped models based on complexity — producing a 10–20× cost advantage at scale.

The math

At 100,000 users, the difference between ungoverned and governed routing is approximately $38,000/month. Governed routing doesn't sacrifice capability — it matches capability to complexity. Simple queries don't need frontier models. The governance layer knows the difference.
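To show the shape of that calculation: every number below is a made-up assumption (request volume and per-request prices are hypothetical, not the vendor's actual pricing model behind the $38,000 figure), but it illustrates how a gap of that order arises from 95/5 routing.

```javascript
// Illustrative only: all inputs are hypothetical assumptions chosen to show the
// shape of the calculation, not the actual numbers behind the figure above.
const REQUESTS_PER_MONTH = 30_000_000; // e.g. 100,000 users x 10 requests/day x 30 days
const FRONTIER_COST = 1500;            // micro-dollars per request (assumed)
const LIGHT_COST = 100;                // micro-dollars per request (assumed)

// Ungoverned: every request hits the frontier model.
const ungoverned = REQUESTS_PER_MONTH * FRONTIER_COST / 1e6; // dollars/month

// Governed sparsity routing: 95% of requests resolve on the lighter tier.
const lightRequests = 28_500_000;      // 95% of the monthly volume
const frontierRequests = 1_500_000;    // 5% of the monthly volume
const governed = (lightRequests * LIGHT_COST + frontierRequests * FRONTIER_COST) / 1e6;

console.log(ungoverned - governed);    // => 39900 under these assumed prices
```

Under these invented prices the gap is $39,900/month, the same order of magnitude as the figure quoted above.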

Don't route simple tasks to Sovereign-tier models. The governance layer auto-routes by default, but you can override with explicit tier preferences per request.

Override routing tier
const result = await gov.infer({
  sessionId: 'gs-demo-002',
  prompt: 'Summarise this paragraph.',
  routing: { preferTier: 'observer' },  // force lightweight model
});

Response Format

Every governed API call returns the standard model response plus a governance object. This is not optional — governance is structural, not an add-on.

Field                          Type      Description
response                       string    The governed model response
governance.session_id          string    Governed session identifier
governance.request_id          string    Unique request identifier
governance.model_used          string    Actual model that served the request
governance.tier_routed         string    Routing tier selected by governance
governance.coherency_delta     number    Change in coherency since last turn (-1.0 to +1.0)
governance.session_coherency   number    Current session coherency score (0.0 to 1.0)
governance.audit_hash          string    SHA-256 hash of the full audit record
governance.governance_applied  string[]  Governance operations applied to this request
governance.confidence          number    Model confidence indicator (0.0 to 1.0)
governance.timestamp_utc       string    ISO 8601 timestamp
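A light runtime check of that shape can catch malformed responses early. The field names and types below come from the table above; the validator itself is an illustrative client-side helper, not part of the SDK.

```javascript
// Check that a parsed API response carries the documented governance fields.
// Field names and types come from the response-format table; the check is illustrative.
const REQUIRED_GOVERNANCE_FIELDS = {
  session_id: 'string',
  request_id: 'string',
  model_used: 'string',
  tier_routed: 'string',
  coherency_delta: 'number',
  session_coherency: 'number',
  audit_hash: 'string',
  governance_applied: 'object',  // string[] (arrays report typeof 'object')
  confidence: 'number',
  timestamp_utc: 'string',
};

function hasGovernanceShape(body) {
  if (typeof body.response !== 'string' || !body.governance) return false;
  return Object.entries(REQUIRED_GOVERNANCE_FIELDS).every(
    ([field, type]) => typeof body.governance[field] === type
  );
}
```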

Approved Models

The following models have been evaluated against Human.Exe structural governance criteria. They are recommended for use through the governance layer. Performance under governance differs from raw API performance — governed sessions produce more coherent, auditable, and cost-efficient results.

Provider   Model            Tier Fit                Notes
Anthropic  Claude Opus 4    Developer / Enterprise  Highest structural reasoning — full governance compatibility, lowest drift observed
Anthropic  Claude Sonnet 4  Scholar / Developer     Strong structural reasoning, low drift, excellent audit trail compatibility
OpenAI     GPT-5.4          Scholar / Developer     Validated for complex multi-step governed sessions — positive cognitive test results
Google     Gemini 3 Pro     Scholar / Developer     Strong multimodal governance, validated coherency under extended sessions
Google     Gemini 3 Flash   Citizen / Scholar       Cost-effective with validated governance compatibility at speed
OpenAI     GPT-4o           Observer                Marginal pass — suitable for low-complexity governed tasks at Observer tier only

Cognitive validation is rigorous

Approved models have passed Human.Exe's structural cognitive benchmarks — reasoning coherency, drift resistance, adversarial stability, and multi-turn consistency. Models that don't meet governance criteria are not listed. A shorter approved list signals rigour, not limitation. Unapproved models can still be integrated but operate without governance optimisation guarantees.

Efficiency Best Practices

The governance layer is designed to reduce cost and increase reliability. These practices maximise both.

01

Use session IDs consistently

Always pass a sessionId for multi-turn interactions. This enables coherency tracking across turns and prevents redundant context loading. A 10-turn governed session is structurally different from 10 independent calls.

02

Trust the routing

Default governed routing sends requests to the most cost-appropriate model for the complexity level. Overriding to a higher tier for simple tasks wastes budget without improving outcomes.

03

Monitor coherency scores

A coherency score below 0.75 means the session is degrading. Continuing to push requests into a degraded session produces lower-quality outputs at the same cost. Reset or re-anchor early.

04

Store audit hashes locally

Human.Exe retains audit trails for 90 days (standard tiers). Store the audit_hash locally for your own compliance records — it's cryptographically verifiable against our retained record.

05

Scope your governance configuration

Set explicit domain context on each session. Governance calibrates differently for financial analysis, content generation, and code review. Scoped sessions produce higher coherency and more useful audit trails.

06

Batch low-complexity requests

For high-volume, low-complexity tasks (summarisation, classification, extraction), use batch mode. The governance layer optimises routing across the batch — often achieving Observer-tier pricing on Citizen accounts.
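Batching a large workload starts with chunking it client-side. The chunking helper below is plain JavaScript; the `gov.inferBatch` call sketched in the comment is hypothetical, since the batch-mode API shape is not documented in this preview.

```javascript
// Split a high-volume workload into fixed-size batches before submission.
// The chunking is plain JavaScript; the submission loop in the comment below is
// hypothetical, because the batch-mode API shape is not yet documented.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Hypothetical submission loop (method name and payload shape are assumptions):
// for (const batch of chunk(prompts, 50)) {
//   await gov.inferBatch({ sessionId: 'gs-batch-001', prompts: batch });
// }
```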

Governance Layer, Not Intelligence

Let the industry name their capability threshold whatever they want — AGI, Advanced Intelligence, frontier models. That's not our fight. Human.Exe is not a model. We are not an intelligence system. We are the governance layer that sits between users and whatever intelligence system you choose to deploy.

The problem we're solving isn't that models aren't capable enough. The problem is that end-user behavior is unpredictable — and no model, regardless of capability, is structurally safe without a governance layer enforcing bounds. Users drift. Users probe limits. Users behave in ways no training run anticipated. The governance layer is what closes that gap.

Let them build the intelligence. We govern the interaction.

The governance effect isn't about model capability. It's about what happens when a real user — with real unpredictability — reaches that capability. Session bounds, behavioral constraints, adversarial resistance, audit trails. That's what Human.Exe enforces. The model is theirs. The governance layer is ours.

What the governance layer enforces

  • Session bounds that don't erode under user pressure
  • Behavioral constraints enforced per-tier, not per-prompt
  • End-user drift detection across multi-turn sessions
  • Adversarial resistance to prompt-based manipulation
  • Audit trail that proves what the user actually sent

End-user unpredictability — the real problem

  • Off-domain queries sent to domain-specific deployments
  • Behavioral escalation beyond tier authorization
  • Context poisoning through crafted input sequences
  • Jailbreak attempts across session continuity
  • Social engineering against session-aware agents

DEEP DIVE — COMING SOON
🎙 “The AGI Illusion” — AGI R&D: why the industry's intelligence race sidesteps the governance problem entirely
✎ “The Governance Problem Nobody Is Solving” — why every AI governance tool on the market governs the model, not the user
✎ “What Session Coherence Means and Why Your AI Doesn't Have It” — the architectural property that separates governed intelligence from autocomplete

Cognitive Benchmarks

Human.Exe maintains an ongoing agent cognitive benchmark programme — structured tests across 5+ major model families measuring reasoning coherency, session drift resistance, adversarial stability, and structural consistency across multi-turn interactions.

What we measure

  • Reasoning coherency across extended sessions
  • Session drift resistance under load
  • Adversarial stability — response integrity under prompt pressure
  • Structural consistency across multi-turn chains

What we've found

  • Structural governance produces measurably lower drift rates than raw API access
  • Governed sessions show higher coherency scores across multi-turn interactions
  • 12–18 month methodology reproduction barrier — independently verifiable
  • Tested and validated across 5+ major model families

Models that pass Human.Exe governance criteria are listed as approved. All approved models are recommended for use through the governance layer. Benchmark methodology and scoring internals are proprietary — results are reproducible and independently verifiable.

Pricing

Phase 2 — Observer, Citizen ($8/mo), and Scholar ($15/mo) are available in early access. Higher tiers open by phase.

Observer gives you the real governance layer, real audit trail, and the full content catalogue. That's the baseline.

Annual and multi-year plans will be available for Citizen and above when tiers open — save up to 30% with a 3-year commitment.

Subscription Plans — Promotional Rates

1 Year · Save 20%
2 months free — best for annual planning

2 Years · Save 25%
6 months free over the term

3 Years · Save 30%
Best rate — 10.8 months free over the term
Free ● EARLY ACCESS
ARCH-001 · P1 · FOUNDATION BUILD
Observer
$0/mo
Full blog + podcast catalogue · ARCHITECT · QUANTA · ADVERSARY series · Governance surface viewer · Session awareness · Social: read-only · No credit card
“The governance layer is live. You're in.”

Basic ● EARLY ACCESS
ARCH-001 · P2 · REVENUE SIGNAL
Citizen
$8/mo
5,000 req/day · Full audit trail · 3 providers · ~40–60% cost reduction · ~50–70% intelligence increase · Coherency scoring
“You're part of the signal.”

Pro ● EARLY ACCESS
ARCH-001 · P3 · API PUBLIC LAUNCH
Scholar
$15/mo
25,000 req/day · Coherency reports · Webhooks · Priority routing · ~60–80% cost reduction · ~60–80% intelligence increase · All approved models
“Your questions train a different kind of system.”

Developer · PHASE 3 · GATED
ARCH-001 · P3 · SDK RELEASE
Developer
$30/mo
Unlimited · All providers · Full API access · ~80–95% cost reduction · ~80–95% intelligence increase · AI supervisor (2CE) · Governance layer active
“Turbo mode. Experience the structured governance workflow firsthand.”

Principal · PHASE 3 · GATED · 🔒 7-DAY UNLOCK
ARCH-001 · P4 · INSTITUTIONAL SCALE
Principal
$60/mo
Everything in Developer · Tier 2 Attention Wave · Full skill delivery · Priority governance routing · ~80–95% cost + intelligence · 7-day unlock after signup
“You're composing infrastructure for an AI category that doesn't exist yet commercially.”

Operator · PHASE 4 · GATED · 🔒 14-DAY UNLOCK
ARCH-001 · P4 · INSTITUTIONAL SCALE
Operator
$80/mo
Everything in Principal · Governance SDK · ~80–95% cost reduction + intelligence increase · Performance orchestration · 14-day unlock after signup
“Full stack. Full governance.”

Director · PHASE 4 · GATED · 🔒 14-DAY UNLOCK
ARCH-001 · P4 · INSTITUTIONAL SCALE
Director
$120/mo
Everything in Operator · Full AGK AI Operations · Governance-signed audit · Full Attention Wave · ~80–95% cost + intelligence at full governance depth · 14-day unlock
“Strategic architecture at scale.”

Command · PHASE 5 · GATED · 🔒 14-DAY UNLOCK
ARCH-001 · P4 · INSTITUTIONAL SCALE
Command
$240/mo
Everything in Director · Subagent deployment · Custom skill delivery · Full performance orchestration · ~80–95% cost + intelligence at institutional scale · 14-day unlock
“Total systemic control.”

Builder · PHASE 4 · GATED
ARCH-001 · P4 · SOVEREIGNTY ARCHITECTURE
Builder
Contact us
Governance SDK · Full skill delivery · Advanced coherency analytics · Custom integration support · All Developer features · Waiver required
“You're building on the architecture.”

Enterprise · PHASE 5 · GATED
ARCH-001 · P4 · SOVEREIGNTY COMPUTE
Enterprise
By arrangement
SLA · Subagent deployment · Governance-signed audit · Custom governance config · Canadian sovereignty compute · Waiver required
“Early infrastructure partner. The next class of intelligence starts here.”

Educational & Inquiry

Research institutions, educators, and policy inquiries engage through a separate pathway. Academic Scholar pricing and curriculum access are structured for institutional review. Contact academic@human-exe.ca.

Government & Enterprise

Enterprise and government tiers are contact-only — scope, residency, SLA, and audit requirements vary by mandate. Contact enterprise@human-exe.ca to begin the conversation.

Observer is free and always will be. Citizen and Scholar are available in early access — purchase below. Developer-level access and above will require a signed liability waiver and cognitive waiver before activation.

Observer · Free

Full blog, podcast catalogue, governance surface viewer. No credit card.

You're already here →
Citizen · $8/mo CAD

Full podcast catalogue, clean episode versions, extended show notes, forum access.

Scholar · $15/mo CAD

Adversary series, research surface, priority drops, structured curriculum.

Developer+ · Phase 3

API access, governance SDK, AI supervisor. Gated until Phase 3.

Developer

Support the Mission

This project is funded by voluntary contributions to ALSI Inc. (Atkinson Lineage Sovereign Infrastructure Inc.) — the Canadian R&D entity behind Human.Exe and GiL. Contributions go directly toward infrastructure, research, and sovereign compute development.

Contribute via ALSI Inc.

Engagement Briefs

For parties evaluating Human.Exe at the investment, government, or strategic level.

Observer is in early access — no brief required to access the governance layer. Paid tiers open by phase. These briefs exist for a different conversation: due diligence, institutional evaluation, partnership exploration, and financial strategy. If you're here to build, start on Observer and you'll see the upgrade path clearly. If you're here to understand what you're looking at — request a brief.

Non-Disclosure Agreement

One-way NDA by default — ALSI Inc. discloses to the evaluating party under standard confidentiality terms. Mutual NDA available where clear collaboration or partnership is being explored.

Liability Waiver

Required for advanced tiers (Developer and above). Acknowledges that Human.Exe structures the inference process but does not control the underlying model. You bring your own AI provider key — we route, analyse, and audit, but provider-side behaviour is outside our boundary.

Cognitive Waiver

Required for any complex inquiry — including mentorship pathways, multi-domain engagement, and advanced governed sessions. Acknowledges the structural nature of the governance process: outputs are shaped by a structured multi-domain governed session workflow, not by a single model or provider. You accept that inference is governed, routed, and audited across domains — and that cognitive outputs reflect the governance architecture, not raw model behaviour. Paired with NDA by default.

Architecture & Strategy Brief

Technical and financial overview of the platform architecture, IP position, market thesis, and integration surface. Designed for investors, government evaluators, and strategic partners. Provided after NDA execution.

AFTER NDA

Who engages

01
Investors
Due diligence on IP, architecture, and market position
02
Government
Regulatory evaluation, sovereignty compute, compliance
03
Strategic partners
Integration planning, co-development, licensing
04
Enterprise
Custom deployment, SLA negotiation, audit requirements

Request engagement briefs

We need to understand your intention, purpose, and relevance before sending governance documents. All responses are reviewed by a human.

WHO YOU ARE

What are you trying to build, integrate, or evaluate? Be specific — "explore AI governance" isn't enough.

Why governance — not just AI. What breaks without it?

Why us? We're not the right fit for everyone — help us understand why we might be right for you.

AGI R&D

From the AGI R&D Division @Human.EXE

Deep-form content on structural AI governance. No fluff. Respects your intelligence.

PODCAST

The signal through the static.

10–20 minute episodes on the governance problem, the August 2026 EU AI Act deadline, cost economics, session coherence, and audit trail architecture.

BLOG

Architecture-level dispatches.

The governance problem nobody is solving. The EU AI Act deadline that changes everything. The math behind 10–20× cheaper inference. Session coherence explained. The audit trail your AI isn't producing.

🍁
PRODUCT OF CANADA
Designed, developed, and governed under Canadian sovereignty · ALSI Inc. · Federal Corporation
© 2026 Human.Exe · human-exe.ca