The Human.Exe governance SDK and API are not yet publicly available. This page is a reference preview for the Phase 3 developer release. API access opens in Phase 3 at the Developer tier ($30/mo) and above. Current live tiers (Observer, Citizen, Scholar) provide content access only, not API access.
Human.Exe API
The Human-Link.
Turbocharge your AI workflow. Already have an AI API key? You're in the right place.
The Governance Layer for AI That Thinks – structural, provider-agnostic, session-aware.
Human.Exe provides a structural AI governance layer you deploy between your application and any AI provider. Your model. Our governance. Every decision traceable.
Most AI tools give you a response. Human.Exe governs a decision. The difference is architectural, not cosmetic.
What Human.Exe is
- A governance intelligence layer – structural, provider-agnostic, session-aware
- The layer that makes AI decisions traceable, coherent, and auditable at the architecture level
- Sits between your app and your AI provider – every inference governed, not just logged
What Human.Exe is not
- Not a model. Not an AI assistant. Not a chatbot wrapper.
- Not a compliance tool that monitors outputs after the fact
- Not another OpenAI wrapper
This is not a wrapper. It is the first public implementation of structurally governed AI.
The Human.Exe API is a working proof of concept and a real product – the first public example of an AI governance layer that structurally controls how inference behaves, not just what it outputs. Every request is governed at the architecture level: coherency-scored, session-tracked, drift-detected, and fully auditable. This is fundamentally different from how AI is deployed today.
The API is permanent infrastructure for those it serves – but it is not even the tip of the iceberg. What you see here is the surface layer of something significantly larger. The governance architecture underneath this product was designed for a class of intelligence that doesn't yet exist in public. It will.
Quickstart · PREVIEW
When the API launches, get your first governed inference in under five minutes.
Choose your AI provider
Select your tier
Install the SDK
npm install @human-exe/governance
# or: pnpm add @human-exe/governance
Configure governance
import { GovernanceLayer } from '@human-exe/governance';
const gov = new GovernanceLayer({
apiKey: process.env.HUMAN_EXE_API_KEY,
provider: 'anthropic', // your provider
providerKey: process.env.ANTHROPIC_API_KEY,
tier: 'citizen',
audit: true, // structured audit trail
coherency: true, // session coherency tracking
});
Make your first governed call
const result = await gov.infer({
sessionId: 'gs-demo-001',
prompt: 'Analyse this quarterly report for risk indicators.',
context: { domain: 'financial-analysis' },
});
console.log(result.response); // governed model response
console.log(result.governance); // audit trail object
console.log(result.governance.session_coherency); // 0.0 – 1.0
Authentication · PREVIEW
Every request to the Human.Exe API will require two credentials:
Human.Exe API Key
Identifies your account and tier. Obtained from your dashboard after signup. Passed as Authorization: Bearer hxe_...
Provider API Key
Your own key for the upstream AI provider (OpenAI, Anthropic, etc.). Never stored – passed through the governance layer per request. You retain full provider ownership.
POST https://api.human-exe.ca/v1/govern
Authorization: Bearer hxe_your_api_key
Content-Type: application/json
X-Provider-Key: sk-your_provider_key
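As a sketch, assembling the two-credential request might look like the following. The endpoint and header names are taken from this preview page; the API is not yet live, so treat the shapes here as assumptions, not a contract.

```typescript
// Hypothetical request builder for the preview endpoint described above.
interface GovernRequestInit {
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildGovernRequest(
  hxeKey: string,      // Human.Exe API key (hxe_...), identifies account and tier
  providerKey: string, // your upstream provider key (passed through, never stored)
  payload: object,
): GovernRequestInit {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${hxeKey}`,
      "Content-Type": "application/json",
      "X-Provider-Key": providerKey,
    },
    body: JSON.stringify(payload),
  };
}

const req = buildGovernRequest("hxe_demo", "sk-demo", {
  sessionId: "gs-demo-001",
  prompt: "Analyse this quarterly report for risk indicators.",
});
console.log(req.headers.Authorization); // "Bearer hxe_demo"
```

The payload object could then be sent with any HTTP client (`fetch`, axios, etc.); only the two headers and the JSON body are prescribed by the page above.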
The Governance Layer
Existing governance tools audit what AI does after deployment. Human.Exe embeds governance into the reasoning process before the response is generated.
The governance layer intercepts every inference request, applies structural governance (scope validation, drift detection, complexity routing), and returns the model response alongside a complete audit trail object. The developer sees inputs and governed outputs. The internals are opaque by design.
What the developer gets
- Governed inference routing – routes requests to the appropriate model tier based on complexity
- Structured audit trail – per request: decision context, model used, confidence indicators, governance applied
- Session coherency tracking – flags drift, contradiction, and scope violations across multi-turn interactions
- Provider-agnostic integration – one SDK, any provider
Governed Sessions
A single AI response can look great. A session – 20 turns in, with context accumulated, contradictions built up, scope drifting – is where ungoverned AI falls apart. Coherence isn't a feature. It's an architectural property.
A governed session is a sequence of AI interactions managed by the governance layer. Coherency is tracked, drift is flagged, and the audit trail is maintained across every turn – not reconstructed after the fact.
// Turn 1
const t1 = await gov.infer({
sessionId: 'gs-risk-analysis-001',
prompt: 'Identify the top 3 risk factors in this dataset.',
context: { domain: 'risk-assessment' },
});
// Turn 2 – governance tracks coherency across turns
const t2 = await gov.infer({
sessionId: 'gs-risk-analysis-001',
prompt: 'Now cross-reference those risks against the Q3 report.',
});
// Coherency monitored across the session
console.log(t2.governance.session_coherency); // 0.94
console.log(t2.governance.coherency_delta); // -0.02 (slight drift)
console.log(t2.governance.drift_flags); // []
Session Best Practice
Always pass a sessionId for multi-turn interactions. A score below 0.75 means the session is drifting – consider re-anchoring or resetting. Rate limits are per governed session: a 10-turn governed session counts differently than 10 independent calls.
Audit Trail
Every critical system produces an audit trail. Medical devices. Financial transactions. Legal records. AI systems making decisions that affect people should produce one too. Human.Exe makes the audit trail a structural output of the governance process, not a logging add-on.
{
"response": "Based on the dataset analysis, the three primary risk...",
"governance": {
"session_id": "gs-risk-analysis-001",
"request_id": "req-a7f3c901",
"model_used": "claude-3-5-sonnet",
"tier_routed": "citizen",
"coherency_delta": 0.03,
"session_coherency": 0.91,
"audit_hash": "sha256:e3b0c44298fc1c149afbf4c8996fb924...",
"governance_applied": [
"scope_validation",
"drift_detection",
"complexity_routing",
"audit_logging"
],
"confidence": 0.87,
"timestamp_utc": "2026-03-27T14:32:01.441Z"
}
}
Retained by Human.Exe
Audit trails retained for 90 days on standard tiers. Enterprise plans extend retention with configurable policies. All trails are SHA-256 hashed and tamper-evident.
Your compliance records
Store the audit_hash locally for your own compliance. The hash is cryptographically tied to the full governance record and independently verifiable.
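Local verification might look like the following sketch. The exact canonicalisation Human.Exe hashes is not published; this assumes the hash covers the JSON-serialised governance record, which is an assumption for illustration.

```typescript
// Sketch: recomputing and checking an audit_hash against a stored record.
// Assumes SHA-256 over the JSON-serialised record (canonicalisation is an
// assumption; the real scheme is not documented in this preview).
import { createHash } from "node:crypto";

function auditHash(record: object): string {
  const digest = createHash("sha256")
    .update(JSON.stringify(record))
    .digest("hex");
  return `sha256:${digest}`;
}

function verifyAudit(record: object, storedHash: string): boolean {
  return auditHash(record) === storedHash;
}

const record = { session_id: "gs-risk-analysis-001", request_id: "req-a7f3c901" };
const stored = auditHash(record); // persist this locally at request time

console.log(verifyAudit(record, stored));                                    // true
console.log(verifyAudit({ ...record, request_id: "req-tampered" }, stored)); // false
```

Any single-field change to the record produces a different digest, which is what makes the trail tamper-evident.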
Coherency Tracking
A session-level metric that measures how structurally consistent responses are across turns. Flags drift, contradiction, scope violation, and sycophantic pattern collapse.
| Score Range | Assessment | Action |
|---|---|---|
| 0.90–1.00 | Structurally coherent | Continue – session integrity is high |
| 0.75–0.89 | Minor drift detected | Monitor – consider re-anchoring context |
| Below 0.75 | Coherency degradation | Reset the session or re-establish scope |
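Client code can mirror the thresholds in the table above with a small helper. The thresholds come from this page; the action names are illustrative, not SDK API.

```typescript
// Map a session_coherency score to the recommended action from the table.
type CoherencyAction = "continue" | "monitor" | "reset";

function classifyCoherency(score: number): CoherencyAction {
  if (score >= 0.9) return "continue"; // structurally coherent
  if (score >= 0.75) return "monitor"; // minor drift: consider re-anchoring
  return "reset";                      // degradation: reset or re-establish scope
}

console.log(classifyCoherency(0.94)); // "continue"
console.log(classifyCoherency(0.81)); // "monitor"
console.log(classifyCoherency(0.6));  // "reset"
```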
Coherency tracking is structural – it operates on the governance architecture itself, not as a heuristic bolted onto outputs. The difference between measuring coherency after inference and embedding it in the governance process is the difference between a dashboard and an architecture.
Governed Sparsity Routing
Most AI applications route every request to the most expensive model available. That's architecturally naive. Governed sparsity routing sends 95%+ of requests to appropriately scoped models based on complexity – producing a 10–20× cost advantage at scale.
The math
At 100,000 users, the difference between ungoverned and governed routing is approximately $38,000/month. Governed routing doesn't sacrifice capability β it matches capability to complexity. Simple queries don't need frontier models. The governance layer knows the difference.
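The arithmetic behind the advantage can be sketched with assumed per-request prices. The prices and request volume below are illustrative assumptions, not Human.Exe or provider pricing.

```typescript
// Worked example of governed vs ungoverned routing economics.
// All dollar figures are assumptions for illustration only.
const FRONTIER_COST = 0.02; // $/request on a frontier model (assumed)
const LIGHT_COST = 0.001;   // $/request on a lightweight model (assumed)
const LIGHT_SHARE = 0.95;   // share of requests governed routing sends light

function monthlyCost(requests: number, governed: boolean): number {
  if (!governed) return requests * FRONTIER_COST; // everything hits the frontier model
  return requests * (LIGHT_SHARE * LIGHT_COST + (1 - LIGHT_SHARE) * FRONTIER_COST);
}

const requests = 1_000_000;
const ungoverned = monthlyCost(requests, false); // ≈ $20,000/mo
const governed = monthlyCost(requests, true);    // ≈ $1,950/mo
console.log(ungoverned / governed);              // ≈ 10× under these assumptions
```

Under these assumed prices the ratio lands at the low end of the 10–20× band; cheaper lightweight models or a higher light-routed share push it higher.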
Don't route simple tasks to Sovereign-tier models. The governance layer auto-routes by default, but you can override with explicit tier preferences per request.
const result = await gov.infer({
sessionId: 'gs-demo-002',
prompt: 'Summarise this paragraph.',
routing: { preferTier: 'observer' }, // force lightweight model
});
Response Format
Every governed API call returns the standard model response plus a governance object. This is not optional β governance is structural, not an add-on.
| Field | Type | Description |
|---|---|---|
| response | string | The governed model response |
| governance.session_id | string | Governed session identifier |
| governance.request_id | string | Unique request identifier |
| governance.model_used | string | Actual model that served the request |
| governance.tier_routed | string | Routing tier selected by governance |
| governance.coherency_delta | number | Change in coherency since last turn (-1.0 to +1.0) |
| governance.session_coherency | number | Current session coherency score (0.0 to 1.0) |
| governance.audit_hash | string | SHA-256 hash of the full audit record |
| governance.governance_applied | string[] | Governance operations applied to this request |
| governance.confidence | number | Model confidence indicator (0.0 to 1.0) |
| governance.timestamp_utc | string | ISO 8601 timestamp |
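From the field table above, a client-side type and a runtime guard might look like this. These are typings sketched from this preview's field list, not official SDK typings.

```typescript
// Client-side shape of a governed response, derived from the field table.
interface Governance {
  session_id: string;
  request_id: string;
  model_used: string;
  tier_routed: string;
  coherency_delta: number;   // -1.0 to +1.0
  session_coherency: number; // 0.0 to 1.0
  audit_hash: string;
  governance_applied: string[];
  confidence: number;        // 0.0 to 1.0
  timestamp_utc: string;     // ISO 8601
}

interface GovernedResult {
  response: string;
  governance: Governance;
}

// Narrow an unknown API payload before use (checks a few key fields).
function isGovernedResult(x: unknown): x is GovernedResult {
  if (typeof x !== "object" || x === null) return false;
  const r = x as Record<string, unknown>;
  if (typeof r.response !== "string") return false;
  const g = r.governance as Record<string, unknown> | undefined;
  return (
    typeof g === "object" && g !== null &&
    typeof g.session_id === "string" &&
    typeof g.session_coherency === "number" &&
    Array.isArray(g.governance_applied)
  );
}
```

Guarding at the boundary keeps downstream code strictly typed even if the preview schema shifts before launch.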
Approved Models
The following models have been evaluated against Human.Exe structural governance criteria. They are recommended for use through the governance layer. Performance under governance differs from raw API performance – governed sessions produce more coherent, auditable, and cost-efficient results.
| Provider | Model | Tier Fit | Notes |
|---|---|---|---|
| Anthropic | Claude Opus 4 | Developer / Enterprise | Highest structural reasoning – full governance compatibility, lowest drift observed |
| Anthropic | Claude Sonnet 4 | Scholar / Developer | Strong structural reasoning, low drift, excellent audit trail compatibility |
| OpenAI | GPT-5.4 | Scholar / Developer | Validated for complex multi-step governed sessions – positive cognitive test results |
| Google | Gemini 3 Pro | Scholar / Developer | Strong multimodal governance, validated coherency under extended sessions |
| Google | Gemini 3 Flash | Citizen / Scholar | Cost-effective with validated governance compatibility at speed |
| OpenAI | GPT-4o | Observer | Marginal pass – suitable for low-complexity governed tasks at Observer tier only |
Cognitive validation is rigorous
Approved models have passed Human.Exe's structural cognitive benchmarks – reasoning coherency, drift resistance, adversarial stability, and multi-turn consistency. Models that don't meet governance criteria are not listed. A shorter approved list signals rigour, not limitation. Unapproved models can still be integrated but operate without governance optimisation guarantees.
Efficiency Best Practices
The governance layer is designed to reduce cost and increase reliability. These practices maximise both.
Use session IDs consistently
Always pass a sessionId for multi-turn interactions. This enables coherency tracking across turns and prevents redundant context loading. A 10-turn governed session is structurally different from 10 independent calls.
Trust the routing
Default governed routing sends requests to the most cost-appropriate model for the complexity level. Overriding to a higher tier for simple tasks wastes budget without improving outcomes.
Monitor coherency scores
A coherency score below 0.75 means the session is degrading. Continuing to push requests into a degraded session produces lower-quality outputs at the same cost. Reset or re-anchor early.
Store audit hashes locally
Human.Exe retains audit trails for 90 days (standard tiers). Store the audit_hash locally for your own compliance records – it's cryptographically verifiable against our retained record.
Scope your governance configuration
Set explicit domain context on each session. Governance calibrates differently for financial analysis, content generation, and code review. Scoped sessions produce higher coherency and more useful audit trails.
Batch low-complexity requests
For high-volume, low-complexity tasks (summarisation, classification, extraction), use batch mode. The governance layer optimises routing across the batch – often achieving Observer-tier pricing on Citizen accounts.
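Grouping prompts into batches before submission can be sketched as follows. The page names a batch mode but does not document its call shape, so any `inferBatch`-style submission method is hypothetical; only the grouping step is shown here.

```typescript
// Sketch: group low-complexity prompts into fixed-size batches before
// submitting them via the (hypothetical) batch-mode call.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const prompts = ["Summarise A", "Summarise B", "Summarise C", "Summarise D", "Summarise E"];
const batches = chunk(prompts, 2);
console.log(batches.length); // 3 batches of 2, 2, and 1 prompts
```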
Governance Layer, Not Intelligence
Let the industry name their capability threshold whatever they want – AGI, Advanced Intelligence, frontier models. That's not our fight. Human.Exe is not a model. We are not an intelligence system. We are the governance layer that sits between users and whatever intelligence system you choose to deploy.
The problem we're solving isn't that models aren't capable enough. The problem is that end-user behavior is unpredictable – and no model, regardless of capability, is structurally safe without a governance layer enforcing bounds. Users drift. Users probe limits. Users behave in ways no training run anticipated. The governance layer is what closes that gap.
Let them build the intelligence. We govern the interaction.
The governance effect isn't about model capability. It's about what happens when a real user – with real unpredictability – reaches that capability. Session bounds, behavioral constraints, adversarial resistance, audit trails. That's what Human.Exe enforces. The model is theirs. The governance layer is ours.
What the governance layer enforces
- Session bounds that don't erode under user pressure
- Behavioral constraints enforced per-tier, not per-prompt
- End-user drift detection across multi-turn sessions
- Adversarial resistance to prompt-based manipulation
- Audit trail that proves what the user actually sent
End-user unpredictability – the real problem
- Off-domain queries sent to domain-specific deployments
- Behavioral escalation beyond tier authorization
- Context poisoning through crafted input sequences
- Jailbreak attempts across session continuity
- Social engineering against session-aware agents
Cognitive Benchmarks
Human.Exe maintains an ongoing agent cognitive benchmark programme – structured tests across 5+ major model families measuring reasoning coherency, session drift resistance, adversarial stability, and structural consistency across multi-turn interactions.
What we measure
- Reasoning coherency across extended sessions
- Session drift resistance under load
- Adversarial stability – response integrity under prompt pressure
- Structural consistency across multi-turn chains
What we've found
- Structural governance produces measurably lower drift rates than raw API access
- Governed sessions show higher coherency scores across multi-turn interactions
- 12–18 month methodology reproduction barrier – independently verifiable
- Tested and validated across 5+ major model families
Models that pass Human.Exe governance criteria are listed as approved. All approved models are recommended for use through the governance layer. Benchmark methodology and scoring internals are proprietary – results are reproducible and independently verifiable.
Pricing
Phase 2 – Observer, Citizen ($8/mo), and Scholar ($15/mo) are available in early access. Higher tiers open by phase.
Observer gives you the real governance layer, real audit trail, and the full content catalogue. That's the baseline.
Annual and multi-year plans will be available for Citizen and above when tiers open – save up to 30% with a 3-year commitment.
Educational & Inquiry
Research institutions, educators, and policy inquiries engage through a separate pathway. Academic Scholar pricing and curriculum access are structured for institutional review. Contact academic@human-exe.ca.
Government & Enterprise
Enterprise and government tiers are contact-only – scope, residency, SLA, and audit requirements vary by mandate. Contact enterprise@human-exe.ca to begin the conversation.
Observer is free and always will be. Citizen and Scholar are available in early access – purchase below. Developer-level access and above will require a signed liability waiver and cognitive waiver before activation.
Observer – Full blog, podcast catalogue, governance surface viewer. No credit card. You're already here.
Citizen – Full podcast catalogue, clean episode versions, extended show notes, forum access.
Scholar – Adversary series, research surface, priority drops, structured curriculum.
Developer – API access, governance SDK, AI supervisor. Gated until Phase 3.
Support the Mission
This project is funded by voluntary contributions to ALSI Inc. (Atkinson Lineage Sovereign Infrastructure Inc.) β the Canadian R&D entity behind Human.Exe and GiL. Contributions go directly toward infrastructure, research, and sovereign compute development.
Contribute via ALSI Inc.
Engagement Briefs
For parties evaluating Human.Exe at the investment, government, or strategic level.
Observer is in early access – no brief required to access the governance layer. Paid tiers open by phase. These briefs exist for a different conversation: due diligence, institutional evaluation, partnership exploration, and financial strategy. If you're here to build, start on Observer and you'll see the upgrade path clearly. If you're here to understand what you're looking at, request a brief.
Non-Disclosure Agreement
One-way NDA by default – ALSI Inc. discloses to the evaluating party under standard confidentiality terms. Mutual NDA available where clear collaboration or partnership is being explored.
Liability Waiver
Required for advanced tiers (Developer and above). Acknowledges that Human.Exe structures the inference process but does not control the underlying model. You bring your own AI provider key – we route, analyse, and audit, but provider-side behaviour is outside our boundary.
Cognitive Waiver
Required for any complex inquiry – including mentorship pathways, multi-domain engagement, and advanced governed sessions. Acknowledges the structural nature of the governance process: outputs are shaped by a structured multi-domain governed session workflow, not by a single model or provider. You accept that inference is governed, routed, and audited across domains, and that cognitive outputs reflect the governance architecture, not raw model behaviour. Paired with NDA by default.
Architecture & Strategy Brief
Technical and financial overview of the platform architecture, IP position, market thesis, and integration surface. Designed for investors, government evaluators, and strategic partners. Provided after NDA execution.
Who engages
AGI R&D
From the AGI R&D Division @Human.EXE
Deep-form content on structural AI governance. No fluff. Respects your intelligence.
The signal through the static.
10β20 minute episodes on the governance problem, the August 2026 EU AI Act deadline, cost economics, session coherence, and audit trail architecture.
Architecture-level dispatches.
The governance problem nobody is solving. The EU AI Act deadline that changes everything. The math behind 10β20Γ cheaper inference. Session coherence explained. The audit trail your AI isn't producing.