Agents of S.E.A.L.E.D
An agentic framework for safer, auditable AI assistance in complex workflows. Built with strong guardrails and real-world constraints in mind.
See flagship initiatives →
AXONVERTEX AI is an AI research and engineering studio focused on AI deployments in private and low-resource settings.
We fine-tune tiny foundation models, build private-compute AI inference stacks, and design responsible AI workflows that can be deployed under real-world constraints.
AXONVERTEX AI sits at the intersection of independent research and deployed systems: designing, stress-testing, and shipping AI that can withstand real-world constraints.
Self-directed research across agents, FHIR graphs, vector embeddings, and evaluation, captured in talks (Agents of S.E.A.L.E.D, FHIR in the W.H.O.L.E), NIST challenges, and open artifacts on Hugging Face.
Design, evaluation, and governance aligned with NIST-oriented ARIA practices, from risk assessment and documentation to human-in-the-loop control.
FHIR, graphs, and longitudinal patient context powering decision support and trial automation with stringent privacy guarantees.
Secure, low-latency inference on Ollama and vLLM, tuned for constrained hardware and sensitive environments where data cannot leave the boundary.
A sample of the projects where AXONVERTEX AI combines responsible AI research with production-grade engineering.
Safety-Engineered Agentic Learning for Evidence-Driven Decisions
An agentic system architecture for safety-critical domains. Agents of S.E.A.L.E.D combines explicit policy constraints, memory, and oversight to keep complex AI workflows auditable and aligned with human operators.
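To make the pattern concrete, here is a minimal Python sketch of one auditable, policy-gated agent step. The names (`Policy`, `AuditLog`, `run_step`) and the example tools are illustrative, not the actual Agents of S.E.A.L.E.D API.

```python
# Minimal sketch of a policy-gated, auditable agent step.
# All names here are illustrative, not the Agents of S.E.A.L.E.D API.
import json
import time
from dataclasses import dataclass, field


@dataclass
class Policy:
    """Explicit allow-list of tools and a hard step budget."""
    allowed_tools: set[str]
    max_steps: int = 10

    def check(self, tool: str, step: int) -> tuple[bool, str]:
        if step >= self.max_steps:
            return False, "step budget exhausted; escalate to operator"
        if tool not in self.allowed_tools:
            return False, f"tool '{tool}' not in policy allow-list"
        return True, "ok"


@dataclass
class AuditLog:
    """Append-only ledger so every decision stays reviewable."""
    entries: list[dict] = field(default_factory=list)

    def record(self, **event) -> None:
        self.entries.append({"ts": time.time(), **event})


def run_step(policy: Policy, log: AuditLog, step: int, tool: str, args: dict):
    allowed, reason = policy.check(tool, step)
    log.record(step=step, tool=tool, args=args, allowed=allowed, reason=reason)
    if not allowed:
        raise PermissionError(reason)  # surface to a human operator
    # ... dispatch the tool call here ...


policy = Policy(allowed_tools={"search_guidelines", "summarise_record"})
log = AuditLog()
run_step(policy, log, step=0, tool="search_guidelines", args={"q": "sepsis"})
print(json.dumps(log.entries, indent=2))
```

The ledger is append-only, so a reviewer can replay every decision, refusals included.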
Graphing clinical data for smarter AI systems
FHIR in the W.H.O.L.E explores how FHIR resources and knowledge graphs combine to create rich, queryable clinical contexts that modern AI systems can reason over safely.
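As a rough illustration of the idea, the sketch below loads a few hand-written FHIR-shaped resources into a graph and queries one patient's neighbourhood. The payloads are toy examples; real deployments would ingest full FHIR bundles from a server.

```python
# Sketch: turning FHIR-style resources into a queryable graph.
# The resources below are hand-written toy examples, not real FHIR payloads.
import networkx as nx

resources = [
    {"resourceType": "Patient", "id": "p1"},
    {"resourceType": "Condition", "id": "c1",
     "subject": {"reference": "Patient/p1"},
     "code": {"text": "Type 2 diabetes"}},
    {"resourceType": "Observation", "id": "o1",
     "subject": {"reference": "Patient/p1"},
     "code": {"text": "HbA1c"}, "valueQuantity": {"value": 7.9, "unit": "%"}},
]

G = nx.DiGraph()
for r in resources:
    node = f"{r['resourceType']}/{r['id']}"
    G.add_node(node, **{k: v for k, v in r.items() if k != "id"})
    ref = r.get("subject", {}).get("reference")
    if ref:
        G.add_edge(node, ref, rel="subject")  # link resource to its patient

# Query: everything recorded about Patient/p1.
print([u for u, v in G.in_edges("Patient/p1")])
```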
Built with HI10x Innovation & Transformation GmbH
TrialBridge focuses on connecting sponsors, sites, and patients through responsible AI that respects regulatory and ethical boundaries while increasing trial velocity.
Co-designed with HI10x to fit real clinical operations instead of abstract benchmarks.
Visit TrialBridge
Prompt injection + tool misuse · Local Linux workspace
An interactive demo deck showing how to turn agent autonomy into an enforceable, auditable system boundary: deterministic policy, scanner pipeline, and an observability ledger.
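The sketch below shows the shape of those three layers on a single tool call; the policy rules and injection patterns are illustrative stand-ins for the demo's actual configuration.

```python
# Sketch of the three layers: a deterministic policy, a scanner pass over
# tool input, and a ledger entry per decision. Patterns and tool names are
# illustrative, not the demo's actual rules.
import re

POLICY = {"read_file": {"root": "/workspace"}}   # tools and their limits
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"curl\s+https?://", re.I),        # exfiltration attempt
]
LEDGER: list[dict] = []


def guard_tool_call(tool: str, arg: str) -> bool:
    verdicts = []
    if tool not in POLICY:
        verdicts.append("policy: unknown tool")
    elif tool == "read_file" and not arg.startswith(POLICY[tool]["root"]):
        verdicts.append("policy: path escapes workspace")
    for pat in INJECTION_PATTERNS:
        if pat.search(arg):
            verdicts.append(f"scanner: matched {pat.pattern!r}")
    allowed = not verdicts
    LEDGER.append({"tool": tool, "arg": arg, "allowed": allowed,
                   "verdicts": verdicts})
    return allowed


print(guard_tool_call("read_file", "/workspace/notes.md"))   # True
print(guard_tool_call("read_file", "/etc/passwd"))           # False
```

Because the policy is deterministic rather than model-judged, the same call always produces the same verdict, and the ledger makes each verdict reviewable after the fact.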
AXONVERTEX AI’s work is grounded in NIST-aligned approaches to AI risk, assurance, and governance. We design systems where accountability is a feature, not an afterthought.
Contributions to NIST’s 2024 ARIA challenge, focusing on practical methods to identify and measure risk in deployed AI systems.
Ongoing participation in NIST Generative AI challenges, helping shape evaluation approaches that connect model behaviour to system-level outcomes in safety-critical domains.
Human-in-the-loop controls, clear escalation paths, and operational playbooks keep responsible AI principles active throughout the lifecycle, not just at launch, backed by ARIA-style evidence at every stage.
Sensitive workloads demand private, controllable infrastructure. AXONVERTEX AI builds inference stacks that keep data close to where it is generated, while still delivering state-of-the-art performance.
We leverage Ollama to run compact models on laptops, edge servers, and in controlled clinical environments, enabling offline or near-offline operation with reproducible model stacks.
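A minimal example of that setup, assuming a local Ollama daemon on its default port and a small model already pulled (the model tag is only an example):

```python
# Sketch: querying a locally served model through Ollama's REST API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:1b",          # any locally pulled compact model
        "prompt": "Summarise: patient reports improved glucose control.",
        "stream": False,                  # return one JSON object, not chunks
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])   # no data ever leaves the machine
```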
For larger models and multi-tenant workloads, we rely on vLLM to deliver efficient high-throughput serving with attention to latency, cost, and resource isolation.
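For offline batch work, the same idea looks roughly like this with vLLM's Python API; the model and sampling settings here are placeholders.

```python
# Sketch: batched offline inference with vLLM's Python API.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")   # example small model
params = SamplingParams(temperature=0.2, max_tokens=128)

# vLLM schedules these through continuous batching for high throughput.
prompts = [
    "Extract the diagnosis from: 'Pt presents with T2DM, HbA1c 7.9%.'",
    "List three risks of unmonitored tool use by LLM agents.",
]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text.strip())
```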
From data ingestion to monitoring, we assemble ecosystems that treat privacy as a non-negotiable design constraint, not an optional add-on.
We aim to publish methods, tools, and fine-tuned tiny foundation models openly, with a focus on reproducibility and practical impact.
Architectures optimized for constrained environments that still deliver high task performance, with transparent training recipes and evaluation results.
Retrieval-Augmented Generation and Retrieval-Augmented Fine-Tuning pipelines that combine structured and unstructured data to produce grounded, auditable outputs.
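A stripped-down sketch of the retrieval half of such a pipeline, mixing structured and unstructured snippets and keeping source identifiers attached so outputs stay auditable (the embedding model and corpus are illustrative):

```python
# Sketch: embed a mixed corpus, retrieve top-k by cosine similarity, and
# assemble a grounded prompt with citations.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    ("note:2024-03-01", "HbA1c 7.9%; metformin dose increased."),   # unstructured
    ("Observation/o1", "HbA1c = 7.9 % (2024-03-01)"),               # structured
    ("note:2024-01-12", "Patient reports adherence issues at night."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode([text for _, text in corpus], normalize_embeddings=True)

query = "What is the latest HbA1c and what changed?"
q_vec = model.encode([query], normalize_embeddings=True)[0]
top_k = np.argsort(doc_vecs @ q_vec)[::-1][:2]   # cosine = dot on unit vectors

# Every snippet carries its source id, so the final answer stays auditable.
context = "\n".join(f"[{corpus[i][0]}] {corpus[i][1]}" for i in top_k)
prompt = f"Answer using only the cited context.\n{context}\n\nQ: {query}"
print(prompt)
```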
Representation learning research focused on embeddings that capture clinical and operational context while remaining efficient for search, clustering, and routing.
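As one small example, the routing use case can be sketched as nearest-neighbour matching between a query embedding and route descriptions (the route names and model tag are illustrative):

```python
# Sketch: embedding-based routing between workstreams.
import numpy as np
from sentence_transformers import SentenceTransformer

routes = {
    "clinical_qa":    "questions about diagnoses, labs, and medications",
    "trial_matching": "matching patients to clinical trial criteria",
    "ops":            "scheduling, sites, logistics, and operations",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
route_vecs = model.encode(list(routes.values()), normalize_embeddings=True)

def route(query: str) -> str:
    q = model.encode([query], normalize_embeddings=True)[0]
    return list(routes)[int(np.argmax(route_vecs @ q))]

print(route("Which sites can enrol a patient with an HbA1c above 7.5?"))
```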
Hardening data and model pipelines with encryption, access control, and rigorous monitoring, suited for regulatory environments.
Techniques to identify, measure, and mitigate bias across datasets, models, and full systems, with emphasis on high-stakes use cases.
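As a toy illustration of the measurement step, the snippet below computes a demographic parity difference, the gap in positive-prediction rates between two groups, on synthetic data; real audits combine many such metrics across the full pipeline.

```python
# Toy example of one bias metric: demographic parity difference.
# Data is synthetic and purely illustrative.
import numpy as np

preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # model decisions
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rate_a = preds[groups == "a"].mean()
rate_b = preds[groups == "b"].mean()
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
```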
Alignment with emerging common frameworks for AI safety and governance, making systems easier to audit and integrate.
For collaborations, pilots, or research partnerships, reach out directly. We prioritise work where responsible AI and rigorous engineering materially improve outcomes.
For general enquiries and partnership discussions, email [email protected].
Follow updates and announcements on LinkedIn.
Models, prompts, and experimental artifacts will be published on our Hugging Face organization.