Compressed External Latent State (CELS)

tl;dr: We find that by emulating a language model’s internal latent space, represented as a graph, we can use 99% fewer tokens while retaining the same level of semantic meaning. What Latent Space Actually Is: Latent space is the hidden geometry where modern language models actually think. Inside every transformer layer, the model continuously compresses meaning into […]
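The post’s actual CELS format isn’t reproduced in this excerpt, so here is only a rough sketch of the idea: conversational state stored as a compact node-and-edge graph instead of restated prose. The LatentGraph class, its schema, and the sample context below are illustrative assumptions, not the format described in the full post.

```python
import json
from dataclasses import dataclass, field


@dataclass
class LatentGraph:
    """A tiny external 'latent state': concepts as nodes, relations as edges."""
    nodes: dict = field(default_factory=dict)   # node id -> short label
    edges: list = field(default_factory=list)   # (src id, relation, dst id)

    def add(self, node_id: str, label: str) -> None:
        self.nodes[node_id] = label

    def relate(self, src: str, relation: str, dst: str) -> None:
        self.edges.append((src, relation, dst))

    def serialize(self) -> str:
        # Compact JSON instead of carrying the context around as prose.
        return json.dumps({"n": self.nodes, "e": self.edges}, separators=(",", ":"))


if __name__ == "__main__":
    # The kind of context an agent might otherwise keep restating in prose.
    prose = ("The user is building a retrieval pipeline. The pipeline uses a "
             "vector database. The user prefers concise answers and has "
             "already rejected fine-tuning as an option.")

    g = LatentGraph()
    g.add("u", "user")
    g.add("p", "retrieval pipeline")
    g.add("v", "vector DB")
    g.add("c", "concise answers")
    g.add("f", "fine-tuning")
    g.relate("u", "builds", "p")
    g.relate("p", "uses", "v")
    g.relate("u", "prefers", "c")
    g.relate("u", "rejected", "f")

    compressed = g.serialize()
    # The token savings reported in the post apply to much larger contexts;
    # this toy example only shows the shape of the representation.
    print(f"prose: {len(prose)} chars, graph: {len(compressed)} chars")
    print(compressed)
```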

How to Protect Your Agent from AI Cyber Espionage with Guardrails

If you’re building AI agents right now, the Anthropic GTG-1002 story should be stuck in your head. Last week, Anthropic published a report detailing what they determined to be the first reported “AI-orchestrated cyber espionage campaign”. Chinese state actors used Claude to orchestrate full-scale cyber espionage. Not hypothetically. Not in a lab. In the real world, against real […]
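The full post walks through specific guardrails; as a minimal sketch of the general pattern, here is a pre-execution policy check that sits between an agent’s proposed tool call and the code that actually runs it. The tool names, blocked patterns, and policy below are illustrative assumptions, not tied to any particular framework or to Anthropic’s own mitigations.

```python
import re
from typing import Callable

# Patterns a guardrail might refuse to pass to a shell tool (illustrative only).
BLOCKED_PATTERNS = [
    re.compile(r"\b(?:nmap|masscan)\b", re.IGNORECASE),   # network scanning tools
    re.compile(r"/etc/shadow|id_rsa"),                    # credential material
    re.compile(r"\bcurl\b.*\|\s*(?:sh|bash)\b"),          # piping downloads into a shell
]


def guardrail(tool_name: str, tool_input: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed tool call."""
    if tool_name == "shell":
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(tool_input):
                return False, f"blocked by policy: {pattern.pattern}"
    return True, "ok"


def run_tool_call(tool_name: str, tool_input: str,
                  executor: Callable[[str, str], str]) -> str:
    allowed, reason = guardrail(tool_name, tool_input)
    if not allowed:
        # Refuse instead of executing; a real deployment would also log the
        # attempt and alert a human reviewer.
        return f"REFUSED ({reason})"
    return executor(tool_name, tool_input)


if __name__ == "__main__":
    fake_executor = lambda name, arg: f"executed {name}: {arg!r}"
    print(run_tool_call("shell", "nmap -sS 10.0.0.0/24", fake_executor))
    print(run_tool_call("shell", "ls -la ./reports", fake_executor))
```

A denylist like this is only a starting point; the point of the pattern is that every action the agent proposes passes through an explicit policy layer before it touches the outside world.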

Agentic AI Fundamentals

Understanding Core Concepts of Agentic AI: A core challenge in Artificial Intelligence (AI) is reasoning under uncertainty. Historically, AI models were used transactionally for well-defined objectives. However, with the emergent knowledge and reasoning capabilities that Large Language Models (LLMs) have attained over the last decade, even more complex and open-ended problems can now be solved autonomously. […]
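To make “reasoning under uncertainty” concrete, here is a minimal sketch of the observe–reason–act loop that agentic systems build on: the agent only commits to an answer once its confidence is high enough, and otherwise spends a step gathering evidence. The confidence function, thresholds, and actions are illustrative assumptions, not a reference implementation from the post.

```python
def estimate_confidence(question: str, evidence: list) -> float:
    """Stand-in for the model's own uncertainty estimate (assumption)."""
    return min(1.0, 0.3 + 0.3 * len(evidence))


def choose_action(confidence: float) -> str:
    # Reason under uncertainty: act only when confident enough, otherwise
    # pay the cost of reducing uncertainty first.
    if confidence >= 0.8:
        return "answer_directly"
    if confidence >= 0.5:
        return "search_for_evidence"
    return "ask_the_user"


def agent_loop(question: str, max_steps: int = 5) -> str:
    evidence = []
    for step in range(max_steps):
        confidence = estimate_confidence(question, evidence)
        action = choose_action(confidence)
        print(f"step {step}: confidence={confidence:.2f}, action={action}")
        if action == "answer_directly":
            return f"answer to {question!r}, backed by {len(evidence)} pieces of evidence"
        # The other two actions both gather information before trying again.
        evidence.append(f"evidence-{step}")
    return "still too uncertain; escalating to a human"


if __name__ == "__main__":
    print(agent_loop("Which vendor contract expires first?"))
```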