Our Mission Statement

The Peripheral is an evidence-first intelligence platform that transforms the noise of global conflict reporting into structured, verified, source-linked knowledge.

We monitor over 1,000 sources — news agencies, social media, on-ground footage, and government communications — to build a real-time picture of events as they unfold across the world.

How we work

Every piece of intelligence passes through a multi-stage pipeline. Sources are ingested continuously, entities are extracted and cross-referenced using AI, and stories are clustered by event — not by headline. The result is a structured knowledge graph where every claim links back to its original source.
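To make the shape of that pipeline concrete, here is a deliberately minimal sketch in Python. Every name and heuristic below is hypothetical (real entity extraction would use an AI model, not capitalization; real clustering would use richer similarity): it only illustrates the flow of ingest, extract, and cluster-by-event, with each claim keeping a link to its source.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    source_url: str          # every claim links back to its original source

@dataclass
class Event:
    entities: set = field(default_factory=set)
    claims: list = field(default_factory=list)

def ingest(article: dict) -> list[Claim]:
    """Stage 1: turn a raw article into source-linked claims (naive sentence split)."""
    return [Claim(text=s, source_url=article["url"])
            for s in article["body"].split(". ") if s]

def extract_entities(claim: Claim) -> set[str]:
    """Stage 2: stand-in for AI entity extraction (here: capitalized tokens)."""
    return {w.strip(",.") for w in claim.text.split() if w[:1].isupper()}

def cluster_by_event(claims: list[Claim], events: list[Event],
                     min_overlap: int = 1) -> list[Event]:
    """Stage 3: attach each claim to an event sharing entities, else open a new one."""
    for claim in claims:
        ents = extract_entities(claim)
        for ev in events:
            if len(ents & ev.entities) >= min_overlap:
                ev.entities |= ents
                ev.claims.append(claim)
                break
        else:
            events.append(Event(entities=ents, claims=[claim]))
    return events
```

The point of the structure, not the heuristics: claims from different articles about the same event land in one `Event`, and each retains its `source_url`.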

We don't editorialize. We don't speculate. We structure what's reported, track who reported it, and surface patterns that emerge from the data.

Who it's for

The Peripheral is built for journalists verifying breaking events, analysts tracking geopolitical shifts, researchers studying conflict dynamics, and anyone who needs to cut through information noise to find what actually happened.

Open source intelligence

OSINT — open source intelligence — is information derived from publicly available sources. It's the foundation of modern investigative journalism and conflict monitoring. The Peripheral automates the most time-consuming parts of OSINT work: collection, deduplication, entity extraction, and cross-referencing — so investigators can focus on analysis and verification.
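Deduplication, the simplest of those steps, can be illustrated with a minimal sketch: hash a normalized form of each article so verbatim and whitespace-or-case reposts collide, and keep only the first copy. (This approach is assumed for illustration; production systems typically add fuzzier near-duplicate detection on top.)

```python
import hashlib

def fingerprint(text: str) -> str:
    """Hash a normalized form of the text so trivial reposts collide."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def dedupe(articles: list[str]) -> list[str]:
    """Keep the first occurrence of each article; drop exact/near-verbatim reposts."""
    seen: set[str] = set()
    unique = []
    for a in articles:
        fp = fingerprint(a)
        if fp not in seen:
            seen.add(fp)
            unique.append(a)
    return unique
```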

Planning for the Post-Truth Era and Beyond

The Mission

Our mission is to ensure that truthful, verifiable information remains accessible to humanity — to build intelligence infrastructure that elevates the signal above the noise, and to do so in a way that empowers individuals rather than replacing their judgment.

If successful, this work could help restore something we've lost: the ability for people to understand what is actually happening in the world. The Peripheral exists because the current information ecosystem is failing at its most fundamental purpose.

The Problem We Face

The information environment has crossed a threshold. For most of human history, the limiting factor was access to information. Libraries, newspapers, and broadcast media served as filters — imperfect, often biased, but operating with some accountability to truth. Today, the problem has inverted. We are drowning in information while starving for knowledge.

This inversion creates several compounding crises.

Algorithmic amplification rewards engagement, not accuracy. Social platforms and news aggregators are optimized for attention capture. Sensationalism, outrage, and tribal signaling spread faster than careful analysis. A false claim can circle the globe while the correction is still being drafted.

AI-generated content is scaling faster than verification. We are entering an era where synthetic media, automated article generation, and sophisticated disinformation can be produced at a cost approaching zero. The verification infrastructure that took decades to build cannot keep pace with content that can be generated in milliseconds.

Coverage fragmentation obscures systemic patterns. A crisis in one region might generate thousands of articles, each covering a fragment, while the connections between events remain invisible. Context collapses; readers see individual trees but never the forest.

Professional intelligence remains siloed and expensive. Governments, corporations, and well-funded research institutions have access to comprehensive situational awareness. Independent journalists, civil society organizations, and ordinary citizens do not. This asymmetry is corrosive to democratic accountability.

These problems compound each other. Algorithmic amplification spreads AI-generated disinformation across fragmented channels, while those with the resources to understand the full picture have little incentive to share their methods.

What We Believe

We believe the solution is not to restrict information, but to build better infrastructure for understanding it.

Evidence should be primary. Every claim should be traceable to its sources. Every source should be verifiable. The default mode of information consumption has become assertion — someone says something, and you either believe it or don't based on tribal affiliation. We want to shift this toward evidence — here is what happened, here is how we know, here is what we don't know.

Structured knowledge beats unstructured content. A thousand articles about the same event contain redundant information scattered across disconnected pages. A knowledge graph that extracts entities, relationships, and temporal patterns transforms this chaos into something navigable. The same underlying reality, represented in a form that supports reasoning rather than just reading.

Source verification is non-negotiable. In an era of synthetic media and coordinated manipulation, provenance matters more than ever. Every piece of information should carry its lineage — where it came from, who published it, what corroborating sources exist, what contradictions have been identified.
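Lineage can be made concrete as a data shape. The fields below mirror the sentence above (origin, publisher, corroboration, contradictions); the names are illustrative only, not an actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    origin_url: str                 # where it came from
    publisher: str                  # who published it
    corroborating: tuple = ()       # sources reporting the same thing
    contradicting: tuple = ()       # sources that dispute it

@dataclass
class Fact:
    statement: str
    provenance: Provenance

    def verifiable(self) -> bool:
        """A fact is only as strong as its lineage: at minimum it needs an origin."""
        return bool(self.provenance.origin_url)

    def contested(self) -> bool:
        return len(self.provenance.contradicting) > 0
```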

Intelligence should be democratized, not dumbed down. Professional analysts use sophisticated tools and frameworks because they work. We believe these capabilities should be accessible to journalists working on limited budgets, researchers investigating abuses, and citizens trying to understand their world. This is not about making everything simple — it's about making powerful tools available to people who need them.

Humans remain in the loop. AI can process information at scales impossible for humans. But AI can also hallucinate, miss context, and encode biases. The Peripheral is designed to augment human analysts, not replace them. Every AI-generated summary, entity extraction, or relationship inference should be auditable and correctable.

The Opportunity

If we succeed in building this infrastructure, several things become possible.

Journalists investigating corruption could follow money and connections across jurisdictions in hours rather than months. They could verify whether a source's claim matches the documented record, trace the spread of a narrative across platforms, and understand the broader context their story fits into.

OSINT analysts monitoring conflicts could maintain real-time situational awareness across multiple theaters, with automatic extraction of geographic features, unit movements, and equipment sightings — all linked back to primary sources.

Researchers studying disinformation could trace how false narratives emerge, mutate, and spread, identifying the nodes in the network that amplify them.

Ordinary citizens could access the same quality of information analysis that governments and corporations take for granted. Not simplified summaries, but the actual structured intelligence, with the tools to explore it.

This is not utopian. It's the same capability that already exists within classified government systems and expensive commercial intelligence platforms. We believe it should be available to anyone who needs to understand what's happening in the world.

The Risks We Take Seriously

Building better intelligence tools creates risks we must acknowledge.

The dual-use problem. The same tools that help journalists verify information can help propagandists identify which narratives are gaining traction. Entity extraction that helps researchers track human rights abuses can help authoritarians track dissidents. We cannot build powerful tools while pretending they will only be used for good.

Our approach to this is not to cripple our tools, but to design for transparency. Tools that work by illuminating sources and evidence create an audit trail; imperfectly but meaningfully, they advantage those seeking truth over those manufacturing it.

The accuracy burden. When we structure information and present it as intelligence, we take on responsibility for that structure. A misidentified entity, an incorrect relationship inference, or a false confidence score can mislead the very people we're trying to help. The failure mode of a search engine is showing bad results. The failure mode of an intelligence platform is creating false understanding.

We address this through relentless source verification, confidence scoring, and transparency about the limitations of automated analysis. Every inference should be auditable. Users should always be able to see why the system believes what it believes.
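One way to make "auditable" concrete, using a deliberately naive scoring rule chosen for illustration (not an actual method): keep the evidence alongside the inference, derive the confidence score from that evidence, and expose both, so a user can always see why the system believes what it believes.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str
    supports: bool          # True if corroborating, False if contradicting

@dataclass
class Inference:
    claim: str
    evidence: list

    def confidence(self) -> float:
        """Naive score: the share of evidence supporting the claim (0 if none)."""
        if not self.evidence:
            return 0.0
        return sum(e.supports for e in self.evidence) / len(self.evidence)

    def explain(self) -> list[str]:
        """Audit trail: every source behind the score, marked for or against."""
        return [f"{'+' if e.supports else '-'} {e.source}" for e in self.evidence]
```

The score is never shown without `explain()`: a confidence number detached from its evidence would be exactly the kind of assertion the platform exists to replace.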

The attention economy trap. There is a well-worn business model that maximizes engagement rather than understanding. We could build features that increase time-on-platform by making information addictive rather than useful. We will not do this. Our success metric is whether users understand their subject better, not whether they scroll longer.

The centralization risk. Concentrating intelligence capability in a single platform creates a target — for censorship, for hacking, for manipulation. If The Peripheral becomes critical infrastructure for independent journalism and research, its compromise would be catastrophic.

Our response is to build with openness where possible. The knowledge graph structures, the verification protocols, the extraction pipelines — these should be public and reproducible. We are building infrastructure for an ecosystem, not a walled garden.

How We Approach the Work

Ship incrementally, learn continuously. We do not believe it is possible to design a perfect system in advance. The information environment is adversarial and evolving. The only way to build tools that work is to deploy them, learn from their failures, and iterate. This means accepting that early versions will be imperfect while maintaining commitment to improvement.

Start with the hardest use cases. Journalists covering active conflicts and OSINT analysts monitoring geopolitical events are sophisticated users with high standards. If we can build tools that meet their needs, we can serve anyone. Building for casual consumers first would optimize for the wrong things.

Maintain human editorial judgment. Algorithms should do what algorithms do well: process large volumes, extract patterns, identify connections. Humans should do what humans do well: assess context, weigh evidence, make editorial judgments about significance. The Peripheral is a tool for analysts, not a replacement for analysis.

Be transparent about methodology. How we collect, how we extract, how we verify, how we score confidence — all of this should be documented and auditable. Black-box intelligence is not intelligence; it's assertion.

Resist the temptation to become an oracle. The goal is not to tell people what to think but to give them the structured information they need to think well. We will present evidence and relationships. We will not tell users which faction is right in a conflict, which policy is correct, or what they should believe about contested questions.

The Path Forward

We are building something that sits between raw information and human understanding. The scope of this problem exceeds what any single team can solve. But it has to start somewhere.

In the near term, we focus on the core: reliable collection from diverse sources, AI-powered extraction and structuring, rigorous source verification, and interfaces that make complex information navigable. We work with journalists and OSINT professionals who can stress-test our tools against real-world needs.

Over time, we aim to build something larger: a knowledge infrastructure that can serve as the foundation for others building in this space. APIs for structured intelligence. Open protocols for verification. A demonstration that information ecosystems can be designed for understanding rather than engagement.

We cannot predict exactly how this will unfold. The information environment is changing faster than anyone can fully track. New manipulation techniques will emerge. New verification challenges will arise. We will need to adapt.

But the underlying commitment remains constant: truthful, verifiable, structured information should be accessible to everyone who needs to understand what is happening in the world. This is not a technical problem alone, nor a business problem alone. It is an infrastructure problem for the coming decades.

We intend to help solve it.