State of the Field and an Invitation to Build What Comes Next
Possible Planet Lab — 2026 Brief
Executive summary
Humanity has entered a phase shift in its relationship with Earth. For the first time, we possess the technical capacity to perceive planetary change in near-real time and the cognitive tools—via artificial intelligence—to synthesize, simulate, and coordinate responses at unprecedented scale. At the same time, ecological degradation, climate disruption, social fragmentation, and governance stress are accelerating together.
Planetary intelligence has emerged as a serious framework for navigating this moment: not as a metaphor, but as a testable proposition—that collective knowledge, when properly organized and applied, can stabilize and regenerate the Earth system over long time horizons.
Yet planetary intelligence cannot be achieved at the planetary scale alone. It must be implemented at the scale where decisions are made and consequences are lived: bioregions, municipalities, watersheds, and communities. This memo argues that bioregional intelligence is the essential action layer of planetary intelligence, and that the next phase of work must focus on building practical, replicable systems that couple global knowledge with local stewardship.
Possible Planet Lab exists to do exactly that.
1. What do we mean by “planetary intelligence”?
The most widely cited formal definition comes from a 2022 paper by Adam Frank, David Grinspoon, and Sara Walker in the International Journal of Astrobiology. They define planetary intelligence as the acquisition and application of collective knowledge at planetary scale, integrated into the functioning of coupled planetary systems, in ways that enhance long-term habitability.
The importance of this framing is subtle but critical:
- Intelligence is defined by action, not awareness.
- Knowledge must feed back into Earth-system dynamics (energy, climate, land, oceans, biosphere).
- The relevant time horizon is generational and geological, not electoral or quarterly.
Under this definition, planetary intelligence is not achieved by better dashboards, more reports, or even better science alone. It is achieved only when knowledge changes trajectories.
2. How we got here: a compressed history
Earth becomes legible
In the second half of the twentieth century, Earth system science, satellite observation, and computational modeling made the planet observable as a single, interconnected system. Climate change, biodiversity loss, and biogeochemical disruption became measurable phenomena rather than abstract concerns.
Connectivity outpaces governance
The internet and globalized supply chains enabled rapid coordination—but also produced tightly coupled systems optimized for efficiency rather than resilience. Information traveled faster than institutions could adapt.
The polycrisis becomes explicit
By the 2010s, overlapping crises—climate extremes, ecological collapse, inequality, misinformation, geopolitical instability—were increasingly understood as interacting system failures rather than isolated problems. Strategy shifted from “optimization” to “resilience under stress.”
Writers and strategists such as Alex Steffen played a key role in translating planetary foresight into lived ethical and civic responsibility: not abstract sustainability, but the work of living well and being good ancestors in a disrupted world.
Cognition becomes infrastructure
In the 2020s, AI systems—particularly large language models and agentic architectures—dramatically reduced the cost of synthesis, translation, simulation, and coordination. For the first time, “collective sensemaking” became technically scalable.
This is the inflection point. The constraint is no longer computational power or data availability. It is governance, integrity, legitimacy, and alignment.
3. Why bioregional intelligence is indispensable
Planetary intelligence fails if it remains abstract. The atmosphere may be global, but zoning codes, building stock, floodplains, food systems, grids, emergency response, and cultural trust are local.
Bioregional intelligence refers to the capacity of a place-based community to:
- understand the ecological and social systems it depends on,
- anticipate risk and disruption,
- coordinate action across institutions and stakeholders,
- and align capital, policy, and culture with long-term ecological health.
Bioregions are not merely administrative units; they are functional ecological and cultural systems. They are the only scale at which planetary knowledge can reliably become planetary action.
4. The coming era: billions of AI agents
The next phase of planetary intelligence will be defined by agentic systems operating at scale. Billions of AI agents will be embedded in software, infrastructure, governance, markets, and daily life. The open question is not whether they will collaborate, but how they will be constrained.
Without safeguards, agentic systems risk amplifying error, bias, extraction, and instability. With the right architecture, they can become a planetary coordination fabric, supporting human stewardship rather than replacing it.
Three design principles are decisive:
- Epistemic integrity as infrastructure. Every claim, recommendation, and simulation must carry provenance, uncertainty, assumptions, incentives, and known limitations. Truthfulness must be a system property, not a personal virtue.
- Authorization and consent. The hardest problem is not generating insight, but determining who can trigger which actions, in which jurisdictions, under what consent, and with what audit trail and right of appeal.
- Preparedness for discontinuity. As emphasized by thinkers such as Michael Haupt, the coming decade is likely to be characterized by non-linear change and compound shocks. Planetary and bioregional intelligence must be designed for surprise, stress, and recovery, not smooth optimization.
5. What Possible Planet Lab is building
Possible Planet Lab is focused on a practical, testable proposition:
If we can build a working bioregional intelligence system—grounded in ecological reality, governed with integrity, and augmented by AI agents—we can create a replicable action layer for planetary intelligence.
Our near-term work centers on:
- A Bioregional Intelligence Stack integrating sensing, data stewardship, models, agentic support, governance, and feedback loops.
- A flagship demonstrator in a defined bioregion (beginning with the Genesee–Finger Lakes), focused on a small number of high-leverage decisions.
- An Epistemic Integrity Infrastructure that governs both human and AI contributions.
- A replication kit that allows other regions to adopt and adapt the model.
Success is measured not by awareness, but by outcomes: reduced risk, increased resilience, ecological regeneration, and stronger civic capacity.
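The layered structure of the Bioregional Intelligence Stack can be sketched as an ordered pipeline in which each layer processes the output of the one below it. The layer names come from the list above; everything else here (the class names, the pass-through wiring) is an illustrative assumption, not a specification of the actual system.

```python
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class StackLayer:
    """One layer of the Bioregional Intelligence Stack (illustrative)."""
    name: str
    transform: Callable[[Any], Any]  # how this layer processes the signal


@dataclass
class BioregionalStack:
    layers: List[StackLayer]  # ordered bottom (sensing) to top (feedback loops)

    def run(self, raw_signal: Any) -> Any:
        """Pass a raw signal up through every layer in order."""
        out = raw_signal
        for layer in self.layers:
            out = layer.transform(out)
        return out


# Hypothetical wiring: each layer simply annotates the signal with its name,
# so the trace records the order in which the layers ran.
stack = BioregionalStack(layers=[
    StackLayer(n, lambda s, n=n: s + [n])
    for n in ["sensing", "data stewardship", "models",
              "agentic support", "governance", "feedback loops"]
])

trace = stack.run([])
```

The point of the sketch is only the ordering: raw observations pass through stewardship and modeling before any agentic or governance layer acts on them, and feedback loops close the cycle at the top.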
6. An invitation
Planetary intelligence is not a technology project. It is a civilizational learning process.
We invite:
- researchers and practitioners working on Earth systems, governance, and AI,
- municipal and regional leaders seeking better tools for stewardship,
- funders interested in durable, systemic impact,
- and citizens who want to participate in shaping a livable future,
to collaborate with Possible Planet Lab in building bioregional intelligence as a public good.
The work ahead is demanding—but for the first time in history, it is also genuinely possible.
Research Agenda & Theory of Change
Building Bioregional Intelligence as the Action Layer of Planetary Intelligence (2026–2030)
I. Problem Statement
Human civilization now operates at planetary scale, yet our systems of perception, decision-making, and coordination remain fragmented, short-term, and poorly aligned with the biophysical limits of the Earth.
We face a coordination failure, not a knowledge deficit.
Despite unprecedented advances in Earth system science, remote sensing, and artificial intelligence, societies struggle to translate insight into timely, legitimate, and effective action. Climate disruption, biodiversity loss, infrastructure fragility, social polarization, and governance breakdown increasingly interact, producing compounding risks (“polycrisis”).
Planetary intelligence—defined as the capacity of a civilization to perceive, understand, and intentionally regulate its interaction with the Earth system over long time horizons—has emerged as a promising conceptual framework. However, planetary intelligence cannot be achieved solely at the global scale. It must be instantiated where decisions are made and consequences are experienced: bioregions, municipalities, watersheds, and communities.
The central gap this research program addresses is the absence of operational bioregional intelligence systems that:
- couple global knowledge with local action,
- embed epistemic integrity and consent into AI-augmented decision processes,
- and demonstrably improve ecological and social outcomes.
II. Core Research Question
How can AI-augmented, ethically governed bioregional intelligence systems enable human communities to act as effective stewards of the Earth system under conditions of accelerating disruption?
This overarching question decomposes into four interlinked research domains.
III. Research Domains & Key Questions
Domain 1: Planetary ↔ Bioregional Coupling
Objective: Translate planetary-scale constraints and insights into actionable bioregional decision contexts.
Key questions
- What minimum set of planetary and bioregional indicators is sufficient to guide stewardship decisions without overwhelming users?
- How can Earth-system models and global datasets be meaningfully downscaled to bioregional contexts without false precision?
- How do feedback loops between local action and planetary outcomes become visible, legible, and governable?
Outputs
- A defensible bioregional indicator framework aligned with planetary boundaries.
- Methods for coupling global models with local digital twins.
Domain 2: AI Agents as Cognitive Public Infrastructure
Objective: Design AI agent systems that augment human sensemaking, coordination, and implementation rather than replace human judgment.
Key questions
- What classes of agents (briefing, task, simulation, monitoring) are most valuable for civic and ecological stewardship?
- How do multi-agent systems collaborate across jurisdictions while respecting boundaries of authority and consent?
- How can agents remain interpretable, contestable, and corrigible at scale?
Outputs
- A modular agent architecture for bioregional intelligence.
- Evaluation metrics for agent usefulness, trustworthiness, and failure modes.
Domain 3: Epistemic Integrity & Governance
Objective: Ensure that AI-mediated intelligence systems remain truthful, accountable, and socially legitimate under uncertainty.
Key questions
- How can provenance, uncertainty, assumptions, and incentives be made explicit and machine-readable?
- What governance mechanisms allow communities to audit, contest, and override AI-assisted recommendations?
- How should Indigenous knowledge, local lived experience, and scientific models be integrated without epistemic domination?
Outputs
- An Epistemic Integrity Infrastructure (EII) standard applicable to human and AI contributions.
- Governance templates for consent, authorization, and appeal.
Domain 4: Resilience Under Disruption
Objective: Design bioregional intelligence systems that remain functional under non-linear change, shocks, and compound crises.
Key questions
- How do intelligence systems perform under stress (extreme weather, grid failure, political conflict, misinformation)?
- What redundancy, decentralization, and fallback modes are required for continuity?
- How can preparedness for discontinuity be operationalized without normalizing despair or authoritarian control?
Outputs
- Stress-tested bioregional intelligence prototypes.
- Scenario frameworks for discontinuity-aware planning.
IV. Theory of Change
If…
- Bioregions are provided with AI-augmented intelligence systems that integrate ecological data, social context, and future scenarios,
- those systems are governed by explicit epistemic integrity and consent frameworks,
- and they are embedded in real decision pathways (policy, capital allocation, infrastructure, restoration),
Then…
- communities will improve their capacity to anticipate risk, coordinate action, and align resources,
- ecological and social interventions will be better targeted, more equitable, and more durable,
- and local actions will increasingly reinforce planetary-scale stabilization and regeneration.
Therefore…
- planetary intelligence can emerge not as centralized control, but as a distributed, place-based stewardship capacity, supported by billions of constrained, collaborative AI agents acting in service to life.
V. Research Phases (Indicative)
Phase 1: Framework & Design (Year 1)
- Formalize the Bioregional Intelligence Stack.
- Develop Epistemic Integrity Infrastructure v1.
- Select flagship bioregional demonstrator.
Phase 2: Prototyping & Demonstration (Years 1–2)
- Build and deploy a working bioregional intelligence system.
- Integrate agent layers for briefing, simulation, and task support.
- Produce real decision briefs and project pipelines.
Phase 3: Evaluation & Stress Testing (Years 2–3)
- Measure ecological, social, and governance outcomes.
- Conduct disruption scenarios and failure analysis.
- Refine governance and integrity mechanisms.
Phase 4: Replication & Field Building (Years 3–5)
- Publish open standards, templates, and replication kits.
- Support adoption by additional bioregions.
- Convene a community of practice around planetary–bioregional intelligence.
VI. Expected Contributions to Knowledge & Practice
- A validated model for bioregional intelligence as the action layer of planetary intelligence.
- New governance standards for AI-assisted civic and ecological decision-making.
- Empirical insight into how multi-agent systems can support stewardship rather than extraction.
- A replicable pathway for aligning AI development with long-term planetary habitability.
VII. Evaluation Criteria (What Success Looks Like)
- Demonstrable improvements in local resilience and ecological outcomes.
- Transparent, auditable decision processes trusted by diverse stakeholders.
- Evidence that AI agents meaningfully reduce coordination bottlenecks.
- Adoption or adaptation of the framework by other regions or institutions.
VIII. Closing Frame
This research program treats planetary intelligence not as an inevitability, but as a responsibility.
The question is no longer whether humanity can build powerful intelligence systems. The question is whether we can build systems—human and artificial—that are wise enough, humble enough, and well-governed enough to help us remain a welcome presence on a living Earth.
Possible Planet Lab exists to explore that question in practice.
How to read and use this diagram
This diagram is intentionally foundation-friendly: it follows a standard logic-model flow (Inputs → Activities → Outputs → Outcomes → Impacts) while making explicit the assumptions and feedback loops that many proposals leave implicit.
1. Inputs
These are the enabling conditions—not technologies alone, but institutional and ethical assets:
- Earth-system and local data
- AI models and agentic tools
- Community partners and civic institutions
- Explicit governance and integrity norms
Funders should see: this is not a “tech build,” but a socio-technical system with legitimacy baked in.
2. Activities
This is where Possible Planet Lab adds distinctive value:
- Turning raw signals into stewarded knowledge
- Translating models into decision contexts
- Using AI agents to reduce coordination friction
- Embedding deliberation, not bypassing it
Key differentiator: AI is used to support judgment and coordination, not to automate authority.
3. Outputs
These are concrete, auditable artifacts:
- Dashboards that can be interrogated
- Decision briefs that change real choices
- Project pipelines that unlock capital
- Public reports that build trust
Important: outputs are designed to plug directly into existing decision and funding pathways.
4. Outcomes (1–3 years)
Near-term, funder-relevant results:
- Better decisions under uncertainty
- Faster coordination across silos
- Increased institutional and community trust
- Reduced exposure to known risks
These outcomes are measurable and observable within a grant cycle.
5. Impacts (5–20 years)
The long arc:
- Ecological regeneration
- Climate and infrastructure resilience
- Social cohesion and civic capacity
- Long-term habitability
This is where the work aligns directly with planetary intelligence as defined in the literature.
6. Assumptions & Feedback Loops (the critical honesty box)
This lower panel is strategically important. It signals epistemic maturity:
- Knowledge must be legitimate to be acted upon
- AI augments, not replaces, human responsibility
- Transparency enables correction
- Local action aggregates upward
- Continuous monitoring enables learning
Many funders now look explicitly for this level of reflexivity.
How this complements the Research Agenda
Together, the Research Agenda and this Theory of Change do three things:
- They bridge worlds: academic seriousness ↔ civic practicality ↔ philanthropic accountability.
- They scale without centralization: the model is replicable by other bioregions without requiring global command-and-control.
- They future-proof AI use: by foregrounding integrity, consent, and governance, the model remains valid even as agentic systems proliferate into the billions.