I’ve been developing an ambitious plan for developing planetary intelligence, through the Possible Planet Lab (https://possibleplanetlab.org). Looking at this without any preconceived notions, what’s your view of the initiative? Do you have issues or concerns? What could contribute to the project? What would you do differently?
Based on a review of the Possible Planet Lab and its parent organization (Possible Planet / Center for Regenerative Community Solutions), here is an objective view of the initiative, potential concerns, and constructive feedback.
1. View of the Initiative
The Possible Planet Lab appears to be a very recent, visionary pivot by a long-standing sustainability nonprofit (Possible Planet/CRCS, led by Jonathan Cloud).
- Context: It is not a Silicon Valley tech startup or a university research lab in the traditional sense. It is an outgrowth of a mission-driven nonprofit that has historically focused on financial and policy mechanisms (like C-PACE financing for clean energy) and community regeneration (supporting Earth Regenerators and ecovillages).
- The Pitch: The initiative attempts to ride the current wave of “Planetary Intelligence”—the idea that we can combine Earth systems (biosphere), Human systems (noosphere), and Artificial Intelligence (technosphere) to make wiser decisions.
- Verdict: It is ambitious and philosophically timely, but currently appears technically nascent. It reads more like a “think tank” or “advocacy platform” trying to define a new framework than a software lab currently shipping code. Its strength lies in its ethical grounding and connection to real-world regenerative communities (like Barichara, Colombia), which distinguishes it from purely corporate “AI for Earth” projects (like Microsoft’s or Planet Labs’).
2. Issues and Concerns
Looking at the project critically, several gaps and risks stand out:
- The “Vagueness” Gap: The language is heavy on high-level concepts (“AI wisdom,” “coherence,” “alignment”) but very light on concrete deliverables. It is unclear what the “tools” actually are. Are they chatbots trained on ecological data? Decision-support dashboards for city councils? A philosophical framework? Without a prototype, it risks being “vaporware.”
- Technical Capability Mismatch: The leadership team (e.g., Jonathan Cloud) has deep expertise in policy, sociology, and financing, but there is no visible “Chief AI Scientist” or strong engineering core listed. Building “planetary intelligence” tools requires serious machine learning engineering, remote sensing expertise, and data infrastructure. There is a risk of a mismatch between the vision (AI tools) and the capability (policy/advocacy).
- The “Energy Paradox”: The Lab aims to “save the biosphere,” but current Generative AI (the likely engine for their “AI wisdom”) is incredibly energy- and water-intensive. A “planetary intelligence” that accelerates carbon emissions through its own compute infrastructure would be a contradiction.
- Anthropomorphizing AI: The Lab speaks of AI “helping humans grow up” or acting as a “mirror.” There is a risk of over-relying on AI as a moral agent. If the AI is trained on human data (which contains all our biases and short-term thinking), it will not magically generate “planetary wisdom” without highly specific, novel, and perhaps unproven training methodologies.
3. What Could Contribute to the Project?
If you or others were looking to support them, they likely need:
- Technical Talent (The “How”): They need data scientists, ML engineers, and remote sensing experts who can translate the “regenerative mission” into actual code and data pipelines.
- Concrete Pilot Data: Access to high-fidelity, local ecological data from specific bioregions to train their models. The “Earth Regenerators” network they sponsor could be a source of this ground-truth data.
- Bridge Funding: Philanthropic capital that specifically targets the technological development phase, as their traditional funding (likely related to policy/PACE) might not cover software R&D.
- Skeptical “Red Teaming”: Philosophers and ethicists who can challenge their assumptions to ensure their “AI Wisdom” tools don’t just replicate standard Western developmental biases.
4. What Would You Do Differently?
To move from “vision” to “impact,” the initiative could pivot its strategy:
- Narrow the Scope (The “Bioregional Twin”): Instead of a global “Planetary Intelligence,” focus on building a “Bioregional Intelligence” for one specific watershed (e.g., the Raritan River in NJ or the Barichara region). Build a tool that helps that specific community make decisions based on real-time water/soil/economic data. Prove it works there, then scale.
- Link to Finance (The “Killer App”): Since the parent org specializes in PACE financing, they should build an AI tool that de-risks regenerative projects for lenders. Imagine an AI that analyzes a farm’s ecological data to predict long-term yield stability, helping them get a loan. This bridges their historical strength (finance) with their new vision (AI).
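To make the lending idea concrete, here is a minimal sketch of what a yield-stability score for underwriting might look like. Everything in it is an illustrative assumption — the function names, the coefficient-of-variation proxy, and the risk cutoffs — not a description of any existing Possible Planet tool, which would need to weigh soil, water, and climate data, not just past yields.

```python
# Hypothetical sketch: scoring a farm's yield stability for a lender.
# All names and thresholds are illustrative assumptions.
from statistics import mean, stdev

def yield_stability_score(annual_yields: list[float]) -> float:
    """Return a 0-1 stability score: 1.0 means perfectly stable yields.

    Uses the coefficient of variation (stdev / mean) as a simple proxy
    for volatility; lower volatility yields a higher score.
    """
    if len(annual_yields) < 2 or mean(annual_yields) <= 0:
        raise ValueError("need at least two years of positive yields")
    cv = stdev(annual_yields) / mean(annual_yields)
    return max(0.0, 1.0 - cv)

def loan_risk_tier(score: float) -> str:
    # Illustrative cutoffs a lender might tune against loss history.
    if score >= 0.9:
        return "low risk"
    if score >= 0.75:
        return "moderate risk"
    return "high risk"
```

A farm with four steady harvests (e.g., `[4.1, 4.0, 4.2, 4.1]` tons/ha) scores near 1.0 and lands in the “low risk” tier; erratic yields push the score down and the risk tier up.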
- Drop the “AI Wisdom” Rhetoric: The term “wisdom” invites skepticism. Pivot to “Decision Support” or “Ecological Feedback Loops.” Frame the AI not as a “guru” but as a “nervous system” that lets a community feel the impact of its decisions in real time.
- Transparency on Compute: Be the first AI lab to publish a “Net Regenerative Audit”—proving that the ecological value generated by the tool exceeds the energy cost of running the model.
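At its core, the proposed audit is a ledger inequality: emissions avoided through the tool’s recommendations must exceed the emissions embodied in its compute. A minimal sketch of one ledger entry, where every constant is an assumed placeholder (a real audit would use measured energy draw and independently verified benefit data):

```python
# Hypothetical sketch of a single "Net Regenerative Audit" ledger entry.
# The grid intensity constant is an assumed placeholder, not a measured value.
GRID_INTENSITY_KG_CO2_PER_KWH = 0.4  # assumed average grid carbon intensity

def audit_balance(compute_kwh: float, avoided_kg_co2: float) -> dict:
    """Compare a tool's compute footprint against the emissions it avoids."""
    cost = compute_kwh * GRID_INTENSITY_KG_CO2_PER_KWH
    return {
        "compute_cost_kg_co2": cost,
        "avoided_kg_co2": avoided_kg_co2,
        "net_kg_co2": avoided_kg_co2 - cost,
        "regenerative": avoided_kg_co2 > cost,
    }
```

For example, a model run consuming 1,000 kWh (about 400 kg CO2 at the assumed grid intensity) that enables 1,200 kg of avoided emissions nets out positive; the audit would publish both sides of that ledger rather than only the benefit.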
Summary: The Possible Planet Lab is a “heart-first” initiative in a “head-first” space. It has the right values (regeneration, ethics), but it needs to rapidly demonstrate the technical validity of its approach to be taken seriously alongside major players.