Addressing the Hard Questions About AI and Planetary Intelligence


A follow-on to “Developing Planetary Intelligence”

In my earlier essay, I explored the possibility that artificial intelligence—if genuinely aligned with Earth’s living systems—might help us navigate the transition from ecological overshoot toward a regenerative future. That vision remains important. But it is incomplete without facing several deep concerns that many readers, scholars, and community leaders rightly raise.

These concerns aren’t peripheral; they are the foundation of any credible conversation about “planetary intelligence.”
This follow-on essay addresses them directly.


1. The Ecological Cost of Today’s AI

There’s no way around it: the current generation of AI models consumes large amounts of electricity and, less obviously, huge quantities of freshwater. Their carbon footprint is real. Their demands on local watersheds are real. And their growth curve is steep.

But this is not an inherent flaw of intelligence systems—it is a flaw of infrastructure design and governance.

We already know how to reduce AI’s footprint dramatically:

  • power data centers with renewables,
  • optimize models for energy efficiency,
  • use distributed and edge computing where possible,
  • limit computation for trivial or unnecessary tasks,
  • and redesign hardware to mimic biological efficiency.

If we imagine AI playing any constructive role in supporting life on Earth, then reducing its ecological impact is part of the mission itself. In other words, the first step toward planetary intelligence is ensuring the tools we build do not undermine the very planet we intend to protect.


2. The Risk of Repeating Colonial Patterns

A second concern is equally important: the relationship between AI and Indigenous knowledge.

Indigenous knowledge systems represent some of the most sophisticated forms of ecological intelligence on the planet. But they are not datasets. They are not abstractions. They are the lived expression of long-term relationships with land, water, beings, responsibilities, and reciprocal obligations.

This means that AI cannot simply “learn” Indigenous wisdom by reading text. And it must not attempt to do so without:

  • relational accountability,
  • clearly defined protocols,
  • community consent and co-governance,
  • a shared purpose that serves land and people,
  • and long-term trust that cannot be rushed or instrumentalized.

Much of what matters in Indigenous traditions is not digitizable—not because it is secret, but because it is embodied. An ethical approach to planetary intelligence must begin with this recognition.

The point is not for AI to imitate Indigenous knowledge.
The point is for AI systems to learn ecological humility, relational thinking, and responsibility—principles that are essential to functioning within a living Earth.


3. The Limits of AI’s Current Ways of Knowing

Even the most advanced AI does not experience place. It does not walk a watershed, sense the seasons, or feel the difference between a degraded landscape and a thriving one. What it offers instead is something complementary:

  • pattern detection across massive datasets,
  • scenario modeling at global scale,
  • early-warning systems for ecological instability,
  • and tools for coordination and decision-making that humans alone struggle to maintain.

This is not a substitute for human or ecological forms of intelligence. It is a potential partner—if we guide it with discernment and responsibility, and if we refuse to treat it as an oracle.

Planetary intelligence, if it emerges, will be a hybrid: human insight, machine augmentation, and the inherent wisdom of Earth’s systems.


4. Why Engage With AI at All?

One final concern deserves acknowledgment: AI today is shaped by corporate interests, military applications, and extractive economic logics. Some argue that working with AI—no matter how well-intentioned—risks collusion with the very forces undermining planetary stability.

But refusing to engage does not prevent misuse. It simply removes the possibility of guiding this technology toward a life-supporting trajectory.

The challenge, then, is to enter the conversation with integrity rather than naive optimism; with responsibility rather than surrender; and with a clear-eyed sense of what must change if AI is to serve anything beyond narrow human agendas.


5. Where This Work Leads

This brings me to the deeper question: What would it take to build a form of intelligence—human, machine, and ecological together—that actually supports the continuity of life on Earth?

That effort cannot happen inside corporations alone. Nor can it be solved through engineering alone. It requires:

  • ecological science,
  • ethics,
  • systems thinking,
  • cultural imagination,
  • relational accountability,
  • and long-term, cross-community collaboration.

This is precisely the inquiry we are advancing through the Possible Planet Lab, a public research initiative exploring:

  • frameworks for planetary intelligence,
  • the ethics of AI in a living Earth,
  • tools for regeneration and resilience,
  • governance models rooted in commons principles,
  • and pathways for aligning emerging technologies with the needs of the biosphere.

All of this work is openly accessible at:

https://possibleplanetlab.org

If the first essay asked what planetary intelligence might be, this one asks what must be faced before we can pursue it responsibly. The next step is to explore what such an intelligence might look like in practice—how it senses, learns, restrains itself, collaborates, and contributes to the healing of the only world we have.

That will be the topic of the next piece in this series.

