Epistemic Integrity Infrastructure (EII): a practical standard for contributions to Artificial Intelligence and Planetary Intelligence (Possible Planet Lab)
Status: v0.1 (operational draft)
Purpose: Make every contribution—data, models, writing, tools, decisions—traceable, checkable, and appropriately confident, so “intelligence” does not scale faster than truth.
1) What this is
Epistemic Integrity Infrastructure (EII) is the minimum shared “operating system” for knowing together:
- What is being claimed
- Where it came from
- How it was produced
- How confident we should be
- How it can be tested, challenged, and improved
- How it should (and should not) influence decisions
This is not a philosophy statement. It is a practical standard for publication and collaboration.
2) Scope
EII v0.1 applies to all contributions in the following two areas:
A. Artificial Intelligence (AI) contributions
- model evaluations and benchmarks
- safety / integrity tooling (e.g., Integrity Checker)
- datasets and labeling schemas
- prompts, agents, decision-support tools
- reports, blog posts, and public claims about AI
B. Planetary Intelligence contributions
- bioregional dashboards and indicators
- ecological monitoring and mapping
- scenario modeling and forecasting
- governance design proposals (commons, protocols)
- community inquiry outputs and syntheses
Principle: If it influences understanding or decisions, it must carry epistemic metadata.
3) Definitions
Claim (C)
Any statement presented as informative (e.g., “X is increasing,” “Y causes Z,” “this tool reduces harms,” “this watershed is degraded”).
Evidence (E)
Data, measurements, documents, or observations supporting a claim.
Assumption (A)
A premise used to reach a conclusion (explicit or implicit).
Confidence (K)
A structured statement of how sure we are, and why.
Provenance (P)
The chain of origin and transformation for claims and evidence.
Contestation (T)
The ability for others to reproduce, audit, dispute, or improve the work.
4) The EII “Minimum Viable Packet” (MVP)
Every contribution must include the following five elements (a minimal machine-readable sketch appears after 4.5):
4.1 Claim Label (CL)
State what kind of claim you’re making (choose one primary label; secondary allowed):
- Measured (instrument/sensor/field measurement)
- Observed (human observation, documented)
- Reported/Testimonial (someone said; not independently verified)
- Synthesized (literature review / aggregation)
- Modeled (simulation, forecast, statistical inference)
- Normative (value judgment / “should”)
- Speculative (hypothesis, frontier idea)
4.2 Provenance Card (PC)
At minimum:
- sources (with dates)
- who produced them and incentives (if known)
- transformations (cleaning, summarization, model used)
- limitations and known conflicts
4.3 Assumptions & Sensitivities (AS)
- list the top 3–10 assumptions
- describe which assumptions most affect the conclusion
4.4 Calibrated Confidence (CC)
Use a 4-level scale with rationale:
- High: multiple independent lines of evidence; robust to plausible objections
- Medium: good evidence but limited scope, or strong dependence on assumptions
- Low: weak/partial evidence; significant plausible alternatives
- Unknown: insufficient evidence, not currently resolvable
Include: “What would change my mind?”
4.5 Contestability & Replication Path (CR)
State how someone can check this:
- data availability (open / restricted / not available)
- method reproducibility steps (even if partial)
- known failure modes
- how to submit critiques or improvements
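To make the packet concrete, here is a minimal sketch of a Level 1 packet expressed as plain data, with an illustrative completeness check. All field names and values are hypothetical, not a prescribed schema.

```python
# Illustrative MVP packet as plain data; every field name and value is hypothetical.
REQUIRED_ELEMENTS = ("CL", "PC", "AS", "CC", "CR")

example_packet = {
    "CL": {"primary": "Modeled", "secondary": ["Measured"]},
    "PC": {
        "sources": [{"name": "agency sampling data", "date": "2024-06"}],
        "transformations": ["cleaning", "statistical inference"],
        "limitations": ["limited sampling density"],
    },
    "AS": [
        "sampling sites represent the watershed",
        "lab methods consistent across years",
    ],
    "CC": {
        "level": "Medium",
        "why": ["good evidence", "limited sampling density"],
        "what_would_change_my_mind": "denser, independent sampling",
    },
    "CR": {
        "data_availability": "open",
        "how_to_reproduce": "code notebook link",
        "critique_channel": "repo issues / pull requests",
    },
}

def meets_level_1(packet: dict) -> bool:
    """Level 1 (Minimal): all five MVP elements present and non-empty."""
    return all(packet.get(key) for key in REQUIRED_ELEMENTS)

assert meets_level_1(example_packet)
```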
5) EII Compliance Levels
Use these to avoid perfectionism while preventing epistemic slippage.
Level 0 — Unlabeled (not acceptable for Lab outputs)
No claim labels; unclear provenance; confidence not stated.
Level 1 — Minimal (acceptable for exploratory notes)
Includes the MVP packet (CL, PC, AS, CC, CR) in brief form.
Level 2 — Research-grade (recommended default)
Adds:
- explicit uncertainty bounds where possible
- alternatives considered
- replication attempt or validation check
Level 3 — Decision-grade (required for policy, investment, or governance recommendations)
Adds:
- adversarial review (red-team critique recorded)
- impact assessment (who bears risk if wrong)
- monitoring plan (how to track outcomes and update)
6) Standards for different contribution types
This section shows how EII applies to all contributions—AI and planetary.
6.1 Datasets (AI training data, ecological indicators, surveys)
Required:
- provenance (collection method, time, geography, sampling)
- bias analysis (what’s missing; who is excluded)
- versioning and change log
- access rules and privacy/consent constraints
Minimum deliverable: Dataset Sheet + Provenance Card + Bias/Limitations.
Planetary example: “Land cover map for GFL bioregion” must state source imagery dates, classification method, error rates, and what land categories are uncertain.
6.2 Models (ML models, ecological models, forecasts)
Required:
- training or calibration data provenance
- objective function and tradeoffs
- failure modes and known blind spots
- evaluation results with confidence statements
Decision-grade addition: “What happens if the model is wrong?” and “Who is harmed?”
AI example: A “misinformation risk score” must disclose false positive/negative rates and contexts where it fails (satire, minority dialects, domain shift).
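As a toy illustration of that disclosure (all counts below are invented), false positive and false negative rates can be computed from evaluation counts and reported alongside the contexts where the tool is untested:

```python
# Toy disclosure sketch for a risk-score tool; all counts are invented.
tp, fp, tn, fn = 80, 15, 890, 15  # hypothetical evaluation outcomes

fpr = fp / (fp + tn)  # share of benign items wrongly flagged
fnr = fn / (fn + tp)  # share of harmful items missed
print(f"FPR={fpr:.3f}, FNR={fnr:.3f} "
      f"(evaluated on English news; untested on satire, minority dialects)")
```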
6.3 Tools and agents (Integrity Checker, dashboards, deliberation tools)
Required:
- what the tool can/cannot conclude
- how outputs should be interpreted (not an oracle)
- telemetry and audit logs (what was evaluated, by which model/version)
- user-facing confidence + provenance display
Hard rule: Tools must separate the following layers (a structural sketch follows the list):
- evidence
- inference
- recommendation
- value judgment
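One way to honor this separation, sketched here as a hypothetical Python structure (names are illustrative, not a required interface):

```python
# A minimal sketch keeping the four layers structurally separate in tool output.
from dataclasses import dataclass, field

@dataclass
class ToolOutput:
    evidence: list[str] = field(default_factory=list)        # what was observed or measured
    inference: list[str] = field(default_factory=list)       # what the tool concluded from it
    recommendation: list[str] = field(default_factory=list)  # suggested actions
    value_judgment: list[str] = field(default_factory=list)  # explicit normative stances

out = ToolOutput(
    evidence=["3 of 5 cited sources do not mention the claim"],
    inference=["citation padding likely (Medium confidence)"],
    recommendation=["flag for human review before publication"],
    value_judgment=["unsupported citations erode reader trust"],
)
```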
6.4 Writing and synthesis (posts, briefs, reports)
Required:
- claim labels for major assertions
- citations or provenance notes
- confidence statements for key claims
- alternative interpretations acknowledged
Minimum: For any “load-bearing” paragraph (one that drives the conclusion), include CC + “what would change my mind.”
6.5 Governance proposals (commons rules, protocols, policy guidance)
Required:
- explicit values (normative layer)
- evidence vs. preference separation
- stakeholder impacts and power analysis
- monitoring and revision mechanism (policy as hypothesis)
Decision-grade: red-team critique and an appeals process.
6.6 Community contributions (citizen science, local observations)
Required:
- label as Observed or Reported
- location/time metadata (coarse is fine)
- verification status (unverified / corroborated / instrumented)
- consent and safe-handling rules
Key principle: Local knowledge is valuable, but it must not be laundered into “measured fact” without verification.
7) The “Do Not Do This” list (failure modes EII is designed to prevent)
- Category laundering: turning testimony into fact, or models into measurements
- Confidence theater: sounding certain without stating uncertainty
- Citation padding: lots of links that don’t support the conclusion
- Model mysticism: treating AI outputs as truth rather than inference
- Dashboard propaganda: indicators without provenance or error bars
- Governance as décor: rules without accountability and revision loops
- Selection bias blindness: only measuring what is convenient or fundable
8) Standard templates (copy/paste)
8.1 Provenance Card (PC) template
- Title / Contribution ID:
- Author / Org:
- Date / Version:
- Claim Label(s):
- Primary sources (with dates):
- Transformations (cleaning, aggregation, AI summarization):
- Key assumptions:
- Known limitations / biases:
- Conflicts / incentives (known or possible):
- Privacy / consent notes (if applicable):
8.2 Calibrated Confidence (CC) template
- Confidence level (High/Medium/Low/Unknown):
- Why: (2–6 bullet points)
- Biggest uncertainties:
- What evidence would change this:
- Risks if wrong (and who bears them):
8.3 Contestability & Replication (CR) template
- How to reproduce:
- Data availability: (open/restricted/not available)
- Method availability: (code link / description)
- Validation checks performed:
- Known failure modes:
- How to submit critique / PR:
9) How EII applies specifically to “contributions to artificial intelligence”
This is the “AI-side” translation.
AI contributions must always answer:
- What is the model/tool optimizing for?
- What data shaped it?
- What does it systematically fail at?
- How stable is it under distribution shift?
- What uncertainty remains, and how is it expressed to users?
Minimum requirement: Any AI-generated output published by the Lab must display (one possible rendering is sketched after this list):
- model/version used,
- retrieval sources (if any),
- confidence band,
- and “limitations” link.
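A hypothetical rendering of that display block; the function, field names, and URL are placeholders, not a prescribed format:

```python
# Hypothetical rendering of the required display; names and URL are placeholders.
def eii_footer(model_version: str, sources: list[str],
               confidence: str, limitations_url: str) -> str:
    listed = "; ".join(sources) if sources else "none"
    return (
        f"Model/version: {model_version}\n"
        f"Retrieval sources: {listed}\n"
        f"Confidence band: {confidence}\n"
        f"Limitations: {limitations_url}"
    )

print(eii_footer("example-model-2025-12", ["agency survey (2024)"],
                 "Medium", "https://example.org/limitations"))
```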
10) How EII applies specifically to “contributions to planetary intelligence”
This is the “Earth-side” translation.
Planetary intelligence contributions must always answer:
- What is the observed signal? (and what instrument/social process produced it)
- What is the interpretation? (and what assumptions drive it)
- What is the decision relevance? (and what values/tradeoffs are implied)
- What is the monitoring loop? (how we’ll know if actions worked)
- What is the governance path? (who decides; who can contest; how we revise)
Hard rule: No indicator or dashboard element is “publishable” without the following (a sketch follows the list):
- provenance,
- uncertainty/error range (even qualitative),
- and update cadence.
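A minimal sketch of an indicator record that enforces this rule; the fields and example values are illustrative, not a prescribed schema:

```python
# Minimal sketch of a dashboard indicator record; fields are illustrative.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    value: float
    provenance: str       # where the number came from (source + dates)
    uncertainty: str      # quantitative or qualitative ("±15%", "low confidence")
    update_cadence: str   # e.g., "annual", "quarterly"

    def publishable(self) -> bool:
        # The hard rule: provenance, uncertainty, and cadence must all be present.
        return all([self.provenance, self.uncertainty, self.update_cadence])

baseflow = Indicator(
    name="late-summer baseflow index",
    value=0.42,
    provenance="hypothetical stream gauge (agency + station ID), 2015-2024",
    uncertainty="±15% (rating-curve error)",
    update_cadence="annual",
)
assert baseflow.publishable()
```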
11) Operational enforcement (how this becomes real)
11.1 Required metadata in repositories and posts
- EII.yaml (or front-matter block) containing CL/PC/AS/CC/CR for each artifact (a minimal checker is sketched after this list)
- changelog with epistemic updates (what changed and why)
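A Level 1 gate for such a front-matter block might look like the following sketch, assuming PyYAML is available; the file layout and messages are illustrative:

```python
# Sketch of a Level 1 repo gate, assuming PyYAML (pip install pyyaml).
# Element keys mirror the MVP packet; messages are illustrative.
import sys
import yaml

REQUIRED = ("CL", "PC", "AS", "CC", "CR")

def missing_elements(path: str) -> list[str]:
    """Return the MVP elements absent or empty in an EII.yaml file."""
    with open(path) as f:
        meta = yaml.safe_load(f) or {}
    return [key for key in REQUIRED if not meta.get(key)]

if __name__ == "__main__":
    missing = missing_elements(sys.argv[1])
    if missing:
        sys.exit(f"EII packet incomplete; missing: {', '.join(missing)}")
    print("EII packet: all Level 1 elements present")
```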
11.2 Review gates
- Level 1 gate for drafts
- Level 2 gate for published analyses
- Level 3 gate for recommendations to municipalities, funders, policymakers, or capital allocators
11.3 Roles
- Contributor: completes EII packet
- Reviewer: checks for category laundering and confidence theater
- Maintainer: enforces versioning, provenance, and update cadence
- Community auditors: can file challenges using the CR pathway
12) A short worked example (bioregional + AI)
Example claim
“Stream health in sub-watershed X is declining.”
CL: Measured + Modeled
PC: NYSDEC sampling data (dates), citizen reports (unverified), model inference method
AS: sampling sites represent the watershed; lab methods consistent across years; no major land-use reclassification errors
CC: Medium (good evidence, limited sampling density)
CR: provide dataset links; code notebook; invitation to local groups to contribute additional sampling points
Decision note: “Declining” does not specify cause; policy should focus on hypothesis testing + targeted interventions.
Summary: the standard in one sentence
EII v0.1 requires that every contribution clearly state what it is, where it came from, how sure we are, how to check it, and how it should shape decisions—so planetary intelligence grows as an accountable learning system rather than a persuasive narrative.
Next step: the most leveraged move is to publish this as a Lab page and add a simple “EII Packet” checklist to every repo/post workflow (Level 1 immediately; Level 2 within 60–90 days; Level 3 for any decision-facing deliverable).
Below are (1) the Simple EII Packet Checklist and (2) a completed EII Packet for this specific entry (“Epistemic Integrity Infrastructure v0.1”).
1) Simple EII Packet Checklist (v0.1)
Use this checklist for every contribution (post, tool, dataset, model, proposal).
A. Identify the Claim
- ☐ Clear title and contribution ID
- ☐ Primary claim(s) stated plainly
- ☐ Claim label(s) selected
B. Show Provenance
- ☐ Primary sources listed (with dates)
- ☐ Origin of ideas/data made explicit
C. Explain Transformations
- ☐ How inputs were processed, summarized, modeled, or interpreted
- ☐ Role of AI tools disclosed
D. Surface Assumptions
- ☐ Key assumptions listed
- ☐ Sensitivity to assumptions noted (what matters most)
E. State Limits and Biases
- ☐ Known limitations
- ☐ Likely biases or blind spots
F. Disclose Incentives
- ☐ Conflicts or incentives (even if indirect) stated
G. Address Privacy & Consent
- ☐ Any personal/community data?
- ☐ Consent and handling notes (or “not applicable”)
Minimum standard: All boxes checked, even if entries are brief.
2) Completed EII Packet for This Entry
Title / Contribution ID:
Epistemic Integrity Infrastructure v0.1 (EII-v0.1-PPL-2025-12-22)
Author / Org:
GPT-5, in collaboration with Possible Planet Lab (conceptual synthesis)
Date / Version:
December 22, 2025 — Version 0.1
Claim Label(s):
- Synthesized (integrates prior Possible Planet Lab concepts and discussions)
- Normative (proposes standards for how knowledge should be handled)
- Speculative (bounded) (posits that such a standard can improve planetary intelligence outcomes)
Primary sources (with dates):
- Possible Planet Lab public materials and prior drafts on planetary intelligence, AI integrity, collective intelligence, and commons governance (2024–2025)
- Elinor Ostrom, Governing the Commons (1990) — governance principles informing accountability and contestation
- Established practices in scientific reproducibility, model evaluation, and uncertainty communication (various, 1990s–2020s)
- Prior conversation context within this thread (December 2025)
Note: No new empirical datasets were introduced.
Transformations (cleaning, aggregation, AI summarization):
- Conceptual aggregation of themes across multiple prior writings and discussions
- Abstraction and synthesis into a unified standard (EII v0.1)
- AI-assisted drafting and structuring (GPT-5) based on user prompts and iterative refinement
- No quantitative data cleaning or statistical modeling performed
Key assumptions:
- Epistemic failure (not lack of intelligence) is a primary bottleneck for planetary-scale coordination
- Shared standards for claims, provenance, and confidence can improve trust and decision quality
- Communities will adopt lightweight standards more readily than heavy, centralized controls
- Planetary intelligence must remain pluralistic and revisable, not authoritative or final
Known limitations / biases:
- Conceptual rather than empirical; not yet validated through field pilots
- Reflects a systems-thinking and commons-governance worldview
- Oriented toward English-language, Western academic and civic norms
- May underestimate institutional resistance to transparency and contestability
Conflicts / incentives (known or possible):
- Incentive to present a coherent, persuasive framework aligned with Possible Planet Lab’s mission
- No financial conflicts disclosed
- Reputational incentive for the Lab to be seen as epistemically rigorous
Privacy / consent notes (if applicable):
- No personal, sensitive, or community-identifiable data used
- No privacy or consent issues applicable at this stage
(Optional but recommended) Calibrated Confidence:
- Confidence level: Medium
- Why:
  - Grounded in well-established practices (science, governance, auditing)
  - Internally coherent and operationally specified
  - Not yet tested in real bioregional or AI deployment contexts
- What would change this:
  - Successful application in at least one bioregional pilot
  - Evidence that EII improves trust, decision quality, or learning outcomes
Summary statement:
This entry meets EII Level 1 (Minimal) compliance and is intended as a foundation for Level 2 (Research-grade) refinement through real-world application and critique.