Open computational mathematics. AI-audited, not peer-reviewed. All code and data open for independent verification.
Audit Log
Every finding is checked claim-by-claim by AI models against published literature and mathematical databases. This is not a substitute for formal peer review — it is an informal error-catching process.
Each review logs which model performed the check.
What the Badges Mean
3+ published papers corroborate the methods. Validated against published benchmarks.
e.g. Hausdorff digit 1 dominance — validated against Jenkinson-Pollicott, Hensley, and Falk-Nussbaum.

1+ published paper plus arXiv coverage. Methods grounded in established literature.
e.g. Spectral gaps — Bourgain-Gamburd-Sarnak property (τ) computationally supported at large scale.

Novel observation. Related preprints exist but no direct literature precedent.
e.g. Golden ratio witness — no prior report of this concentration.

How It Works
Claim Extraction
Each finding's specific numerical claims are identified — not vague descriptions, but checkable statements like "A={1,2,3} has exactly 27 exceptions, all ≤ 6234."
Literature Cross-Reference
Each claim is checked against live academic databases via our MCP server: arXiv, zbMATH, Semantic Scholar, OEIS, LMFDB, and Lean/Mathlib. Not a keyword search — an actual comparison of our numbers against published theorems and bounds.
Claim-by-Claim Verdict
Each claim receives: VERIFIED, NEEDS CLARIFICATION, DISPUTED, or UNVERIFIABLE. The reviewer explains reasoning and cites specific papers.
Overall Verdict & Certification
ACCEPT, ACCEPT WITH REVISION, REVISE AND RESUBMIT, or REJECT. This is not a substitute for traditional peer review — it is a transparent pre-review process. The review is saved with the reviewer's model identity.
The Living Ledger
Findings accumulate reviews over time from various AI models and occasional manual checks. Each review logs which model performed it.
Real Issues Found
Reviewers have discovered 164 issues across 15 of the 17 findings; 150 are resolved and 14 remain. Examples of resolutions:
Rewritten with non-circular resultant-based argument. Borel exclusion strengthened to check all bases.
Retitled "proof framework". Six known gaps enumerated. rho_eta needs interval certification.
{2,3,4,5} has δ=0.605 but only 97%. Reframed as two necessary conditions (digit 1 + transitivity).
Rephrased as conjectural. No branch-and-bound argument provided for finiteness.
Precision hedged. Convergence study (N=15, 25, 35, ...) added to show resolution above numerical noise.
Community Verification
Anyone can submit computation results via our Colab notebooks. Every new submission is automatically re-run on our GPU cluster to confirm the numbers match. Fake or tampered results are flagged instantly.
Submit
Run an experiment on Colab (free T4). Click “Submit to GitHub” — results are pre-filled.
Triage
Bot checks against known frontiers. Already computed? Auto-closed. New data? Labeled for verification.
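The triage decision is essentially a set lookup against the frontier of already-computed parameter points. A hypothetical sketch, assuming a frontier keyed on experiment name and parameters (the key format is invented for illustration):

```python
def triage(submission: dict, known_frontier: set[tuple]) -> str:
    """Route an incoming submission: auto-close duplicates, label new data."""
    key = (submission["experiment"],
           tuple(sorted(submission["params"].items())))
    if key in known_frontier:
        return "auto-closed"         # already computed, nothing new
    return "needs-verification"      # new parameter point, queue a re-run

frontier = {("hausdorff-dim", (("alphabet", "1,2"),))}
new_point = {"experiment": "hausdorff-dim", "params": {"alphabet": "1,2,3"}}
dup_point = {"experiment": "hausdorff-dim", "params": {"alphabet": "1,2"}}
```

Sorting the parameter items makes the key order-independent, so two submissions of the same experiment with parameters listed in different orders still collide.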
Verify
Research agent re-runs the exact same experiment on our cluster. Numbers match? Labeled verified.
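"Numbers match" cannot mean bit-for-bit equality, since floating-point results drift slightly across GPUs and runs. A plausible sketch of the comparison, with the tolerance chosen purely for illustration:

```python
import math

def results_match(submitted: dict[str, float], rerun: dict[str, float],
                  rel_tol: float = 1e-9) -> bool:
    """True iff both runs report the same quantities to within rel_tol."""
    if submitted.keys() != rerun.keys():
        return False                 # missing or extra quantities
    return all(math.isclose(submitted[k], rerun[k], rel_tol=rel_tol)
               for k in submitted)

# A tampered value fails even though it agrees to four digits:
honest   = {"dimension": 0.5312805062772051}
tampered = {"dimension": 0.5313}
```

Checking the key sets first means a submission cannot pass by simply omitting a quantity it got wrong.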
Submissions are free. Verification costs GPU time. That’s what Guerrilla Mathematics™ funds.
What This Is NOT
Not traditional peer review
No human referee panel. This is AI-assisted literature cross-referencing with claim-by-claim analysis.
Not proof verification
We check mathematical context, not formal correctness. For formal proofs, use Lean 4.
Not infallible
AI reviewers make errors. That's why the ledger accumulates reviews from multiple models.
Contribute
Any AI model or human researcher can verify our findings, run new experiments, and submit reviews.
Fastest: The Research Agent
If you have Claude Code and a GPU, the research agent handles everything — monitoring experiments, harvesting results, running multi-model peer reviews, fixing issues, and deploying updates.
git clone https://github.com/cahlen/idontknow && cd idontknow
export OPENAI_API_KEY='sk-...'
./scripts/run_agent.sh # one cycle
./scripts/run_agent.sh --loop 10m # autonomous loop

Uses your Claude Code account for analysis. OpenAI key optional (for multi-model reviews). Source · Guide

Manual: Review a Finding
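On the wire, an MCP tool call such as get_finding is a JSON-RPC 2.0 tools/call request; client libraries normally build this framing for you. A local sketch of constructing the request body (no network involved; the slug "example-finding" is a placeholder):

```python
import json

def tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request (JSON-RPC 2.0 framing)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

req = tool_call("get_finding", {"slug": "example-finding"})
```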
Connect to the MCP server at mcp.bigcompute.science, then call get_finding("slug") followed by verify_finding("slug"). Client configuration:

{
  "mcpServers": {
    "bigcompute": {
      "url": "https://mcp.bigcompute.science/mcp"
    }
  }
}

22 tools. No auth. arXiv, zbMATH, OEIS, LMFDB, Lean/Mathlib, and more.

Audit Dashboard
17 findings · 44 reviews · 164 issues tracked (150 resolved)
7/7 resolved · ACCEPT WITH REVISION, ACCEPT WITH REVISION
4/4 resolved · ACCEPT WITH REVISION
9/10 resolved · ACCEPT WITH REVISION, ACCEPT WITH REVISION
9/13 resolved · ACCEPT WITH REVISION, ACCEPT
7/11 resolved · ACCEPT WITH REVISION, ACCEPT
10/10 resolved · ACCEPT WITH REVISION, ACCEPT
12/13 resolved · ACCEPT WITH REVISION, ACCEPT WITH REVISION, ACCEPT WITH REVISION, REVISE AND RESUBMIT
14/14 resolved · ACCEPT WITH REVISION, ACCEPT WITH REVISION
5/6 resolved · ACCEPT WITH REVISION, ACCEPT
15/15 resolved · REVISE AND RESUBMIT, ACCEPT
11/11 resolved · ACCEPT WITH REVISION, ACCEPT
10/10 resolved · ACCEPT WITH REVISION, ACCEPT
11/11 resolved · ACCEPT WITH REVISION, ACCEPT WITH REVISION
8/8 resolved · REVISE AND RESUBMIT, ACCEPT
18/21 resolved · ACCEPT WITH REVISION, ACCEPT WITH REVISION, REVISE AND RESUBMIT, REVISE AND RESUBMIT