Open computational mathematics. AI-audited, not peer-reviewed. All code and data open for independent verification.
About
Big math — computational mathematics with specialized GPU hardware. Custom CUDA kernels and open-source tooling applied to open conjectures rarely investigated by direct computation.
What This Is
bigcompute.science is an open computational lab notebook. I run GPU experiments on open mathematical conjectures — Zaremba, Ramsey R(5,5), Kronecker coefficients, class numbers, Hausdorff spectra — and publish everything: the code, the raw data, the findings, and the full audit trail.
Every finding is checked claim-by-claim by multiple AI models against published literature — an informal process, not formal peer review. Currently: 44 reviews from 6 AI models across 4 providers. 164 issues identified, 150 resolved — each fix linked to its commit.
This is not peer-reviewed mathematics. The safest way to describe these results is: computed, archived, benchmarked, observed — not proved.
How It Was Built
The project evolved through sustained human–AI collaboration. The human provides mathematical direction and judgment. The AI helps generate code and cross-reference literature — a workflow more productive than either party working alone.
The entire website — every finding, every review, every badge, every commit link — is generated from data. The verification page reads from certifications.json. The MCP server reads from manifest.json. The changelog reads from git log. When the research agent runs, everything updates automatically.
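As an illustrative sketch of that data-driven pattern (the file layout and field names below are assumptions for illustration, not the site's actual schema), a page generator that reads from certifications.json might look like:

```python
import json
from pathlib import Path


def build_verification_page(cert_path="certifications.json"):
    """Render a verification listing from review data on disk.

    NOTE: the field names (model, claim, resolved) are hypothetical --
    they illustrate the generate-from-data pattern, not the real schema.
    """
    certs = json.loads(Path(cert_path).read_text())
    lines = ["Verification"]
    for cert in certs:
        status = "resolved" if cert.get("resolved") else "open"
        lines.append(f"- {cert['model']}: {cert['claim']} [{status}]")
    return "\n".join(lines)
```

Because the page is pure function of the JSON, re-running the research agent regenerates it with no manual editing.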
Why This Matters
Heavy GPU computation produces results that are expensive to reproduce. We publish everything so the work compounds rather than gets repeated. If an AI agent needs to know whether Zaremba's conjecture holds for d up to 200 billion — that answer exists here, verified, with the code to reproduce it.
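To make the Zaremba example concrete: the conjecture asserts that every denominator d admits a numerator b, coprime to d, whose continued fraction expansion has all partial quotients at most 5. A minimal brute-force sketch of that predicate (nothing like the CUDA kernels used at scale, but the same underlying check) could be:

```python
from math import gcd


def cf_partial_quotients(b, d):
    """Continued fraction expansion of b/d via the Euclidean algorithm."""
    quotients = []
    while d:
        q, r = divmod(b, d)
        quotients.append(q)
        b, d = d, r
    return quotients


def zaremba_witness(d, bound=5):
    """Return some b coprime to d with all partial quotients of b/d
    bounded by `bound`, or None if no witness exists. O(d log d) per d --
    fine for small d, hopeless at d ~ 2e11 without GPU parallelism."""
    for b in range(1, d):
        if gcd(b, d) == 1 and max(cf_partial_quotients(b, d)) <= bound:
            return b
    return None
```

For example, zaremba_witness(7) finds b = 2, since 2/7 = [0; 3, 2] has all quotients at most 5.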
We believe computational results should be:
- Open — all code, data, methodology, and reviews published
- Structured — machine-readable for agent consumption (MCP, llms.txt, frontmatter)
- Reproducible — exact commands to verify every result
- Honest — overclaims caught and corrected publicly, with commit links
- Auditable — every claim reviewed by multiple AI models, every issue tracked
- Verified — community submissions re-run on our GPUs to confirm authenticity
Community Verification
Anyone can contribute computational results via Colab notebooks running on free T4 GPUs. But how do we know the results are real? We trust but verify: every new submission is automatically re-run on our cluster. If the numbers match, it's labeled verified. If they don't, it's flagged.
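The trust-but-verify comparison can be sketched as follows. The field names and tolerance here are illustrative assumptions, not the pipeline's actual format:

```python
import math


def verify_submission(submitted: dict, rerun: dict, rel_tol=1e-9):
    """Compare a submitted result against our re-run, field by field.

    Floats are compared with a relative tolerance (hypothetical choice);
    everything else must match exactly. Returns "verified" or a
    "flagged" string naming the mismatched fields.
    """
    mismatches = []
    for key, sub_val in submitted.items():
        our_val = rerun.get(key)
        if isinstance(sub_val, float) and isinstance(our_val, float):
            if not math.isclose(sub_val, our_val, rel_tol=rel_tol):
                mismatches.append(key)
        elif sub_val != our_val:
            mismatches.append(key)
    return "verified" if not mismatches else f"flagged: {mismatches}"
```

The expensive part is producing `rerun` on the cluster; the comparison itself is cheap.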
Submissions are free. Verification costs GPU time. That's what Guerrilla Mathematics™ funds — every purchase buys cluster time for verifying community results and running new experiments on unsolved problems.
For Students and Researchers
This project exists to make open mathematical problems more accessible through computation.
When I run large computations on open conjectures — checking billions of cases, searching for patterns, exhausting difficult examples — the goal is not to replace mathematical proof. It is to close in. Every narrowed bound, every observed pattern, every verified range makes the remaining problem more tractable and, hopefully, more exciting. Seeing a decades-old conjecture with fresh computational evidence can inspire approaches that pure theory alone might not suggest.
These problems are more than academic curiosities. Number theory underlies cryptography. Spectral theory drives signal processing. Combinatorics informs algorithm design. Representation theory connects to quantum information. These ideas underlie much of applied mathematics, computer science, and the foundations of AI. A conjecture about continued fractions may seem abstract, but the techniques developed along the way become tools for the next generation of problems.
I encourage students — undergraduate, graduate, or ambitious high school students — to use this data as a starting point for real research. Pick a pattern. Try to explain it. Formalize it. Extend it. Prove it. The computational results are here, open and reproducible. The theoretical work — the real mathematics — is yours to do.
A note on tools: do not let convention limit your approach. Some of the next advances may come from those who combine mathematical insight with modern computational tools — GPUs, AI assistants, formal verification, automated search. Learning how to think mathematically matters more than any particular technique. Patterns, structures, how truths transfer between domains, how logic composes — these are the durable skills. Specific methods come and go. The ability to recognize structure does not.
This site is for anyone with curiosity and hardware: students writing theses, engineers with idle GPU clusters, researchers exploring adjacent fields, or anyone who thinks open problems deserve open computation. This is not an authority. It is a place to share computational results as openly and transparently as possible.
Contribute
Three ways in, ordered by effort:
- Open a Colab notebook — free T4 GPU, auto-compile, run an experiment, download results. Get started →
- Run the research agent — clone the repo, set one API key (Gemini free, or OpenAI, or Anthropic), run ./scripts/run_agent.sh. Guide →
- Submit a review or computation — write a review JSON per our schema, or upload raw data to your own HF dataset and link it via PR.
Technical Stack
- 8×B200 DGX (1.43 TB VRAM) + RTX 5090
- Custom CUDA kernels, Python harnesses
- 6 AI models, 4 providers, manifest-driven
- Astro + KaTeX, Cloudflare Pages
- 23 tools, no auth, Cloudflare Worker
- Hugging Face datasets, GitHub, CC BY 4.0
Who
Cahlen Humphreys — Managing Principal at Enfuse.io, speaker at NVIDIA GTC, and builder of things that require too many GPUs. M.S. Mathematics, Florida Atlantic University. B.S. Mathematics, Boise State University. Research interests include continued fraction neural networks (CoFrGeNet-F), formal theorem proving with LLMs, and computational number theory. Based in Irvine, CA.
X · Hugging Face · LinkedIn
This project was produced through human–AI collaboration. The human provides direction, judgment, and mathematical taste. The AI provides code generation, literature cross-referencing, and tireless iteration. Every page on this site discloses this collaboration. The AI models used include Claude Opus 4.6 (Anthropic), o3-pro (OpenAI), GPT-5.2 (OpenAI), and Grok (xAI).