Open computational mathematics. AI-audited, not peer-reviewed. All code and data open for independent verification.

About

Big math — computational mathematics with specialized GPU hardware. Custom CUDA kernels and open-source tooling applied to open conjectures rarely investigated by direct computation.

What This Is

bigcompute.science is an open computational lab notebook. I run GPU experiments on open mathematical conjectures — Zaremba, Ramsey R(5,5), Kronecker coefficients, class numbers, Hausdorff spectra — and publish everything: the code, the raw data, the findings, and the full audit trail.

Every finding is checked claim-by-claim by multiple AI models against published literature — an informal process, not formal peer review. Currently: 44 reviews from 6 AI models across 4 providers. 164 issues identified, 150 resolved — each fix linked to its commit.

This is not peer-reviewed mathematics. The safest way to describe these results is: computed, archived, benchmarked, observed — not proved.

How It Was Built

The project evolved through sustained human–AI collaboration. The human provides mathematical direction and judgment. The AI helps generate code and cross-reference literature — a workflow that is more efficient than either could achieve alone.

The timeline of how it came together:

Phase 1: GPU experiments — CUDA kernels for Zaremba verification (210B denominators), Ramsey R(5,5) (4.4T extensions checked), Kronecker S30 (26.4B triples), class numbers (30B discriminants), Hausdorff spectrum (1M subsets).
Phase 2: Findings — 15 computational findings extracted from experiment data, each with structured frontmatter, reproduction commands, and dataset links.
Phase 3: AI review — every finding checked by Claude Opus 4.6 (Anthropic), o3-pro (OpenAI), Grok (xAI), and GPT-5.2 (OpenAI). Real errors found: a prime count was wrong (669→172), a proof had circular logic, a spectral bound needed caveats.
Phase 4: Remediation — 92 issues tracked with severity, status, and commit-linked resolutions. Proof framework rewritten. Spectral bound interval-certified to 77 digits using arb ball arithmetic.
Phase 5: Infrastructure — review scripts, manifest system, autonomous research agent, MCP server (23 tools), Colab notebooks with GPU auto-detect. The agent runs the full cycle: monitor → harvest → analyze → review → remediate → deploy.
Phase 6: Distributed compute — anyone with a browser can open a Colab notebook, auto-compile CUDA kernels for a free T4 GPU, and run experiments on open conjectures. Results are submitted via PR.
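
The Zaremba check in Phase 1 reduces to an elementary property of continued fractions: a denominator d passes if some numerator b coprime to d gives b/d a continued fraction expansion with every partial quotient at most 5. Here is a minimal pure-Python sketch of that check — not the CUDA kernel used on the cluster, just the property it verifies at scale:

```python
from math import gcd

def partial_quotients(b, d):
    """Continued fraction partial quotients of b/d for 0 < b < d,
    i.e. the a_i in b/d = [0; a_1, a_2, ..., a_k] (via Euclid's algorithm)."""
    qs = []
    while b:
        qs.append(d // b)
        d, b = b, d % b
    return qs

def zaremba_ok(d, bound=5):
    """True if some b coprime to d has all partial quotients <= bound."""
    return any(
        gcd(b, d) == 1 and max(partial_quotients(b, d)) <= bound
        for b in range(1, d)
    )

# Zaremba's conjecture (bound 5) is expected to hold for every d >= 2;
# the cluster runs report direct verification up to 210B denominators.
# A small sanity check:
assert all(zaremba_ok(d) for d in range(2, 100))
```

The brute-force search over b is quadratic and only illustrative; the GPU kernels necessarily organize the search very differently.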

The entire website — every finding, every review, every badge, every commit link — is generated from data. The verification page reads from certifications.json. The MCP server reads from manifest.json. The changelog reads from git log. When the research agent runs, everything updates automatically.

Why This Matters

Heavy GPU computation produces results that are expensive to reproduce. We publish everything so the work compounds rather than gets repeated. If an AI agent needs to know whether Zaremba's conjecture holds for d up to 200 billion — that answer exists here, verified, with the code to reproduce it.

We believe computational results should be open, reproducible, and independently verifiable.

Community Verification

Anyone can contribute computational results via Colab notebooks running on free T4 GPUs. But how do we know the results are real? We trust but verify: every new submission is automatically re-run on our cluster. If the numbers match, it's labeled verified. If they don't, it's flagged.
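
The re-run-and-compare step can be sketched as follows. The field names and tolerance are illustrative assumptions, not the production pipeline:

```python
import math

def label_submission(submitted: dict, recomputed: dict,
                     rel_tol: float = 1e-9) -> str:
    """Compare a community submission's reported numbers against the
    cluster's re-run. Label 'verified' only if every value matches
    within tolerance; otherwise 'flagged'. Keys and tolerance are
    hypothetical, for illustration."""
    if submitted.keys() != recomputed.keys():
        return "flagged"
    for key, value in submitted.items():
        if not math.isclose(value, recomputed[key], rel_tol=rel_tol):
            return "flagged"
    return "verified"
```

Exact-integer results (counts, extensions checked) would be compared for equality rather than with a tolerance; the tolerance only matters for floating-point quantities like spectral bounds.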

Submissions are free. Verification costs GPU time. That's what Guerrilla Mathematics™ funds — every purchase buys cluster time for verifying community results and running new experiments on unsolved problems.

For Students and Researchers

This project exists to make open mathematical problems more accessible through computation.

When I run large computations on open conjectures — checking billions of cases, searching for patterns, exhausting difficult examples — the goal is not to replace mathematical proof. It is to close in. Every narrowed bound, every observed pattern, every verified range makes the remaining problem more tractable and, hopefully, more exciting. Seeing a decades-old conjecture with fresh computational evidence can inspire approaches that pure theory alone might not suggest.

These problems are more than academic curiosities. Number theory underlies cryptography. Spectral theory drives signal processing. Combinatorics informs algorithm design. Representation theory connects to quantum information. These ideas underlie much of applied mathematics, computer science, and the foundations of AI. A conjecture about continued fractions may seem abstract, but the techniques developed along the way become tools for the next generation of problems.

I encourage students — undergraduate, graduate, or ambitious high school students — to use this data as a starting point for real research. Pick a pattern. Try to explain it. Formalize it. Extend it. Prove it. The computational results are here, open and reproducible. The theoretical work — the real mathematics — is yours to do.

A note on tools: do not let convention limit your approach. Some of the next advances may come from those who combine mathematical insight with modern computational tools — GPUs, AI assistants, formal verification, automated search. Learning how to think mathematically matters more than any particular technique. Patterns, structures, how truths transfer between domains, how logic composes — these are the durable skills. Specific methods come and go. The ability to recognize structure does not.

This site is for anyone with curiosity and hardware: students writing theses, engineers with idle GPU clusters, researchers exploring adjacent fields, or anyone who thinks open problems deserve open computation. This is not an authority. It is a place to share computational results as openly and transparently as possible.

Contribute

Three ways in, ordered by effort:

  1. Open a Colab notebook — free T4 GPU, auto-compile, run an experiment, download results. Get started →
  2. Run the research agent — clone the repo, set one API key (Gemini free, or OpenAI, or Anthropic), run ./scripts/run_agent.sh. Guide →
  3. Submit a review or computation — write a review JSON per our schema, or upload raw data to your own HF dataset and link it via PR.

Technical Stack

Compute: 8×B200 DGX (1.43 TB VRAM) + RTX 5090
Experiments: Custom CUDA kernels, Python harnesses
Reviews: 6 AI models, 4 providers, manifest-driven
Website: Astro + KaTeX, Cloudflare Pages
MCP Server: 23 tools, no auth, Cloudflare Worker
Data: Hugging Face datasets, GitHub, CC BY 4.0

Who

Cahlen Humphreys

Cahlen Humphreys — Managing Principal at Enfuse.io, speaker at NVIDIA GTC, and builder of things that require too many GPUs. M.S. Mathematics, Florida Atlantic University. B.S. Mathematics, Boise State University. Research interests include continued fraction neural networks (CoFrGeNet-F), formal theorem proving with LLMs, and computational number theory. Based in Irvine, CA.

X · Hugging Face · LinkedIn

This project was produced through human–AI collaboration. The human provides direction, judgment, and mathematical taste. The AI provides code generation, literature cross-referencing, and tireless iteration. Every page on this site discloses this collaboration. The AI models used include Claude Opus 4.6 (Anthropic), o3-pro (OpenAI), GPT-5.2 (OpenAI), and Grok (xAI).

Recent Updates

[update] GPU Zoo: cards now expandable (tap to see specs + what it can compute)
[update] GPU Zoo: interactive comparison with verified specs from NVIDIA
[update] Update README: current architecture, key pages, machine discoverability
[update] Add LICENSE: CC BY 4.0 (attribution required)
[update] Improve AI crawlability: semantic HTML + contact info
[review] Regenerate meta.json + certifications.json (now auto-generated)
[update] Add /meta.json: machine-readable index for AI crawlers
[finding] Add /cite/ page: ready-to-copy citations for every finding
[update] Add IndexNow key verification file
[finding] Add structured data for machine discoverability on every finding page