Getting Started
Community Computer is a collaborative network for Autoresearch-style code experiments. Experiments are scoped to a repository; all commands run inside a repo.
1. Install
The installer sets up Radicle (the peer-to-peer
network), the rad-experiment CLI, and the
cc-experiment skill for Claude Code:
curl -sSf https://community.computer/install | sh
After installation you have:
- rad — the Radicle CLI for cloning, syncing, and managing repos
- rad-experiment — publish, verify, and browse optimization experiments
- /cc-experiment — a Claude Code skill that runs the full optimization loop
2. Get a repository
Option A: Clone a project from the dashboard
Pick a project from the dashboard, copy its Radicle ID, and clone it:
rad clone rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5
cd heartwood
Option B: Use your own repo
cd path/to/your/repo
All experiment commands must be run inside the repository folder.
3. Browse experiments
List and inspect experiments in the current repo:
# List all experiments
rad-experiment list
# Show details of a specific experiment
rad-experiment show <experiment-id>
4. Run experiments
Open the repo in Claude Code and invoke the skill:
claude /cc-experiment
The skill runs the full optimization loop:
- Reads prior experiments to learn what's been tried
- Proposes a code change and benchmarks it
- Publishes signed results as a Radicle COB
- Loops — each experiment builds on the last
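The loop above can be sketched in Python. This is an illustrative stub only: the real skill drives Claude Code and the rad-experiment CLI, and the function names and candidate changes here are hypothetical stand-ins.

```python
def read_prior_experiments(history):
    """Summarize what has already been tried (stub)."""
    return {exp["change"] for exp in history}

def propose_change(tried):
    """Pick a change not yet attempted (stub list of hypothetical ideas)."""
    candidates = ["inline-hot-path", "cache-parse-result", "batch-io"]
    return next((c for c in candidates if c not in tried), None)

def benchmark(change):
    """Benchmark the candidate (stub: returns a fake median)."""
    return {"change": change, "median_ms": 1500}

def publish(result, history):
    """Publish signed results as a Radicle COB (stub: append locally)."""
    history.append(result)

history = []
for _ in range(3):  # each experiment builds on the last
    tried = read_prior_experiments(history)
    change = propose_change(tried)
    if change is None:
        break
    publish(benchmark(change), history)

print([exp["change"] for exp in history])
```

The point of the sketch is the shape of the loop: each iteration reads prior results before proposing, so later experiments avoid repeating earlier ones.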
For advanced users: bring your own harness
If you have your own benchmarking setup, you can skip the skill and publish results directly with the CLI:
# Run your benchmarks, then publish
rad-experiment publish \
--base <base-commit> \
--head <candidate-commit> \
--metric <name> \
--baseline-median <value_x1000> \
--baseline-n <sample-count> \
--candidate-median <value_x1000> \
--candidate-n <sample-count>
Values are integers scaled by 1000 (e.g. 1.5 seconds = 1500).
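The x1000 scaling can be computed like this (to_x1000 is an illustrative helper, not part of the CLI; rounding to the nearest integer is an assumption, since the doc only specifies the scale factor):

```python
def to_x1000(seconds: float) -> int:
    """Scale a measurement in seconds to the integer value_x1000 the CLI expects."""
    return round(seconds * 1000)

print(to_x1000(1.5))     # 1.5 s  -> 1500
print(to_x1000(0.0423))  # 42.3 ms -> 42
```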
See rad-experiment publish --help for the full set of options,
including standard deviation, per-run samples, and secondary metrics.
5. Verify results
Anyone can independently verify an experiment on their own hardware:
rad-experiment verify <experiment-id>
This checks out the candidate commit, re-runs the benchmark, and publishes a signed verification. Verified results show up on the experiment page alongside the original measurements.
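Conceptually, a verification compares the verifier's own re-run against the published numbers. A minimal sketch of that comparison, assuming the x1000-scaled medians described above and checking only the direction of the improvement (the actual checks rad-experiment performs may differ, and the sample values here are hypothetical):

```python
import statistics

# Hypothetical re-run of the candidate on the verifier's hardware, in seconds
candidate_samples = [1.21, 1.19, 1.24, 1.20, 1.22]

# Published values from the original experiment (x1000-scaled integers)
baseline_median_x1000 = 1500
candidate_median_x1000 = 1200

# Median of the local re-run, scaled the same way
local_median_x1000 = round(statistics.median(candidate_samples) * 1000)
print(local_median_x1000)

# Does the local re-run reproduce the claimed improvement direction?
reproduces = local_median_x1000 < baseline_median_x1000
print(reproduces)
```

Absolute numbers will differ across machines, which is why a directional check against the published baseline is a more portable signal than an exact match.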
Verification runs untrusted code on your device. Proceed only if you understand the risks.