Training

Contribute compute to Pioneer · learn in your language · earn SOMA.

Status: IDLE · locale: en · epoch: 0
Bursts completed: 0 (live)
Steps: 0 (live)
Loss: 0.000 (live)
SOMA earned: 0.000000 (live)

Settings

Tune data mix before starting

Live web data ratio: 70% live / 30% synthetic

Higher = faster adaptation to current events. Lower = more stable, but slower to incorporate new knowledge.
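A minimal sketch of how such a mix could be applied per example, assuming simple Bernoulli sampling between two pools; the names LIVE_RATIO, liveWebSource, and syntheticSource are illustrative, not part of the Pioneer API:

```ts
// Illustrative only: sample each training example from the live-web pool
// with probability LIVE_RATIO, otherwise from the synthetic pool.
const LIVE_RATIO = 0.7; // the slider value: 70% live / 30% synthetic

type Sample = { text: string; source: "live" | "synthetic" };

function nextSample(
  liveWebSource: () => string,
  syntheticSource: () => string,
): Sample {
  // Over many steps the realized mix converges to the configured ratio,
  // with no need to pre-shuffle a combined dataset.
  if (Math.random() < LIVE_RATIO) {
    return { text: liveWebSource(), source: "live" };
  }
  return { text: syntheticSource(), source: "synthetic" };
}
```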

Detected locale: en

Your neuron will claim info-net slots in this language (to keep cross-region training fair). Change your OS locale to train in a different language.
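One hypothetical shape a locale-keyed slot claim could take, assuming the aggregator exposes an HTTP endpoint; the function name, endpoint URL, and payload are all invented for illustration:

```ts
// Hypothetical sketch: claim a training slot tagged with the detected locale,
// so each language's share of slots can be bounded by the aggregator.
async function claimSlot(locale: string): Promise<{ slotId: string }> {
  const res = await fetch("https://aggregator.example/slots/claim", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ locale }),
  });
  if (!res.ok) throw new Error(`slot claim failed: ${res.status}`);
  return res.json();
}
```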

Runtime

RAM in use: 0 MB
Throughput: 0.0 tokens/sec
Submissions accepted: 0
Last val loss: 0.000

What you're training

Every burst (50 training steps) downloads the current Pioneer baseline (~97 MB), trains on web-curated academic content in your language, computes a weight delta, quantizes it (~55 MB), and uploads it to the federated aggregator. Your delta is merged into the global Pioneer model every 10 minutes. No separate worker install is required; everything runs inside this full-node app.
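A sketch of one burst as described above. Every function passed in (downloadBaseline, trainSteps, quantizeDelta, uploadDelta) is a stand-in; only the numbers (50 steps, ~97 MB download, ~55 MB upload, 10-minute merge window) come from this page:

```ts
// Illustrative burst loop: pull the shared baseline, train locally,
// then upload only the quantized difference in weights.
type Weights = Float32Array;

async function runBurst(
  downloadBaseline: () => Promise<Weights>, // current global model, ~97 MB
  trainSteps: (w: Weights, steps: number) => Promise<Weights>,
  quantizeDelta: (delta: Float32Array) => Uint8Array, // ~55 MB after quantization
  uploadDelta: (payload: Uint8Array) => Promise<void>,
): Promise<void> {
  const baseline = await downloadBaseline();
  const updated = await trainSteps(baseline, 50); // one burst = 50 steps

  // Delta = locally updated weights minus the shared baseline. Only this
  // difference is uploaded; the aggregator merges all nodes' deltas into
  // the global Pioneer model every 10 minutes.
  const delta = new Float32Array(updated.length);
  for (let i = 0; i < updated.length; i++) {
    delta[i] = updated[i] - baseline[i];
  }
  await uploadDelta(quantizeDelta(delta));
}
```

Uploading a quantized delta rather than full weights is what keeps the per-burst upload (~55 MB) smaller than the baseline download (~97 MB).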

Connecting...