Contribute compute to Pioneer · learn in your language · earn SOMA.
Tune data mix before starting
Higher = faster adaptation to current events. Lower = more stable, but slower to incorporate new knowledge.
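As a rough illustration, the knob might behave like the sampling sketch below. The parameter name `recency_weight` and the two-source blend are assumptions for illustration, not the app's actual config schema.

```python
# Hypothetical sketch of the data-mix knob. The name recency_weight and
# the two-source blend are assumptions, not the app's actual schema.
import random
from dataclasses import dataclass

@dataclass
class DataMix:
    recency_weight: float  # 0.0 = all stable corpus, 1.0 = all current events

    def sample_source(self) -> str:
        """Pick which corpus the next training example is drawn from."""
        if random.random() < self.recency_weight:
            return "current_events"  # higher setting: adapts faster
        return "stable_corpus"       # lower setting: steadier training

mix = DataMix(recency_weight=0.3)  # a lower setting: stabler, slower to update
print(mix.sample_source())
```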
Your neuron will claim info-net slots in this language (slot assignment keeps cross-region training fair). Change your OS locale to train in a different language.
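For illustration only, a node might derive its training language from the OS locale roughly as sketched below; the helper name and fallback are assumptions, not Pioneer's actual implementation.

```python
# Hypothetical sketch: map the OS locale to the language the neuron
# trains in. The helper name and English fallback are assumptions.
import locale

def detect_training_language() -> str:
    """Return a language code like 'de' from a locale like 'de_DE'."""
    loc = locale.getlocale()[0] or "en_US"  # fall back to English
    return loc.split("_")[0]

print(detect_training_language())  # e.g. 'en' unless the OS locale differs
```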
Every burst (50 training steps) downloads the current Pioneer baseline (~97 MB), trains on web-curated academic content in your language, computes a weight delta, quantizes it (~55 MB), and uploads it to the federated aggregator. Your delta is merged into the global Pioneer model every 10 minutes. No separate worker install is required: everything runs inside this full-node app.
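A minimal, self-contained sketch of one burst is below, using numpy arrays as stand-in weights. The helper names, the toy training loop, and the 8-bit quantization scheme are all assumptions; only the download, train, delta, quantize, upload sequence comes from the description above.

```python
# Toy end-to-end sketch of one burst. numpy arrays stand in for real
# model weights; train_steps and quantize_delta are illustrative
# assumptions, not the actual Pioneer worker code.
import numpy as np

STEPS_PER_BURST = 50  # burst length stated above

def train_steps(weights: dict, steps: int) -> dict:
    """Stand-in for local training: nudge each tensor slightly per step."""
    rng = np.random.default_rng(seed=0)
    out = {k: v.copy() for k, v in weights.items()}
    for _ in range(steps):
        for k in out:
            out[k] += 1e-3 * rng.standard_normal(out[k].shape).astype(np.float32)
    return out

def quantize_delta(delta: dict) -> dict:
    """Toy 8-bit symmetric quantization to shrink the upload."""
    q = {}
    for k, v in delta.items():
        scale = float(np.abs(v).max()) / 127.0
        if scale == 0.0:
            scale = 1.0
        q[k] = (np.round(v / scale).astype(np.int8), scale)
    return q

# One burst: baseline -> 50 local steps -> weight delta -> quantized payload.
baseline = {"layer0": np.zeros((4, 4), dtype=np.float32)}  # stands in for the ~97 MB download
trained = train_steps(baseline, STEPS_PER_BURST)
delta = {k: trained[k] - baseline[k] for k in baseline}
payload = quantize_delta(delta)  # this is what would be uploaded to the aggregator
```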