Batch N°01 · May 2026

Training a
Community
World Model.

Organizational structure and design are rapidly evolving in the age of LLMs. In “From Hierarchy to Intelligence,” Jack Dorsey and Roelof Botha argue that AI can replace the hierarchy companies have used for two thousand years to coordinate themselves. Block is building a company world model, trained on its own operations. We’re building one for a community, trained on the community’s own activity.

A community world model is a model tuned to help a specific group coordinate toward positive-sum outcomes that collective action problems would otherwise prevent. The broader vision: any group of people, however loosely defined, should be able to train a model on their own behavior, even when their coordination isn’t fully legible to a state or a balance sheet. This co-op is an attempt to build one on our own compute.

Alignment isn’t handed down. It accumulates from below: humans aligning to each other in service of goals that align with all of humanity. A model that helps a specific group coordinate is the smallest honest instance of that work.

Can an AI agent provably help a non-hierarchical group create positive-sum outcomes?

This thesis is bigger than any one firm. Many kinds of communities should be able to build trust in their own models: transparent about training, governed by users, tuned for outcomes that compound across groups. Not many are building that infrastructure yet. Five sketches of what it could look like →

The first batch runs on a single NVIDIA GB300 workstation at The Grove, in Greenpoint. This is new territory for finetuning and post-training. As far as we can tell, no cohort has trained and owned its own coordination model on its own compute before. We're finetuning open-weight frontier models on the Shape Rotator cohort's activity, running reinforcement learning on top, and building on open-source agentic harnesses. The question we're testing: whether a cohort's own behavior is enough signal to turn general-purpose models into ones that help that cohort coordinate. Specific base models and RL methods are still open. A position paper describing the experiment and its framing is being submitted to the Pluralistic Alignment workshop at ICML 2026. Progress posts here as the batch runs. How to contribute is below.
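The finetuning step can be sketched, under heavy assumptions about what "cohort activity" looks like on disk. The snippet below converts a hypothetical chat log into chat-format JSONL examples of the kind most open-weight SFT stacks accept. The message schema, context window, and prompt shape are all illustrative, not the co-op's actual pipeline.

```python
import json

def activity_to_sft_examples(messages, window=2):
    """Turn a cohort chat log into supervised finetuning pairs.

    `messages` is a list of {"author": str, "text": str} dicts in
    chronological order (a hypothetical schema). Each example pairs
    the preceding `window` messages as the prompt with the next
    message as the completion.
    """
    examples = []
    for i in range(window, len(messages)):
        context = messages[i - window:i]
        target = messages[i]
        prompt = "\n".join(f'{m["author"]}: {m["text"]}' for m in context)
        examples.append({
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant",
                 "content": f'{target["author"]}: {target["text"]}'},
            ]
        })
    return examples

# Illustrative log; real cohort data and its format are unspecified.
log = [
    {"author": "ada", "text": "Who can host Thursday's training run?"},
    {"author": "ben", "text": "I can, if someone brings the drives."},
    {"author": "cyd", "text": "Drives covered. Let's lock it in."},
]
for ex in activity_to_sft_examples(log):
    print(json.dumps(ex))
```

A real pipeline would also need consent filtering, deduplication, and a reward signal for the RL stage; none of that is shown here.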

Contribute

Contributions pay for hardware, electricity, and cooling. Members get first access to every model this batch produces. Contributors will have their name etched into the hardware, the finetuned model, and with a little bit of luck, history itself.

Treasury · Last update · % to goal
A. Onchain · Ethereum Preferred

Send ETH, stablecoins, or any ERC-20 to the cooperative treasury. Contributions are publicly visible onchain. For private donations, route through Railgun ↗.

Zcash · Shielded address forthcoming
0x742d · · · f44e

Unfortunately, donations to this batch are not tax-deductible.

B. Traditional · Bank transfer & Wire Fiat

Contact james@generalsemantics.nyc for bank routing information.

Green Point · Compute · Co-op · Greenpoint · Brooklyn