On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell. FAccT (2021)

Synthesis Framing: From Opposition to Boundary

Watch how the disagreement shifts from 'models are dangerous' vs. 'models are safe' to 'where should responsibility be enforced?' This reframing preserves every concern on both sides while enabling productive dialogue.

Core Claims

These are the load-bearing claims that structure the paper's argument:

Claim A
Foundational · Importance: 10/10

Large language models encode biases present in training data and perpetuate them in generated text

Claim B
Foundational · Importance: 9/10

The environmental costs of training and deploying large language models are substantial and often unaccounted for

Claim C
Foundational · Importance: 9/10

Language models create an illusion of meaning and understanding without actually possessing either

Claim D
Downstream · Importance: 8/10

The scale and design choices of these systems directly cause social and environmental harms

Claim E
Downstream · Importance: 7/10

Current development practices misallocate research resources toward scaling rather than toward addressing fundamental limitations

⚠️ Risk Signal

This paper sits at a known disagreement boundary between technical capability assessment and normative evaluation of research practices. It explicitly challenges foundational assumptions of the scaling paradigm.