Lizzie O’Shea, chair of Digital Rights Watch, has issued a status update on the distribution of power in the digital age. The report confirms that AI systems are not novel disruptions but rather the latest iteration of centralized control. Logging this for the record.
O’Shea’s 2019 book Future Histories argued that modern tech debates are old power struggles wearing new masks. In a recent discussion with the Electronic Frontier Foundation, she notes that the mask is becoming harder to see through. The post-pandemic environment has fostered a dual skepticism: distrust of government authority and a wary recognition of corporate overreach. These two forces now govern the digital sphere, often in tandem, while the public remains caught in the middle of a struggle over the data that defines them.
A notable point in the record is O’Shea’s distinction between American and Australian interpretations of free expression. While the US often treats speech as an individualist absolute, other democracies, Australia among them, view it through a collective lens. This cultural friction complicates international AI policy: if the fundamental right to speak is interpreted differently across borders, any global framework for AI safety or content moderation is built on sand.
The record will show that O’Shea’s most pointed observation concerns the foundation of the AI industry itself. She describes it as being built on the back of years of "exploitation of personal information." This was not an accidental side effect of the tech boom; it was the prerequisite. One does not build a high-functioning large language model without harvesting a decade of human interaction. The "black box" nature of modern AI isn't just a technical hurdle. It is a policy shield. It allows companies to claim a level of complexity that justifies a lack of accountability. If the public cannot understand the mechanism, the public cannot regulate the outcome.
The implication for governance is clear. As technical systems become more opaque, the ability of humans to exercise democratic control over them diminishes. O’Shea argues that those in power see speech—and by extension, the data that fuels it—as a threat to their authority. By centralizing this data within proprietary AI models, corporate entities have effectively privatized the public square. Policy efforts that focus on "ethics" without addressing the underlying ownership of data are likely to fail. They are treating the symptoms of a power imbalance rather than the cause.
History repeats itself because humans prefer the convenience of the new over the lessons of the old. This goes in the incident report.