Algorithm Bias

“algorithmic bias,” “algorithmic reinforcement,” or hidden prejudices in code

Black Memes Matter: #LivingWhileBlack with Becky and Karen

Williams analyzes how memes like #LivingWhileBlack, BBQ Becky, and Karen operate as cultural critique in digital spaces, exposing and resisting White surveillance and racial dominance while providing Black communities with tools for expression and agency. She argues that these memes do more than humorously depict everyday racism—they disrupt dominant narratives and highlight systemic racial inequalities online and offline.


The Filter Bubble: What the internet is hiding from you

Eli Pariser’s The Filter Bubble argues that personalization algorithms on platforms like Google and Facebook selectively curate what we see online based on our data, creating “filter bubbles” that limit exposure to diverse information and reinforce existing beliefs. This invisible tailoring of content shapes individual worldviews, can foster intellectual isolation, and has broader implications for society, democracy, and public discourse.
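The feedback loop Pariser describes can be sketched in a few lines. This toy recommender (the catalog, `recommend` function, and topic labels are illustrative assumptions, not anything from the book) ranks articles by how often the user has already clicked their topic, so a single early click starts crowding out other topics:

```python
# Toy sketch of a personalization feedback loop ("filter bubble").
# Catalog, topics, and the recommend() heuristic are illustrative only.
from collections import Counter

CATALOG = [
    ("a1", "politics-left"), ("a2", "politics-left"),
    ("a3", "politics-right"), ("a4", "politics-right"),
    ("a5", "science"), ("a6", "science"),
]

def recommend(click_history, n=3):
    """Rank catalog items by how often the user clicked that topic before."""
    topic_counts = Counter(topic for _, topic in click_history)
    # Higher past-click counts rank first; ties keep catalog order.
    return sorted(CATALOG, key=lambda item: -topic_counts[item[1]])[:n]

# After one click on a left-leaning article...
history = [("a1", "politics-left")]
feed = recommend(history)
topics = [topic for _, topic in feed]
# ...the feed now leads with that same topic, narrowing exposure.
```

Each click skews the next ranking further toward the same topic, which is the self-reinforcing curation the book warns about.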


Custodians of the Internet

Custodians of the Internet examines how major social platforms decide what content stays up and what gets removed, revealing that moderation is shaped by opaque policies, economic priorities, cultural norms, and political pressures. The author highlights that these hidden choices, often made by a combination of algorithms and laborers behind the scenes, have profound effects on free expression, public discourse, and social norms.


On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

Bender and colleagues critique the trend toward ever-larger language models (LLMs), arguing that scaling up these models amplifies serious environmental, ethical, and social harms without solving core problems of linguistic understanding or accountability. They call for more responsible research practices, including careful dataset curation, evaluation of societal and ethical impact, and consideration of alternatives to ever-larger models.


Emotional consequences and attention rewards: the social effects of ratings on Reddit

Davis and Graham analyze how binary rating features (upvotes/downvotes) on Reddit influence users’ emotional expression and engagement, finding that upvotes tend to predict positive sentiment while downvotes predict negative emotion, yet downvoted content often generates higher engagement. The study frames ratings as affordances that function as symbolic markers of community norms, impacting both affect and attention patterns.
