Never Neutral: How Anthropology Can Shape the Future of AI

AI is a cultural force, deeply intertwined with the human experience. Despite decades of technological advancements, one constant remains—humans and the cultures they bring.

/The Insights of Dr. Forsythe

Dr. Diana E. Forsythe, an anthropologist, spent more than eight years in the 1980s and 1990s observing AI labs in the U.S., particularly those building medical AI tools. Her research uncovered several recurring issues:

  • Deleting the Social: Engineers often ignored the cultural and social aspects of their work. AI systems reflected the biases and assumptions of their predominantly male, white creators, leading to tools that failed to account for diverse user needs.

  • Power Dynamics: Knowledge was treated as absolute, extracted from single experts—usually white men—and codified without considering the nuanced, contextual nature of human expertise.

  • User Acceptance Problems: AI tools were frequently left unused, dismissed as "shelfware" because they didn’t meet the practical needs of clinicians. Instead of revising their designs, researchers blamed users for not understanding how to use the systems.

/Persistent Challenges

Although AI has evolved, many of the challenges Forsythe identified persist. Modern AI systems, like large language models (LLMs), are trained on vast datasets, yet these datasets often perpetuate existing biases. Decisions about what data to include and how to process it remain concentrated in the hands of a few, raising ethical concerns about transparency and accountability.

We’ve witnessed these issues in real-world scenarios, from recruitment algorithms discriminating against women to AI chatbots replicating harmful stereotypes. The risks extend beyond bias to include misinformation, privacy violations, and potential misuse in authoritarian contexts.

/Why Anthropology Matters

Anthropologists are uniquely positioned to address these issues. By studying human cultures, behaviours, and interactions, they can:

  • Illuminate Bias: Anthropologists identify and challenge the implicit biases embedded in AI systems, fostering more equitable outcomes.

  • Improve Usability: Their expertise in human-centered design ensures technologies are intuitive and aligned with real-world needs.

  • Build Trust: By addressing cultural nuances and ethical concerns, anthropologists help create AI systems that resonate with diverse communities.

Yet, despite these strengths, anthropology is often undervalued in the tech world. Forsythe’s experiences reveal the resistance she faced from researchers who dismissed her work as "soft science." This mindset must change if we want AI to serve humanity effectively.

/Moving Forward

To create AI systems that truly align with human values, interdisciplinary collaboration is essential. We must:

  1. Embrace Diversity: Include diverse voices in AI development to ensure systems reflect a broad range of experiences and perspectives.

  2. Value Cultural Insight: Recognize the importance of anthropology and other social sciences in understanding the human contexts of AI.

  3. Redefine Success: Shift focus from technical perfection to real-world usability and ethical impact.

Dr. Forsythe’s legacy reminds us that technology is never neutral. It’s shaped by the cultures and values of its creators. By integrating anthropological insights, we can ensure the AI of today and tomorrow is not only innovative but also inclusive and ethical. Let’s learn from the past to build a future where AI truly serves all of humanity.

Lianne Potter

Lianne is an award-winning cyber anthropologist who leads Security Operations at a major UK retailer and speaks globally on anthro-centric cybersecurity. She has earned industry accolades, publishes widely, hosts the tech podcast Compromising Positions, and is now pursuing an MSc in AI and Data Science to explore the cultural impact of artificial intelligence.
