How a Black Woman is Making AI a Force for Equity
From refugee to respected AI expert, Timnit Gebru is shaping technology into a tool for community-informed solutions.
When designed with purpose and deployed with care, artificial intelligence (AI) holds immense potential to meet community needs, transform lives, and advance health equity.
But structural barriers and the influence of big-tech firms often stifle that promise. Determined to reimagine this narrative, Timnit Gebru founded the Distributed Artificial Intelligence Research Institute (DAIR) in 2021—a bold leap toward unlocking AI's power to benefit everyone, regardless of race, gender, class, or other factors. While multiple definitions of AI exist, Gebru defines it as "a field that attempts to get machines to do something other than what's purely programmed into them at the onset."
RWJF support for DAIR is based on recognition that racism and bias are embedded in the very algorithms that drive AI. Who gets to ask the questions matters deeply; narrow perspectives can bias data and harm marginalized communities. The underrepresentation of people of color among computer scientists worsens this distortion. Ensuring representation in AI is essential to realizing its potential for health equity.
Timnit Gebru is the kind of scholar, practitioner, and leader helping to change all that.
Bringing her whole self to the work
Gebru’s journey to becoming one of Time Magazine’s “100 Most Influential People” in 2022 began in Ethiopia where, as a little girl, she dreamed of becoming a scientist. Her family encouraged her ambition.
But in 1998, Gebru and her family had to flee war in Ethiopia. She arrived in the United States as a refugee and enrolled in high school, where her determination to succeed drove her to pursue advanced placement (AP) classes. She soon learned how racism permeates the systems and structures that shape Black people's experiences in America when a teacher questioned her ability to succeed in those classes.
Experiencing these barriers left Gebru disillusioned yet determined to fight back. She excelled academically, earning bachelor's and master's degrees in electrical engineering at Stanford University and interning at Apple along the way. After graduating, she joined Apple as an audio hardware engineer before returning to Stanford to earn her PhD in computer vision, a field that advances AI's capacity to interpret digital images. A stint as a postdoctoral researcher at Microsoft followed, and then came an offer from Google in 2018.
Speaking truth to power in AI
As she gained visibility in her field, Gebru grew disturbed by its lack of diversity. Typically, she would be among just a handful of Black researchers at high-level conferences; once, she was the sole Black woman among 8,500 participants. Those experiences led her to co-found Black in AI, which brings together academics, entrepreneurs, thought leaders, and others to broaden inclusivity and embed equity into the evolving technological revolution.
Though she was eager to hone her technical skills, Gebru grew troubled by the implications of her work. The myth lingers that data are inherently objective truth-tellers. But she found it increasingly uncomfortable to accept that AI could advance in a value-free zone, with no regard for its consequences.
No longer able to separate intellectual curiosity from concern for societal impacts, she began speaking out about the risks of using facial-recognition technology in law enforcement and the potential for abuse when computer vision becomes a surveillance tool. It was becoming clear to her that the push for AI was fueled by profit-focused business models and the quest for new military tools, not concern for equity.
A career-altering event came when she co-authored an article about the rush to build large language models, which are designed to reproduce the patterns in human language. The article raised important questions about their size, risks, and the need for ethical oversight. The paper prompted pushback at Google, where she was working, and Google fired Gebru after she expressed concern about being silenced for speaking out about the challenges of diversity in tech.
DAIR takes root
Buoyed by the support of more than 1,200 colleagues who protested her departure, Gebru launched DAIR to reimagine AI’s potential. DAIR’s mission is twofold: to expose and disrupt the harms of technology and “to cultivate spaces to accelerate imagination and creation of new technologies and tools to build a better future.”
DAIR’s inclusive approach entails collaborating with diverse communities. For example, researchers are partnering with refugees to combat the misuse of AI-based monitoring technologies, biometric security systems, and lie detection tests at national borders. They are also listening to gig workers who moderate violent AI-generated content and suffer from post-traumatic stress disorder (PTSD) as a result. It means being open to the knowledge of activists and organizers who see the consequences of AI in their communities.
[Video: Timnit Gebru discusses the need for stricter oversight of AI.]
Part of DAIR’s role is to empower advocates with data that can promote change and redirect AI technology in a positive direction. In South Africa, for example, computer vision and satellite images have allowed local teams to map land use patterns that capture the lingering impact of apartheid in neighborhoods across the country. Along with documenting inequities, their data pinpoint vacant land in underserved areas that could be developed for housing and other public uses.
Another DAIR-sponsored project uses machine learning to analyze anti-racist campus protests in both the United States and Canada. In the United States, the periods of greatest protest activity were waves of mass mobilization across the country on often racialized issues, such as racist police violence and racially hostile campus climates. In Canada, protest activity was most intense during provincial or local campaigns led by formal student organizations and unions.
Other AI-designed projects have involved wage theft, the impact of social media, and labor conditions for data workers in Germany, Kenya, Syria, and Venezuela. Each one, says Gebru, underscores DAIR’s commitment to technology that serves local people.
A shared vision for equity
This work aligns with RWJF’s commitment to reimagining the systems by which health science knowledge accumulates. RWJF believes that academic research must stand alongside cultural and community knowledge as equally valid sources for decisionmaking. Together we are building a future where AI supports communities, amplifies diverse voices, and gives everyone a fair and just opportunity to thrive.