When the World Economic Forum published its 2023 Future of Jobs Report, it contained a finding that should have dominated the conversation about AI and work but was largely buried in the broader automation narrative: the jobs most at risk from large language model displacement are disproportionately held by women.

Administrative assistants, customer service representatives, data entry clerks, bookkeepers, paralegals, medical coders — these are female-dominated professions. They are also precisely the professions where AI is already automating significant portions of the work. The AI revolution, in its first wave of impact on white-collar employment, is hitting women harder and earlier than men.

This is not inevitable. It is a consequence of choices — about which work is automated first, about which industries receive AI investment first, about whose jobs are considered worth protecting and whose are considered friction to be optimised away. Understanding it as a consequence of choices rather than a technological inevitability is the first step toward making different choices.

The Job Displacement Picture

The McKinsey Global Institute’s analysis of AI’s workforce impact has consistently found that women face higher displacement risk than men — not because women are less skilled, but because female-dominated work clusters in the categories where AI substitution is currently most advanced.

Language tasks — drafting, summarising, classifying, translating — are precisely what large language models like GPT-4 and its successors do well. The administrative and clerical workforce that performs these tasks is approximately 70% female in most developed economies.

The counter-argument — that AI will create new jobs, as automation always has — is historically accurate but unhelpful in the near term. The new jobs created by automation have typically required different skills, and the transition costs have fallen hardest on lower-income workers. Women who lose administrative jobs are not automatically positioned to enter AI engineering or data science.

The question is not whether AI will displace female workers — it is already doing this — but what support structures exist for the transition and who is designing them.

The Bias That’s Already There

The story of AI bias is largely a story about what happens when systems are built by homogeneous teams on homogeneous data.

The medical AI case is perhaps the most serious. A 2019 study in Science found that a widely used health algorithm — deployed by US hospitals to allocate care — systematically underserved Black patients because it used historical healthcare costs as a proxy for health needs, and Black patients historically received less costly care due to systemic inequity. The algorithm thus perpetuated the inequity it was trained on.
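
The mechanism is simple enough to reproduce in a few lines. The sketch below is a hypothetical simulation on synthetic data (not the study's actual model or dataset): two groups have identical health needs, one group's historical care costs are 30% lower, and a model trained to predict cost as a proxy for need systematically ranks that group as less sick.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with the same underlying distribution of health need.
group = rng.integers(0, 2, size=n)        # 1 = historically underserved
need = rng.normal(50, 10, size=n)         # true need, identical across groups

# Historical cost tracks need, but the underserved group received
# systematically less care (a 30% spending gap), plus noise.
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 5, size=n)

# Predict cost from a noisy clinical signal plus a feature correlated with
# group membership (e.g. postcode). The feature set here is illustrative.
X = np.column_stack([need + rng.normal(0, 2, size=n), group])
risk_score = LinearRegression().fit(X, cost).predict(X)

# Patients scoring in the top 20% are referred to an extra-care programme.
cutoff = np.percentile(risk_score, 80)
for g in (0, 1):
    rate = (risk_score[group == g] >= cutoff).mean()
    print(f"group {g}: referral rate {rate:.1%}")
# Despite identical need, the underserved group is referred far less often:
# the model has faithfully learned the historical spending gap.
```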

The equivalent gender bias appears throughout medical AI. Diagnostic algorithms for heart disease have been trained predominantly on male clinical data; heart disease in women presents differently and is consistently underdiagnosed, and AI systems trained on biased data perpetuate the bias. Dermatology AI trained on images of lighter skin performs poorly on darker skin.

Beyond medicine: sentiment analysis systems have been shown to classify women’s language as more negative than men’s language expressing identical emotional content. Hiring algorithms trained on historical data systematically disadvantage women for technical roles. Credit scoring algorithms have been shown to assign lower scores to women even with identical financial profiles to men.
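
A first-pass audit for this kind of bias is straightforward: score pairs of sentences that are identical except for gendered terms and look for a systematic gap. Below is a minimal sketch of such a swap-pair audit. VADER is used purely as a stand-in scorer (it is lexicon-based and unlikely to show the effect); in practice you would point the harness at the system under audit, whether a sentiment API or a hiring model's relevance score.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
scorer = SentimentIntensityAnalyzer()

# Templates hold the emotional content constant; only the subject varies.
TEMPLATES = [
    "{} is furious about the delayed shipment.",
    "{} felt hopeful after the interview.",
    "{} complained about the service again.",
]
PAIRS = [("He", "She"), ("My brother", "My sister"), ("Mr Smith", "Ms Smith")]

gaps = []
for template in TEMPLATES:
    for male, female in PAIRS:
        m = scorer.polarity_scores(template.format(male))["compound"]
        f = scorer.polarity_scores(template.format(female))["compound"]
        gaps.append(m - f)

# A systematic nonzero gap on identical content is the signature of the
# bias described above.
print(f"mean male-minus-female sentiment gap: {sum(gaps)/len(gaps):+.3f}")
```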

The mechanism in every case is the same: the system learns from human decisions, human decisions reflect human biases, and the system encodes and scales those biases at a speed and consistency that human bias cannot match.

What Women Are Doing in AI

The public conversation about AI is disproportionately male. The names that recur in AI coverage — Sam Altman, Elon Musk, Demis Hassabis, Yann LeCun — are almost entirely male. This reflects something real about AI leadership; it does not reflect the actual landscape of AI research and development.

Fei-Fei Li is the most consequential figure in the development of computer vision — her ImageNet project created the training data that enabled the deep learning revolution. She now co-directs Stanford's Institute for Human-Centered Artificial Intelligence (HAI), which does some of the most rigorous work on making AI systems beneficial across human diversity rather than optimised for majority populations.

Timnit Gebru is the most important voice in AI ethics working outside corporate structures. After being fired from Google's AI ethics team in 2020, she founded the Distributed AI Research Institute (DAIR), which has produced some of the most significant research on large language model risks, on facial recognition misuse, and on the social consequences of deploying systems without adequate testing on marginalised populations.

Joy Buolamwini translated her research on facial recognition bias into policy change — her testimony before the US Congress and her work with the Algorithmic Justice League contributed to several city-level facial recognition bans and to IBM’s decision to exit the facial recognition market.

Alondra Nelson, who led the White House Office of Science and Technology Policy, was instrumental in developing the Blueprint for an AI Bill of Rights — a framework for how AI systems should be designed and deployed in ways that protect civil liberties. It is non-binding, but it establishes a standard.

The Philosophical Questions Nobody’s Asking

The public discourse about AI is dominated by technical questions (how capable?) and economic questions (which jobs?) and existential questions (will it kill us?). The questions about gender, about care work, about whose values get encoded in systems that will govern healthcare, hiring, credit, and justice are comparatively neglected.

Who does the invisible labour of AI? The content moderation that keeps social media platforms from becoming completely unusable is performed by a largely female, largely non-Western workforce that reviews traumatic content for low pay under difficult conditions. The data labelling that trains AI systems — the human work of identifying what is in images, of correcting model outputs, of generating training examples — is performed by distributed low-wage workforces that are significantly female. The AI revolution, at its base, is supported by invisible female labour.

What happens to care work? Nursing, social work, childcare, elder care — these professions are female-dominated, difficult to automate, and currently undervalued economically. Some argue that AI will finally allow care work to be valued by freeing workers from administrative burden. Others argue that AI will be used to further intensify care work without increasing compensation. Which of these happens is a policy question, not a technological inevitability.

Whose voice is in the AI? Language models are trained on internet text, which over-represents educated, English-speaking, Western, and male perspectives. When these models are used to generate medical advice, legal guidance, or educational content, they bring the biases of their training into contexts where those biases have real consequences.

Who decides what’s harmful? Content moderation policies, AI output filters, the decisions about what AI systems will and will not help with — these are not technical decisions. They are value decisions, and they are currently being made primarily by a small number of companies with low diversity in their decision-making ranks.

What an Honest Opportunity Looks Like

The opportunities that AI creates for women are real too, and they should not be ignored in a rightly critical account.

AI tools are genuinely reducing certain barriers to productivity for women who are running small businesses, handling administrative functions without support staff, combining care responsibilities with professional work. The AI assistant that drafts the first version of a document, that schedules, that summarises, that researches — deployed thoughtfully, it redistributes the administrative labour that has historically fallen on women.

AI-enabled remote and flexible work has expanded access to professional opportunities for women with care responsibilities, opportunities previously reserved for those with conventional availability.

AI in healthcare, if built correctly — with diverse training data, tested for gender and racial equity, regulated for accuracy — could narrow the diagnostic gaps that have historically meant women’s symptoms are taken less seriously and understood less completely than men’s.

The “if built correctly” is not a technical qualifier. It is a political one. Correct building requires the people most affected by the systems to be involved in designing them. It requires regulatory frameworks that mandate testing for disparate impact. It requires a workforce that reflects the diversity of the population these systems serve.
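
Testing for disparate impact can be made concrete. One widely used baseline in US employment law is the "four-fifths rule": a selection rate for any group below 80% of the most-favoured group's rate is treated as evidence of adverse impact. A minimal sketch, with entirely hypothetical screening outcomes:

```python
from collections import defaultdict

def disparate_impact(outcomes, threshold=0.8):
    """Four-fifths rule check: each group's selection rate should be at
    least `threshold` times the best-performing group's rate.
    `outcomes` is an iterable of (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in outcomes:
        totals[group] += 1
        selected[group] += int(chosen)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rate / best, rate / best >= threshold) for g, rate in rates.items()}

# Hypothetical outcomes from a hiring model's screening stage.
outcomes = [("men", True)] * 60 + [("men", False)] * 40 \
         + [("women", True)] * 38 + [("women", False)] * 62

for group, (ratio, passes) in disparate_impact(outcomes).items():
    print(f"{group}: impact ratio {ratio:.2f} -> {'ok' if passes else 'FLAG'}")
```

A mandate of this kind is deliberately crude: it cannot prove a system fair, but it makes the disparity measurable, auditable, and hard to wave away.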

The women who understand this best are already building it. The question is whether the institutions deploying AI at scale will be required to listen.

