Introduction
Artificial intelligence (AI) is rapidly transforming workplaces, raising pressing questions about its impact on gender disparities. Across Europe and the United States, women continue to earn less on average and remain underrepresented in leadership. In 2023, women in the EU earned about 12–13% less per hour than men, and in the US, full-time female workers earned roughly 83 cents for every dollar men earned. Such pay gaps are only one facet of a broader gender gap that also encompasses hiring biases, promotion barriers, and risks from workplace automation. This report examines four major industries – education, technology, healthcare, and finance – to analyse whether AI is helping bridge these gender gaps or exacerbating them.
We consider developments in Europe and the US, highlighting positive uses of AI as well as negative case studies where AI has amplified bias.
Key dimensions of disparity in each industry are explored, including hiring and recruitment, access to promotions and leadership, and the looming impact of AI-driven automation on women’s jobs.

Education: AI and the Gender Gap in Schools and Academia
Gender disparity in the education sector is paradoxical: women are prevalent in education jobs, but still face gaps in pay and leadership. In many countries, teaching and academic roles are female-dominated yet comparatively underpaid, which contributes to the overall pay gap. EU analysis finds that about 24% of the gender pay gap is attributable to the overrepresentation of women in lower-paid sectors like care, health and education, where work tends to be systematically undervalued. Indeed, while women hold the majority of teaching positions, they are less likely to hold top leadership posts (such as school principals or university deans), and this vertical segregation reflects a “glass ceiling” effect. In higher education, for example, women in the US make on average 82 cents per dollar compared to their white male colleagues, a gap largely driven by men occupying more senior, higher-paid professorships and administrative roles. The profession of manager in education also shows a pay gap in Europe, as women managers earn about 23% less than men in similar roles, indicating unequal access to the upper rungs of educational leadership.
AI as a tool for progress:
- On the positive side, AI offers new ways to promote equity in education if used thoughtfully. Intelligent tutoring systems and personalised learning platforms can adapt to each student’s needs, potentially helping both girls and boys excel by countering stereotypes (for example, by encouraging girls in STEM subjects through tailored content). Some AI-driven software is being used to detect and correct subtle biases; for instance, language analysis tools can flag gender-biased wording in learning materials or feedback. This can make classroom content and assessments fairer. Moreover, AI can automate administrative tasks like grading quizzes or scheduling, which may free teachers (a workforce largely female) from routine work and give them more time to focus on student engagement and professional development. By reducing workload burdens that often fall on female staff, AI could improve job satisfaction and retention.
- There are also initiatives to leverage AI to support female educators. For example, experimental programs use AI coaches to assist teachers in reflective practice, potentially helping overcome biases (such as unintentional favouring of male students in class participation). Though still in its infancy, such tools point to AI’s promise in making the education system more inclusive.
- Crucially, education systems themselves are beginning to emphasise AI literacy and digital skills for girls. Recognising that women’s participation in AI remains under 30% globally as of 2023, schools and universities in Europe and the US are introducing programs to spark girls’ interest in tech and AI from an early age. The OECD notes that inclusive AI education must “begin early and extend across the career lifecycle” to ensure women and girls are not left behind in the digital future. By equipping more young women with AI skills, the education sector is laying the groundwork for greater gender parity in the tech workforce of tomorrow.
Education
- Workforce share: women represent 69% of education professionals.
- Overall pay gap: women earn 91 cents per male dollar.
- Higher education (earnings per white male dollar):
  - White women: 83 cents
  - Black women: 75 cents
  - Hispanic/Latina women: 73 cents
  - Native American women: 69 cents
  - Asian women: 99 cents

AI Pitfalls & Setbacks
- Despite its potential, AI in education carries risks of reinforcing bias if not carefully managed. Algorithms are only as unbiased as the data and designs behind them. If an AI system used for student assessment or university admissions is trained on historical data imbued with gender or socio-economic bias, it could perpetuate those injustices.
- One cautionary example was the UK’s 2020 exam-grading algorithm (applied when exams were cancelled during the pandemic). The algorithm was later scrapped amid findings that it systematically downgraded certain students; while the bias was more strongly linked to school advantage and socio-economic status, it underscored how automated decision tools can produce unfair outcomes in education. In a gender context, imagine an AI that predicts students’ aptitude in maths or science: if it learns from past data where girls were less encouraged in these subjects, it might wrongly rate female students as having lower potential, thus limiting opportunities (e.g. not recommending them for advanced courses).
- Bias can creep into seemingly innocuous tools too. A university might use an AI to screen scholarship applications or suggest which faculty applicants to interview. Without explicit checks, the AI could pick up on gendered patterns – perhaps undervaluing leadership experience in women if historically fewer women were in student government or sports, for instance. The National Education Association (NEA) warns that “as AI technology increases in classrooms, so do concerns about perpetuating bias”, noting cases from facial recognition that fails to recognise certain students to plagiarism detectors misflagging non-native English writers. Although not specifically gendered, these issues illustrate how AI can reflect existing inequities.
- Another challenge is that automating teaching tasks might lead policymakers to over-rely on technology and undervalue teachers. If AI tutors become very effective, there is a risk (however distant) that some teaching or support roles – often held by women – could be reduced or eliminated as cost-saving measures. For example, administrative support staff in schools (a role largely staffed by women) might be cut back if AI scheduling and chatbot systems handle front-office tasks. Workplace automation and job security are thus concerns in education as in other sectors.
- The key will be using AI to assist and augment educators, not replace their human touch. Ensuring women in education have a voice in selecting and designing classroom AI tools is also critical. Without diverse input, ed-tech products might inadvertently cater to only a subset of students or perpetuate male-centric curricula. In short, AI isn’t neutral – and the education sector must “lead the way” in demanding inclusive design so that women and girls truly benefit from these innovations.

Technology: AI in the Tech Industry – Promise and Peril
- The technology industry has long struggled with gender imbalance, and the advent of AI is a double-edged sword. On one hand, AI tools can be deployed to reduce human bias in hiring and promotion decisions; on the other hand, poorly designed AI can bake in the very biases tech is trying to overcome. In both Europe and the US, tech remains a male-dominated field at most levels. Women comprise only about 25–28% of the tech workforce by recent estimates. This under-representation becomes even more pronounced higher up the ladder – women hold just around 10% of senior leadership roles in tech (one analysis found only ~10.9% of tech CEOs or senior leaders are female, and only 8–9% of positions like CTO, CIO or technical team lead are held by women).
- The consequences are evident in pay statistics. In the UK, the median gender pay gap in tech is ~17.5% as of 2024, significantly wider than the economy-wide gap (~10%). A study of 50 top UK tech firms showed modest progress – the gap fell from ~20.5% in 2021 to 17.5% in 2025 – but also revealed that progress has plateaued, with some firms even backsliding. In one positive case, a London fintech company narrowed its pay gap from 31% in 2021 to 13% in 2025 by concerted efforts, whereas another high-profile tech firm saw its gap widen from 3% to 15% over the same period.
- Across Europe’s tech sector more broadly, women earn about 25% less than men on an unadjusted basis (median). Much of this disparity is due to the lack of women in high-paying technical and executive roles: when adjusting for job level, function and country, the gap in European tech shrinks to roughly 2.5%. This stark difference between the 25% raw gap and 2.5% adjusted gap “underscores a huge – and somewhat hidden – problem in tech: there is a critical lack of women at senior positions earning higher salaries”. In the United States, the picture is similar; women in tech earn around 84 cents on the dollar compared to men, translating to nearly $10,000 less in median annual pay. This gap has narrowed only slowly over time, and for some groups it has even widened – for instance, Black and Hispanic women in tech have seen pay disparities increase in recent years. Clearly, the tech industry’s gender gap goes beyond pay to fundamental issues of representation and advancement.
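The raw-versus-adjusted distinction above is simple arithmetic: compare medians across the whole workforce, then compare within comparable job levels. A minimal sketch with hypothetical salary figures (illustrative only, not the European survey data cited here):

```python
from statistics import median

# Hypothetical salaries (EUR) illustrating how a large raw gap can shrink
# once job level is held constant: women here are concentrated in junior roles.
salaries = [
    # (gender, level, salary)
    ("F", "junior", 40_000), ("F", "junior", 42_000), ("F", "senior", 78_000),
    ("M", "junior", 41_000), ("M", "senior", 80_000), ("M", "senior", 82_000),
]

def median_pay(gender, level=None):
    return median(s for g, l, s in salaries
                  if g == gender and (level is None or l == level))

# Unadjusted (raw) gap: compare medians across the whole workforce.
raw_gap = 1 - median_pay("F") / median_pay("M")

# Adjusted gap: compare within each job level, then average the level gaps.
levels = {l for _, l, _ in salaries}
adj_gap = sum(1 - median_pay("F", l) / median_pay("M", l)
              for l in levels) / len(levels)

print(f"raw gap: {raw_gap:.1%}, adjusted gap: {adj_gap:.1%}")
# → raw gap: 47.5%, adjusted gap: 1.9%
```

The toy numbers exaggerate the effect, but the mechanism is the same one the European survey describes: the raw gap is dominated by who occupies the senior, higher-paid roles.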

AI tackling bias in hiring and promotion (positive case studies)
- Given these challenges, many tech companies and startups are turning to AI-driven tools to improve fairness in recruitment and career development. AI has the potential to eliminate human bias from hiring by focusing on skills and competencies instead of demographics. For example, some firms use AI-based text analysis to craft gender-neutral job postings. Tools like Textio act as a “spell check for gender bias,” highlighting phrases in job adverts that might inadvertently deter female candidates. Textio’s analysis of millions of postings shows that subtle wording changes can shift who applies – certain terms (e.g. “proven track record” or “competitive environment”) tend to attract more male candidates, whereas alternatives (“validated track record,” “collaborative environment”) yield a more gender-balanced applicant pool.
- By recommending such tweaks, the AI helps employers draw in talented women who might have self-selected out. Another area is blind recruitment: AI systems can mask personal details (name, gender, background) on CVs during initial screening, forcing evaluators to consider only qualifications. Some large tech companies have reported improvements in diversity of hires after implementing AI resume-screening that ignores demographic data. Even in performance management, AI is being piloted to reduce bias – for instance, using algorithms to analyze performance objectively or flag biased language in evaluations. This is important because traditional performance reviews often contain discrepancies: studies find women are more likely to receive vague feedback focused on personality rather than actionable skills.
- One report found women receive 22% more feedback about their personality traits than men do in reviews, which can hurt their promotion prospects. AI could help here by standardising evaluation criteria and alerting managers to potential bias (e.g. if a female engineer’s review includes words like “aggressive” or “emotional” that rarely appear in men’s reviews). Companies like Salesforce, Intel and others have also applied analytics (not necessarily AI, but data-driven tools) to proactively identify internal pay gaps and adjust salaries, which has narrowed tech pay gaps for women. Another promising example is Unilever’s use of an AI-driven hiring platform for entry-level roles: it anonymises applications and uses gamified assessments and digital interviews scored by AI. The result was a notable increase in the share of women (and minority) candidates progressing to final rounds, because the AI focused on merit factors and filtered out bias-prone human judgement. These case studies suggest that with the right approach, AI can act as a solution, illuminating and mitigating biases that humans might overlook.
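The “spell check for gender bias” idea described above can be sketched in a few lines. The word lists below are illustrative stand-ins; commercial tools such as Textio rely on far larger, empirically derived lexicons and contextual models:

```python
import re

# Toy word lists; real tools use large, empirically derived lexicons of
# masculine- and feminine-coded language, not a handful of keywords.
MASCULINE_CODED = {"competitive", "dominant", "aggressive", "ninja", "rockstar"}
FEMININE_CODED = {"collaborative", "supportive", "interpersonal", "nurturing"}

def flag_gendered_wording(job_ad: str) -> dict:
    """Return the masculine- and feminine-coded terms found in a job ad."""
    words = set(re.findall(r"[a-z']+", job_ad.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We need a competitive, aggressive ninja to join our collaborative team."
print(flag_gendered_wording(ad))
# → {'masculine': ['aggressive', 'competitive', 'ninja'], 'feminine': ['collaborative']}
```

An editor (human or automated) would then suggest neutral alternatives for the flagged terms before the advert is published.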
AI amplifying bias in tech (negative case studies):
- Unfortunately, there are equally prominent examples of AI backfiring and entrenching bias in the tech industry. The most infamous is Amazon’s experimental AI recruiting tool, which was revealed to be discriminating against women applicants. In 2018, Amazon had to scrap this internal tool after discovering it systematically downgraded resumes that included the word “women’s” (for example, participation in a “women’s chess club”) and even penalised graduates of all-female colleges. The AI had taught itself these sexist rules by training on 10 years of past hiring data – a period in which the tech industry (and Amazon’s own workforce) was overwhelmingly male. In effect, the algorithm learned that being female was an undesirable trait for software engineering jobs.

- Amazon’s engineers tried to correct the specific flags (like telling it to ignore the word “women’s”), but they realised the model could find other, more insidious ways to sort candidates by gender. Eventually, the project was abandoned when they lost hope that the AI could ever be truly gender-neutral. This case became a cautionary tale across industries: it illustrates how “AI isn’t magic; it will reflect the biases in its input data”. If the tech sector’s past hiring was biased, a naive AI will simply automate that bias at scale. Beyond hiring, AI-driven performance analytics might also perpetuate disparities. Consider an AI that identifies top performers by analyzing code contributions on GitHub or Stack Overflow activity – if women were discouraged from public contributions or got fewer upvotes historically, the AI’s rankings will skew male.
- Similarly, AI used in professional networking or promotion recommendations could underrate women if it equates leadership potential with characteristics more common to male CVs (e.g. longer tenure, certain titles, which women may lack due to interrupted careers or exclusion from projects). Workplace culture biases can be amplified: an AI parsing internal communications might pick up that male engineers speak up more in meetings (perhaps because women were talked over) and conclude men are more “engaged” employees – a flawed inference that could influence promotions.
- There is also evidence that women in tech teams often end up performing more “office housework” and routine tasks (like note-taking, documentation, organising team processes) than their male peers. Surveys show women are frequently tasked with these non-promotable activities. Alarmingly, many of these tasks are exactly what new AI tools are poised to automate. One report observes that “there are a higher number of women in supporting roles or doing routine administrative work … many of which are tasks AI is likely to replace”. If companies implement AI to handle code QA, documentation generation, scheduling, etc., it might disproportionately make roles held by women redundant.
- In other words, without upskilling and role redesign, automation could sideline women in tech who were pigeonholed into those tasks. This dynamic is part of a broader trend: recent research by Code First Girls in the UK warns that overall “job automation is 40% more likely to affect women than men”, partly due to the kinds of roles women occupy and partly due to biases in AI development itself. It is notable that 90% of software engineers are male, which raises the risk that AI systems may be developed with blind spots that reflect male perspectives. In response, several jurisdictions are stepping in with regulations. New York City’s 2023 Bias Audit Law now requires any AI or algorithmic tool used in hiring to undergo an annual independent bias audit and disclose the results, or else employers cannot use it. This law (one of the first of its kind) came about after growing concern that unregulated AI hiring tools could silently perpetuate discrimination in tech and other sectors. Europe is also addressing these issues in its proposed AI Act, which will classify recruitment AI as “high-risk” and mandate strict transparency and fairness checks. These measures highlight that, in tech, AI’s role in the gender gap is not predetermined – it depends on how we govern and design these systems. Tech companies that treat diversity as a core design criterion (and include women in the design teams) are more likely to produce AI that helps close gaps rather than widen them.
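The bias audits mandated by New York City’s law centre on an impact-ratio style metric: each group’s selection rate divided by the rate of the most-selected group, commonly read against the “four-fifths” rule of thumb. A minimal sketch with hypothetical hiring figures:

```python
# Minimal sketch of an impact-ratio bias audit in the spirit of NYC's
# Local Law 144. All figures below are hypothetical.
selected = {"men": 120, "women": 70}
applicants = {"men": 400, "women": 350}

# Selection rate per group, then each rate relative to the best-off group.
rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())
impact_ratios = {g: r / best for g, r in rates.items()}

for group, ratio in impact_ratios.items():
    # The four-fifths rule of thumb treats ratios below 0.8 as a red flag.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.0%}, "
          f"impact ratio {ratio:.2f} [{flag}]")
```

In this example women are selected at 20% versus 30% for men, giving an impact ratio of about 0.67 – below the 0.8 threshold, so the tool would be flagged for review under such an audit.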
Healthcare: AI’s Impact on Gender Disparities in Health Professions
- The healthcare and life sciences industry presents a contrasting gender landscape: women form the majority of the workforce, yet men dominate the highest-paid specialties and leadership roles. In Europe and the US, healthcare is a feminised sector – about 67–77% of health workers are women – but this has not translated into parity in pay or power. In fact, the gender pay gap in health is notoriously persistent.
- A 2022 global study by the ILO and WHO found that even after accounting for factors like education, position, and hours, women in the health and care sector earn about 24% less than men on average. Much of this gap is unexplained by objective measures, pointing to systemic bias (sometimes dubbed a “care premium” gap – society undervaluing work done by women).
- In the United States, research in 2024 showed that despite women making up 77% of healthcare workers, they face substantial wage gaps across roles. The gap is narrowest in nursing, a heavily female profession, yet female registered nurses still make only 82% of male nurses’ wages. Even this gap has worsened: two decades ago the figure was about 87%, meaning pay equity in nursing has slid backwards. Meanwhile, physicians and advanced practitioners have among the widest gaps: female doctors earn roughly 70% of what male doctors earn in the US (translating to a 30% gap) and women in other advanced practice roles ~68%. Over a lifetime, this can sum to millions in lost earnings for women doctors.
- The pattern is similar in Europe: in EU countries where data is available, “financial and insurance activities” (which include private health insurance and finance roles in health) have some of the highest pay gaps, often well above 20% and up to 36%. Even in public health systems with standardized pay scales (like the NHS in the UK), disparities creep in through overtime, bonuses, and the concentration of men in higher-paid specialties (e.g. surgery, cardiology) versus women in lower-paid ones (e.g. paediatrics, general practice).
- Leadership remains male-dominated: women are underrepresented among hospital CEOs, department chiefs, and medical school deans. For example, in the US only ~18% of hospital CEOs and 16% of deans of medical schools are women (according to industry associations as of the mid-2020s), despite the workforce being majority female. This leadership gap not only affects pay but also influences workplace culture and priorities: historically, male-dominated leadership has meant that issues like childcare for nurses or harassment policies were insufficiently addressed.

AI as a solution for equity in healthcare
- There are optimistic developments suggesting AI can help reduce gender inequities in health – both for providers and patients. First, AI is being employed to tackle biases in clinical practice that disadvantage women. For decades, medical research and training focused on male patients, which led to women often being under-diagnosed or misdiagnosed (for instance, a woman having a heart attack is more likely to be misdiagnosed than a man because her symptoms may differ from the “classic” male pattern). If left unchecked, AI could encode these biases – but if deliberately guided, AI could correct them.
- One example is using AI to better recognize heart disease in women: since women’s heart attack symptoms (fatigue, nausea, neck pain) are often missed by conventional algorithms trained on men’s data, researchers are developing AI models fed with female-specific data to improve diagnostic accuracy. Similarly, AI-driven analysis of medical images or lab results can be tuned to account for sex differences – e.g. interpreting biomarkers or symptom descriptions differently for female patients. The EU has explicitly funded digital health projects focusing on women’s health.
- On World Health Day 2025 (the theme of which was maternal and newborn health), the EU highlighted a range of AI-driven health innovations aimed at making care more accessible and equitable. These include AI tools for maternal risk prediction (to reduce preventable maternal deaths) and personalized treatment recommendations for women’s conditions. By investing in such solutions, policymakers hope to close long-standing gender gaps in health outcomes.
From the workforce perspective:
- AI could assist in workflow automation that benefits women professionals. For instance, AI-powered assistants can handle routine documentation (like writing up case notes via speech recognition), which often eats into doctors’ and nurses’ time (studies show female physicians spend more time in electronic health record systems, possibly due to spending more time with patients or thoroughness). Automating these admin burdens could improve work-life balance, which especially helps women who often juggle professional and family roles.
- AI-driven scheduling systems can also optimise staffing in hospitals – if used properly, they could ensure more predictable and fair shift patterns, benefiting many nurses (mostly women) who currently suffer from unsociable, rotating shifts. In public health, AI analytics are helping identify gaps in care (for example, analyzing data to find that women in certain regions aren’t getting equal access to treatments), which then informs policy actions. There is also potential for AI to illuminate pay and promotion disparities within large health organisations.
- Some hospitals have begun using data analysis (if not full AI) to examine compensation and raise distributions by gender, aiming to flag unjustified gaps. As awareness grows, health systems in Europe have been subject to gender pay gap reporting requirements, pushing them to take corrective steps (the UK NHS, for example, now publishes gender pay gap reports highlighting that male doctors earn more on average and pledging strategies to address this). These efforts can be bolstered by AI systems that simulate the impact of different HR policies on closing the gap.

AI Exacerbating Bias In Healthcare (challenges):
- Despite the promise, experts caution that AI could just as easily perpetuate or worsen gender bias in healthcare if we are not vigilant. The core issue is that many AI systems learn from historical health data that already reflect biases – whether in how patients were treated or how healthcare workers were evaluated. A striking example is an algorithm (widely used in US hospitals) intended to identify patients who would benefit from extra care management. Researchers found it was biased against Black patients due to using healthcare spending as a proxy for need – Black patients spent less (due to access barriers), so the algorithm underestimated their risk. In gender terms, similar pitfalls exist. If an AI uses past data on treatment decisions, it might learn, say, that women “complain more” about pain (when in reality their pain was undertreated) and therefore downplay female patients’ pain reports.
- Auto-diagnosis apps and symptom checkers have been found at times to give different suggestions for men vs women with identical inputs, likely due to biased training data. AI used in hiring or evaluating healthcare staff could also inherit bias. For example, an AI screening hospital residency applicants might notice male applicants have more surgical research publications (perhaps because women had less mentorship in that subfield) and thus rate male candidates higher, reinforcing the cycle that keeps women out of top specialties. Another risk arises in clinical decision support AI. Many diagnostic algorithms do not explicitly account for sex-based differences. One article warns that “AI tools reflect the social and scientific biases embedded in their training data” – and in healthcare, historically “the system has been shaped around male bodies and male-centric definitions of disease”.
- Without deliberate action, “AI will not correct these gaps; it will encode them and automate their reproduction”. The result could be, for instance, an AI that less frequently flags women for certain interventions because the male norm was assumed, leading to even longer diagnostic delays for women that already plague fields like cardiology and auto-immune diseases. Furthermore, AI’s impact on jobs in healthcare needs careful thought. Automation in healthcare could threaten some roles: for example, AI-reading of radiology scans can handle tasks that junior radiologists or radiology techs (roles where women are present but men still hold many senior posts) used to do. If not managed, this might reduce the pipeline of certain specialties or shift job profiles in ways that disadvantage women. Administrative roles in health (medical billing, coding, scheduling), often dominated by women, are ripe for AI automation and some are already being eliminated as software takes over.
- A recent study noted that globally, the pandemic’s fallout and automation trends have disproportionately affected lower-paid health workers, “most of whom are women”. In the longer term, if AI improves efficiency, there is hope it could cut costs in health systems – but there is a concern that this may be used as an excuse to hold down wages in what are still heavily female professions (nursing, midwifery, care work), unless pay equity is made a priority. Finally, a lack of diversity in the teams developing healthcare AI can be problematic. If mostly male engineers and clinicians design an AI system, they might not anticipate issues that primarily affect female staff or patients. For instance, an AI scheduling tool built without input from nurses might fail to incorporate flexibility needed for staff with maternity responsibilities, thereby disadvantaging female employees.
- In summary, AI’s influence in healthcare could be a great equaliser or a setback. There is a growing recognition of this in both Europe and the US. The World Economic Forum’s 2023 report on gender gaps emphasised that health parity is close to achieved in outcomes, but gaps in the health workforce (pay and leadership) remain a concern. The WHO has urged that health AI be developed with gender in mind, and some governments are moving to require that clinical AI tools undergo bias evaluation before approval. The onus is on healthcare leaders to ensure AI is introduced in a way that actively reduces bias – by improving data diversity, auditing algorithms, and involving women in design and decision-making. If done right, AI can help overworked health teams and improve care for female patients; if done poorly, it could mechanise old biases and devalue the human workforce.
Finance: AI, Gender Bias, and Equality in Financial Services
- The finance industry (including banking, investment, insurance, and fintech) has historically had one of the largest gender gaps in both pay and leadership – and AI’s growing role in finance brings both opportunities and risks for gender equality. In Europe, financial services consistently show the widest gender pay gaps of any major sector. In 2023, Eurostat reported that in every EU country (data excluding one outlier) the pay gap in financial and insurance activities exceeded the national average, ranging from about 14% (Belgium) up to 36.4% (Czechia).
- Many large EU economies have finance pay gaps well above 20%. This reflects not only unequal pay for equal work, but also the concentration of men in the highest-paying roles (traders, fund managers, executives) while women are overrepresented in lower-paid, client-service or administrative roles. Similarly, in the United States, the finance and insurance sector stands out for its disparity: women earn roughly 63 cents on the dollar compared to men in this industry, implying a 37% unadjusted pay gap, one of the largest in any field. (Law is another field with extreme gaps; e.g. female lawyers earn ~55% of male earnings.) Even after adjusting for role and experience, significant gaps often remain – a Bank of America review found female finance professionals at the same level earned about 8–10% less than males on average, for instance.
- The gender leadership gap in finance is stark. While women make up 45–50% of bank employees, they comprise only about 15% of executive roles. A recent European report bluntly observed that “the situation for women’s leadership in finance is particularly dire”: as of 2019, only 6 out of 107 major European financial institutions had female CEOs. The pipeline leakage is evident: women join banks in near-equal numbers, but few reach the C-suite or portfolio management positions. Cultural factors (like exclusion from informal networks, bias in promotions, or a lack of mentors) have long been cited for this imbalance. Consequently, decisions in finance – from capital allocation to product design – have often been made without women at the table, which can perpetuate subtle forms of discrimination (e.g. products not meeting women’s needs, or workplace norms that are unaccommodating).

AI Aiding Inclusion In Finance
- If used conscientiously, AI could help chip away at some biases in financial services. One area is in recruitment and career development within finance firms. AI-based recruiting platforms that anonymize applications and use skill-based matching can help women get a fair shot at roles traditionally dominated by men. For example, some large banks have begun using AI to sift graduate job applications without viewing candidates’ gender or ethnicity, leading to more diverse interview pools. An international financial services company reported a 14% increase in the diversity of its candidate slates after implementing an AI hiring tool, along with a 24% uptick in diverse hires for certain programs.
- The AI achieved this by widening the search to talent from non-traditional backgrounds and countering hiring managers’ unconscious tendency to “clone” previous hires. AI-driven talent analytics can also identify promotion gaps – for instance, highlighting that women are stuck longer at certain levels – prompting management to intervene with sponsorship programs or bias training. Some progressive firms are deploying AI to analyze pay and performance data internally, flagging disparities. For instance, Citigroup and UK banks under new pay-transparency regulations now examine if women are consistently rated lower in performance or getting smaller bonuses, and AI can support this analysis at scale.
- Customer-facing AI in finance also holds promise. Chatbots and robo-advisors, if designed inclusively, could improve women’s access to financial advice. Studies have found that women often feel less confident dealing with aggressively sales-oriented financial advisors (an industry stereotype), which leads to lower investment participation. A well-crafted robo-advisor AI, which provides unbiased financial guidance, could encourage more women to invest and borrow in line with their needs, potentially narrowing the financial literacy and investment gap between genders.
- Moreover, fintech solutions using AI have sprung up targeting issues like the funding gap for female entrepreneurs – for example, algorithmic credit scoring that uses alternative data (education, business performance) instead of traditional credit history can benefit women who may not have had equal opportunity to build credit under old models. If these AI scoring models are designed to avoid gender-correlated biases, they could extend more credit to women-owned small businesses that have historically been underfunded by male-biased lending processes. Indeed, closing gender gaps in access to finance has substantial economic benefits – the World Bank estimates that closing gender gaps in employment and entrepreneurship (which includes access to capital) could boost global GDP by about 20%.
- Policymakers in Europe are aware of this: alongside AI innovation, they have introduced measures like the EU Pay Transparency Directive (2023) to compel financial firms (and others) to disclose gender pay gaps and salary ranges, aiming to accelerate the closing of the pay gap. Combined with AI tools that continuously monitor pay equity, such policies could create a more level playing field.
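The kind of pay-equity monitoring described above can be sketched in a few lines. The salary records and the 5% alert threshold below are purely illustrative assumptions, not any firm's actual audit methodology:

```python
from statistics import median

# Hypothetical employee records: (gender, grade, annual pay).
# All figures and the 5% threshold are illustrative assumptions.
employees = [
    ("F", "associate", 62000), ("M", "associate", 66000),
    ("F", "associate", 60000), ("M", "associate", 65000),
    ("F", "vp",        98000), ("M", "vp",       112000),
    ("M", "vp",       108000), ("F", "vp",       101000),
]

def pay_gap_by_grade(records, threshold=0.05):
    """Return grades where women's median pay trails men's by more
    than `threshold` (expressed as a fraction of the male median)."""
    flagged = {}
    for grade in sorted({g for _, g, _ in records}):
        women = [p for s, g, p in records if g == grade and s == "F"]
        men = [p for s, g, p in records if g == grade and s == "M"]
        if not women or not men:
            continue  # cannot compare a single-gender grade
        gap = 1 - median(women) / median(men)
        if gap > threshold:
            flagged[grade] = round(gap, 3)
    return flagged

print(pay_gap_by_grade(employees))  # both grades exceed the threshold
```

A real audit would control for role, tenure and location before comparing medians; the point of the sketch is only that disparity flags can be computed continuously and at scale.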
AI Perpetuating or Worsening Bias in Finance
- Despite these positives, there are serious concerns that AI could become a setback for gender equality in finance if not properly controlled. The industry has already seen real-life examples of algorithmic bias. A notable case emerged in 2019 with the US launch of the Apple Card: soon after, multiple users (including a well-known tech entrepreneur) complained that its AI-driven credit-limit algorithm was giving women far lower limits than men with similar profiles. In one publicised example, a man received a credit line 20 times higher than his wife’s, despite her having a better credit score. This sparked investigations into Goldman Sachs, the card’s underwriter, for potential sex discrimination. While regulators ultimately found no unlawful discrimination (the exact algorithm was proprietary and complex), the incident revealed how AI in lending can produce opaque, adverse outcomes that “feel” discriminatory to users. It highlighted that a model may unintentionally redline on gender if it uses inputs that correlate with gender (like income, credit usage patterns, or even shopping habits) in ways that reflect societal inequities.
- The Apple Card saga underscored a “black box” problem: even if not intentionally biased, AI can reinforce historical patterns. If women on average had less credit or lower limits historically, due to lower incomes or time taken out of work, an AI might perpetuate that gap going forward instead of challenging it. Another area of concern is automated hiring and promotion in banks. If those tools are trained on past employee data in which more men were promoted to trading desks and more women to customer service, the AI could start suggesting male candidates for analytical roles and female candidates for support roles, cementing gender segregation. Workplace automation in finance also carries gender-differentiated impacts.
- Many roles on the chopping block from AI and automation are those with heavy routine components – and quite a few such roles in finance have high female representation. For example, bank tellers (largely female in many countries) have been declining for years with ATMs and now online banking; AI chatbots further reduce the need for customer support reps (also often women). In accounting and finance departments, entry-level analysts or clerks (positions where women are well represented) can be replaced by AI that can reconcile accounts or flag anomalies automatically.
- The LSE Business Review highlighted that roles like cashiers, secretaries, and bookkeeping clerks – jobs disproportionately held by women – are being replaced by AI systems, creating “gendered patterns of job loss”. In the US, it’s estimated that 79% of women (versus 58% of men) work in jobs at high risk of automation, precisely because women are concentrated in administrative, sales, and service roles that AI is quickly learning to do. Finance has many such roles. Conversely, some highly paid finance jobs that might be reduced by AI (like high-frequency trading or quantitative analysis) have been male-dominated, meaning men could also lose some ground. Interestingly, the advent of algorithmic trading and robo-advisors has already reduced certain macho high-paying roles (e.g. floor stock traders).
- However, the net effect could still disadvantage women if the roles women occupy are more broadly eliminated and if women are not prepared (or encouraged) to move into the new tech-heavy roles that emerge. Another worry is that AI might bake in subtle institutional biases in finance. Consider performance evaluation algorithms for bankers: if historically men tended to get higher revenue assignments or bigger client portfolios (maybe due to old boys’ networks), an AI that evaluates performance purely on numbers will keep rating men higher and suggest them for promotion or bonus, while women get stuck in a lower tier – not because of ability but due to how opportunities were allocated. Such a feedback loop would be hard to notice without explicitly auditing the AI’s outputs by gender.
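The proxy effect described above – a “gender-blind” model producing gendered outcomes – can be shown with a toy simulation. The income figures and the limit formula below are illustrative assumptions, not any lender's actual model:

```python
import random

random.seed(0)

# Toy model: credit limits are set purely from income; gender never
# enters the formula. Yet a historical earnings gap (here ~17%,
# an illustrative figure) still flows straight into the limits.
def credit_limit(income):
    return round(income * 0.25)

def average_limit(mean_income, n=1000, spread=10000):
    incomes = [random.gauss(mean_income, spread) for _ in range(n)]
    return sum(credit_limit(i) for i in incomes) / n

avg_limit_men = average_limit(65000)
avg_limit_women = average_limit(54000)
print(f"men: {avg_limit_men:.0f}  women: {avg_limit_women:.0f}")
# The limit gap mirrors the income gap even though the model is
# formally gender-blind - exactly the feedback loop described above.
```

This is why auditing a model's *outputs* by gender matters: inspecting the feature list alone would show nothing suspicious.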
Financial firms are aware of these pitfalls, and some have begun voluntarily instituting bias audits for their AI systems. In the US, the Equal Credit Opportunity Act (ECOA) is now being interpreted to cover AI lending algorithms, meaning lenders must ensure their models do not produce disparate impacts on protected groups, including by gender. The New York City bias audit law mentioned earlier also heavily affects large financial institutions using AI for hiring in NYC. Europe’s GDPR and the upcoming AI Act both emphasize the right to an explanation for algorithmic decisions – so a woman denied a loan by an AI has the right to know why, which forces a degree of transparency that can uncover bias.
The bottom line for finance is that AI can either streamline fairness or systematise bias. It’s encouraging that some firms are actively using AI to advance diversity (like analyzing language in performance reviews or using AI to suggest diverse slates for promotions). But the industry’s legacy of inequality means any AI touching pay, promotions, or credit decisions needs rigorous checks. As one headline put it, “The AI gender trap” in finance could see women facing triple the automation risk and continued exclusion if we simply let algorithms run on autopilot.
Avoiding that outcome will require combining AI with human judgement consciously aimed at equity – for example, using AI’s efficiency to free up resources that can be invested in upskilling women, and using AI’s insights to actively correct biases (like setting female advisors up with high-net-worth clients based on data-driven potential rather than assumptions). Finance, perhaps more than any other sector, shows how AI’s impact on the gender gap is not predetermined – it depends on the values we encode and the oversight we implement.

Conclusion and Way Forward
Across education, technology, healthcare, and finance, our analysis shows that AI’s role in the gender gap is profoundly ambivalent. In each industry, we found positive instances where AI is a tool for progress – from helping identify biased language and practices, to facilitating flexible work, to improving health outcomes for women. We also found troubling evidence that AI, if left unchecked, will mirror and even amplify the gender biases entrenched in our societies. A clear pattern is that AI tends to inherit the character of its creators and data. Sectors with greater inclusion and awareness stand to leverage AI to close gaps, whereas sectors that have been traditionally male-dominated risk designing AI that unwittingly sidelines women.

The year 2025 finds us at a crossroads. On one side, we have encouraging developments: governments in Europe and some US states are pushing for algorithmic transparency and fairness audits, companies are increasingly conscious of diversity in their AI strategies, and women’s engagement with digital tools is rising (Deloitte predicts that by end of 2025, women’s usage of emerging tech like generative AI will equal or exceed that of men in the US). On the other side, progress in closing core gender gaps remains sluggish. The World Economic Forum’s Global Gender Gap Report 2024 estimates it will still take 130+ years to reach full parity globally at the current rate, and even in advanced economies the last mile – equal pay, equal leadership representation – is distant. AI could either accelerate this timeline or push it further out.
What can be done to ensure AI is more solution than setback? A few key strategies emerge from this analysis:
- Integrate Diversity in AI Development: Having more women and diverse teams building AI is crucial. As noted, women currently make up only about 30% of AI professionals. Increasing this share will bring perspectives that catch biases early, and diverse development teams have been shown empirically to design fairer systems. Europe and the US are investing in STEM education for women and girls to widen the pipeline, but companies should also actively recruit, retain, and promote women in AI roles. Inclusion means not just presence, but empowering women to shape AI tools’ objectives and parameters.
- Bias Audits and Transparency: Requiring regular bias audits of AI systems (as NYC has legislated for hiring tools) should become standard practice, especially for high-stakes uses like hiring, lending, or healthcare decisions. These audits need to check outcomes by gender (and other protected traits) and be transparent about findings. The EU’s draft AI Act will likely mandate such scrutiny for many systems. Firms can get ahead by voluntarily publishing bias audit results and the improvements made; this builds trust and accountability. Additionally, “explainable AI” should be pursued so that decisions can be interrogated for fairness: a bank, for example, should be able to explain why an algorithm consistently gave lower credit scores to women, and fix it.
- Inclusive Data and Testing: The garbage-in, garbage-out adage is especially true for AI bias. Training datasets must be made representative of both women and men (and intersecting attributes). In healthcare AI, this means making sure data includes women’s medical data; in education AI, ensuring models are tested on diverse student groups; in finance, including scenarios reflective of women’s financial behaviour. Before deployment, AI should be tested for disparate impacts: does a hiring AI select significantly fewer women? Do a performance algorithm’s recommendations favour one gender? Such testing allows tweaks (rebalancing data, adjusting thresholds) before harm occurs. “If you can’t measure it, it doesn’t exist,” as a Davos 2025 panel on women’s health noted; collecting gender-disaggregated data on AI outcomes is vital.
- Human Oversight and Governance: Companies should treat AI outputs as advisory, not absolute, especially where biases are possible. Human managers – properly trained in bias awareness – need to review AI-driven decisions that affect careers or livelihoods. For instance, an AI might rank candidates, but a diverse hiring panel should make the final call and be empowered to deviate from AI recommendations if needed for fairness. Internal governance structures (like ethics committees or AI councils including gender experts) can oversee algorithmic practices. In finance, this might mean compliance and risk officers vetting AI models for bias as thoroughly as they do for credit risk. In tech firms, it means instilling an ethical review at each stage of product development.
- Empower Women through Transition: With automation set to disrupt many roles, proactive upskilling and career transition support for women is imperative. The Code First Girls report emphasizes reskilling programs to prepare women for the jobs of the future (AI-related or otherwise). Industries like finance and healthcare should identify roles likely to be automated and help employees (especially women in routine jobs) train for more complex roles that AI can’t easily do – for example, moving a bank teller towards a financial advisor role, or a medical biller towards a healthcare IT analyst role. If done right, AI could actually remove drudgery and open pathways for women to move into more rewarding positions, provided they are given the tools and opportunities to do so.
- Policy and Legislation: Continued government action can provide the floor of protection. Equal pay laws, pay transparency mandates, and non-discrimination regulations need updating for the AI age. For example, clarifying that algorithmic bias is a form of discrimination is important (the US CFPB has done this for lending algorithms under ECOA). The EU Pay Transparency Directive (which requires firms to report gender pay gaps and forbids pay secrecy) will come into force in coming years and should help shrink pay gaps. Enforcement of such laws will push organisations to rectify unjust disparities (sometimes using AI tools to do so). Moreover, international cooperation on AI ethics – such as the OECD AI Principles focusing on fairness – should continue to ensure gender is a core consideration in global AI standards.
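The disparate-impact testing recommended above has a well-known quantitative form: the US EEOC's "four-fifths rule", under which a selection rate below 80% of the highest group's rate signals potential adverse impact. A minimal sketch, using hypothetical applicant counts:

```python
# Four-fifths rule check for a hiring AI's outcomes.
# The applicant and selection counts below are hypothetical.
def four_fifths_check(groups):
    """groups: dict of group -> (selected, applicants).
    Returns (impact_ratios, passes), where passes is False if any
    group's selection rate falls below 0.8 of the best group's."""
    rates = {g: s / a for g, (s, a) in groups.items()}
    best = max(rates.values())
    ratios = {g: round(r / best, 3) for g, r in rates.items()}
    return ratios, all(r >= 0.8 for r in ratios.values())

ratios, passes = four_fifths_check({"women": (30, 200), "men": (60, 250)})
print(ratios, passes)
# Women's rate (0.15) is only 62.5% of men's (0.24) -> audit fails,
# so the tool would warrant investigation before further use.
```

The rule is a screening heuristic, not proof of discrimination; a failed check is the trigger for the deeper human review the oversight strategy above calls for.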
In conclusion, AI is neither a panacea nor an inherent threat for the gender gap – it is a powerful amplifier. If we apply it with a gender lens, it can accelerate the closing of gaps by rooting out hidden biases and enabling more flexible, personalised approaches that benefit women. Many of the case studies above, from AI tools boosting women’s hiring in finance to AI aiding female patients’ diagnoses, show this potential. However, if we deploy AI carelessly, it will reflect and lock in the inequities of the past, from sexist hiring practices to undervaluing women’s work. As one commentator aptly noted, the question is not whether AI will change these industries, but “whether it will do so in ways that advance or undermine equity.” The answer hinges on our choices now. With deliberate action, collaboration between technologists, gender experts, policymakers and workers, and a commitment to fairness, AI can become a tool of inclusion – helping to finally move the needle on gender parity in education, tech, healthcare, finance and beyond. The year 2025 can be remembered as a turning point where we harnessed intelligent machines and human will to ensure the future of work is equitable for all.
Sources:
- Eurostat, Gender pay gap statistics (data extracted March 2025), ec.europa.eu.
- UK Tech News (via LinkedIn), UK tech firms narrow gender pay gap to 17.5%, linkedin.com.
- Ravio, Pay Equity in Tech Report 2024, summarized in "How does the gender pay gap vary across Europe?", ravio.com.
- CodersLink Blog (C. Vázquez, 2025), Addressing the Gender Pay Gap in Tech, coderslink.com.
- Reuters (J. Dastin, 2018), Amazon scraps secret AI recruiting tool that showed bias against women, reuters.com.
- Vox (L. Gannes, 2015), "Textio Spell Checks for Gender Bias", vox.com.
- Textio report via Sanford Heisler Sharp LLP, Gender bias in job postings, paycor.com.
- Textio report (2023), Language Bias in Performance Feedback, textio.com.
- OECD Education Blog (M. Encinas-Martin, 15 April 2025), "AI isn't neutral – why education must lead the way", oecd.org.
- World Bank Blog (Barron & Bentil, April 2024), Bridging the AI Divide, blogs.worldbank.org.
- LSE Business Review (Theunissen & Novoa, February 2025), "AI threatens women's job market participation", blogs.lse.ac.uk.
- Computer Weekly (C. McDonald, August 2024), Women face greater risk of job displacement from automation, computerweekly.com.
- CUPA-HR (J. Burrell, March 2024), Women in Higher Ed Paid 82 Cents on the Dollar, cupahr.org.
- IE University (Uncover IE, October 2024), Women in finance leadership stats, ie.edu.
- WomenTech Network, Women in Tech Statistics 2025, womentech.net.
- AJC/Health Affairs Scholar (H. Boyce, March 2024), Healthcare gender pay gap by role, ajc.com.
- WHO/ILO, Gender pay gap in the health and care sector (2022), who.int.
- PharmasAlmanac (K. P. Smith, 2023), "The Gender Bias Built Into AI — Threat to Women's Health", pharmasalmanac.com.
- European Commission HaDEA, World Health Day 2025 – AI in maternal health, hadea.ec.europa.eu.
- Reuters (E. Marshall, 2021), Apple Card algorithm bias allegations, reuters.com.
- Nixon Peabody LLP, NYC Bias Audit Law summary (2023), morganlewis.com.
- Qureos Hiring Guide (Tatheer Zehra, 2024), Gender Pay Gap by Industry and Region 2025, qureos.com.
- World Economic Forum, Global Gender Gap Report 2024 (highlights), weforum.org.