After identifying which roles and demographics face the greatest AI disruption, this post explores solutions through two lenses: the practical steps workers can take now, and the ethical responsibilities companies can no longer ignore.
In my previous post on AI job displacement, I mapped out the landscape: which roles are most vulnerable, which sectors are likely to grow, and which demographics stand to bear the brunt of AI-driven workforce change. But now I’d like to explore the solutions to AI job loss.
First, a quick recap: We saw that routine clerical and administrative roles are declining fastest, that up to 30% of work hours could be automated by 2030, and that an estimated 59% of the global workforce will require reskilling. We also saw that the burden isn’t equally shared — women, lower-income workers, Black workers, and older workers face disproportionate risk.
That was the diagnosis. Now let’s talk about solutions.
To meaningfully examine the solutions to AI job loss, including how to help level an increasingly uneven playing field for marginalized groups, I think we need to look through two lenses. I define them as:
1. The practical: How do workers adapt? How do we upskill?
2. The ethical: What responsibility do companies have not to turn AI in the workforce into a pyrrhic victory?
Let’s start with the practical.
The practical path forward for AI job loss: Upskilling & adaptation

We’re beyond the point where learning AI tools is a “nice to have” at work. The World Economic Forum predicts that 59% of workers must retrain or upskill by 2030, and McKinsey reports that over two-thirds of organizations already use AI across multiple business functions. If workers don’t adapt, the technology will move without them.
AI is changing job requirements, not in five years but now. This includes jobs lost to AI, new jobs created by emerging technologies, and new skill sets required as workers adapt their roles or move into new ones. But what does adapting to avoid AI job loss actually look like over the next five years?
The 3 skills that matter most
When it comes to avoiding AI job loss, upskilling is the crucial differentiator (in addition to the biases we previously discussed) that determines who will be left behind and who can adapt to evolving workplace demands. We can divide the skills that complement AI — rather than compete with it — into three main categories:
1) Digital & AI literacy
Digital and AI literacy isn’t about being an engineer; it’s about fluency. This means workers:
- Understand how AI tools work
- Can use prompt engineering effectively
- Have basic data interpretation skills
- Can integrate AI into their workflows
Digital literacy is so crucial to avoiding AI job loss that Brookings actually names digital access as a precursor to AI adoption and use. Workers who want to get ahead and stay competitive need to be self-sufficient in integrating AI tools into their workflows.
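To make “integrating AI into your workflows” concrete, here is a minimal sketch of automating one recurring task: summarizing raw meeting notes before a weekly report. It assumes the official OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, prompt, and summarize_notes helper are illustrative placeholders, not a recommendation of any particular vendor or workflow.

```python
# Minimal sketch: using an LLM to summarize meeting notes in a routine workflow.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_notes(notes: str) -> str:
    """Ask the model for a short, action-oriented summary of raw meeting notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "Summarize the meeting notes into three bullet points plus a list of action items.",
            },
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_notes("Discussed Q3 launch. Maria owns the landing page. Timeline slips one week."))
```

The point isn’t this particular snippet; it’s that a worker who can wire a tool like this into one tedious task has crossed the fluency threshold described above.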
2) Human-centric skills
AI can do a lot, but it isn’t human. Workers who can emphasize the value of their human-centric skills can avoid AI job loss because they offer things machines cannot. This includes leadership skills, empathy, strong negotiation tactics, strategic thinking, storytelling & creativity, and nuanced judgment.
These skills are the insulation layer against replacement. Consider this: AI can write you a blog post about any subject you instruct it to. But it cannot bring the nuance, experience, and real scenarios that a human can. Would you rather learn about a subject from a chatbot alone or benefit from the experience of an expert who has actually lived what they are writing about?
The truth is, it doesn’t have to be one or the other. AI can give humans a head start, amplifying their skills and speeding up their tasks rather than replacing them.
3) Hybrid & interdisciplinary skills
AI job loss fears are real. According to Time, 1 in 4 workers fear AI will take their jobs. But there are also opportunities, as we briefly discussed above, where humans and AI together outperform what either can accomplish solo. This includes:
- HR + analytics
- Marketing + automation
- Healthcare + AI diagnostics
- Trades + robotics
- Education + adaptive systems
The winning workers will be those who stack skills. And the winning companies will be the ones who recognize that AI should augment — not substitute — human workers. Companies that have tried replacing staff with AI are already facing significant issues as they learn this lesson the hard way.
Continued Reading: The Most Urgent American Rights Currently Being Threatened
Actionable steps for workers facing AI job loss

We hear a lot of talk about “upskilling” to avoid AI job loss. But what does that look like in practice? Becoming AI-fluent means understanding how to adapt within your role. Concretely, upskilling can look like:
- Testing AI tools relevant to your job
- Mapping which of your tasks are automatable vs. irreplaceable (see the sketch after this list)
- Pursuing micro-credentials in data literacy or AI basics
- Building portfolios demonstrating AI-augmented work
- Joining community training programs or MOOCs
- Asking employers for upskilling support — before layoffs force it
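For the task-mapping step above, here is a deliberately simple, hypothetical sketch of the exercise in Python: list your recurring tasks, rate each for routineness and required human judgment, and sort them into candidates for AI assistance versus work worth doubling down on yourself. The task names, scores, and threshold are made-up examples, not a validated framework.

```python
# A rough, illustrative task audit: rate each task 1-5 for how routine it is and
# how much human judgment it needs, then flag likely automation candidates.
# All names, ratings, and the threshold below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    routineness: int     # 1 = novel every time, 5 = identical steps every time
    human_judgment: int  # 1 = little judgment needed, 5 = heavy judgment/empathy


MY_TASKS = [
    Task("Format the monthly status report", routineness=5, human_judgment=1),
    Task("Negotiate vendor contracts", routineness=2, human_judgment=5),
    Task("Answer routine billing questions", routineness=4, human_judgment=2),
    Task("Coach junior teammates", routineness=2, human_judgment=5),
]


def classify(task: Task) -> str:
    """Crude heuristic: high routine plus low judgment suggests AI can assist."""
    return "automation candidate" if task.routineness - task.human_judgment >= 2 else "human-critical"


for task in MY_TASKS:
    print(f"{task.name}: {classify(task)}")
```

Even a back-of-the-napkin version of this exercise tells you where to point AI tools and where to keep investing in yourself.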
If the future looks too bleak in your current role, it’s also critical to adapt. The best path forward might be a career transition into a growth field, like STEM, healthcare, or green energy.
The path forward is not pretending AI isn’t here. It’s learning to use it before it uses us. Or, more accurately, before companies believe human workers are so easily replaceable. Because remember: AI is not sentient. It’s a program. It’s humans who decide what it should do and which jobs it should replace.
Leveling the AI job loss playing field
When it comes to AI job loss and displacement, we have already established that the playing field is not even. The previous post diagnosed clear demographic disparities:
- Women’s jobs are nearly twice as exposed to AI job loss as men’s.
- Black workers are overrepresented in highly automatable roles, leaving them more susceptible to AI job loss.
- Lower-income workers face more displacement and fewer retraining pathways.
- Older workers risk exclusion from AI-related upskilling opportunities.
If these are the groups most affected, they must also be the groups most supported. If AI job loss is not to keep widening job-market gaps for marginalized workers, equity must be intentional in the path forward.
Strategies to support at-risk demographics
I’m not going to claim to have all the answers to level the AI job loss playing field. But for companies that claim to care about their workforce, here are some ideas for ensuring that marginalized groups can receive more hiring opportunities.
And, of course, all companies should be investing in upskilling their employees before they turn to layoffs.
For women
- Dedicated STEM/AI training scholarships (example for students)
- Mentorship + returnship programs (marginalized students in STEM)
- Childcare support for training periods
- Closing promotion gaps in tech roles
For Black & POC workers
- Subsidized tuition for high-growth AI careers
- Partnerships with community colleges & HBCUs
- Bias audits in hiring & layoffs
- Government incentives for retrain-before-replace
Here’s an organization to explore: Jobs for the Future (JFF).
For lower-income workers
- Free or low-cost training programs (example from an online university)
- Employer-funded credentialing
- Apprenticeships in growing sectors
- Public funding for digital literacy
For older workers
- Age-inclusive retraining
- Ergonomic role redesign
- Phased AI integration
- Anti-bias protections
Here’s a Foothold America article to check out: “Reskilling Older Workers for an AI Future.”
You may be wondering why this list applies only to the demographics we determined were most at risk of AI job loss. The truth is, it’s not enough to say AI impacts are uneven. Solutions must be uneven, too.
An unfair world calls for staggered solutions
Before we dive into part two of this post (where I explore the ethical implications of AI job loss), I invite you to consider something I often think about.
Back when Blue Lives Matter was created as a counter-movement to Black Lives Matter, I saw people online claiming those who didn’t support Blue Lives Matter didn’t support police officers. What they failed to see was that creating a second movement took the spotlight off the first.
It wasn’t necessarily that people didn’t support law enforcement; while some surely didn’t, the larger issue was that trying to support everyone at once takes away support from the groups that need it most. I then read a metaphor that still sticks with me.
Someone pointed out that when you’re playing Mario Kart, you don’t get the speed-up mushroom if you’re already in first place. It’s those of us who are behind that need the extra boost. In other words, if everyone gets the boost, the playing field will never change. 🍄
The moral question: What do companies owe us?
Now that we’ve explored some practical responses to AI job loss, I’d like to consider the ethical implications. This is the part most corporate messaging conveniently smooths over, because reactions to AI are polarizing.
I see AI being lauded on LinkedIn and vehemently rejected in real life. I see AI becoming a daily companion to some and verbally torn apart by others.
When it comes to AI, it’s important not to catastrophize. (It’s a bit harder to stay calm about AI job loss.) But I also think some companies could stand to slow down before embracing it entirely. AI advancements have already dramatically changed the way we work: automating monotonous processes, providing starting points and new ideas, and accelerating tasks.
But at what cost?
Is AI becoming a pyrrhic victory?
When considering the ethics of AI job loss, I kept coming back to one question: Do we want to maximize profit at the expense of people?
Because here’s another truth about AI at work that we’re not supposed to say out loud: It’s not necessarily improving the lives of workers. Rather than saving time so that workers can dedicate more of their days to what many deem the important things (passions, family time, health, charity), AI tools are increasingly used to ask workers for more output.
So, time spent working doesn’t change.
Workforces instead adapt to expecting 4x the output from a single employee, meaning AI tools don’t ease workloads; they just raise the demands. And guess what? When each worker is asked to increase their individual contribution, one person can produce what several used to, so it’s more likely that workforces will be “lean” or “restructured.” In other words, AI job loss occurs. Let’s look at this reality from both sides:
- For the company: higher overall productivity, reduced time per task, increased savings, and leaner staffing.
- For the worker: heavier individual demands and a greater risk of burnout.
And, increasingly, job displacement.
Look, I have neither the savvy nor the success of a C-suite executive. I’m someone who places people first … and often find myself entrenched in existential pondering. But isn’t there a point where we have to ask ourselves whether AI productivity is a bit of a pyrrhic victory, especially if a portion of the people who built companies are being replaced by streamlined processes?
I mean, what matters most of all? Maximum productivity in a company that creates the bare minimum of human jobs? Increased unemployment? Why not focus on upskilling, adaptation, retention, and work-life balance? Because when I continue to unravel the existential thread, all I see is a workforce under increasing pressure and with fewer jobs, which ultimately means fewer people with the income to buy the products being created in the first place.
There’s the famous African proverb often attributed to Malian writer Amadou Hampâté Bâ: Every time an old man dies, a library burns. I can’t help but feel that every time a longstanding worker is replaced with automation tools, all of their knowledge, experiences, and resources are lost to the company.
A company may “win” the efficiency race while losing loyalty, institutional knowledge, employee well-being, reputation, diversity, and social responsibility. But is there another way to approach AI job loss and displacement?
A people-first framework for AI
I believe we have options. AI should not be a shortcut to downsizing. It should be a catalyst for redesigning work in a way that keeps humans central.
Rather than focus on maximizing profits, companies can design ways to maximize the productivity — and well-being, skills, and security — of their existing workforce. Companies can choose to:
- Retrain before they replace
- Use AI to reduce hours, not expand output
- Share productivity gains through wages or flexibility
- Implement transparent automation policies
- Deploy AI to augment human strengths
- Include diverse workers in implementation decisions
- Ensure layoffs are a last resort, not a first impulse
Because if AI is used solely to maximize profit, we shouldn’t call that innovation.
We should call it abandonment.
A few AI job loss questions to consider
I don’t have all the answers when it comes to the ethics of AI job loss. But there are some ethical questions we can ponder in addition to the logistics of profit. For example:
- Should companies be required to retrain workers before replacing them?
- Should governments incentivize AI adoption differently?
- How should productivity gains be shared — reduced hours or higher wages?
- Who should regulate AI bias — companies, government, or independent bodies?
- How do we ensure AI doesn’t widen the gap between rich and poor nations and demographics?
- Can AI serve as a force for equity rather than inequality?
With that in mind, we can move on to my final thoughts on AI job loss.
We have the power to shape AI for better or worse

This isn’t a groundbreaking statement, but it’s one that often gets buried or sidelined in AI conversations. People often act like AI is something that’s happening to humanity, not something that humanity is creating, shaping, and employing.
AI tools and advancements come from humans. They are utilized by humans. We can choose to hand them the majority of our tasks, and risk laziness and decreased brain performance, or use the tools to create a new way to work.
But let’s also cut the motivational speech for a moment. I’m not Ted Lasso.

In actuality, many of us are not in positions where we can choose how AI is employed in our fields, let alone our companies. We aren’t the change makers; we’re the people being told to adapt or be replaced. But we also don’t have to stand for corporate silence.
I wish to end this piece on a note of hope, as I’m always of the belief that we can shape our outcomes. (Okay, maybe I’m channeling a bit of Ted Lasso.) Life is not something that passively happens to us; we can choose to actively participate. That means that workers can learn how AI can empower their positions — and take a stance when they believe it’s being misused.
Furthermore, we can also look at AI with joy rather than focusing solely on AI job loss. In addition to creating new jobs, the promised wonders of AI go far beyond what I can reap as an office worker. From improved healthcare to increased access to education, we are being promised a shiny future. But it all must come with governance. Which brings me to my final words …
AI job loss is a real fear. But many of us have the grit to adapt to a changing world. And some of us have the privilege and the bravery required to fight back when we see jobs being unfairly replaced.
The real danger isn’t AI itself — it’s indifference. We have to play an active role in the way these emerging technologies are shaping the future of work.
Continued Reading: Corporate Silence: The Dangerous Politics of Convenience
Main photo by helloimnik for Unsplash.
