Just over two years after ChatGPT’s debut, it’s no longer just entry-level, repetitive jobs at risk. Now, even mid-level, highly skilled professionals are staring down the barrel of automation, crystallizing fears of widespread human displacement by machines.
Staggering speed of the shift
We’ve long comforted ourselves with the idea that humanity has always managed to adapt during technological revolutions (and with fingers crossed, we will again!). But what makes Zuckerberg’s announcement so unsettling isn’t just its impact on engineers—it’s the staggering speed of this shift.
AI’s ability to replace highly skilled workers isn’t unfolding over decades—it’s happening now, leaving our attempts to “upskill” and adapt lagging hopelessly behind.
Yet, for these engineers, the rapid pace of automation brings an even more critical responsibility. With their deep technical expertise in code and engineering, they are uniquely positioned to address the urgent challenge of AI alignment—the process of ensuring AI systems operate in harmony with human values and interests.
Repurposing of roles
This need becomes even more pressing as the replacement of human labor and increased automation of higher-level and technical roles accelerate the development of increasingly advanced and “intelligent” systems. Whether he realizes it or not, Zuckerberg’s move may position Meta as a leader in addressing this growing challenge of AI alignment.
This is where the role of engineers shifts. Instead of simply writing code themselves, they will oversee, guide, and audit AI systems to ensure they align with human ethics and priorities. Their work will involve addressing vulnerabilities, preventing AI from optimizing for harmful outcomes, and embedding safeguards to ensure these systems reflect human values rather than purely machine logic.
The issue is becoming less theoretical. Philosopher Nick Bostrom once warned of an AI that, tasked with producing paperclips, logically converts everything—including humans—into raw material to achieve its goal. AI alignment ensures that such scenarios don’t spiral out of control, where efficiency comes at the cost of everything else.
Human oversight a must
The European Union has taken these concerns seriously with the EU AI Act, which mandates human oversight for high-risk AI systems, especially those that could harm health, safety, or fundamental rights. The Act reinforces the principle that no matter how advanced AI becomes, humans must remain accountable for the systems they create.
Former U.S. President Joe Biden’s Executive Order 14110 took a different approach but similarly recognized the importance of oversight. It outlined guidelines for AI safety, requiring developers to share test results, establishing testing standards, and directing federal agencies to assess risks.
However, the idea of AI oversight faced a setback when President Donald Trump rescinded the order, citing the need to “solidify [America’s] position as the global leader in AI” and implying that the guidelines acted as “barriers to American AI innovation.” While it is unclear how Trump’s AI policy will evolve, the repeal reflects a shift toward rapid AI deployment, with less emphasis on oversight—at least in the United States.
Fragile AI governance
It’s easy to understand the pressure to accelerate AI development. With Chinese competitors like DeepSeek and Alibaba introducing AI models that rival American systems at significantly lower cost and on shorter timelines, coupled with the geopolitical imperative to emerge victorious in the AI race, global competition naturally reduces incentives for regulation. In this climate, safety and oversight risk becoming afterthoughts.
Zuckerberg’s decision to replace mid-level engineers with AI was already alarming, but the removal of federal oversight underscores just how fragile AI governance—or governance in general—truly is.
The absence of regulation gives companies like Meta freedom to prioritize efficiency at the expense of ethics and AI alignment, leaving society to deal with the potential fallout of unchecked automation.
Global AI competition
Zuckerberg’s engineers represent more than just technological efficiency. They symbolize labor displacement—displacement that we know can be redirected toward meaningful roles. They also reflect humanity’s capacity to innovate at a breathtaking pace, while reminding us that we also have the capacity to perform proper checks and oversight.
Most critically, the growing preference for automating highly skilled and technical roles highlights the urgent need for AI alignment, a challenge that increasingly clashes with the priorities of global AI competition. Balancing these priorities, as the EU aims to do, means recognizing that innovation and regulation are not inherently opposed. Together they can ensure progress continues without sacrificing safety and ethics.
We’ve seen humanity screw itself over in slow motion before: the unregulated financial markets leading to the 2008 crash, or climate policies ignored for decades in the name of economic growth. And now, we may be watching it happen with AI, but at a destructive speed.
If safety is sacrificed in the race for AI dominance, we will be left with systems that are autonomous, unpredictable, and dangerously misaligned with human needs, reducing us to nothing more than fodder for Nick Bostrom’s paperclips.
Nikki Mendez is a corporate lawyer specializing in technology, including cloud computing, cybersecurity, privacy, and intelligent systems, guiding pivotal technology transactions and policy developments.