General Discussion
Brian Merchant - AI Killed My Job: Tech workers (best piece yet on AI replacing/ruining jobs & degrading code & product)
Brian's a tech journalist doing outstanding work covering AI. A month and a half ago he let people know he wanted to hear from them if AI was killing their jobs - not just taking them outright, but ruining them in various ways. He got an avalanche of responses - so many that he'll be doing a series of articles on different job sectors. Today's article on tech workers is the first in the series:
https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f39
Even with as much as I've read and heard about the harm AI is doing, some of it was shocking. AI used for coding is hurting reliability and security even at CrowdStrike, the country's leading cybersecurity firm (leading to errors that were caught by customers and were "embarrassing" for the company). Employees who aren't fired are often just babysitting AI tools that make lots of mistakes. CEOs are planning to replace well-trained college grads with high school grads who'll be feeding prompts they don't understand into AI tools that will instantly churn out results those kids won't understand well enough to review and correct.
This article includes a small selection, just 15, of the personal stories Brian was sent by tech workers.
One excerpt, from the story of a front-end software engineer at a major software company:
Then in February, the CEO declared that what we have been doing is no longer a growth business and we were introducing an AI control tower and agents, effectively making us an AI-first company. The agents themselves had names and AI-generated profile pictures of minorities that aren't actually represented in the upper levels of the company, which I find kind of gross. Since then, the CEO has been pretty insistent about AI in every communication and therefore there's an increased downward pressure to use it everywhere. He has never been as involved in the day-to-day workings of the company as he has been about AI. Most consequential is that somewhere he has gotten the idea that because code can now be generated in a matter of minutes, whole SaaS applications, like the ones we've been developing for years, can be built in a matter of days. He's read all these hype articles declaring a 60-75% increase in engineering productivity. I guess there was a competitor in one of our verticals that has just come on the scene and done basically what our app can do, but with more functionality. A number of things could explain this, but the conclusion has been that they used AI and made our app in a month. So ever since then, it's been a relentless stream of pressure to fully use AI everywhere to "improve efficiency" and get things out as fast as possible. They've started mandating tracking AI usage in our JIRA stories, the CEO has led Engineering all-hands (he has no engineering background), and now he is mandating that we go from idea to release in a single sprint (2 weeks) or be able to explain why we're not able to meet that goal.
I've been working under increasingly more compressed deadlines for about a year and am pretty burned out right now, and we haven't even started pushing the AI warp speed churn that they've proposed recently. It's been pretty well documented how inaccurate and insecure these LLMs are and, for me, it seems like we're on a pretty self-destructive path here. We ostensibly do have a company AI code of conduct, but I don't know how this proposed shift in engineering priority doesn't break every guideline. I'm not the greatest developer in the world, but I try to write solid code that works, so I've been very resistant to using LLMs in code. I want my work to be reliable and understandable in case it does need to be fixed. I don't have time to mess around and go down rabbit holes that the code chatbots would inevitably send me down. So I foresee the major bugs and outages just sky-rocketing under this new status quo. How they pitch it to us is that we can generate the code fast and have plenty of time to think about architecture, keep a good work/life balance, etc.
But in practice, we will be under the gun of an endless stream of 2-week deadlines and management that won't be happy at how long everything takes or the quality of the output. The people making these decisions love the speed of code generation but never consider the accuracy and how big the problem is of even small errors perpetuated at scale. No one else is speaking up about these dangers, but I feel like if I do (well, more loudly than just to immediate low-level managers), I'll be let go. It's pretty disheartening and I would love to leave, but of course it's hard to find another job competing with all the other talented folks that have been let go through all this. Working in software development for so long and seeing so many colleagues accept that we are just prompt generators banging out substandard products has been rough. I'm imagining this must be kind of what it feels like to be in a zombie movie. I'm not sure how this all turns out, but it doesn't look great at the moment.
This article reinforces everything I'd heard about AI coding tools adding an incredible amount of bad code to computer systems across the country. Another story Brian included is from a software engineer at a health startup, where they now have one AI-crazed engineer who will soon be adding 30,000 lines of new AI-generated code to their codebase "without a single unit test" - making it impossible to do a proper review so it'll become "a maintenance nightmare and possibly a security hazard."
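For anyone not in software: a unit test is a small automated check that pins down what a piece of code is supposed to do, and it's what gives a reviewer any realistic chance of vetting thousands of generated lines. A minimal sketch in Python - the function, numbers, and test names here are made up purely for illustration, not taken from the startup in the story:

```python
# Hypothetical illustration only: a unit test records the intended behavior
# of a function so a reviewer can check generated code against something
# concrete instead of reading thousands of untested lines blind.
import unittest


def calculate_copay(total_cost: float, coverage_rate: float) -> float:
    """Return the patient's share after insurance covers coverage_rate."""
    if not 0.0 <= coverage_rate <= 1.0:
        raise ValueError("coverage_rate must be between 0 and 1")
    return round(total_cost * (1.0 - coverage_rate), 2)


class TestCalculateCopay(unittest.TestCase):
    def test_typical_coverage(self):
        # 80% coverage on a $200 bill should leave a $40 copay.
        self.assertEqual(calculate_copay(200.00, 0.80), 40.00)

    def test_rejects_impossible_rate(self):
        with self.assertRaises(ValueError):
            calculate_copay(200.00, 1.5)


if __name__ == "__main__":
    unittest.main()
```

Without even that much in place, "review" amounts to skimming, which is exactly why the engineer calls it a maintenance nightmare and a possible security hazard.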
The malign engine behind all these harms is the con job by AI tech lords like Sam Altman, all of whom are becoming richer and more powerful as their endlessly hyped but badly flawed tools undermine our economy and society.
And of course now that the tech lords have lined up behind Trump, there already is and will continue to be official pressure to use these hallucinating AI models everywhere in government that the AI-addled can imagine using them.

sop
(15,321 posts)
Now these same people are pushing AI technology when everyone knows its main goal is to replace "real" workers. I guess all the anti-immigration hysteria was never really about jobs.
OC375
(141 posts)
53 years old. Been doing IT since the last century. SysAdmin for Unix, SAP & Oracle. When AI ends this job someday, I'm leaving IT and taking my troubleshooting skills out of corporate circulation. I'd rather live cheap in a small town, making small money doing less stressful work for small business or small government. Let's see how they like Gen X going Galt.
highplainsdem
(57,554 posts)
The response to this piece has been wild. It quickly became one of my most-read posts, and I've been hearing more and more stories from workers impacted by AI in tech and other sectors. Thanks to all who've read, shared, and reached out. May also have a couple new scoops to share before too long.
— Brian Merchant (@bcmerchant.bsky.social) 2025-06-30T22:15:43.166Z
Amishman
(5,894 posts)
Executives don't understand technology, or how complex and high-maintenance modern corporate infrastructure is.
They also don't understand how company-specific a lot of it is.
All they see are the dollar signs of expensive IT specialists' salaries, and they're overeager to listen to any snake oil salesman promising to eliminate those awful expenses.
For 20 years I've seen clueless senior leaders try to cut IT costs with contractors and offshoring. For 20 years I've seen it backfire spectacularly, time and time again. I can only think of a few instances where those efforts had a neutral or better outcome, and those were cases where very simplistic tasks were offshored (ones that could be fully scripted and defined, with no gray areas). In most cases, a year or two later, onshore teams would be painfully and expensively restaffed to pick up the pieces. In one case, they had to sell off the entire business unit.
I expect the shortcomings of this latest flavor of cost cutting replacement strategy will burn quite a few companies in the next few years.
Can advanced automation and machine learning (I hate the phrase 'AI' because it's not actually intelligent) increase efficiency in IT development and operations? Absolutely. Can it do away with those expensive experts? Not really - because you are simply trading some of the hands-on data analysis and code grunt work for increased testing, maintenance, and model overhead from the new toolset. If anything, you might reduce headcount slightly, but you'll need even higher-skilled people who are able to understand what the 'AI' is doing and what its limitations are. Not to mention increased security needs, as the toolsets of potential malicious actors have improved dramatically as well.
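To make that overhead concrete, here is a hypothetical Python sketch of the kind of guard a human still has to write, maintain, and keep current around model-generated output before it touches anything real; the rules, function, and table names are invented for illustration and not drawn from any real system:

```python
# Hypothetical sketch: the "grunt work" saved on writing SQL by hand gets
# traded for human-written screening of whatever the model produces.
# The rules below are deliberately incomplete and purely illustrative.
import re

FORBIDDEN = re.compile(r"\b(DROP|TRUNCATE|GRANT|ALTER)\b", re.IGNORECASE)


def review_generated_sql(sql: str, allowed_tables: set[str]) -> list[str]:
    """Return the problems found in a model-generated SQL statement.
    An empty list means it passes this (incomplete) screen."""
    problems = []
    if FORBIDDEN.search(sql):
        problems.append("contains a destructive or privilege-changing statement")
    referenced = {t.lower() for t in re.findall(r"\bFROM\s+(\w+)", sql, re.IGNORECASE)}
    unknown = referenced - {t.lower() for t in allowed_tables}
    if unknown:
        problems.append(f"references tables outside the approved set: {sorted(unknown)}")
    statement = sql.strip().upper()
    if statement.startswith(("UPDATE", "DELETE")) and "WHERE" not in statement:
        problems.append("UPDATE/DELETE with no WHERE clause")
    return problems


# A generated query gets flagged before anyone runs it against production:
# this one trips both the unapproved-table and missing-WHERE checks.
print(review_generated_sql("DELETE FROM patients", {"invoices", "visits"}))
```

Someone senior enough to know which rules belong in that screen - and which failure modes it still misses - is exactly the 'expensive expert' the sales pitch claims you no longer need.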
Improved automation absolutely will kill a lot of jobs in the next decade, but the people able to build and maintain sophisticated systems are not going to be the ones successfully eliminated.