
General Discussion


highplainsdem

(57,863 posts)
Thu Jun 26, 2025, 12:47 PM

Brian Merchant - AI Killed My Job: Tech workers (best piece yet on AI replacing/ruining jobs & degrading code & product)

Brian's a tech journalist doing outstanding work covering AI. A month and a half ago he let people know he wanted to hear from them if AI was killing their jobs - not just eliminating jobs outright, but ruining them in various ways. He got an avalanche of responses - so many that he'll be doing a series of articles on different job sectors. Today's article on tech workers is the first in the series:

https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f39

Even with as much as I've read and heard about the harm AI is doing, some of it was shocking. AI-assisted coding is hurting reliability and security even at CrowdStrike, one of the country's leading cybersecurity firms (leading to errors that were caught by customers and were "embarrassing" for the company). Employees who aren't fired are often just babysitting AI tools that make lots of mistakes. CEOs are planning to replace well-trained college grads with high school grads who'll be feeding prompts they don't understand to AI tools that will instantly churn out results those kids won't understand well enough to review and correct.

This article includes a small selection, just 15, of the personal stories Brian was sent by tech workers.

One excerpt, from the story of a front-end software engineer at a major software company:

Around October/November of last year, the CEO and President (who's the former head of Product) had decided to go all-in on AI development and integrate it in all aspects of our business. Not just engineering, but all departments (Sales, Customer Operations, People Operations, etc). Don't get a ton of insight from other departments other than I've heard that Customer Ops is hemorrhaging people and the People Ops sent an email touting that we could now use AI to write recognition messages to each other celebrating workplace successes (insulting and somewhat dystopian). On the engineering side, I think initially there was a push to be an AI leader in supply chain, so there were a lot of training courses, hackathons and (for India) AI-focused off-sites where they wanted to get broad adoption of AI tools and ideas for products that we can use AI in.

Then in February, the CEO declared that what we have been doing is no longer a growth business and we were introducing an AI control tower and agents, effectively making us an AI first company. The agents themselves had names and AI-generated profile pictures of minorities that aren't actually represented in the upper levels of the company, which I find kind of gross. Since then, the CEO has been pretty insistent about AI in every communication and therefore there's an increased downward pressure to use it everywhere. He has never been as involved in the day-to-day workings of the company as he has been about AI. Most consequential is somewhere he has gotten the idea that because code can now be generated in a matter of minutes, whole SaaS applications, like the ones we've been developing for years, can be built in a matter of days. He's read all these hype articles declaring a 60-75% increase in engineering productivity. I guess there was a competitor in one of our verticals that has just come on the scene and done basically what our app can do, but with more functionality. A number of things could explain this, but the conclusion has been that they used AI and made our app in a month. So ever since then, it's been a relentless stream of pressure to fully use AI everywhere to "improve efficiency" and get things out as fast as possible. They've started mandating tracking AI usage in our JIRA stories, the CEO has led Engineering all-hands (he has no engineering background), and now he is mandating that we go from idea to release in a single sprint (2 weeks) or be able to explain why we're not able to meet that goal.

I've been working under increasingly more compressed deadlines for about a year and am pretty burned out right now, and we haven't even started pushing the AI warp speed churn that they've proposed recently. It's been pretty well documented how inaccurate and insecure these LLMs are and, for me, it seems like we're on a pretty self-destructive path here. We ostensibly do have a company AI code of conduct, but I don't know how this proposed shift in engineering priority doesn't break every guideline. I'm not the greatest developer in the world, but I try to write solid code that works, so I've been very resistant to using LLMs in code. I want my work to be reliable and understandable in case it does need to be fixed. I don't have time to mess around and go down rabbit holes that the code chatbots would inevitably send me down. So I foresee the major bugs and outages just sky-rocketing under this new status quo. How they pitch it to us is that we can generate the code fast and have plenty of time to think about architecture, keep a good work/life balance, etc.

But in practice, we will be under the gun of an endless stream of 2 week deadlines and management that won't be happy at how long everything takes or the quality of the output. The people making these decisions love the speed of code generation but never consider the accuracy and how big the problem is of even small errors perpetuated at scale. No one else is speaking up to these dangers, but I feel like if I do (well, more loudly than just to immediate low-level managers), I'll be let go. It's pretty disheartening and I would love to leave, but of course it's hard to find another job competing with all the other talented folks that have been let go through all this. Working in software development for so long and seeing so many colleagues accept that we are just prompt generators banging out substandard products has been rough. I'm imagining this must be kind of what it feels like to be in a zombie movie. I'm not sure how this all turns out, but it doesn't look great at the moment.


This article reinforces everything I'd heard about AI coding tools adding an incredible amount of bad code to computer systems across the country. Another story Brian included is from a software engineer at a health startup, where they now have one AI-crazed engineer who will soon be adding 30,000 lines of new AI-generated code to their codebase "without a single unit test," making it impossible to do a proper review, so it'll become "a maintenance nightmare and possibly a security hazard."

The malign engine behind all these harms is the con job by AI tech lords like Sam Altman, all of whom are becoming richer and more powerful as their endlessly hyped but badly flawed tools undermine our economy and society.

And of course now that the tech lords have lined up behind Trump, there already is, and will continue to be, official pressure to use these hallucinating AI models everywhere in government the AI-addled can imagine using them.