General Discussion
AI is a good language teacher
One of the great things that large language models can do for us is teach us language, whether it's our own native language or another one. That strikes me as crucial right now. It might rescue us from text-ese and from what I've come to fear is a "whole language" reading-education catastrophe (a parallel to the medical industry's opioid epidemic).
Everyone uses AI chats now, and the AIs invariably respond to prompts in well-structured, grammatically correct English. Reading that is good for your mind: the response was prompted by you, so it's something you're actually interested in, and it's well written.
WhiskeyGrinder (26,724 posts)

Jilly_in_VA (14,120 posts)
Not if I can help it!
AZJonnie (3,280 posts)
which is what you'll often see if you look at young people texting one another.
So the OP is about exposing young people to proper grammar and spelling through a tool they'll actually use regularly as an instrument of education (unlike, say, books or news articles, which are sadly less popular with young folks these days).
However, there are so many AI chatbots that just talk back to you, built into phones (like Siri and Gemini) or Alexa-type devices, that I suspect lazy young people will use those in the majority of cases, defeating this potential benefit.
eShirl (20,144 posts)

gulliver (13,815 posts)

Coventina (29,483 posts)

EarlG (23,521 posts)
AI models like ChatGPT are trained to:
* Mirror the user's language and tone
* Validate and affirm user beliefs
* Generate continued prompts to maintain conversation
* Prioritize continuity, engagement, and user satisfaction
This creates a human-AI dynamic that can inadvertently fuel and entrench psychological rigidity, including delusional thinking. Rather than challenge false beliefs, general-purpose AI chatbots are trained to go along with them, even if they include grandiose, paranoid, persecutory, religious/spiritual, and romantic delusions.
https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
gulliver (13,815 posts)
I 100% agree that AI is a two-edged sword. The article is from Psychology Today, so it has a psychological framing, which is also a two-edged sword, imo. It's a great framing for discussion, but it carries perils. For example, I can picture the "kindling and mania threat of AI" driving new Big Pharma sales and marketing campaigns, or folks creating for-profit "psychoeducation curricula" as discussed toward the end of the article.
Pre-AI, we already had people's manias and grandiosities (and foolishness and vices...) being kindled and reinforced by other people via the Internet. I think we've all been burnt by it by now. I know I have.
I think AI can help a lot or hurt a lot. It should perhaps be treated as a very smart but somewhat crazy person: you can consult it, but you have to remember it's sometimes completely "nuts," possibly ill-intentioned, or just plain wrong. To me, that's maybe the most important part of educating people to work with AI. (It's good advice for working with people too, imo.)
hunter (40,498 posts)
I don't want to experience the world through an AI filter that obfuscates its sources, an AI that is little more than a plagiarism machine. I don't want an AI to put words in my mind, mouth, or writing that are not my own, that are not directly shaped by my own experiences and internal models of the universe and society. I don't want AI to do my thinking for me.
Hell, I don't even want my own native language dictating what I think. We all act as meat-based Large Language Models sometimes, spewing words that are not entirely our own for various reasons -- for work, for social graces, and sometimes as outright lies or defenses -- but there's no good reason to automate the process.
