OpenAI tries to hook college kids on AI

Gizmodo


AI chatbots like OpenAI's ChatGPT have repeatedly been shown to give false information, hallucinate entirely made-up sources and facts, and mislead people with confidently wrong answers to their questions. Because of this, many educators remain skeptical of AI tools. Naturally, OpenAI and its rivals are pushing their services on university students anyway, concerns be damned.

According to the New York Times, OpenAI is in the midst of a major push to make ChatGPT a fixture on college campuses, substituting AI for many aspects of the college experience. The company wants students to get a "personalized AI account" as soon as they arrive on campus, much as they are given a school email address, the report says. It envisions ChatGPT serving as a personal tutor and teaching assistant during school, and as a career assistant that helps students find jobs after graduation.

Though the educational community initially greeted AI with mistrust and outright bans, some schools are already on board. According to the Times, universities including California State University, Duke University, and the University of Maryland have signed up for OpenAI's premium service, ChatGPT Edu, and have begun incorporating the chatbot into various parts of their courses.

OpenAI is not the only company aiming at higher education, either. Google is currently giving students free access to its Gemini AI suite through the end of the 2025–2026 school year, and Elon Musk's xAI offered free access to its chatbot Grok during exam season. But those offers are not part of the actual higher education infrastructure, which is where OpenAI is trying to operate.

It is unfortunate that universities have chosen to embrace AI after initially taking strong stances against it over concerns about cheating. Evidence is already mounting that AI is not much help if your goal is to learn and retain accurate information. A study released earlier this year found that reliance on AI can erode critical thinking skills. Others have found that people use AI as a shortcut and "offload" the more challenging cognitive tasks. If the goal of education is to teach students how to think, AI undercuts that goal.

Not to mention the false information that surrounds it all. To see how AI might fare in a targeted learning environment, researchers trained various models on a patent law casebook and observed how well they answered questions about its contents. All of them produced false information, hallucinated cases that did not exist, and made errors. The researchers found that OpenAI's GPT model gave answers that were "unacceptable" and "harmful for learning" approximately 25% of the time. That isn't great.
