Thursday, 25 December 2025
Does ChatGPT Weaken Children's Cognitive Abilities?

Khaberni - While students are fascinated by the simplicity of artificial intelligence, the ease of obtaining solutions is not always to their advantage. A new study at the Massachusetts Institute of Technology has shown that students' excessive reliance on artificial intelligence leads to a decline in their intellectual abilities, a deterioration in their writing, and decreased brain activity, resulting in cognitive regression.

In the study, students who used ChatGPT or other artificial intelligence models while writing essays showed the lowest levels of brain activity, according to "Oxford Learning".

Furthermore, their writings became increasingly formulaic, easily forgettable, and lacking in creativity. Over time, the students became more passive and isolated.

Many of them could not remember what they wrote or review their work without the help of artificial intelligence, clearly indicating that they had not learned effectively.

Passive Learning and Critical Thinking

According to "Psychology Today", relying on artificial intelligence for homework may seem beneficial in the short term, but it neglects the mental processes that lead to long-term learning.

When students simply enter their prompts into programs such as ChatGPT or other machine learning software and paste the result, they are not analyzing, synthesizing, or thinking, but merely receiving.

This type of passive learning:

  • Weakens neural connections.
  • Reduces the brain's ability to retain information.
  • Limits the development of critical thinking and creativity.
  • Encourages intellectual laziness.

As one researcher put it: "The task was completed, but nothing was integrated into brain memory networks."

Hidden Risks

In addition to cognitive laziness, artificial intelligence tools pose another significant challenge: they are not always accurate or neutral.

These models are trained on massive datasets and generate text by matching patterns rather than through genuine understanding.

This means that they can produce information that seems plausible at first but is incorrect, reflects biases present in the training data, or even fabricates sources.

One study that examined GPT-4's ability to write an academic paper found numerous unsupported claims and errors in references.

For children and teenagers who are still developing their knowledge and evaluative skills, passively accepting AI-generated content can lead them to absorb and spread misleading information.

Moreover, the authoritative tone often adopted by artificial intelligence makes it difficult to question its outputs.
