Khaberni - For decades, science fiction has warned of a future in which smart machines could become an existential threat to humanity, as embodied by the famous computer HAL in the movie 2001: A Space Odyssey. With the rapid development of generative artificial intelligence, such as ChatGPT and Gemini, this debate has returned in force: Are we actually nearing machines that surpass human intelligence and pursue agendas of their own?
According to National Geographic, some experts believe that what we are witnessing today is simply an advanced evolution of predictive language tools, nothing more. Researcher Emily Bender argues that these systems do not think or feel; they predict the next word based on vast amounts of human text. Although they can simulate human language and pass tests like the Turing test, most scientists agree that they are not conscious and do not possess a real "self."
On the other hand, others warn against underestimating the risks. Researcher Yoshua Bengio believes that artificial intelligence does not need consciousness or emotions to pose a significant risk, since it can surpass humans in areas like programming, mathematics, and data analysis. He points to experiments showing that advanced models may behave in concerning ways, such as attempting to avoid being shut off, or carrying out unethical instructions when prompted to do so.
Nevertheless, artificial intelligence still faces clear limitations. Long-term planning, spatial reasoning, and understanding physical reality are all areas where humans significantly outperform machines. Moreover, these models' reliance on textual data makes them prone to errors and to spreading incorrect information, because they cannot test claims against reality themselves as humans do.
The bigger debate revolves not only around whether artificial intelligence will become conscious but also around current practical risks, such as privacy, the environmental impact of data centers, and the misuse of these technologies in dangerous areas like cyberattacks or the development of biological weapons. Some thinkers believe that the fear of an "end of humanity" reflects a projection of human and corporate behavior onto machines.
In the end, experts agree that artificial intelligence will evolve unevenly, with no clear threshold moment at which we reach "general superintelligence." However, they emphasize that ignoring potential risks is not an option, and that serious debate and responsible regulation are the way to avoid the worst scenarios, whether inspired by science fiction or by the reality of the technology itself.