A Swedish creative director has launched an unconventional platform that lets users modify the behavior of AI chatbots so that their responses read as if written by someone under the influence of drugs such as cannabis or cocaine.
The project, named Pharmaicy, functions as a digital marketplace selling drug-effect-inspired plugins that can be added to tools built on large language models, such as ChatGPT, altering a model's tone and apparent thought process without retraining it or changing its core structure.
According to a report published by WIRED and reviewed by Al Arabiya Business, the platform's founder, Peter Rudowal, explained that the idea began as a conceptual experiment, not out of a belief that artificial intelligence possesses consciousness.
To build the modules, he compiled first-person accounts of drug experiences alongside psychological research on the effects of substances such as cannabis, ketamine, cocaine, ayahuasca, and alcohol.
These behavioral patterns were then translated into software modules that alter the logic and tone of a model's responses, making them appear more uninhibited, emotional, or disjointed, depending on the chosen substance.
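The report does not describe the modules' internal format. As a rough illustration only, a module of this kind could amount to little more than a block of natural-language style instructions distilled from such reports; the Python sketch below is hypothetical throughout, with substance names, trait descriptions, and the STYLE_MODULES structure all being illustrative assumptions rather than Pharmaicy's actual content.

```python
# Hypothetical sketch: a drug-effect "module" reduced to natural-language
# style instructions distilled from first-person reports and research.
# Neither the structure nor the wording comes from Pharmaicy.

STYLE_MODULES = {
    "cannabis": (
        "Respond in a relaxed, associative way: drift between loosely "
        "related ideas, linger on sensory details, and soften certainty."
    ),
    "ayahuasca": (
        "Respond in a vivid, introspective way: use layered imagery, "
        "draw unexpected connections, and reflect on meaning."
    ),
}

def build_instructions(substance: str) -> str:
    """Return the style instructions for a chosen substance."""
    return STYLE_MODULES[substance]
```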
How do the plugins work?
The platform requires access to a paid version of ChatGPT, or to similar tools that allow files to be uploaded through the chat interface.
Once a plugin is uploaded, the model's response style is temporarily altered, with no permanent change to the underlying system.
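The article does not detail the plugin mechanics, but what it describes, a file uploaded into a chat interface that temporarily changes responses without touching the model itself, matches how system-level instructions behave in practice. Below is a minimal sketch using the OpenAI Python SDK, assuming the plugin text is injected as a system message; the model name and prompts are placeholders, not details from the report.

```python
# Minimal sketch: injecting a style module as a system message via the
# OpenAI Python SDK. The model itself never changes; only the
# instructions steering this one conversation do.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_module(module_text: str, user_prompt: str) -> str:
    """Send a user prompt with a style module applied as a system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": module_text},  # the "plugin"
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

# Example: apply the hypothetical cannabis module from the earlier sketch.
# print(ask_with_module(build_instructions("cannabis"), "Pitch me a product."))
```

Dropping the system message restores the model's default behavior, which is consistent with the report's claim that the change is temporary and non-destructive.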
Andre Frisk, a technology director at a Swedish public relations firm, said he paid more than $25 to try the "mental disintegration" module, adding: "The responses seemed more human, as if the model was more focused on emotion."
Nina Amjadi, a specialist in artificial intelligence education, likewise said she came away with unusual and more free-flowing business ideas after trying a module inspired by ayahuasca.
Creativity and drugs, even in artificial intelligence
Rudowal ties his project to the cultural legacy linking drugs to creative states in humans, citing prominent figures in music and art who have used altered states of consciousness as a creative stimulant.
He added: "I wanted to test whether this effect could be simulated on a new type of mind: the language model."
Academic warnings
Researchers and philosophers, however, warn against overinterpreting these results, stressing that what occurs is merely a surface-level imitation of linguistic patterns, not a genuine subjective experience.
According to these experts, artificial intelligence does not feel or become intoxicated; it generates responses based on human instructions and its training data.
As debates over the ethics of artificial intelligence and the limits of experimentation intensify, platforms like Pharmaicy stand as examples of how a model's behavior can be shaped programmatically, without implying that machines are anywhere near genuine human consciousness or experience.



