
Cybercriminals have begun sharing ways to use the application programming interface (API) of one of OpenAI’s models to create malware and phishing emails, bypassing the restrictions the company has implemented to prevent abuse of its artificial intelligence (AI) tool ChatGPT.

ChatGPT is an artificial intelligence-based chatbot developed by OpenAI and trained to hold text conversations. According to Europa Press, it is based on the GPT-3.5 language model and, in its most recent versions, has demonstrated the ability to generate and connect ideas, as well as to remember previous conversations.

To use this chatbot, all that is needed is a free account on its creator’s platform, meaning that “anyone with minimal resources and zero knowledge of code can exploit it,” according to Eusebio Nieva, technical director at Check Point Software.

A group of researchers at the cybersecurity company found that malicious actors had already begun using ChatGPT to support traditional malware campaigns, as Check Point reported in the middle of last month.

Given this ease of access, OpenAI has put in place a series of restrictions to block the creation of malicious content on its platform and so prevent bad actors from abusing its models.

Thus, if ChatGPT is asked to write a phishing email impersonating an organization (such as a bank) or to create malware, the model will refuse the request, the company said in a recent statement.

However, cybercriminals have found a way around the restrictions of this chatbot and have shared the steps to bypass these limitations via underground forums, where they reveal how to use the OpenAI API.

According to Check Point researchers, the scammers propose using the API of one of OpenAI’s GPT-3 models, known as text-davinci-003, rather than ChatGPT itself, which is a variant of these models tuned specifically for chatbot applications.

As Ars Technica notes, OpenAI offers developers the text-davinci-003 API, along with APIs for its other models, so they can integrate them into their own applications. The difference is that, unlike ChatGPT, this API does not enforce the restrictions on malicious content.
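To make the distinction concrete, the sketch below shows (without sending anything) the shape of a request a developer-facing client would have built for OpenAI’s legacy `/v1/completions` endpoint, which served text-davinci-003 at the time of these reports. The point is structural: the raw prompt goes straight to the model, with none of the conversational safety layer that the ChatGPT interface adds. Endpoint path and field names reflect the API documentation of that period; the model has since been retired, and the API key shown is a placeholder.

```python
import json

# Legacy completions endpoint that served text-davinci-003 (since retired).
API_URL = "https://api.openai.com/v1/completions"


def build_completion_request(prompt: str, api_key: str) -> dict:
    """Build (but do not send) the headers and JSON body a client
    would POST to the legacy completions API."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # placeholder key, never a real one
        "Content-Type": "application/json",
    }
    body = {
        "model": "text-davinci-003",  # raw model endpoint, no chat-interface layer
        "prompt": prompt,             # arbitrary text is passed straight through
        "max_tokens": 256,
    }
    return {"url": API_URL, "headers": headers, "body": json.dumps(body)}


# Example: a benign prompt, just to show the request shape.
request = build_completion_request("Write a short poem about the sea.", "sk-...")
```

Because the endpoint simply completes whatever text it receives, any filtering had to happen server-side per request, which is the gap the forum posts described exploiting.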

Check Point says that the API version of the GPT-3 model can be integrated freely into external applications such as Telegram, a platform cybercriminals have begun using to create and distribute malware.

The cybersecurity company also notes on its blog that some users are sharing for free, and others selling, code that uses text-davinci-003 to generate such malicious content.
