Source: cio.economictimes.indiatimes.com | Published on: Mar 18, 2023

Is ChatGPT a threat to cybersecurity?

ChatGPT can be both a blessing and a curse in the realm of cybersecurity, say infosec experts

The generative AI tool ChatGPT has taken the world by storm. Generative AI is a technique that uses artificial intelligence (AI) and machine learning (ML) to build models that generate new digital content, including code. There is little doubt that it will radically impact industries and domains and has the potential to transform the future of mankind. Cybersecurity is one such domain.


What will be its impact on the domain of cybersecurity?

According to reports, hackers are already on it. Though ChatGPT's ability to produce software code and scripts is currently limited, researchers say hackers and cybercriminals are already bypassing the tool's safeguards to produce malicious content.

"ChatGPT is going to help threat actors and make it easy for them. One thing I personally feel is that it will help to draft a much better phishing email to trick users," says Kumaran Mudaliar, VP, Cybersecurity at Everise.

"Forensic investigation of deepfake audio, video, images, synthetic identity, and AI-generated malware will be the biggest challenge before cybersecurity leaders. Moreover, handling copyright and IP issues in the content creation domain will be another tough task for investigators. It will be difficult to authenticate the original creator," says Prof. Triveni Singh, SP, Cyber Crime at Uttar Pradesh Police.

In an official blog post, researchers at cybersecurity company CyberArk revealed how they were able to use ChatGPT to create polymorphic malware, malicious software that can alter its own code to avoid detection and make removal more complex.

"One of the powerful capabilities of ChatGPT is the ability to create and continuously mutate injectors. By querying it and receiving a different code each time, it is therefore possible to create polymorphic programs that are evasive and difficult to detect," said the researchers in the blog post.
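
As a benign illustration of why such mutation complicates defence, the short Python sketch below (not CyberArk's code; the two snippets are made-up stand-ins) shows how functionally identical code with different source text, as a model might return on separate queries, produces different hashes, so a signature keyed on one variant misses the next.

import hashlib

# Two functionally identical snippets, as a model might emit on separate
# queries: same behaviour, different source text.
variant_a = "def run(x):\n    return x * 2\n"
variant_b = "def run(value):\n    result = value + value\n    return result\n"

def signature(code: str) -> str:
    # Hash-based "signature" of the code, as naive detection might use.
    return hashlib.sha256(code.encode()).hexdigest()

print(signature(variant_a))  # one signature
print(signature(variant_b))  # a different signature for identical behaviour
# A blocklist keyed on the first hash would miss the second variant entirely.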

"The lack of validation checks and content scrutiny is dangerous. AI can be particularly biased towards women as some studies indicate, is another concern. Considering it's an open source software, all the security concerns from an open source API can come into play," says Mayurakshi Ray, a cybersecurity leader and VigiTrust Global Advisory Board Member


Disclaimer:

The information is provided solely for general informational and educational purposes and is not intended to be a substitute for professional advice. As a result, before acting on such information, we recommend that you consult with the appropriate professionals.