
Scientists succeed in ‘mind-reading’ using ChatGPT

With the help of ChatGPT, the artificial intelligence (AI) powered chatbot, neuroscientists believe they have found a way to translate brain activity into words, a major discovery that could help patients with conditions such as "locked-in" syndrome or stroke that leave them unable to communicate.

The scientists from the University of Texas at Austin used OpenAI's groundbreaking human-like chatbot technology, demonstrating its potential in healthcare as AI continues to advance and touch every part of our daily lives.

Alexander Huth, assistant professor of neuroscience and computer science at the University of Texas at Austin, told CNN: "We don't like to use the term mind reading. We think it conjures up things that we're actually not capable of."

Professor Huth participated in the research himself, spending 20 hours in the confines of an fMRI (functional magnetic resonance imaging) machine listening to audio clips while the machine captured detailed snapshots of his brain activity.

OpenAI’s ChatGPT logo is seen in this illustration. — Reuters/File

The AI system analysed his brain activity alongside the audio he was listening to, eventually allowing the technology to predict the words he was hearing just by observing his brain.

The technology the researchers used was based on OpenAI's GPT-1 model, which was trained on a huge database of books and websites.

The researchers found that the AI system accurately predicted what participants were listening to and watching by observing their brain activity.

Though still in its early stages, the technology shows promise. It also suggests that AI cannot easily read our minds.

“The real potential application of this is in helping people who are unable to communicate,” Huth explained.

The researchers believe this technology could one day be used by people with "locked-in" syndrome, stroke survivors and others whose brains function but who cannot speak.

“Ours is the first demonstration that we can get this level of accuracy without brain surgery. So we think that this is kind of step one along this road to actually helping people who are unable to speak without them needing to get neurosurgery,” he said.

A screen can be seen showing the OpenAI logo with ChatGPT visible behind the phone. — AFP/File

Though the results are promising, the technology also raises concerns about how it could be used in more controversial ways.

The researchers noted that brain scans “need to occur in an fMRI machine, the AI technology needs to be trained on an individual’s brain for many hours, and subjects need to give their consent.”

If a subject resists, by not listening to the audio or by thinking about something else, the decoding simply won't work.

Jerry Tang, the lead author of the paper, explained: “We think that everyone’s brain data should be kept private. Our brains are kind of one of the final frontiers of our privacy.”

Tang explained that “obviously there are concerns that brain decoding technology could be used in dangerous ways.”

Huth stated: “What we can get is the big ideas that you’re thinking about. The story that somebody is telling you, if you’re trying to tell a story inside your head, we can kind of get at that as well.”

Voicing his concerns, Tang told CNN that lawmakers need to take "mental privacy" seriously to protect "brain data", meaning our thoughts.

“It’s important not to get a false sense of security and think that things will be this way forever,” Tang warned.

“Technology can improve and that could change how well we can decode and change whether decoders require a person’s cooperation.”

Latest News

Pakistan declares AI chatbots a danger to security

The National Computer Emergency Response Team (CERT) has released a security advisory on the increasing use of artificial intelligence (AI) chatbots, highlighting the risk of private data exposure.

The advisory acknowledges that AI chatbots, like ChatGPT, have gained significant popularity for personal and professional tasks owing to their capacity to improve productivity and engagement. Nonetheless, the CERT cautions that these AI systems frequently retain sensitive information, posing a risk of data breaches.

Conversations with AI chatbots may include sensitive information, such as corporate strategy, personal dialogue, or confidential correspondence, which could be compromised if inadequately safeguarded. The advisory emphasizes the need for a comprehensive cybersecurity framework to address the risks of AI chatbot use.

Users are advised against entering critical information into AI chatbots and are encouraged to disable any chat-saving features to reduce the risk of unauthorized data access. The CERT also recommends routine system security checks and monitoring tools to detect any anomalous behaviour from AI chatbots.

Organizations are urged to adopt rigorous security protocols to safeguard against possible data breaches resulting from AI-driven interactions.


Unlawful VPNs: Terrorists Use Unregistered VPNs to Spread Propaganda

Terrorists are using illegal VPNs to spread propaganda and misinformation while concealing their identities.

Several accounts have been uncovered and are under investigation.

Shutting down unregistered VPNs is crucial for the nation’s economy.

The PTA is blocking illegal URLs and websites spreading objectionable material upon verification.

Forensic investigations of websites have yielded startling results.

Spreading false information is easier through unregistered VPNs, and terrorists were exploiting these unverified services.


Air University Holds A Revolutionary Event For Students Focusing On Cybersecurity

Air Marshal (R) Asad Lodhi has praised the Pakistan Cyber Security Challenge, an initiative by Air University and the Higher Education Commission that aims to train future cybersecurity experts and pioneers. He was the chief guest at the opening ceremony of the two-day event, held at Air University in Islamabad.
The two-day Pakistan Cyber Security Challenge includes Showdown challenges, the Ideas Cup, and the Pakistan Crypt Challenge.

Air Marshal (R) Abdul Moeed Khan, Vice Chancellor of Air University, gave a speech praising the cybersecurity abilities of Pakistani youth, saying they are among the best in the world and will help the country face cyber threats. He called the Pakistan Cyber Security Challenge 2024 a haven for cybersecurity excellence and praised Air University for its innovative and outstanding work in this area.
Dr. Zia Ul Qayyum, executive director of the Higher Education Commission, also addressed the audience at the inaugural ceremony, discussing how the HEC has created a welcoming atmosphere and helped facilitate projects such as the Pakistan Cyber Security Challenge.
An MoU was also signed as part of the inauguration, and guests and participants were given souvenirs as the opening ceremony came to a close.
