Artificial intelligence chatbots are everywhere – and they’re able to make difficult jobs much easier for millions of people.

This month, OpenAI showed off its new ChatGPT o1 model, which is capable of “thinking” and “reasoning”.

“We’ve developed a new series of AI models designed to spend more time thinking before they respond,” OpenAI explained.


Advertisement


“They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.”

It’s the latest major advancement to ChatGPT since the AI chatbot first launched in late 2022.

And it’s available in an early preview for paying ChatGPT members now (but in a limited way).

The Sun spoke to security expert Dr Andrew Bolster, who revealed how this kind of advancement could be a huge win for cyber-criminals.

“Large Language Models (LLMs) continue to improve over time, and OpenAI’s release of their ‘o1’ model is no exception to this trend,” said Dr Bolster, of the Synopsys Software Integrity Group, speaking to The Sun.

Where this generation of LLM’s excel is in how they go about appearing to ‘reason’.

“Where intermediate steps are done by the overall conversational system to draw out more creative or ‘clever’ appearing decisions and responses.

“Or, indeed to self-correct before expressing incorrect responses.”

He warned this brainy new system could be used for carrying out clever scams.

“In the context of cybersecurity, this would naturally make any conversations with these ‘reasoning machines’ more challenging for end-users to differentiate from humans,” Dr Bolster said.

“Lending their use to romance scammers or other cybercriminals leveraging these tools to reach huge numbers of vulnerable ‘marks’.”

He warned that they’d be able to carry out lucrative scams cheaply “at scale for a dollar per hundred responses”.

So how do regular users stay safe?

The good news is that all the old rules for dodging online scams still apply.

“Web users should always be wary of deals that are ‘too good to be true’,” Dr Bolster told us.

“And [they] should always consult with friends and family members to get a second opinion.

“Especially when someone (or something) on the end of a chat window or even a phone call is trying to pressure you into something.”

To combat the new ChatGPT being abused, OpenAI has fitted it out with a whole host of new safety measures.

“As part of developing these new models, we have come up with a new safety training approach that harnesses their reasoning capabilities to make them adhere to safety and alignment guidelines,” OpenAI said.

“By being able to reason about our safety rules in context, it can apply them more effectively.”

“One way we measure safety is by testing how well our model continues to follow its safety rules if a user tries to bypass them (known as ‘jailbreaking’).

Author

  • End Time Headlines

    End Time Headlines is a Ministry that provides News and Headlines from a "Prophetic Perspective" as well as weekly podcasts to inform and equip believers of the Signs and Seasons that we are living in today.

    View all posts