
Three different types of artificial intelligence, explained

Aug 10, 2023

While the performance of current AI systems may seem impressive, there’s a long way to go before we’re likely to see true human-like capabilities.

AI is everywhere – or so you might be forgiven for thinking. Every corporate announcement and government initiative these days seems to cite it as a badge of honor.

Artificial intelligence has come a long way since the 1950s, when the term was coined by computer scientist John McCarthy – later a Stanford professor emeritus – who defined it as “the science and engineering of making intelligent machines.”

But the words actually cover a number of different technologies and concepts, which can have radically different characteristics and capabilities. So how do the experts categorize the different stages of AI, and how can we expect to see it develop in the future? Let’s dive into three of those terms: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence.

Artificial narrow intelligence (ANI)

Artificial narrow intelligence (ANI), also sometimes known as weak AI, is regarded as the most basic stage of AI, embracing everything from simple rule-based systems and decision-tree systems to artificial neural networks that can recognize patterns and make decisions based on them.
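To make “narrow” concrete, here’s a deliberately tiny, invented example in Python (a rule-based thermostat, not drawn from any real product): it makes perfectly sensible decisions about one thing, and is incapable of doing anything else.

```python
# A minimal sketch of a "narrow", rule-based system: a thermostat controller.
def thermostat_action(temperature_c: float, target_c: float = 21.0) -> str:
    """Decide what a simple heating system should do right now."""
    if temperature_c < target_c - 0.5:
        return "heat"   # too cold: turn the heating on
    if temperature_c > target_c + 0.5:
        return "off"    # warm enough: turn the heating off
    return "hold"       # close to target: leave things as they are

print(thermostat_action(18.2))  # -> "heat"
```

However sensible its output looks, a system like this will only ever map temperatures to heating commands – the defining limitation of narrow AI.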

It includes ‘genetic algorithms’ and ‘evolutionary computation’ that use natural selection to improve performance over time, as well as fuzzy logic systems and Bayesian networks that can make use of imprecise or incomplete information.
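As a loose illustration of that idea, the sketch below (an invented toy problem, not production code) evolves a string of bits towards all ones by repeatedly selecting the fittest candidates, recombining them, and mutating the results – “natural selection” improving performance over successive generations.

```python
import random

TARGET_LEN = 20

def fitness(genome):
    return sum(genome)  # more ones = fitter

def evolve(generations=50, pop_size=30, mutation_rate=0.02):
    # Start from a random population of bit-strings.
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]      # selection: keep the fittest half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, TARGET_LEN)  # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]             # occasional mutation
            children.append(child)
        population = children
    return max(population, key=fitness)

print(fitness(evolve()), "out of", TARGET_LEN)
```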

However, ANI is reliant on the data it’s trained on, and can’t teach itself to perform new tasks.

It’s used widely in tasks such as manufacturing assembly, supply chain management, customer service, healthcare diagnosis, financial data analysis, and in personal assistants such as Siri and Alexa.

Right now, perhaps the most prominent examples of ANI are ChatGPT and similar generative AI systems. Fed with data harvested from the internet, such models – entertainingly and rather accurately described as ‘auto-complete on steroids’ – may appear sophisticated but in fact perform only a narrow function and are incapable of genuine reasoning or of learning new tasks on their own.

They are also prone to, well, making things up – a phenomenon known as hallucination.
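For a sense of what “auto-complete” means at its very simplest, here is a toy sketch (invented purely for illustration – real generative models are incomparably more sophisticated): count which word tends to follow which in some text, then always suggest the most common follower.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat chased the dog".split()

# Count, for every word, which words have followed it.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def autocomplete(word: str) -> str:
    """Suggest the word most often seen after `word` in the training text."""
    if word not in followers:
        return "?"  # the model knows nothing beyond its training data
    return followers[word].most_common(1)[0][0]

print(autocomplete("the"))  # -> "cat"
```

A model like this will happily produce fluent-looking output with no notion of whether any of it is true – hallucination in miniature.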

Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI), also known as Strong AI, is widely seen as the next stage for AI – and is what most people would imagine a true AI to be.

An AGI would be capable of the same level of learning and understanding as a human being, and of carrying out the same level of intellectual tasks – while having instant access to a far greater range of data.

Unsurprisingly, the concept has caused a certain level of alarm, with Stephen Hawking warning that it could even spell the end of the human race: “It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Some are concerned that AGI could be just around the corner – including, for what it’s worth, Elon Musk.

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast – it is growing at a pace close to exponential,” he commented on an essay by computer scientist Jaron Lanier.

“The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most.”

But before you start to panic, it’s worth noting that he made this comment way back in 2014. And other, perhaps more expert, observers currently believe that AGI isn’t going to be taking over the world any time soon.

In 2017, for example, Richard Sutton, professor of computer science at the University of Alberta, suggested that there was a 25% chance of it emerging by 2030, a 50% chance by 2040, and a 10% chance that it would never materialize at all.

Artificial Superintelligence (ASI)

This is where we get into real science fiction territory. Artificial Superintelligence (ASI) would, as the name implies, surpass human intelligence in every way. It could be self-aware, with its own emotions, beliefs, and desires.

Two years ago, an international group of researchers concluded that a true ASI could not be contained: no containment algorithm could guarantee that such a system would never harm people, under any circumstances.

“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations,” explained team member Iyad Rahwan, director of the Center for Humans and Machines.

“If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable.”
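The obstacle the researchers describe echoes a classic result in computer science, Turing’s halting problem. The sketch below is a loose illustration of the flavour of that argument (the function names are invented here, not taken from the paper): if a perfect containment check existed, you could write a program that asks the check about itself and then does the opposite of whatever the check predicts – so no such check can exist for every possible program.

```python
def is_harmless(program_source: str) -> bool:
    """Hypothetical perfect containment check (assumed to exist, for contradiction)."""
    raise NotImplementedError("No algorithm can decide this for all programs.")

TROUBLEMAKER = '''
def main():
    if is_harmless(TROUBLEMAKER):  # ask the checker about this very program...
        cause_harm()               # ...and then do exactly what it ruled out
    else:
        stay_idle()                # ...or behave, again proving the checker wrong
'''
# Whatever answer is_harmless gives about TROUBLEMAKER is wrong,
# so a checker that works for all programs cannot exist.
```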

Recently, however, ChatGPT creator OpenAI announced the creation of a dedicated team charged with ensuring that these fears never come to pass.

It says it plans to dedicate 20% of its computing power to this effort, and is currently recruiting staff.

“Our alignment research aims to make artificial general intelligence (AGI) aligned with human values and follow human intent,” the team explains.

“We take an iterative, empirical approach: by attempting to align highly capable AI systems, we can learn what works and what doesn’t, thus refining our ability to make AI systems safer and more aligned.”

In the long term, we can expect to see more initiatives such as this, as well as increased national and international controls on the development and use of AI. And while there’s no doubt about the potential dangers of an ASI, we’re a lot further from seeing one than might be apparent from much of the hype.

In short, there’s plenty of time.
