Artificial intelligence – or AI – is a game changer because it is a genuine leap forward in the capabilities of computers. It enables them to mimic aspects of the workings of the human mind and indeed to exceed them. You could say it allows computers to think for themselves.
The key processes AI is capable of are knowledge acquisition, reasoning, learning from mistakes, problem solving and creativity.
The launch of AI chatbot ChatGPT by US company OpenAI in November 2022 brought the technology to many people’s attention. So much so that by February 2023 it had 100 million active users and was, according to a UBS study, the fastest-growing app ever released.
In February 2023 Google unveiled its own AI chatbot, Bard, which is free to over-18s with a personal Google account, or via Workspace accounts if enabled by the employer. Google describes it as a conversational AI tool, and it offers more complex answers than those provided by a Google search. Examples provided by Google include help starting or finishing creative projects, drafting blog posts, and answering technical queries about coding and software error messages.
In May 2023 Google launched its Search Generative Experience, which is available to users who sign up via Google Labs. This offers an AI-powered search experience that, like Bard, can give discursive answers to conversational enquiries, and it can support those answers with videos and images. Google searches are likely to rely increasingly on AI to provide fuller, more personalised answers.
Over at Microsoft, Bing Chat has launched in collaboration with OpenAI. Microsoft has made major investments in OpenAI, and is using developments there to help it integrate AI into the search experience. Bing Chat is free to use and also answers complex questions in a conversational style. It can write (not very good) poems and essays, offer recipe tips, answer technical and scientific questions, and indeed have a stab at answering pretty much any question people throw at it.
However, before these innovations people had already been using AI without necessarily realising it, because personal digital assistants such as Apple’s Siri and Amazon’s Alexa rely on deep learning and machine learning algorithms.
To help make sense of the technology, it is worth having a look at its terminology, starting with its fundamental building block.
Algorithms
These are step-by-step instructions or rules followed by machines to enable them to complete a given task.
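As an illustration only, here is a short Python sketch of an algorithm: a fixed sequence of steps a machine follows, in this case to find the largest number in a list. The function name and data are invented for this example.

```python
# A toy algorithm: step-by-step rules for finding the largest number in a list.
def largest(numbers):
    biggest = numbers[0]           # step 1: start with the first number
    for n in numbers[1:]:          # step 2: look at each remaining number in turn
        if n > biggest:            # step 3: if it is bigger, remember it
            biggest = n
    return biggest                 # step 4: report the result

print(largest([3, 41, 7, 29]))     # prints 41
```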
Computer vision
This technology uses AI to analyse and recognise images. It is finding applications in medicine, for example, to help clinicians with diagnoses by examining scans. The Amazon Go shops that have started to appear on UK high streets also use the technology to enable people to pay for goods without needing to stop at a till. Not surprisingly it is an essential part of how autonomous vehicles (AVs) navigate the roads.
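For readers curious about what image recognition looks like in code, here is a minimal sketch using the freely available PyTorch and torchvision libraries and a pretrained classifier. The file name photo.jpg is a placeholder, and real systems in cars or shops are far more sophisticated.

```python
# A minimal image-recognition sketch using a pretrained network (torchvision).
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT        # a small pretrained model
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                # resize/normalise as the model expects

image = Image.open("photo.jpg")                  # placeholder file name
batch = preprocess(image).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    scores = model(batch)
label = weights.meta["categories"][scores.argmax().item()]
print(label)                                     # e.g. "tabby cat"
```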
Deep learning
A more complex form of machine learning (see below) that can process a wider range of data types. It uses neural networks, which are inspired by how neurons in the brain interact, and has a deeper and more complex analytical capability. ‘Deep’ refers to the multiple layers that data passes through from entering the system to leaving it. Banks use it to detect potentially fraudulent transactions and companies rely on it to patrol the internet for copyright infringements or the misuse of corporate identity. AVs, which are now in limited use in parts of the US, also use the technology.
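As a rough illustration of what ‘multiple layers’ means, here is a tiny PyTorch sketch of a network whose data passes through several layers in turn. The layer sizes and the fraud-detection framing are invented for this example.

```python
# A tiny "deep" network: data flows through several layers of artificial neurons.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(10, 32),   # input layer: 10 numbers describing, say, a transaction
    nn.ReLU(),
    nn.Linear(32, 32),   # hidden layer
    nn.ReLU(),
    nn.Linear(32, 2),    # output layer: scores for "legitimate" vs "fraudulent"
)

transaction = torch.randn(1, 10)   # made-up example input
scores = model(transaction)
print(scores)
```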
Expert systems
These often have practical uses supporting the likes of financial analysts, lawyers and doctors. Essentially they are made up of three parts: a knowledge base; an inference engine, which uses the knowledge base to make deductions and predictions; and an interface with the user. Robo-advisers (computerised asset managers) are an example. Expert systems can help lawyers decide if a case is winnable, or a doctor might turn to one to decide which antibiotic to give a patient.
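Those three parts can be seen in this deliberately simplified Python sketch. The medical rules are invented for illustration and are not real clinical advice.

```python
# Toy expert system: knowledge base + inference engine + user interface.
# The rules below are invented for illustration only.
knowledge_base = [
    ({"infection": "strep", "penicillin_allergy": False}, "consider penicillin"),
    ({"infection": "strep", "penicillin_allergy": True},  "consider erythromycin"),
]

def infer(facts):
    """Inference engine: return the first recommendation whose conditions all hold."""
    for conditions, recommendation in knowledge_base:
        if all(facts.get(key) == value for key, value in conditions.items()):
            return recommendation
    return "no recommendation - refer to a specialist"

# User interface: the clinician supplies the facts about the patient.
print(infer({"infection": "strep", "penicillin_allergy": True}))
```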
Generative AI
Examples include ChatGPT, Bing Chat and Bard, so called because they generate content in response to a prompt such as a question.
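For the curious, this is roughly what sending a prompt to a generative AI model looks like in code, sketched with OpenAI’s Python library. The model name and prompt are placeholders, and the call requires a paid API key.

```python
# A minimal sketch of prompting a generative AI model via OpenAI's Python library.
# Assumes an API key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",                       # placeholder model name
    messages=[{"role": "user", "content": "Write a two-line poem about tea."}],
)
print(response.choices[0].message.content)       # the generated content
```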
Machine learning
This is heavily based on algorithms. Machine learning systems are usually fed very large datasets, which they use to detect patterns and make predictions. Further data can be added to improve decision making, and the systems learn from their previous predictions. Examples of uses include weather forecasting, medical results analysis and digital twins, where a virtual copy of a mechanical or engineering process or system is made to help predict when faults might develop or servicing is required.
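As a minimal sketch of the idea, the Python example below uses the scikit-learn library to learn a pattern from a handful of invented weather readings and then make a prediction. Real forecasting systems use vastly more data and far more sophisticated models.

```python
# Machine learning in miniature: fit a model to example data, then predict.
from sklearn.linear_model import LinearRegression

# Invented toy dataset: [humidity %, pressure hPa] -> next-day temperature in C
X = [[70, 1012], [55, 1020], [85, 1005], [60, 1018], [90, 1000]]
y = [14.0, 18.5, 11.0, 17.0, 9.5]

model = LinearRegression().fit(X, y)     # detect the pattern in the examples
print(model.predict([[65, 1015]]))       # predict for a new, unseen day
```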
Natural language processing
NLP processes human language. Its most common use is probably spam detection. NLP software looks at your incoming message, both subject line and text, and if it thinks it is junk, chucks it into the spam folder. Translation apps and speech recognition also use it.
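Here is a toy spam filter in Python, sketched with scikit-learn. The training messages and labels are invented, and real filters are trained on millions of examples.

```python
# A toy spam filter: turn message text into word counts, then classify.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "claim your free money today",
            "meeting moved to 3pm", "please see the attached report"]
labels = ["spam", "spam", "not spam", "not spam"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)                        # learn from labelled examples

print(spam_filter.predict(["free prize waiting for you"]))  # likely ['spam']
```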
Should I trust AI?
Companies are incorporating AI-speak into their marketing so it is worth being on the lookout for hyped-up claims. Your brand new AI-powered toothbrush is probably no better than the electric one you just threw out.
And there have been some high-profile examples of AI getting it wrong. Google owner Alphabet lost around $100 billion in market value in February 2023 after Bard made a factual mistake in its first demonstration.
Users, too, have reported various errors and biases in answers, but AI’s computational power is improving rapidly and it learns from its mistakes. In all likelihood, in a few years we will struggle to remember how we managed without it, just as we do now with the internet.