AI | What does it mean? Decoding commonly used phrases in artificial intelligence

This week, the UK Government is holding a global summit to discuss the future of artificial intelligence (AI), how we can prepare for its growth, and how we can ensure that AI is used responsibly and safely around the world. The revolutionary potential of AI means it will have a profound impact on how we all live our lives, so there is a pressing need to ensure this change is managed responsibly. At the Hartree Centre, we specialise in translating new AI and machine learning techniques into cutting-edge industry applications to solve challenges from fusion energy to healthcare.

Here, we define some commonly used terms in AI to enhance your understanding of the discussions happening at Bletchley Park this week.  

1. Artificial Intelligence (AI)

“In programming a problem for the machine […] it is quite difficult to put oneself in the position of doing without any of the hints which intelligence and experience would suggest to a human.”

Douglas Hartree 

Artificial intelligence, or AI, refers to computer systems or models that can perform tasks that would typically require human intelligence. This includes tasks like recognising objects within an image, understanding the meaning in a piece of writing, or driving a car.  

Because AI is sometimes difficult to define, you may see it used interchangeably with terms such as “Machine Learning” or “Deep Learning”. These both refer to more specific concepts or methods, but what they have in common is using computational power to learn (or “train”) from collected information, or data. This often requires supercomputers, or high performance computing (HPC), to provide the processing power needed to handle tremendous amounts of data.
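
To make the idea of “training” a little more concrete, here is a minimal sketch using the popular scikit-learn library. The data is invented for illustration: the model is shown labelled examples and adjusts its internal parameters to fit them, then makes predictions about examples it has never seen.

```python
# A minimal sketch of "learning from data": the model is shown labelled
# examples and adjusts its internal parameters to fit them.
from sklearn.linear_model import LogisticRegression

# Toy training data (invented): hours of study vs. whether an exam was passed.
hours_studied = [[1], [2], [3], [4], [5], [6], [7], [8]]
passed_exam = [0, 0, 0, 0, 1, 1, 1, 1]

model = LogisticRegression()
model.fit(hours_studied, passed_exam)  # the "training" step

# The trained model can now make predictions about unseen data.
print(model.predict([[4.5]]))        # predicted pass/fail
print(model.predict_proba([[4.5]]))  # estimated probabilities
```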

AI may be defined as “Narrow” or “General”: a Narrow AI (ANI) is goal-oriented and specific to the task it has been trained for, while a General AI (AGI) is a hypothetical agent that could learn to accomplish any task that humans can perform.

2. Frontier AI

“If a machine can think, it might think more intelligently than we do, and then where should we be?”

Alan Turing

Whilst “AI” can refer to a computer model performing any kind of task, the term “Frontier AI” refers to models operating at very high levels of capability – meaning they can do those tasks very well. These models sometimes exhibit abilities that we have never seen in AI before, but being at the very cutting edge of research can mean they are not yet fully understood, even by the experts. One key concern of the AI Safety Summit 2023 is the potential for Frontier AI to exhibit unexpected behaviour, where the developer cannot always explain or understand with certainty how or why the AI came to a specific conclusion. This may lead to decisions being made with real-life consequences that cannot be properly explained – a key concern motivating the need for increased regulation in the field.

3. Foundation Model

“Foundation models can be built ‘on top of’ to develop different applications for many purposes, this makes them difficult – but important – to regulate.”

Elliot Jones

The extreme amount of resources often needed to train an AI means that it can be more efficient to make smaller changes to a pre-built model, which has led to the growth of foundation models. A foundation model is a base model that you can develop and adapt in different ways. First, it is trained to perform a simple task, for example next-word prediction, on a large, broad volume of data. From these simple objectives, strong, generalised capabilities emerge. The base model can then be adapted to perform more specific tasks. For example, you could train a foundation model on the English language as a whole, and use it to develop a variety of more specific AI systems, such as a chatbot that can discuss medical conditions with a patient.
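
As a rough illustration, the sketch below uses the Hugging Face transformers library to load GPT-2 (standing in here as a small, openly available foundation model) and build on it directly, rather than training anything from scratch. In practice, a developer would usually go further and fine-tune the model on domain-specific data.

```python
# A minimal sketch of building "on top of" a pre-trained foundation model,
# using the Hugging Face `transformers` library. GPT-2 is used purely as
# an illustrative, openly available example.
from transformers import pipeline

# Load a general-purpose pre-trained language model...
generator = pipeline("text-generation", model="gpt2")

# ...and use it immediately, with no training from scratch. Adapting it to
# a specialist domain would typically involve fine-tuning on domain data.
print(generator("The patient reported symptoms of", max_length=25)[0]["generated_text"])
```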

BERT, GPT (the family of models behind ChatGPT) and DALL-E are popular examples of foundation models. Building on these models allows us to utilise their strengths, but it can also expose us to any weaknesses the original model may have. The name “foundation model” recalls the foundations of a building: just as a weak foundation undermines everything built on top of it, an unreliable or biased foundation model undermines every system built upon it – which is why it is so important that these models are reliable and unbiased.

4. Generative AI

“As soon as it works, no one calls it AI any more.”

John McCarthy

Generative AI is the name given to AI systems that seek to mimic human creativity and produce completely new content on demand. The abilities of these models have grown rapidly, with AI now capable of writing high-quality text or generating highly realistic images, all based on a “prompt” of user input. The ability to generate “new” material opens the door to AI assistants that can write emails or produce complete documents from short requests, freeing up team members to work on more complex, creative tasks.
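
For a taste of what “prompting” looks like in code, here is a short sketch using the OpenAI Python client (this assumes an API key is set in the OPENAI_API_KEY environment variable, and the model name is illustrative):

```python
# A minimal sketch of prompting a generative AI model via the OpenAI
# Python client. Requires an API key in the OPENAI_API_KEY environment
# variable; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Draft a short, polite email postponing tomorrow's meeting.",
    }],
)
# The "completely new content" requested by the prompt:
print(response.choices[0].message.content)
```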

We recently published a blog post considering how generative AI can be used in UK businesses to enhance productivity.

However, when outsourcing work to AI assistants, we need to ensure that people and organisations don’t place unearned trust in them: we need to understand how generative AI works before relying on what it generates.

Generative AI has come under criticism from various creative industries because it uses or derives meaning from existing data in order to generate new outputs, such as new pieces of artwork or new video content. This means there are many issues yet to be addressed regarding copyright and intellectual property. To navigate this issue, we need transparency around what data was used to create a model and who holds the rights to that data.

5. Natural Language Processing & Large Language Models

“I’m sorry, but as an AI language model… ”

ChatGPT

Natural Language Processing (NLP) is the set of methods we use to help computers understand human language. One important challenge in NLP is the construction of Language Models. These are statistical tools that encode text while capturing its meaning, so that an AI can understand, predict and generate specific sequences of words.
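
To show the statistical idea at its very simplest, here is a toy “bigram” language model written in plain Python (the training text is invented). Real language models are vastly more sophisticated, but the principle of predicting the next word from the words before it is the same.

```python
import random
from collections import defaultdict

# A toy "bigram" language model: it predicts the next word purely from
# counts of which word followed which in its (tiny, invented) training text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
sentence = [word]
for _ in range(6):
    word = random.choice(following[word])
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat slept on the mat and"
```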

Models that achieve a general linguistic capability similar to human conversation are known as Large Language Models (LLMs). Some examples of Large Language Models include BERT, GPT-4, PaLM and LLaMA.

From this fundamental capability, Language Models can be used as foundation models for various tasks such as text generation, language translation, classification and even for non-natural language tasks, like generating computer code. 

More advanced Language Models are built as “Neural Networks”: networks with billions of “neurons”, or trained weights, which make them expressive enough to capture the nuances of language. Because of this, they require enormous volumes of data to train – that is, to learn the language.
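
For a sense of what those “neurons” are, here is a toy neural network defined with the PyTorch library. It has only a few hundred trainable weights, where a large language model has billions, but it is built from the same kind of layered, weighted connections.

```python
import torch.nn as nn

# A tiny neural network: layers of weighted connections ("neurons").
# Large language models are built from the same ingredients, scaled up
# to billions of weights.
model = nn.Sequential(
    nn.Linear(10, 32),  # 10 inputs -> 32 hidden neurons
    nn.ReLU(),          # a non-linearity, so complex patterns can be learned
    nn.Linear(32, 1),   # 32 hidden neurons -> 1 output
)

n_weights = sum(p.numel() for p in model.parameters())
print(f"This toy network has {n_weights} trainable weights.")  # 385
```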

6. AI Ethics & Responsible AI

“AI is good at describing the world as it is today with all of its biases, but it does not know how the world should be.”

Joanne Chen

AI Ethics, Ethical AI and Responsible AI are all terms used to describe the ethical and human consequences of the way an AI works or behaves. The decisions an AI makes do not occur in isolation from the rest of our world.

If an AI is trained on a biased dataset, the model will reflect and even reinforce that bias in its decisions and outputs. For example, suppose you train an AI to screen job applications and find the top 10% most likely to lead to a successful recruitment, based on previous recruitment data. In a male-dominated industry, the AI may rank applications from men higher, because men appear to the algorithm to be more employable. The AI won’t know to take into account the social factors affecting application numbers, or any bias inherent in the human-led recruitment process that created the data it was trained on.
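
The sketch below makes this mechanism visible using a deliberately skewed, entirely synthetic dataset invented for illustration: two identical applications that differ only in a gender feature receive different scores, simply because the model has learned the pattern in its biased training data.

```python
from sklearn.linear_model import LogisticRegression

# Entirely synthetic "historical hiring" data, skewed on purpose.
# Each row is [years_of_experience, is_male]; each label is hired (1) or not (0).
X = [[3, 1], [5, 1], [2, 1], [4, 1], [3, 0], [5, 0], [2, 0], [4, 0]]
y = [1, 1, 1, 1, 0, 1, 0, 0]  # men hired 4/4, women hired 1/4

model = LogisticRegression().fit(X, y)

# Two identical applications, differing only in the gender feature:
print(model.predict_proba([[4, 1]])[0][1])  # higher "hire" probability
print(model.predict_proba([[4, 0]])[0][1])  # lower "hire" probability
```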

An example of this issue was widely publicised in 2018, when Amazon abandoned an AI recruiting tool that exhibited bias against women applicants.

Bias can pop up in a wide range of situations, and it isn’t always as obvious as in this example – it also doesn’t always relate to a protected characteristic. Sometimes it is due to the source data, and sometimes it is a property of how the model was designed or written – we need to examine all aspects of an AI to eradicate it.  

AI Ethics seeks to encode fair, moral or ethical values into AI, from development to implementation. This could mean eradicating bias from training data, investigating how the model makes decisions, checking whether it performs differently for different users, or ensuring a human is trained to work alongside the AI to detect and correct biased outputs. In some cases, ethical AI could mean challenging the very process of deciding which global challenges are prioritised for AI tools to address.

The need to ensure AI doesn’t propagate and exacerbate existing inequalities in our world motivates the development of Explainable AI, where a user can clearly see how an AI model made a particular decision and why.  
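
As a very simple taste of explainability, the sketch below refits the toy recruitment model from the earlier example and inspects its learned coefficients. For a linear model like this, the coefficients show how strongly each input feature pushed the decision, making any learned bias visible and auditable; real explainability tooling for complex models is far more involved.

```python
from sklearn.linear_model import LogisticRegression

# Refit the toy recruitment model from the earlier sketch.
X = [[3, 1], [5, 1], [2, 1], [4, 1], [3, 0], [5, 0], [2, 0], [4, 0]]
y = [1, 1, 1, 1, 0, 1, 0, 0]
model = LogisticRegression().fit(X, y)

# A simple form of explanation: a linear model's coefficients show how
# strongly each feature pushes the decision towards "hire".
for name, weight in zip(["years_of_experience", "is_male"], model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
# A large positive weight on "is_male" exposes the learned bias, so it
# can be detected and corrected.
```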


 
