
Artificial Intelligence

Understanding the key machine learning terms for AI

· 7 minute read

With reports about the latest artificial intelligence developments seemingly popping up daily, there are a lot of terms related to AI, and not all of them are easy to understand. When it comes to machine learning, the computer science jargon is especially dense. This article covers the most commonly used machine learning terms that working professionals should have a basic understanding of.

What is machine learning?  

Machine learning is the foundational discipline of AI, centered on the study of algorithms that learn from data. It allows computers to build flexible models from data and the relationships within it, enabling a system to perform certain tasks and make predictions.

By using multiple data points to identify patterns over time, machine learning powers technology that can eventually make decisions or recommendations. This stands in contrast to traditional programming, which requires explicit instructions for every aspect of a task. For decades, machines had to be told everything. With artificial intelligence, they learn.
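To make that contrast concrete, here is a minimal sketch (illustrative only, in Python): the same conversion task solved once with an explicit, hand-written rule, and once with a rule inferred purely from example data.

```python
import numpy as np

# Traditional programming: a human writes the rule explicitly.
def to_fahrenheit_explicit(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: the rule is inferred from example data points.
celsius = np.array([-10.0, 0.0, 10.0, 20.0, 30.0, 40.0])
fahrenheit = np.array([14.0, 32.0, 50.0, 68.0, 86.0, 104.0])

# Fit a line (slope and intercept) to the examples by least squares.
slope, intercept = np.polyfit(celsius, fahrenheit, deg=1)

def to_fahrenheit_learned(c):
    return slope * c + intercept

print(to_fahrenheit_explicit(25.0))  # 77.0, from the hand-written rule
print(to_fahrenheit_learned(25.0))   # ~77.0, recovered purely from the data
```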

What you need to know about machine learning 

Your phone knows you. It learns that you always head home around 5:15 pm, and based on that information it can predict how long it's going to take you to get there by analyzing factors such as time of day and the actual movement of traffic. It is learning from a combination of historical traffic patterns and real-time data about that day's traffic. Machine learning also shows up in personalized recommendations, face and voice recognition, and a host of other applications.
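A hedged sketch of how such a prediction might work: a simple linear model fit to past trips, with a real-time congestion reading as one of its inputs. The feature names and numbers below are invented for illustration and are not how any actual phone does it.

```python
import numpy as np

# Historical trips: [hour_of_day, live_congestion_index] -> minutes to get home.
X = np.array([
    [17.25, 0.2],   # 5:15 pm, light traffic
    [17.25, 0.8],   # 5:15 pm, heavy traffic
    [20.00, 0.1],   # 8:00 pm, very light traffic
    [17.50, 0.5],
    [18.00, 0.6],
])
y = np.array([22.0, 41.0, 18.0, 30.0, 34.0])  # observed trip durations

# Fit a linear model by least squares (the ones column supplies an intercept).
A = np.hstack([X, np.ones((len(X), 1))])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict today's 5:15 pm trip given the current congestion reading.
today = np.array([17.25, 0.65, 1.0])
print(f"Estimated minutes home: {today @ weights:.1f}")
```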

What are neural networks? 

A neural network is a method of teaching a computer to process data like the human brain. Neural networks are a machine learning process that uses interconnected nodes (or artificial neurons) to extract meaning from large amounts of data; a bare-bones sketch of one follows the list below. Some applications of neural networks include:

  • Computer vision (e.g. facial recognition or content moderation) 
  • Speech recognition (e.g. creating a transcript from audio) 
  • Financial predictions 
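Here is the promised sketch: a tiny feedforward network in Python whose "nodes" are just weighted sums passed through an activation function. The weights are random rather than trained, so this illustrates only the structure of interconnected nodes, not a working recognizer.

```python
import numpy as np

def sigmoid(z):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# 3 input features -> 4 hidden nodes -> 1 output node.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # connections: inputs -> hidden layer
W2 = rng.normal(size=(4, 1))   # connections: hidden layer -> output

def forward(x):
    hidden = sigmoid(x @ W1)       # each hidden node mixes all the inputs
    output = sigmoid(hidden @ W2)  # the output node mixes all hidden nodes
    return output

x = np.array([0.5, -1.2, 0.3])   # e.g. pixel or audio features
print(forward(x))                # a score between 0 and 1
```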

What is Deep Learning?  

Deep learning is a subset of machine learning that uses artificial neural networks with multiple layers. Think of a deep learning model as the product of AI researchers working to code something comparable to a human brain, with its extensive interconnections, rather than a traditional linear “if this, then that” program. Deep learning does this by creating parameters and applying weights to them – building a model of what's important and how the important factors interact with each other – and then modifying those weights over time and across iterations.

Through those weighted parameters, successful deep learning models find and use patterns in data that aren't easily visible or intuitive to humans.
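A minimal sketch of that "modify the weights over iterations" loop, using a toy task (XOR) and a small two-layer network trained by gradient descent. The task, network size, and learning rate are all chosen for illustration, not drawn from any production system.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # layer 1 weights and biases
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # layer 2 weights and biases
lr = 0.5                                        # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: data flows through two layers of weighted connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: how should each weight move to reduce the error?
    grad_out = (out - y) * out * (1 - out)
    grad_h = grad_out @ W2.T * h * (1 - h)

    # The "modifying those weights over time" step: nudge every parameter.
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

# Predictions should approach [0, 1, 1, 0] after training.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```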

What you need to know about Deep Learning 

Deep learning models already power machine translation and are at the core of self-driving car technology. Modern chatbots rely on deep learning, and the healthcare industry in particular has made strong use of these models for computer-aided diagnosis.

In other professional fields, deep learning allows well-trained AI systems to assemble documents based on user inputs and established guidelines – for example, to output a draft document based on an intake form.  
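The deep learning in such systems is far more sophisticated than anything that fits here, but the input-to-draft shape of the workflow can be sketched very simply. The intake fields and template below are invented for illustration.

```python
# A hypothetical intake form, as a plain dictionary of user inputs.
intake_form = {
    "client_name": "Acme Corp",
    "effective_date": "2024-01-15",
    "term_months": 12,
}

# Established guidelines, reduced here to a fixed template; a trained
# model would instead choose and adapt language based on the inputs.
template = (
    "ENGAGEMENT LETTER\n"
    "Client: {client_name}\n"
    "Effective date: {effective_date}\n"
    "Term: {term_months} months\n"
)

draft = template.format(**intake_form)
print(draft)
```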

What are Foundation Models?  

A foundation model is an AI model trained on a very large quantity of data, often by self-supervised or semi-supervised learning. In other words, the model starts from a “corpus” (the dataset it's being trained on) and generates outputs, over and over, checking those outputs against the original data. Once trained, foundation models gain the ability to output complex, structured responses to prompts that resemble human replies.

The advantage of a foundation model over previous deep learning models is that it is general, able to be adapted to a wide range of downstream tasks.
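A miniature, hedged illustration of that self-supervised loop: the corpus supplies its own labels, because each word is the "correct answer" for the word that precedes it. Real foundation models use enormous neural networks rather than the simple counting below, but the training signal has the same shape.

```python
from collections import Counter, defaultdict

corpus = (
    "the model reads the corpus and the model predicts the next word "
    "and checks the prediction against the corpus"
).split()

# Count which word follows which: no human labeling required, because
# the text itself says what the "right" next word was.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # The most frequent continuation seen in the training data.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'model' here (ties break by first occurrence)
```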

What you need to know about Foundation Models 

Foundation models can start from very simple data – albeit vast quantities of it – to build and learn very complex things. Think about how your profession is made up of many interwoven, complex, and nuanced concepts and jargon: a good foundation model offers the potential to quickly and correctly answer your questions, using that vast corpus of knowledge to deliver responses in understandable language.

Some things foundation models are good at:  

  • Translation (from one language to another)
  • Classification (putting items into correct categories)  
  • Clustering (grouping similar things together)  
  • Ranking (determining relative importance)  
  • Summarization (generating a concise summary of a longer text)  
  • Anomaly Detection (finding uncommon or unusual things)  

Those capabilities could easily be a great benefit to professionals in their day-to-day work – for example, reviewing large quantities of documents to find similarities and variances, and determining which are of the highest importance.
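As a toy instance of the ranking capability above: scoring documents against a query using plain bag-of-words vectors and cosine similarity. The documents and query are invented, and real systems would use learned embeddings rather than raw word counts.

```python
import math
from collections import Counter

docs = {
    "engagement_letter.txt": "fees scope of services client obligations",
    "nda.txt": "confidential information disclosure obligations term",
    "lunch_menu.txt": "sandwich soup salad coffee",
}
query = "client confidentiality obligations"

def cosine(a: Counter, b: Counter) -> float:
    # Angle-based similarity between two word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

q = Counter(query.split())
ranked = sorted(docs, key=lambda d: cosine(q, Counter(docs[d].split())),
                reverse=True)
print(ranked)  # documents sharing vocabulary with the query rank first
```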

What is a Large Language Model?  

Large Language Models (LLMs) are a subset of foundation models, typically specialized and fine-tuned for particular tasks or domains. An LLM can be fine-tuned for a wide variety of downstream tasks, such as text classification, question answering, translation, and summarization. That fine-tuning process helps the model adapt its language understanding to the specific requirements of a particular task or application.

LLMs are often used for natural language processing applications and are known for generating coherent, contextually relevant text based on the input provided. But LLMs are also subject to hallucinations, in which outputs confidently assert claims that are not actually true or supported by their training data. This is not necessarily a bad thing in all cases – it can be advantageous for an LLM to mimic human creativity (like asking it to write song lyrics in the style of Taylor Swift) – but it is a serious concern when citing sources in a professional context. Hallucinated citations have tended to decrease as LLMs are trained more carefully, both on vast, diverse data and for specific tasks, and as human reviewers flag those errors.
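One way that concern translates into practice is to treat every citation in an LLM's output as unverified until checked. In the sketch below, generate is a hypothetical stand-in for whatever LLM client is actually in use – it is not a real library call – and the point is only the shape of the guardrail.

```python
def generate(prompt: str) -> str:
    # Hypothetical placeholder for a real LLM call; returns canned text here.
    return "Smith v. Jones, 123 F.3d 456 (1997), held that ..."

def draft_with_citation_check(prompt, verify_citation):
    draft = generate(prompt)
    # Guardrail: every line that looks like a case citation must be
    # confirmed by the caller-supplied verifier (a citator, a docket
    # lookup, or a human reviewer) before the draft is relied on.
    checked = []
    for line in draft.splitlines():
        if "v." in line and not verify_citation(line):
            line = f"[UNVERIFIED] {line}"
        checked.append(line)
    return "\n".join(checked)

# Until a verifier is wired up, everything stays flagged.
print(draft_with_citation_check("Summarize precedent on X",
                                verify_citation=lambda c: False))
```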

What you need to know about Large Language Models 

We already knew computers were good at manipulating numeric data, from Microsoft Excel to VBA to more complex databases. With LLMs, an even greater power of analysis and manipulation can be applied to unstructured data made up of words – legal or accounting treatises and regulations, the entire corpus of an organization's documents, and datasets larger still.

LLMs promise to be the same force multiplier for professionals who work with words, risks, and decision-making as Excel was for professionals who work with numbers.  

What is cognitive computing? 

Cognitive computing combines machine learning, language processing, and data mining to assist human decision-making. It differs from other AI in that it partners with a human to find the best answer rather than choosing one on its own. The healthcare example from deep learning applies here too: doctors use cognitive computing to help make a diagnosis, drawing on their own expertise while also being aided by machine learning.

The future of machine learning 

Machine learning has become one of the most exciting and rapidly growing fields in computer science in recent years, and it is here to stay. As the world generates more and more data every day, the need for intelligent algorithms that can make sense of this data and extract insights has only grown. 

However, as with any technology, there are also potential risks and ethical questions that must be carefully weighed and addressed. As we move forward, it is important to approach machine learning with a balanced perspective, recognizing both its potential and its limitations, and working to ensure that it is developed and deployed responsibly.

For more information about ChatGPT and generative AI, read our companion article on the current state of artificial intelligence in 2023, and visit our hub on artificial intelligence. 
