
The Language of AI: Defining the Most Common Terms

The language of artificial intelligence (AI) can seem inaccessible to those not working in the field, so MedPage Today created this glossary of some of the most common AI-related terms.

Definitions have been adapted from multiple sources, including Stanford Medicine, JAMA, and the World Health Organization.

Artificial Intelligence: A branch of computer science that focuses on creating machines, computer programs, or software systems that can perform tasks, solve problems, and learn from those experiences in ways that usually require human intelligence.

Algorithm: A step-by-step set of instructions that a computer follows to solve a problem or perform a task.
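As an everyday illustration (a hypothetical example, not drawn from any of the sources above), the step-by-step instructions for finding the largest number in a list can be written as:

```python
def find_largest(numbers):
    """A simple algorithm: explicit, ordered steps for finding the largest number."""
    largest = numbers[0]        # Step 1: start with the first number
    for n in numbers[1:]:       # Step 2: look at each remaining number in turn
        if n > largest:         # Step 3: if it is bigger, remember it instead
            largest = n
    return largest              # Step 4: report the result

print(find_largest([3, 41, 12, 9, 74, 15]))  # prints 74
```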

Chatbot: A software application that mimics human conversation.

ChatGPT: A chatbot, powered by a large language model, that can generate human-like text. Developed by OpenAI, it’s an example of generative AI.

Deep Learning: A type of machine learning that uses artificial neural networks.

Deep Neural Network: A neural network composed of multiple layers of interconnected “neurons,” enabling it to learn complex patterns in data.

Generative AI: AI models that analyze training data and then generate new data with characteristics similar to that training data, or perform additional tasks they weren’t explicitly trained to do.

GPT-4: A large multimodal AI model that can analyze training data made up of text and images and then generate new output based on specific prompts from human users. Developed by OpenAI, it powers the current version of ChatGPT.

Large Language Model: An AI model trained on large amounts of text that can be adapted to perform a wide range of tasks. It learns the probabilities of sequences of words occurring from a vast body of text. Examples include OpenAI’s GPT-4 and Google’s Bard.
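The idea of learning word-sequence probabilities from text can be sketched in miniature (a toy illustration with a made-up sentence; real large language models learn far richer patterns from billions of documents):

```python
from collections import Counter, defaultdict

def bigram_probabilities(text):
    """Learn P(next word | current word) by counting adjacent word pairs."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    # Convert raw counts into probabilities for each current word
    return {w: {nxt: c / sum(followers.values())
                for nxt, c in followers.items()}
            for w, followers in counts.items()}

corpus = "the patient was stable the patient was discharged"
probs = bigram_probabilities(corpus)
print(probs["patient"])  # {'was': 1.0} -- "was" always follows "patient" here
print(probs["was"])      # {'stable': 0.5, 'discharged': 0.5}
```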

Large Multimodal Model: A machine learning model that incorporates more than one kind of modality for inputs and outputs, such as text, images, videos, and audio.

Machine Learning: A branch of AI in which computers analyze data to perform tasks without being explicitly programmed to do so, using patterns discovered in those data sets.
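A minimal sketch of this idea, using made-up numbers (a hypothetical test result and diagnosis for each patient): instead of a programmer hand-picking a cutoff value, the program finds the cutoff that best fits the labeled examples.

```python
def learn_threshold(values, labels):
    """Machine learning in miniature: find the cutoff that best separates
    two labeled groups, using the data itself rather than a hand-coded rule."""
    best_cutoff, best_correct = None, -1
    for cutoff in values:                      # try each observed value as a cutoff
        correct = sum((v >= cutoff) == bool(label)
                      for v, label in zip(values, labels))
        if correct > best_correct:             # keep the cutoff that labels most
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

# Hypothetical data: a test result for each patient, and whether disease was present
values = [2.1, 3.5, 4.0, 6.2, 7.1, 8.3]
labels = [0,   0,   0,   1,   1,   1]
print(learn_threshold(values, labels))  # prints 6.2, which separates the groups
```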

Natural Language Processing: A branch of AI that uses machine learning to process and interpret human language, whether written or spoken.

Neural Network: A computer system made up of interconnected “artificial neurons” that help computers learn from data and recognize patterns. It’s inspired by the way the human brain works.
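A single artificial “neuron” can be sketched in a few lines (an illustrative simplification with made-up weights; real networks connect thousands or millions of these units and learn the weights from data):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs, passed through
    a sigmoid "activation" that squashes the result to between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Hypothetical example: two input signals combined with learned weights
output = neuron([0.5, 0.8], weights=[0.9, -0.4], bias=0.1)
print(round(output, 3))  # prints 0.557
```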

Prompt Engineering: The process of creating specific inputs for generative large language models or large multimodal models that can be interpreted to produce a desired output.

Training Data: The data set that contains the examples used to teach a machine learning application to recognize patterns and perform tasks.

Transformer: A deep neural network designed to capture relationships and dependencies between elements in a sequence, such as words in a sentence.
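The relationship-capturing mechanism at the heart of a transformer is called “attention.” A minimal sketch with NumPy (an illustrative simplification; real transformers add learned projections, multiple attention heads, and many stacked layers):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted average
    of the rows of V, weighted by how strongly each query matches each key."""
    scores = Q @ K.T / np.sqrt(K.shape[1])         # similarity of queries to keys
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V

# Hypothetical 3-element sequence, each element represented by 2 numbers
x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = attention(x, x, x)
print(out.shape)  # prints (3, 2)
```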

Tuning: The process of adapting a trained large language model to perform well on a specific task or domain.

  • Kristina Fiore leads MedPage Today’s enterprise & investigative reporting team. She’s been a medical journalist for more than a decade and her work has been recognized by Barlett & Steele, AHCJ, SABEW, and others. Send story tips to [email protected].