AI terminology can feel like learning a new language. Half the articles you read assume you already know what a “token” is or why “hallucinations” matter.
This AI glossary covers the terms you’ll actually encounter as a manager using AI tools. No computer science degree required. Just practical definitions so you can follow along, ask better questions, and use these tools more effectively.

A
AI (Artificial Intelligence)
Software that can perform tasks typically requiring human intelligence, like understanding language, recognizing patterns, or making decisions. When people say “AI” today, they usually mean generative AI tools like ChatGPT or Claude.
AI Agent
An AI system that can take actions on your behalf, not just answer questions. Instead of telling you how to book a flight, an agent would actually book it. Most current tools are assistants, not agents, but this is changing fast.
API (Application Programming Interface)
A way for software to talk to other software. When a company says they have an “API,” it means developers can build tools that connect to their AI. You’ll mostly hear this in the context of pricing tiers or enterprise features.
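For the curious, here is roughly what a single API call looks like in Python. This is an illustrative sketch based on OpenAI’s publicly documented chat endpoint; the model name, prompt, and exact response fields are examples and can change, so treat the provider’s docs as the source of truth.

```python
import os
import requests

# Illustrative only: one request to OpenAI's chat completions endpoint.
# The API key comes from an environment variable; never hard-code it.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # example model name; check current options
        "messages": [
            {"role": "user", "content": "Summarize this week's status update in three bullets."}
        ],
    },
)

# The generated text comes back inside a JSON structure.
print(response.json()["choices"][0]["message"]["content"])
```

Tools like Copilot or a customer-support chatbot are doing something like this behind the scenes every time you ask them a question.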
C
ChatGPT
OpenAI’s conversational AI tool and the one that started the current AI boom. Available free (GPT-4o mini) or paid (ChatGPT Plus at $20/month for GPT-4o and newer features). The most widely used AI assistant as of 2025.
Claude
Anthropic’s conversational AI tool and ChatGPT’s main competitor. Known for longer context windows and a slightly more conversational tone. Available free (limited) or paid (Claude Pro at $20/month).
Context Window
How much text an AI can “see” at once, measured in tokens. A larger context window means you can paste longer documents or have longer conversations before the AI starts forgetting earlier parts. Claude currently leads here with 200K tokens; ChatGPT offers 128K.
Copilot (Microsoft)
Microsoft’s AI assistant integrated into Word, Excel, Outlook, Teams, and other Microsoft 365 apps. Requires a separate subscription ($30/user/month for business) on top of your existing Microsoft license.
F
Fine-tuning
Training an AI model on specific data to make it better at particular tasks. Companies fine-tune models on their own documents, customer conversations, or industry data. As a manager, you probably won’t fine-tune anything yourself, but you might use tools built on fine-tuned models.
G
Generative AI
AI that creates new content (text, images, code, audio) rather than just analyzing existing content. ChatGPT, Claude, Midjourney, and DALL-E are all generative AI. This is the category of AI most relevant to managers today.
GPT (Generative Pre-trained Transformer)
The technology behind ChatGPT. GPT-4 and GPT-4o are OpenAI’s current flagship models. When someone says “GPT,” they usually mean the model itself; “ChatGPT” is the product you interact with.
H
Hallucination
When AI confidently states something that isn’t true. It might invent facts, cite sources that don’t exist, or misremember details from its training. This is why you should always verify important information, especially names, dates, statistics, and citations.
L
LLM (Large Language Model)
The type of AI that powers tools like ChatGPT and Claude. “Large” refers to the billions of parameters (variables) the model uses. LLMs are trained on massive amounts of text and learn to predict what words come next, which turns out to enable surprisingly sophisticated conversations.
M
Model
The underlying AI system that powers a tool. ChatGPT the product uses GPT-4o the model. Claude the product uses Claude 3.5 Sonnet the model. Different models have different capabilities, and companies regularly release new versions.
Multimodal
AI that can work with multiple types of input, not just text. A multimodal model might understand images, audio, video, or documents. GPT-4o and Claude 3.5 are multimodal, meaning you can upload images and ask questions about them.
N
Natural Language Processing (NLP)
The field of AI focused on understanding and generating human language. LLMs are the current state-of-the-art in NLP. When someone says a tool uses “NLP,” they mean it can work with ordinary written language instead of requiring structured commands or exact keywords.
P
Parameter
A variable in an AI model that gets adjusted during training. More parameters generally means more capability, but also more cost to run. GPT-4 is widely reported to have over a trillion parameters, though OpenAI hasn’t published the figure. You don’t need to understand parameters deeply, but it explains why “larger models” cost more.
Prompt
The input you give to an AI tool. Everything you type into ChatGPT or Claude is a prompt. Better prompts get better results, which is why “prompt engineering” became a thing.
Prompt Engineering
The practice of crafting effective prompts to get better AI outputs. Includes techniques like giving context, specifying format, providing examples, and breaking complex tasks into steps. It’s less mystical than it sounds. Mostly it’s just being clear about what you want.
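To make that concrete, here is a sketch of those ideas applied to one prompt. The scenario, company size, and wording are invented for illustration; the point is the structure: context, task, format, example.

```python
# Illustrative: the same request, written as a clearer prompt.
# The structure (context, task, format, example) matters more than the exact wording.
prompt = """
You are helping a team lead at a 40-person software company.

Task: Draft a short agenda for a 30-minute one-on-one with a new direct report.

Format: 4-5 bullet points, each under 15 words, in plain language.

Example bullet: "Check in on onboarding: what's been confusing so far?"
"""
```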
R
RAG (Retrieval-Augmented Generation)
A technique where AI retrieves relevant information from a database before generating a response. This helps AI give accurate, up-to-date answers based on specific documents rather than just its training data. Enterprise AI tools often use RAG to answer questions about company documents.
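A rough sketch of the pattern, with invented policy snippets standing in for real company documents. Production systems retrieve with embeddings and a vector database; simple keyword overlap is used here just to show the retrieve-then-generate flow.

```python
# Illustrative sketch of the RAG pattern with made-up documents.
documents = {
    "pto-policy": "Employees accrue 1.5 days of PTO per month, capped at 30 days.",
    "expense-policy": "Expenses over $500 require director approval before purchase.",
}

def retrieve(question: str) -> str:
    """Step 1: find the document that overlaps most with the question."""
    words = set(question.lower().split())
    return max(
        documents.values(),
        key=lambda doc: len(words & set(doc.lower().split())),
    )

question = "How many PTO days can I carry over?"
context = retrieve(question)

# Step 2: generate -- the retrieved text is pasted into the prompt so the
# model answers from company documents rather than only its training data.
prompt = f"Answer using only this policy excerpt:\n{context}\n\nQuestion: {question}"
```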
T
Temperature
A setting that controls how creative or random an AI’s responses are. Low temperature (0-0.3) gives more predictable, focused answers. High temperature (0.7-1.0) gives more creative, varied responses. Most users never touch this, but it’s useful for understanding why AI sometimes gives different answers to the same question.
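If you ever do touch it, temperature is just one more field on an API request (same shape as the API example above). A hedged sketch; the model name and exact range are provider-specific:

```python
# Illustrative: temperature is a parameter on the request, nothing more.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "Suggest five names for an internal newsletter."}
    ],
    "temperature": 0.9,  # closer to 1 -> more varied; closer to 0 -> more repeatable
}
```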
Token
The unit AI uses to measure text. A token is roughly 4 characters, or about 3/4 of a word in English; short common words are usually a single token, while longer or unusual words get split into several. Pricing and context limits are measured in tokens. When a tool says it costs “$0.01 per 1K tokens,” that’s roughly 750 words.
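If you want to see token counts for yourself, OpenAI’s open-source tiktoken library will count them. A small sketch; the example sentence is made up, and the cost line reuses the $0.01 per 1K tokens figure above.

```python
import tiktoken  # OpenAI's open-source tokenizer (pip install tiktoken)

# cl100k_base is the encoding used by GPT-4-era models; other models differ slightly.
enc = tiktoken.get_encoding("cl100k_base")

text = "Please draft a short agenda for Monday's team retrospective."
tokens = enc.encode(text)
print(len(tokens), "tokens")

# Rough cost math at $0.01 per 1K tokens:
print(f"${len(tokens) / 1000 * 0.01:.5f} to process this sentence")
```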
Training Data
The text, images, or other content used to teach an AI model. ChatGPT was trained on books, websites, articles, and other internet text. The training data determines what the model knows and also its limitations and biases.
Z
Zero-shot
When AI performs a task without being given examples first. “Write a performance review” with no examples is zero-shot. If you provide examples of what you want, that’s “few-shot.” Zero-shot works surprisingly well for common tasks.
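Here is what the difference looks like in practice. The complaints and summaries are invented examples, and `<complaint text>` is a placeholder for whatever you would paste in.

```python
# Illustrative: the same task asked zero-shot and few-shot.
zero_shot = "Write a one-sentence summary of this customer complaint: <complaint text>"

few_shot = """
Write a one-sentence summary of the customer complaint.

Complaint: "The invoice had the wrong billing address twice in a row."
Summary: Recurring billing-address errors on invoices.

Complaint: "Support took four days to reply to a password reset request."
Summary: Slow support response to a password reset.

Complaint: <complaint text>
Summary:
"""
```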
