AI terminology can feel like learning a new language. Half the articles you read assume you already know what a “token” is or why “hallucinations” matter.
This AI glossary covers the terms you’ll actually encounter as a manager using AI tools. No computer science degree required. Just practical definitions so you can follow along, ask better questions, and use these tools more effectively.
A
AI (Artificial Intelligence)
Software that can perform tasks typically requiring human intelligence, like understanding language, recognizing patterns, or making decisions. When people say “AI” today, they usually mean generative AI tools like ChatGPT or Claude.
AI Agent
An AI system that can take actions on your behalf, not just answer questions. Instead of telling you how to book a flight, an agent would actually book it. Most current tools are assistants, not agents, but this is changing fast.
API (Application Programming Interface)
A way for software to talk to other software. When a company says they have an “API,” it means developers can build tools that connect to their AI. You’ll mostly hear this in the context of pricing tiers or enterprise features.
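To make "software talking to software" concrete, here's a sketch of the kind of JSON payload a developer's tool assembles before sending it to a chat-style AI API. The field names follow the common chat-completion convention; the exact names and endpoint vary by vendor, so treat this as illustrative rather than any specific company's API.

```python
import json

# A sketch of the JSON payload a tool sends to a chat-style AI API.
# Field names follow the common chat-completion convention; check your
# vendor's API reference for the exact names and endpoint.
payload = {
    "model": "gpt-4o",  # which model to use
    "messages": [
        {"role": "user", "content": "Summarize this meeting in 3 bullets."}
    ],
    "max_tokens": 200,  # cap on the length of the response
}

# The tool would POST this JSON to the vendor's endpoint and read the
# generated text out of the response. Here we just show the payload.
print(json.dumps(payload, indent=2))
```

When a vendor advertises "API access," this request-and-response exchange is what developers are buying the right to automate.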
B
Bias
Systematic errors in AI output caused by patterns in training data. AI models can reflect stereotypes, favor certain perspectives, or produce uneven results across demographics. Especially relevant for managers writing reviews or evaluating candidates, where biased AI suggestions could create real problems.
Benchmark
A standardized test used to compare AI model performance. When a company says their model “scores 90% on MMLU,” they’re referencing a benchmark. Useful for understanding marketing claims, but real-world performance on your specific tasks matters more than any benchmark score.
C
ChatGPT
OpenAI’s conversational AI tool and the one that started the current AI boom. Available free (GPT-4o mini) or paid (ChatGPT Plus at $20/month for GPT-4o and newer features). The most widely used AI assistant as of 2025.
Claude
Anthropic’s conversational AI tool and ChatGPT’s main competitor. Known for longer context windows and a slightly more conversational tone. Available free (limited) or paid (Claude Pro at $20/month).
Context Window
How much text an AI can “see” at once, measured in tokens. A larger context window means you can paste longer documents or have longer conversations before the AI starts forgetting earlier parts. Claude currently leads here with 200K tokens; ChatGPT offers 128K.
Copilot (Microsoft)
Microsoft’s AI assistant integrated into Word, Excel, Outlook, Teams, and other Microsoft 365 apps. Requires a separate subscription ($30/user/month for business) on top of your existing Microsoft license.
D
Data Privacy
How your information is handled when you use AI tools. Some tools use your conversations to train future models, others don’t. Enterprise plans typically offer stronger privacy guarantees. Worth checking before pasting sensitive employee data into any AI tool.
Deep Learning
A subset of machine learning that uses layered neural networks to learn complex patterns. It’s the technology behind modern AI tools like ChatGPT and image generators. You don’t need to understand the mechanics, but knowing the term helps when reading about AI capabilities and limitations.
E
Embedding
A way AI converts text into numbers so it can measure how similar different pieces of content are. This powers features like semantic search, where you can search by meaning rather than exact keywords. Enterprise knowledge bases and AI-powered search tools rely on embeddings.
Enterprise AI
AI tools designed for business use with features like data privacy, admin controls, team management, and compliance certifications. OpenAI’s ChatGPT Enterprise and Anthropic’s Claude for Business are examples. Typically more expensive but necessary if your company handles sensitive information.
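The Embedding entry describes turning text into numbers so similarity becomes arithmetic. Here's a toy sketch of that idea: the three-number vectors below are made up for illustration (real embeddings have hundreds or thousands of numbers), but the comparison works the same way.

```python
import math

# Made-up 3-number "embeddings" for three headlines. Real embeddings
# have hundreds or thousands of numbers, but the comparison is the same.
headline_a = [0.9, 0.1, 0.0]   # "Quarterly revenue beats forecast"
headline_b = [0.8, 0.2, 0.1]   # "Earnings top analyst estimates"
headline_c = [0.0, 0.1, 0.9]   # "Local team wins championship"

def cosine_similarity(u, v):
    """Similarity of two vectors: near 1.0 = similar, near 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The two finance headlines score far closer to each other than either
# does to the sports headline, even though they share no keywords.
assert cosine_similarity(headline_a, headline_b) > cosine_similarity(headline_a, headline_c)
```

That "no shared keywords, still matched" property is exactly what semantic search buys you over keyword search.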
F
Fine-tuning
Training an AI model on specific data to make it better at particular tasks. Companies fine-tune models on their own documents, customer conversations, or industry data. As a manager, you probably won’t fine-tune anything yourself, but you might use tools built on fine-tuned models.
G
Generative AI
AI that creates new content (text, images, code, audio) rather than just analyzing existing content. ChatGPT, Claude, Midjourney, and DALL-E are all generative AI. This is the category of AI most relevant to managers today.
GPT (Generative Pre-trained Transformer)
The technology behind ChatGPT. GPT-4 and GPT-4o are OpenAI’s current flagship models. When someone says “GPT,” they usually mean the model itself; “ChatGPT” is the product you interact with.
H
Hallucination
When AI confidently states something that isn’t true. It might invent facts, cite sources that don’t exist, or misremember details from its training. This is why you should always verify important information, especially names, dates, statistics, and citations.
I
Inference
The process of an AI model generating a response to your input. Every time you send a message to ChatGPT, that’s an inference call. Inference costs money to run, which is why AI tools have usage limits and why responses sometimes slow down during peak hours.
Iteration
The process of refining AI output through multiple rounds of feedback. Instead of expecting a perfect first response, you give the AI feedback and ask it to adjust. Most effective AI users iterate 2-3 times on important outputs rather than accepting the first draft.
L
LLM (Large Language Model)
The type of AI that powers tools like ChatGPT and Claude. “Large” refers to the billions of parameters (variables) the model uses. LLMs are trained on massive amounts of text and learn to predict what words come next, which turns out to enable surprisingly sophisticated conversations.
M
Model
The underlying AI system that powers a tool. ChatGPT the product uses GPT-4o the model. Claude the product uses Claude 3.5 Sonnet the model. Different models have different capabilities, and companies regularly release new versions.
Multimodal
AI that can work with multiple types of input, not just text. A multimodal model might understand images, audio, video, or documents. GPT-4o and Claude 3.5 are multimodal, meaning you can upload images and ask questions about them.
N
Natural Language Processing (NLP)
The field of AI focused on understanding and generating human language. LLMs are the current state-of-the-art in NLP. When someone says a tool uses “NLP,” they mean it can interpret and work with everyday written language rather than requiring rigid commands.
O
Open Source
AI models whose code is publicly available for anyone to use, modify, and deploy. Meta’s Llama and Mistral are popular open-source models. They offer more control and privacy since you can run them on your own servers, but require more technical expertise to set up.
Output
Whatever the AI generates in response to your prompt. Could be text, code, a summary, or a structured document. The quality of your output depends heavily on the quality of your input. Vague prompts produce vague output.
P
Parameter
A variable in an AI model that gets adjusted during training. More parameters generally mean more capability, but also higher costs to run. GPT-4 is rumored to have over a trillion parameters, though OpenAI hasn’t published the number. You don’t need to understand parameters deeply, but they explain why “larger models” cost more.
Prompt
The input you give to an AI tool. Everything you type into ChatGPT or Claude is a prompt. Better prompts get better results, which is why “prompt engineering” became a thing.
Prompt Engineering
The practice of crafting effective prompts to get better AI outputs. Includes techniques like giving context, specifying format, providing examples, and breaking complex tasks into steps. It’s less mystical than it sounds. Mostly it’s just being clear about what you want.
R
RAG (Retrieval-Augmented Generation)
A technique where AI retrieves relevant information from a database before generating a response. This helps AI give accurate, up-to-date answers based on specific documents rather than just its training data. Enterprise AI tools often use RAG to answer questions about company documents.
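The retrieve-then-generate pattern can be sketched in a few lines. This toy version ranks documents by shared keywords to stay self-contained; real RAG systems use embeddings for retrieval, and the document names and question below are invented for illustration.

```python
# Minimal sketch of the RAG pattern: retrieve relevant snippets first,
# then paste them into the prompt. Real systems retrieve by embedding
# similarity; simple word overlap is used here to keep it self-contained.
documents = {
    "pto-policy.md": "Employees accrue 1.5 PTO days per month.",
    "expense-policy.md": "Expenses over $500 need director approval.",
    "wifi-guide.md": "Guest wifi password rotates every Monday.",
}

def retrieve(question, docs, top_n=1):
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_n]

question = "How many PTO days do employees accrue each month?"
context = retrieve(question, documents)
prompt = f"Answer using only this context: {context}\n\nQuestion: {question}"
```

The payoff: the model answers from your documents, not from whatever its training data happened to say about PTO policies.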
S
System Prompt
Hidden instructions that shape how an AI behaves before you start chatting. Companies use system prompts to make their AI tools focus on specific tasks or maintain a particular tone. When you use a custom GPT or a specialized AI tool, it’s usually running a system prompt behind the scenes.
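Under the hood, a system prompt is usually just an extra message prepended to the conversation before anything you type. A sketch, using the common chat-API message format (the instruction text is invented for illustration):

```python
# What a "custom GPT" or specialized tool sends behind the scenes:
# a hidden system message placed before the user's first message.
# The message format follows the common chat-API convention.
messages = [
    {"role": "system",
     "content": "You are an HR assistant. Be concise. Never share salary data."},
    {"role": "user",
     "content": "Draft a welcome email for a new hire."},
]
# You only ever see and write the "user" messages; the system message
# silently shapes tone and guardrails for every response.
```

This is why two tools built on the same model can behave completely differently: same engine, different system prompt.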
Summarization
One of the most practical AI capabilities for managers. AI can condense long documents, meeting transcripts, email threads, or reports into brief summaries. Works best when you specify what you want highlighted, like action items, decisions made, or key takeaways.
T
Temperature
A setting that controls how creative or random an AI’s responses are. Low temperature (0-0.3) gives more predictable, focused answers. High temperature (0.7-1.0) gives more creative, varied responses. Most users never touch this, but it’s useful for understanding why AI sometimes gives different answers to the same question.
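In practice, temperature is just one number in the API request. A sketch of two requests that differ only in that setting (field names follow the common chat-API convention; the prompts are made up):

```python
# Two identical requests except for temperature: one tuned for
# predictable answers, one for varied brainstorming output.
focused_request = {
    "model": "gpt-4o",
    "temperature": 0.2,  # low: factual lookups, summaries
    "messages": [{"role": "user", "content": "List our Q3 deadlines."}],
}
creative_request = {
    "model": "gpt-4o",
    "temperature": 0.9,  # high: brainstorming, naming, drafts
    "messages": [{"role": "user", "content": "Brainstorm offsite themes."}],
}
```

Consumer chat apps typically pick a middle value for you, which is why the same question can get slightly different answers on different tries.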
Token
The unit AI uses to measure text. Roughly 4 characters or about 3/4 of a word in English. “Manager” is 2 tokens. Pricing and context limits are measured in tokens. When a tool says it costs “$0.01 per 1K tokens,” that’s roughly 750 words.
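The rules of thumb above make cost estimates a quick calculation. A sketch using the rough 4-characters-per-token ratio and a hypothetical $0.01-per-1K-tokens price (real tokenizers and prices differ by model):

```python
# Back-of-envelope token math using the rough rule of thumb above:
# about 4 characters per token in English.
def estimate_tokens(text):
    return max(1, len(text) // 4)

report = "word " * 3000              # stand-in for a ~3,000-word report
tokens = estimate_tokens(report)     # 15,000 chars -> ~3,750 tokens
cost = tokens / 1000 * 0.01          # at a hypothetical $0.01 per 1K tokens
print(f"~{tokens} tokens, ~${cost:.2f}")
```

The same arithmetic explains context windows: a 128K-token window fits roughly 96,000 words of conversation and pasted documents.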
Training Data
The text, images, or other content used to teach an AI model. ChatGPT was trained on books, websites, articles, and other internet text. The training data determines what the model knows and also its limitations and biases.
W
Workflow Automation
Using AI to handle repetitive sequences of tasks automatically. For managers, this might mean auto-generating meeting summaries, drafting follow-up emails after calendar events, or creating weekly reports from project data. Tools like Zapier and Make can connect AI to your existing apps.
Z
Zero-shot
When AI performs a task without being given examples first. “Write a performance review” with no examples is zero-shot. If you provide examples of what you want, that’s “few-shot.” Zero-shot works surprisingly well for common tasks.
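The difference is purely in the prompt you write, not the tool. A sketch with invented example text:

```python
# Zero-shot: just the task, no examples.
zero_shot = "Write a performance review summary for a software engineer."

# Few-shot: the same task, plus samples of the tone and length you want.
few_shot = """Write a performance review summary for a software engineer.

Here are two examples of the tone and length I want:

Example 1: "Maria consistently delivers ahead of schedule and mentors juniors."
Example 2: "Devon's thorough code reviews raised the whole team's quality bar."
"""
# Few-shot trades a longer prompt (more tokens) for tighter control
# over format and tone.
```

For routine tasks, try zero-shot first; add examples only when the output format or tone keeps missing the mark.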