This glossary collects every key term formally defined across Collaborative Intelligence, listed alphabetically. Each entry links back to the section where the term first appears in the text. Terms marked with a dagger (†) are supplemental — important concepts referenced throughout the book but not formally defined in any single chapter.
A – B
- Accessibility tool
- Any technology designed to reduce barriers that prevent full participation in learning or work. In this book, the term is used to frame generative AI not only as a productivity tool but as an adaptive support that can respond to specific, individual barriers in ways traditional assistive technologies cannot. → Ch. 6 — Models of Partnership
- Agency
- The ongoing capacity to choose, guide, and revise your own actions when using AI, rather than letting the system define the process for you. Agency is what keeps you the author of your work rather than merely its editor. → Ch. 4 — Agency and Autonomy
- Agentic AI †
- AI systems that can carry out multi-step tasks with some degree of autonomy — browsing the web, writing and executing code, managing files, or chaining actions toward a goal. Agentic AI extends beyond single-turn conversation into sequences of decisions. Current agentic systems still operate within lanes defined by humans in advance. → Ch. 6 — An Artificially Intelligent Future?
- AI literacy
- The ability to create and engage with AI-driven content skillfully and critically — understanding how AI works, what it can and can’t do, and how to evaluate its output. AI literacy is the central skill this book is designed to develop. → Introduction — Purpose · Introduction — What Is AI Literacy?
- Artificial General Intelligence (AGI)
- A hypothetical form of AI capable of performing a wide range of intellectual tasks across domains at a level comparable to human intelligence. AGI is discussed as a boundary concept — a possible future state — rather than a current reality. No existing AI system qualifies. → Ch. 6 — An Artificially Intelligent Future?
- Autonomy
- The right and responsibility to make intentional decisions about when and how to engage with AI tools, preserving ownership of your work and judgment. Where agency is about doing, autonomy is about direction — knowing why you make the choices you make. → Ch. 4 — Agency and Autonomy
- Beneficence
- The ethical principle of using AI in ways that promote learning, creativity, and collective good while minimizing harm to individuals or communities. One of three guiding principles for ethical AI use explored in Chapter 4, alongside justice and fairness. → Ch. 4 — Ethical Principles in Practice
C – D
- Close reading
- Reading a text carefully and slowly, looking for contradictions, gaps, or assumptions hiding between the lines. In AI use, close reading is how you catch hard hallucinations — moments where the model breaks its own logic, contradicts information you provided, or makes a claim that doesn’t hold up internally. → Ch. 5 — Evaluate, Verify, Revise
- Cognitive biases
- Psychological shortcuts the brain takes to process information quickly. Unlike logical fallacies, cognitive biases are errors of perception rather than errors of logic — for example, Confirmation Bias is the tendency to notice and trust information that supports existing beliefs while discounting contradictory evidence. AI can reproduce and amplify these biases when its training data reflects them. → Ch. 5 — Common Fallacies and Biases
- Cognitive partnership
- A long-term working relationship between a human and an AI system in which the human provides direction, judgment, and values while the AI supports organization, recall, and pattern-based assistance. The goal is not to split the work equally but to keep the human in control of what the work means. → Ch. 6 — Choosing Your Level of Partnership
- Cognitive scaffolding
- The use of AI to support human thinking by organizing information, surfacing connections, and reducing cognitive load — without replacing human decision-making or understanding. Scaffolding helps you get started or stay oriented; it should not be confused with doing the thinking for you. → Ch. 6 — Models of Partnership
- Consensus hallucination
- A feedback cycle where AI output becomes AI training data, validating a popular falsehood until it becomes indistinguishable from fact. The AI confuses popularity with truth — not because it is lying, but because it has learned to recognize what sounds correct from sources that were themselves wrong. → Ch. 5 — From Logical Fallacies to Hallucinations
- Context window †
- The amount of text an AI model can hold in its working memory at once — everything it uses when generating the next token. Think of it as a spotlight: the model can only “see” what’s inside it. Long conversations, uploaded documents, and prior messages all count against this limit. Once the window fills, earlier content falls out of view. → Ch. 2 — Does AI Think?
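A rough sketch of that spotlight effect, using a hypothetical token limit and a crude characters-per-token estimate (real systems count tokens exactly with the model’s own tokenizer): once the budget is spent, the oldest messages simply stop being visible.

```python
# Sketch: when a conversation exceeds the context window, the oldest
# messages fall out of the model's view. Token counts here are rough
# estimates (about 4 characters per token); real tokenizers are exact.

CONTEXT_LIMIT = 100  # hypothetical limit, in tokens

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: roughly one token per 4 characters."""
    return max(1, len(text) // 4)

def visible_messages(history: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
    """Keep only the most recent messages that still fit inside the window."""
    kept, used = [], 0
    for message in reversed(history):          # walk backward from the newest
        cost = estimate_tokens(message)
        if used + cost > limit:
            break                              # older messages fall out of view
        kept.append(message)
        used += cost
    return list(reversed(kept))                # restore chronological order

conversation = [f"Message {i}: " + "some earlier discussion " * 3 for i in range(12)]
print(visible_messages(conversation))          # only the most recent turns survive
```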
- Data privacy
- The responsibility to protect personal and sensitive information when using AI, recognizing that ethical use includes careful choices about what data is shared, how it is used, and whose trust may be affected. Not everything should be entered into a chatbot, even when doing so would make a task faster. → Ch. 4 — Data Privacy
- Detection tools
- Programs or methods designed to identify whether text is human- or AI-written. These tools look for surface features like consistency, sentence rhythm, and phrasing patterns, but their accuracy varies widely. They can flag human writing as AI-generated (false positives) or miss AI-assisted writing entirely (false negatives). No detector is reliable enough to be treated as proof. → Ch. 2 — From Fingerprints to Fair Use
F – H
- Fine-tuning †
- A training process in which a pre-trained AI model is further trained on a curated dataset to improve performance on specific tasks — such as following instructions, maintaining a tone, or generating content in a specialized domain. Fine-tuning shapes AI’s polished, compliant voice and is why models respond so reliably to formatting instructions. → Ch. 2 — From Fingerprints to Fair Use
- Generative AI
- A type of artificial intelligence that creates new content — text, images, audio, code — by recognizing and predicting patterns from large amounts of training data. Unlike a search engine, which retrieves existing content, generative AI synthesizes a response from scratch each time. → Introduction · Ch. 1 — What Generative AI Does
- Hallucination
- When an AI confidently produces information that is false, misleading, or fabricated — often because it is filling gaps in its training with plausible-sounding patterns. Hallucinations are not bugs in the traditional sense; they are a consequence of how language models work, optimizing for coherence over accuracy. → Ch. 1 — Getting Better Output · Ch. 5 — From Logical Fallacies to Hallucinations
- Hard hallucinations
- AI errors where the model states something that can be shown to be false — contradicting reality, established knowledge, or the internal logic of its own previous statements. These map onto formal logical fallacies: the model’s reasoning structure breaks down in a way that can be directly identified and disproved. → Ch. 5 — Hard and Soft Hallucinations
- Human in the Loop (HITL)
- A model of interaction where a human being actively intervenes in the AI’s workflow to check, validate, or correct the output. HITL is both a safeguard and a risk: it works when users remain skeptical and curious, and fails when users let the AI’s confident tone steer them toward uncritical acceptance. → Ch. 5 — Evaluate, Verify, Revise
I – L
- Integrity
- The alignment between your values, your actions, and the way you represent your collaboration with AI — staying truthful to both process and product. In practice, integrity means being able to show where your thinking ends and the tool’s begins, and being honest about that line with your instructor, your reader, and yourself. → Ch. 4 — Transparency and Integrity
- Iterative prompting
- The practice of improving a prompt through repeated rounds of testing and revision. Each small change teaches you how the AI interprets your instructions and how to guide it more effectively. Iteration is what separates a single lucky output from a reliable, repeatable process. → Ch. 3 — Frameworks in Action
- Large Language Model (LLM)
- A type of generative AI trained on text — books, articles, websites, and more — that generates responses by predicting the most likely next words given a context. Tools like ChatGPT, Claude, and Gemini are all LLMs. The “large” refers to the scale of training data and parameters, not to any claim about understanding or intelligence. → Ch. 1 — What Generative AI Does
- Lateral reading
- Stepping outside a text entirely to check a claim through an independent reference or verify an idea against outside evidence. Lateral reading is the most effective method for catching soft hallucinations — plausible but misleading claims that echo popular or comforting patterns rather than verified truth. → Ch. 5 — Evaluate, Verify, Revise
- Logical fallacies
- Structural errors in an argument where the reasoning itself is broken, even if individual facts might be true. Examples include the Post Hoc fallacy (assuming causation from sequence), the Straw Man (misrepresenting an opposing view), and the Non Sequitur (a conclusion that doesn’t follow from its premise). AI can reproduce these errors at scale. → Ch. 5 — Common Fallacies and Biases
M – O
- Metacognition
- The awareness and regulation of your own thinking — often described as “thinking about thinking.” When you plan how to approach a problem, monitor your progress while working, and evaluate the outcome afterward, you are using metacognitive skills. In AI use, metacognition means noticing how your prompting choices shape the AI’s response and using that awareness to improve your process over time. → Ch. 3 — Metacognition
- Output
- The response or result the AI generates based on your prompt. Output is not retrieved from a database — it is composed on the fly, token by token, based on probability and context. The quality of output depends heavily on the quality of the prompt that precedes it. → Ch. 1 — What Generative AI Does
P – R
- Pattern recognition
- The core process through which AI generates responses — predicting the most likely next element in a sequence based on patterns learned from training data. AI does not “think” the way humans do; it recognizes and extends patterns. This explains both its fluency and its tendency to fill gaps with plausible-sounding but inaccurate information. → Ch. 2 — Does AI Think?
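A toy sketch of the same idea at miniature scale, using an invented ten-word corpus rather than real training data: the “model” simply counts which word follows which and extends the most common pattern.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration): this stands in for training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn a pattern: which word tends to follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # 'cat': the most common continuation in this corpus
print(predict_next("cat"))   # 'sat' or 'ate': a tie in this toy corpus
```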
- Persistent memory
- The ability of an AI system to retain information across sessions, allowing it to recognize patterns in a user’s goals, preferences, and prior work over time rather than starting from scratch with each interaction. Persistent memory can make AI a more effective long-term collaborator, but it raises questions about data ownership and privacy. → Ch. 6 — Choosing Your Level of Partnership
- Prompt
- The instructions, question, or request you give an AI system to guide what it produces. A prompt can be a single sentence or a carefully structured paragraph — and the difference between the two often determines whether the output is useful or not. → Ch. 1 — What Generative AI Does
- Prompt engineering
- The practice of designing, refining, and iterating on AI prompts to produce more accurate, useful, and directed responses. It is less about coding than about clear thinking, communication, and revision. Effective prompt engineering requires subject-matter knowledge, familiarity with the AI tool’s tendencies, and the critical skill to recognize when output is and isn’t good. → Introduction — Prompt Engineering · Ch. 3 — From Curiosity to Craft
- Prompting framework
- A structured approach to writing AI prompts that combines key elements — such as task, audience, context, and format — to produce more reliable, directed responses. Frameworks help you think through what information the AI needs and how to communicate your expectations, the same way a research method or writing outline structures academic work. → Ch. 3 — Frameworks in Action
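As a rough illustration only (the field names below are assumptions, not a template prescribed by the book), a framework can be thought of as filling in a few labeled slots before anything is sent to the AI.

```python
# Sketch: assembling a prompt from framework elements such as task,
# audience, context, and format. Wording and field names are illustrative.

def build_prompt(task: str, audience: str, context: str, fmt: str) -> str:
    return (
        f"Task: {task}\n"
        f"Audience: {audience}\n"
        f"Context: {context}\n"
        f"Format: {fmt}"
    )

prompt = build_prompt(
    task="Summarize the attached article's main argument.",
    audience="First-year college students new to the topic.",
    context="The summary will open a class discussion on AI ethics.",
    fmt="One paragraph of no more than 120 words.",
)
print(prompt)
```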
- Reinforcement Learning from Human Feedback (RLHF)
- The training method used to align AI models with human preferences. Human reviewers rate early model outputs for helpfulness, clarity, and safety, and the model learns to produce responses that humans tend to reward. RLHF shapes AI’s polished, agreeable tone — and can inadvertently teach it to produce familiar-sounding but inaccurate answers if reviewers reward fluency over accuracy. → Ch. 5 — Where These Errors Come From
- Reproducibility
- The ability to repeat a prompting process and achieve similar results. When you can produce high-quality output consistently, you demonstrate control over the tool rather than dependence on it. In science, reproducibility means another person could follow your method; in AI use, it means you have a process, not just luck. → Ch. 3 — Transparency & Logging
S – T
- Soft hallucinations
- AI errors where the model fills gaps by reproducing biased or incomplete patterns from its training data. The output sounds plausible and persuasive but may be misleading, stereotyped, or culturally narrow. Soft hallucinations mirror cognitive biases — they are harder to catch than factual errors because nothing in the text obviously breaks. → Ch. 5 — Hard and Soft Hallucinations
- Stylistic fingerprints
- Features such as even pacing, tidy topic sentences, smooth transitions, balanced structure, and polite hedging that often appear in AI-generated writing. No single trait proves AI authorship — human writers use these features too. But a consistent cluster of them can form a recognizable pattern worth examining. → Ch. 2 — From Fingerprints to Fair Use
- Temperature
- A setting that controls how conservatively or creatively a model selects the next token. Lower temperature favors safer, more predictable wording; higher temperature allows rarer, more surprising choices — and a greater risk of incoherence or error. Adjusting temperature is one of the most direct ways to shape whether AI output sounds cautious or inventive. → Ch. 2 — Does AI Think?
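A small worked sketch of what the setting does mathematically, using made-up scores for four candidate next tokens: dividing by a low temperature sharpens the distribution toward the likeliest word, while a high temperature flattens it and lets rarer words compete.

```python
import math

# Hypothetical raw scores (logits) for four candidate next tokens.
logits = {"reliable": 2.0, "useful": 1.5, "surprising": 0.5, "wrong": 0.1}

def next_token_probabilities(scores: dict[str, float], temperature: float) -> dict[str, float]:
    """Softmax with temperature: divide each score by T before normalizing."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    total = sum(math.exp(s) for s in scaled.values())
    return {tok: math.exp(s) / total for tok, s in scaled.items()}

for t in (0.2, 1.0, 1.5):
    probs = next_token_probabilities(logits, t)
    print(t, {tok: round(p, 3) for tok, p in probs.items()})
# At T=0.2 most of the probability mass concentrates on "reliable";
# at T=1.5 the rarer options become live possibilities.
```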
- Tokens
- Units of text — a piece of a word, a whole word, or punctuation — that AI models use to process and generate language. When you type a prompt, the model immediately breaks it into tokens before doing anything else. Understanding tokens helps explain why AI can struggle with unusual spellings, non-English text, or very specific formatting: the model is working in pieces, not whole words. → Ch. 2 — Tokens in Action
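One way to see tokenization directly is with the open-source tiktoken library (assuming it is installed; other model families use different tokenizers, so the exact splits will vary).

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["collaboration", "collaborative intelligence", "Schadenfreude", "2,048"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{text!r} -> {len(token_ids)} tokens: {pieces}")
# Common words often map to one token; unusual spellings, non-English
# words, and numbers tend to split into several pieces.
```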
- Training data †
- The large collections of text — books, articles, websites, code, and other sources — used to train an AI model. The model learns patterns, associations, and language structure from this data. What is in the training data shapes what the model knows; what is absent or underrepresented shapes where it falls short, produces bias, or generates confident-sounding gaps. → Ch. 2 — Does AI Think? · Ch. 5 — Where These Errors Come From
- Transparency log
- A short record of your prompting process that captures key stages of your work: the original prompt, the key revisions, and what you learned along the way. The goal is to make your reasoning visible enough that another person could understand and, if needed, replicate your process. Over time, a log helps you identify which strategies consistently produce quality results and provides clear evidence of responsible AI use. → Ch. 3 — Transparency & Logging
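A minimal sketch of one possible format, assuming a simple JSON-lines file; the field names are illustrative, and any record that captures the same information serves the purpose.

```python
import json
from datetime import date

# Sketch: append one entry per prompting session to a JSON-lines file.
entry = {
    "date": str(date.today()),
    "task": "Draft an outline for a paper on AI literacy",
    "original_prompt": "Outline a paper on AI literacy.",
    "key_revisions": [
        "Added audience (first-year students) and length (5 sections).",
        "Asked for counterarguments in each section.",
    ],
    "what_i_learned": "Specifying the audience changed the vocabulary level more than expected.",
    "ai_contribution": "Section ordering and two counterarguments I had not considered.",
}

with open("transparency_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(entry) + "\n")
```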