04

Ethics and Responsible AI Use

Agency, integrity, and the habits that keep your thinking your own

What does education mean when a chatbot can summarize the reading, outline the paper, and imitate your writing style better than you can on a tired Tuesday night? What does learning look like when your success depends not only on mastering ideas but on mastering the shifting rules of how much AI help is “too much”? One instructor encourages you to use it to brainstorm; another warns that even a paraphrase might count as misconduct. The same sentence that earns praise in one class could earn an F in another. Every prompt becomes a small gamble. And beneath it all sits a quiet, uneasy question: If the machine can learn for me, what’s left for me to learn at all?

AI technology feels confident, polished, and unflustered by doubt, and the temptation to use it in dishonest ways is understandable. Yes, there is a danger that misusing AI could cause you to fail an assignment or a class. But that is not the worst danger AI poses to your education. As cheesy as it may sound, the real danger of overrelying on AI is that you will start to believe your ability to think for yourself no longer matters when AI can seem more than human.

Ethical AI use matters because it helps ensure that your intellectual ability drives the output. Every decision about how, when, or whether to use AI is an act of authorship, and each deliberate choice strengthens your agency. This agency, this ability to act as a person independent of the machine, becomes the line you draw to protect your own learning from automation. Integrity, transparency, and reflection are tools for keeping that line visible when everything around you pushes toward blur.

Responsible use begins as an act of resistance: the refusal to let efficiency replace understanding or to let fear dictate your choices. Transparency, honesty, and reflection form a kind of armor in this landscape. They keep you visible in your own work. When you disclose how you used AI, when you can explain your reasoning, when you can show that your choices were intentional, you reclaim ground that uncertainty would otherwise take from you.

In this chapter, we’ll explore how to build these habits of agency. You’ll learn strategies for making ethical reasoning part of your everyday collaboration with AI and for holding yourself accountable when you use it. The goal is not to fear the technology or worship it, but to meet it as an equal partner. Ethics becomes your compass: the way you keep your bearings when the line between human and machine work begins to blur.

By the end of this chapter, you should be able to:

  1. Reflect on common emotional responses to AI — curiosity, fear, overreliance — and connect them to deeper concerns about human autonomy.
  2. Explain why ethical AI use depends on personal agency and informed choice, not just institutional policy.
  3. Recognize moments in your own or others’ AI use where responsibility, transparency, or fairness come into play.
  4. Apply ethical reasoning to practical classroom or professional scenarios involving AI collaboration.
  5. Articulate how maintaining a sense of self and intention sustains ethical, human-centered AI practice.

Authentic Agency and Autonomy in an Artificial World

Agency and autonomy are easy words to nod at and hard ones to live by. Both speak to something deeply human: the ability to think, choose, and act for yourself. Agency is about doing — making deliberate choices and taking ownership of them. Autonomy is about direction — knowing why you make those choices and being guided by your own judgment rather than someone else’s command or a machine’s suggestion. Together, they are the heartbeat of genuine learning.

In education, everything is supposed to build toward these two capacities. A college degree isn’t just proof of what you know — it’s evidence of who you’ve become through the act of thinking. Every time you design a project, revise a paper, or decide what argument to make, you’re exercising agency. Every time you weigh competing ideas and commit to one that feels right to you, you’re practicing autonomy.

Psychologists often describe autonomy as a basic psychological need. People are more motivated when they feel they have real choice and influence over their actions. Students who can decide how to learn tend to retain more, persist longer, and take greater pride in their work. The same is true outside the classroom: an artist who experiments freely is more likely to produce authentic work than one painting to please a trend. Autonomy isn’t about isolation — it’s about authorship.

Generative AI complicates that sense of authorship. It’s a tool that can finish your thought before you do, offering confidence where you feel uncertain. A student struggling to start a paper might ask an AI to outline it “just to get ideas,” then quietly lean on the draft more than intended. Another might use it to reword an essay, only to realize the new version no longer sounds like their own writing. The problem isn’t that AI helps; it’s that it helps too smoothly. You can lose track of where your thinking ends and the model’s begins.

That’s why agency and autonomy matter so much now. They remind us that thinking is not just a means to an end — it’s the point. When you outsource too much of that process, you risk becoming a passenger in your own education. The ethical challenge of AI isn’t only about honesty or citation — it’s about maintaining ownership of your mind.


Ethical Frameworks for Choice: Consequences and Principles

This tension — between usefulness and self-direction — isn’t new. Long before AI, people debated what makes an action right: is it right because it leads to good results, or because it honors our duty to act with integrity? That question sits quietly behind every decision you make about using technology.

Ethical reflection, then, becomes an act of reclaiming control. It forces you to pause before accepting the easiest outcome and to ask questions that keep your agency intact. You might still use AI to brainstorm, summarize, or refine — but you’ll do so consciously, with awareness of both benefit and cost. Each prompt becomes a moral decision as much as an academic one.

To analyze your own behavior through the lens of principle-based ethics, create an ethical check for yourself. Before using AI for a task, imagine explaining your process to a future version of yourself or to someone you respect deeply. Would you still feel ownership of the work? Could you honestly say the learning was yours?


Ethical Principles in Practice

Ethics isn’t only found in abstract rules or philosophical debates. Often without realizing it, people use technology in ways that harm the people around them and the planet as a whole. In classrooms and workplaces alike, three ethical concepts should shape our use of AI: beneficence, justice, and fairness. They sound lofty, but they touch nearly everything you do with AI. They ask three deceptively simple questions: Who benefits? Who is left out? Is the gain worth the cost?

Beneficence is the idea of using technology to make things better. Technology should be about doing good as much as it is about making things more efficient. The invention of the cotton gin made processing cotton far more efficient. At the same time, it increased the demand for cotton, which fueled the expansion of slavery in the American South. AI does not perpetuate the horrors of slavery, but its enormous gains in efficiency come at the cost of more data centers, which place equally enormous demands on energy and water supplies. To what degree should these environmental concerns influence our decision to use AI?

Concerns of beneficence also exist on a smaller scale inside your classroom. Imagine an auto-grading tool that promises “instant feedback.” It saves instructors time but gives students nothing to discuss, nothing to learn from, nothing to wrestle with. The tool works, but it doesn’t help. What begins as convenience can quietly hollow out the human parts of education that matter most.

Justice and fairness bring that question down to ground level. They remind us that access to AI tools isn’t evenly distributed, and neither are their errors. Some students have fast devices and premium subscriptions; others have only the free versions. Some tools are trained mostly on Standard American English, which means they may read certain writing styles as “unclear” or “awkward” when what they’re really encountering is a different voice. Fairness demands that we notice these gaps.

Justice asks us to look beyond individual choices and examine the systems around them. If an assignment assumes every student can use a paid AI tool, then the assignment may reward money rather than learning. If a workplace expects employees to use AI but provides no training, then the people who are already more confident with technology gain an advantage while others are left behind. A tool can be available in theory and still be inaccessible in practice.

Fairness also means paying attention to how AI outputs are treated once they are produced. AI systems can generate fluent answers that sound neutral and authoritative even when they contain bias, stereotypes, or errors. If users trust those outputs too quickly, existing inequalities can be reinforced rather than reduced. When that happens, AI does not simply reflect unfairness; it can scale it.

The following scenario puts these principles into practice. Navigate the situation and see how beneficence, justice, and fairness apply to the choices you make.

Exercise: Ethics in Practice: The Essay Due Tonight

Navigate a realistic scenario about using AI to brainstorm for an essay. Each decision surfaces a core ethical principle. There is no single correct path — follow your instinct, then read the reflection.

The Essay Due Tonight

The Situation

You have an argumentative essay due in your English Composition class. The assignment asks you to take a position on a topic of your choice. You've picked social media and self-image — you care about it, but you can't figure out how to turn that into a focused argument. It's 9 p.m. You open an AI tool and type: "Give me five possible thesis ideas and an outline for an argumentative essay on social media and self-image." In seconds, you have a clean list. For a moment, it feels like a lifesaver.

What do you do next?

The Grade Comes Back

Decision 1 of 2

The essay comes together fast. You follow the outline, hit the word count, and submit before midnight. A week later your instructor returns it with a C+. The comment reads: "Your argument feels borrowed rather than developed — I want to hear your voice and your reasoning here." You read it twice, unsure how to respond. The structure wasn't really yours, so you're not sure what your reasoning actually was.

What does this tell you about how you used the tool?

A Text from Marcus

Decision 1 of 2

You read all five thesis ideas, compare them, and pick the one that connects to an observation you made yourself about how people perform confidence online. You build your own outline from there. The AI gave you options — you made the choice. As you work, your classmate Marcus texts you: "Hey, I'm totally stuck on this essay. How are you approaching it?"

What do you say?

Efficient. But Did It Help?

Reflection

You got through the assignment. But beneficence asks a harder question than whether the tool worked — it asks whether it helped you learn. When AI handles the messy early thinking for you, you may produce an essay without developing the skill the assignment was designed to build. The grade reflects a completed task. The learning may not have happened.

Beneficence

The principle of using AI in ways that promote learning, creativity, and collective good while minimizing harm to individuals or communities.

A tool that produces output isn't automatically a tool that does good. The measure is whether you grew.

That's the Insight

Reflection

Noticing that is the first move toward better practice. Beneficence doesn't mean avoiding AI — it means asking whether your use of it helped you think more deeply. Argumentative writing is supposed to develop your ability to take a position, build a case, and defend it in your own voice. When AI provides that structure, you skip the part of the process that builds those skills. Next time, use the AI's output as raw material to react to — not a scaffold to follow.


The tool worked. Next time, make sure it also helps.

A Study Tip — or an Advantage?

Reflection

Marcus turned in a weaker essay. You find out later he didn't know AI tools could even be used for brainstorming — no one had told him. The assignment rewarded students who already had that knowledge, not just their effort or thinking. Justice and fairness ask us to notice when a personal "study tip" is really an unequal advantage — and whether we could have leveled the playing field.

Justice & Fairness

The principle that access to AI tools and the knowledge of how to use them should not determine outcomes — and that we have a responsibility to notice when they do.

Fairness doesn't just mean following the rules. It means noticing when the rules quietly favor some students over others.

Shared Access, Fairer Outcome

Reflection

Marcus used the same approach, engaged with it critically, and turned in an essay he felt good about. By sharing your strategy, you gave him a real chance to compete on thinking — not just on who already knew the tool existed. Justice and fairness don't require you to do anyone's work for them. They ask you to notice when you can share access — and to do it.


Fairness asks three questions: Who benefits? Who is left out? Is the gain worth the cost?

Acting with beneficence, justice, and fairness doesn’t mean rejecting technology; it means using it with intention. Ask yourself: Does this use of AI help someone learn or grow? Does it give everyone a fair chance to participate? Does it make the system more just — or just more efficient? Ethics begins there, in those small acts of awareness. Over time, those habits of noticing and questioning become part of your professional character: the quiet discipline of making sure that progress still serves people.


Transparency and Integrity

If beneficence and fairness are about doing good, integrity is about being seen doing it honestly. Integrity begins with transparency, which we discussed in Chapter 3, and transparency leads to ethical clarity. When you use AI, can someone else see where your thinking ends and the tool’s begins? Can you? That line matters. If your instructor doesn’t know how you arrived at your answer, they can’t teach you what to do next. And if you can’t remember what parts came from you and what came from the tool, your own reasoning starts to fade from view.

Think about what happens when that line blurs. A student pastes an AI-generated summary into a discussion board, meaning only to “get the ball rolling.” The post looks polished, so classmates assume it reflects the student’s thinking. Discussion stalls because no one knows what’s authentic. Another student uses AI to fix grammar errors and forgets to mention it; an instructor suspects cheating. In both cases, the problem isn’t necessarily the use of AI. Rather, the problem is the silence surrounding the use of AI.

Transparency is how you prevent that silence. It’s the act of saying, Here’s what I used, how I used it, and why. A simple disclosure at the bottom of a document — “Portions of this draft were edited with ChatGPT to improve clarity” — can turn suspicion into collaboration. It shows not only honesty but control. It says you were the one steering the process, not the other way around.

Integrity goes a step further. It’s not just about what you reveal — it’s about what you choose to do even when no one is watching. It’s the difference between using AI to understand an idea and using it to avoid thinking about it. In this sense, integrity is the inner version of transparency: the commitment to align your actions with your values, even when the rules are fuzzy.

That fuzziness is real. One professor allows AI brainstorming, another bans it. One workplace encourages AI-assisted writing, another warns against “unauthorized automation.” In such uncertainty, integrity becomes the only constant. It asks you to pause and ask: Does this choice represent my own work honestly? Does it respect the spirit of what I’m trying to learn or contribute?

Students who practice integrity now are rehearsing for professional life later. In most fields, transparency and integrity will be the difference between trust and dismissal. Imagine a marketing team that secretly relies on AI to write client reports but claims the work as entirely human. The shortcut might save time once, but when discovered, it erodes credibility far beyond a single project. The same is true in research, design, journalism, and even coding: transparency makes your success sustainable.

The following scenario walks you through a realistic late-night situation and asks you to navigate the choices around transparency and integrity.

Exercise: Ethics in Practice: 11:45 p.m.

Navigate a realistic late-night scenario about using AI to help draft a discussion post. Each decision surfaces a core ethical principle: Transparency or Integrity. There is no single correct path — follow your instinct, then read the reflection.

11:45 p.m.

The Situation

Your discussion post is due at midnight. You've read the article — a piece on how algorithms shape what news people see — but your mind is foggy and the clock is cruel. You paste the article's title into an AI tool and ask for a summary to help get your bearings. The summary is accurate enough. It helps you remember the key points. You use it to draft your own post, adding examples from lecture and your own reactions. When you reread it, it sounds like you. Still, you pause. Did the AI shape your understanding more than you realized? Would your classmates assume the insight was entirely yours?

What do you do before you hit submit?

In Class the Next Day

Decision 1 of 2

You submitted without disclosing. Your instructor opens class by referencing your post: "This one raised a point I want to dig into — can you say more about how algorithmic filtering creates what you called 'invisible consensus'?" The phrase came from the AI's summary. You used it because it sounded right, but you're not sure you can explain it in your own words under pressure.

What happens next?

A Message from a Classmate

Decision 1 of 2

You add a brief note at the bottom of your post: "I used an AI tool to generate a short summary of the article before drafting this response. The tool helped me organize my thoughts, but all analysis and examples are my own." You submit with two minutes to spare. The next morning, your classmate Jordan messages you: "Why did you disclose that? Everyone uses AI. You're just making yourself look suspicious."

How do you respond?

You Held Your Own — This Time

Reflection

You got through it. But the close call points to something real: when AI shapes how you first understand an idea, it can be hard to know where the tool's framing ends and your own thinking begins. Integrity isn't only about avoiding plagiarism — it's about maintaining honest ownership of your reasoning. A process you can't explain or defend is a process that hasn't fully become yours yet.

Integrity

The alignment between your values, your actions, and the way you represent your collaboration with AI — staying truthful to both process and product.

Integrity isn't just about what you turned in. It's about whether the thinking was genuinely yours.

The Summary Was the Shortcut

Reflection

The stumble reveals something the grade won't. You understood the AI's version of the article — not the article itself. That gap is where integrity lives. It isn't only about honesty with others; it's about honesty with yourself. When AI mediates your understanding of source material, the post may sound like you without actually reflecting your thinking. The problem isn't that you used the tool. It's that you didn't notice what it replaced.


If you can't explain it without the AI's framing, the understanding isn't fully yours yet.

Invisible by Default

Reflection

Jordan's logic is understandable — disclosure can feel like drawing attention to something everyone quietly does. But transparency isn't about what's common. It's about what's visible. When AI use stays invisible by default, instructors can't teach you what to do differently, classmates can't calibrate what's authentic, and your own reasoning starts to fade from view. Silence isn't neutral — it's a choice that shapes how your work is read and trusted.

Transparency

The practice of making your AI use visible — stating what you used, how you used it, and why — so that your reasoning and your authorship remain clear to others and to yourself.

A simple disclosure doesn't weaken your work. It shows you were the one steering the process.

Visible on Purpose

Reflection

You're right — and the distinction matters. Transparency isn't about confession or self-incrimination. It's about keeping your reasoning visible. When you disclose how AI helped you organize your thoughts, you're not signaling weakness. You're signaling control: the tool assisted, but you were the author. That clarity protects you from suspicion, helps your instructor understand your process, and keeps your own sense of authorship intact. It also models something your classmates may not have seen done clearly before.


Transparency turns potential suspicion into clarity. It's a declaration: I'm still the author of this work, and I'm proud enough of my process to show it.

Transparency and integrity are partners: together they keep you honest with others and with yourself. In academic and professional life, both are signs of trustworthiness. They show that you understand not just how to use AI, but how to stay accountable for it. A clear disclosure isn’t a confession; it’s a declaration: I’m still the author of this work, and I’m proud enough of my process to show it.


Data Privacy

Data privacy is the responsibility to protect personal and sensitive information by making careful decisions about what you share, where you share it, and who can access it. In the context of AI, this means recognizing that not everything should be entered into a chatbot or generative tool, even if doing so would make a task faster or easier. Ethical AI use includes knowing the difference between information that is yours to use freely and information that belongs to someone else or was shared with you in confidence.

This matters in school right now. A student might paste a classmate’s discussion post into an AI tool and ask for a response, upload instructor feedback to get help rewriting an assignment, or copy details from an advising email into a prompt without thinking much about it. In each case, the issue is not only whether the AI helps. The issue is whether private or identifiable information was shared without permission. A tool can feel informal and harmless in the moment, but the information entered into it may still involve trust, consent, and responsibility. Once that information has been shared with an AI tool, there is no guarantee of what will happen to it, and people could be harmed as a result.

The consequences of those harms are often uneven. When private information is exposed, the harm usually falls on the person whose data was shared, not the person who got the convenience. A classmate may feel embarrassed, an instructor may lose confidence in a student’s judgment, or a group project can be disrupted when team members realize their work was shared outside the group. These concerns shape whether people feel safe collaborating, asking questions, and sharing unfinished work.

The same principle carries directly into professional life. In many workplaces, employees handle information that is not theirs to disclose — client records, internal planning documents, personnel matters, financial details, or draft communications not yet ready to be public. Using AI carelessly in those settings can damage trust, violate policy, and create legal or ethical problems that extend far beyond a single task. Learning data privacy now helps build a habit of judgment that will matter later when the stakes are higher and the consequences are harder to reverse.

Data privacy connects back to beneficence, fairness, and integrity. Beneficence asks whether your use of AI is actually doing good; fairness asks who may bear the risk when something goes wrong; integrity asks whether your actions match your values even when no one is watching. Protecting private information is one of the clearest ways to practice all three at once.


Wrapping Up: Responsibility in a Shared World

Ethics begins with you, but it does not end there. Each time you use AI, whether to brainstorm, edit, or explore an idea, you make choices that reach beyond your own screen. Those choices shape how others experience learning, how instructors design courses, and how your peers collaborate. Responsibility means more than following your own sense of right and wrong. It means recognizing that your work exists within a network of expectations, rules, and trust.

Your personal ethics matter deeply, but they do not exist in isolation. In a college setting, every decision unfolds within a shared community. Professors, classmates, and institutions all have their own boundaries for acceptable use. What feels reasonable to you might raise questions for someone else. A practice that one instructor allows could violate another’s policy entirely. This is not contradiction but context. Acting responsibly requires balancing your own values with an awareness of the standards that shape the space you share with others.

When you pause to ask, How will this choice affect the people around me? you move from private conscience to public integrity. That question is the heart of ethical maturity. It acknowledges that responsible use of AI is never just about your intention but also about its impact on others.

This chapter has focused mostly on the personal side of AI ethics: how to protect your agency, preserve your integrity, and remain transparent in your work. Yet these tools also raise wider questions about their place in the world. The same systems that can draft an essay or outline a presentation draw power, water, and creative labor from sources you may never see.

The question of ownership is just as complex. Most generative AI models are trained on massive collections of online text, art, and music, much of it gathered without permission. The patterns they reproduce often come from the uncredited work of living artists, writers, and musicians. Many of those creators now see AI-generated imitations of their style circulating without their consent, threatening both their livelihoods and their artistic identity. Laws are still catching up, but the ethical issue is already here. When we use or share AI content, we must consider whose labor we might be borrowing.

Understanding these broader consequences does not mean abandoning AI altogether. It means using it with intention and humility, acknowledging that even small choices participate in a much larger ecosystem. Each time you verify a source, credit a creator, or pause to ask whether a use of AI feels fair, you practice a kind of environmental and cultural stewardship. You treat responsibility not as a rule to obey but as a relationship to maintain.

As you move forward, remember that ethical use of AI is less about punishment or perfection and more about presence. It is the quiet discipline of showing up fully human and fully accountable in how you learn, create, and contribute. The decisions you make today will help define the culture that surrounds these tools tomorrow. The more aware and intentional you are, the more humane that future becomes.


Dig Deeper

For more about autonomy as a basic psychological need and why people learn more deeply when they feel ownership over their choices — the theoretical backbone of this chapter’s discussion of agency: Ryan, R.M. & Deci, E.L. (2000). “Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being.” American Psychologist, 55(1), 68–78. doi.org/10.1037/0003-066X.55.1.68

For more about the first global ethical framework for AI, including its core values of human rights, fairness, transparency, and human oversight — and how these principles apply to education: UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. unesdoc.unesco.org/ark:/48223/pf0000380455

For more about UNESCO’s guidance on using generative AI specifically in educational settings, including recommendations on data privacy, age-appropriate use, and human-centered design: UNESCO. (2023). Guidance for Generative AI in Education and Research. unesco.org/en/articles/guidance-generative-ai-education-and-research

For more about how students are using AI in college right now — including survey data showing that 82% of U.S. students have used AI for academic tasks — and why institutions need to lead with ethical support rather than just policy: Studiosity & YouGov. (2025). 2025 U.S. Student Wellbeing Survey. Discussed in: “Student AI Use on the Rise: Why Universities Must Lead with Ethical Support.” Higher Education Today (American Council on Education). higheredtoday.org/2025/09/02/ai-leading-with-ethical-support/

For more about chatbot privacy risks — including how user data may be collected for training, retained for long periods, and used to make inferences about health and identity — and why students should think carefully about what they share: King, J. & Meinhardt, C. (2025). “Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World.” Stanford Institute for Human-Centered Artificial Intelligence. news.stanford.edu/stories/2025/10/ai-chatbot-privacy-concerns-risks-research

For more about how AI’s rapid growth is straining energy grids and water supplies — and a data-driven roadmap for reducing those impacts — relevant to this chapter’s discussion of beneficence and environmental responsibility: Xiao, T. et al. (2025). “Environmental Impact and Net-Zero Pathways for Sustainable Artificial Intelligence Servers in the USA.” Nature Sustainability. doi.org/10.1038/s41893-025-01681-y

For more about why ethical AI leadership in higher education requires more than compliance — including a framework modeled on the Belmont Report’s principles for protecting human subjects: Georgieva, M. & Stuart, J. (2025). “Ethics Is the Edge: The Future of AI in Higher Education.” EDUCAUSE Review. er.educause.edu/articles/2025/6/ethics-is-the-edge-the-future-of-ai-in-higher-education

For more about how AI classroom adoption is outpacing privacy protections, including concerns about FERPA’s limitations and the risks of teachers using unapproved AI tools with student data: Axios. (2025). “How Students’ Privacy Could Be a Casualty of Schools’ Rush to AI.” axios.com/2025/08/14/ai-education-privacy

For more about the utilitarian and deontological traditions introduced in this chapter, written accessibly for readers new to moral philosophy: Sandel, M.J. (2009). Justice: What’s the Right Thing to Do? Farrar, Straus and Giroux.

For a practical guide to AI ethics in the classroom, including the AI Assessment Scale (AIAS) — a tool designed to foster student agency and responsibility rather than just policing AI use: Harvard University Derek Bok Center for Teaching and Learning. (2025). “AI Literacy & Ethics.” bokcenter.harvard.edu/ai-literacy-and-ethics