Professor Young’s AI Policy
AI has changed how people research, write, create, work, and even think. We’re living in a quickly shifting landscape with regard to AI, and that will continue. Because you are a university student who will presumably be moving along an academic path and eventually entering the workplace, it is important that you gain a fundamental understanding of the ways in which AI intersects with modern life. To that end, I consider it my job to help you begin to explore and practice working with AI in ways that are ethical, productive, transparent, and smart.
In my writing classes, I am okay with you using AI when you believe it will be helpful, as long as you are ethical and transparent. Any time you use an AI, you must include a brief note somewhere that articulates which AI you used and how. Here are some examples:
- “I used ChatGPT to help me brainstorm ideas for refining the focus of my essay.”
- “I used Microsoft Copilot to help me build a list of articles that I needed to read for research.”
- “I used ChatGPT to summarize ‘[Title of Article]’ so that I could accurately incorporate it in my literature review.”
- “After I wrote a draft of my essay, I copied and pasted it into Microsoft Copilot and asked for assistance with essay structure, writing clarity, and grammar. I also asked it for additional feedback that I should consider in revision.”
Here is what you may NOT do:
- Submit entirely AI-generated writing as your own. This is clear plagiarism, it is easy to identify, and it can result in failing grades and/or expulsion from the University.
- Use AI for anything without transparency. If you use an AI-powered tool to generate, draft, create, or compose any portion of any assignment, you must disclose any assistance or production AI has provided.
Which AI to use:
The only university-supported generative AI (GenAI) is Microsoft Copilot. This means that if you need tech support, it’s the only one the university’s tech support staff can assist you with. If you do choose to use an AI, the decision of which one to use is entirely up to you. When I use an AI, I use ChatGPT because I prefer it, but the choice is yours.
Ethan Mollick, author of Co-Intelligence: Living and Working with AI (a leading book on this topic), offers four rules for working with AI:
- Always invite AI to the table: AI should be integrated into workflows where its strengths can be leveraged. In the beginning, it can be helpful to approach every task with the question, “Which parts of this task may be well-suited to AI?” or “What could I improve through quicker iteration or data analysis?”
- Be the human in the loop: Your role is to oversee and validate the AI’s outputs by critically assessing them for correctness. Never hand the reins over entirely to the AI; to be successful, lean into professional growth and development to become the in-demand expert and critical thinker AI can’t replace.
- Treat AI like a person, but tell it what kind of person it is: To get the most useful output, give the AI a clear context for the outputs you need. The more detail and insight you can provide in a prompt, the better the AI can make the appropriate language-token predictions to generate useful outputs for your task.
- Assume it’s the worst AI you’ll ever use: As its capabilities grow, today’s AI will seem primitive in hindsight. By understanding and starting to use it today, you get the maximum opportunity to increase your sophistication as the technology advances over time.
And finally, a note about grading and AI:
There is no difference to me between “AI-generated text” and “sounds like AI-generated text.” Without extensive human guidance and interaction, AI-generated text sounds robotic, disembodied, overly-formal, and downright boring. Any writing that fits these criteria is, by definition, not good writing, and it should not receive a good grade.
I am not interested in policing anyone’s AI usage, and I do not view the ethical and transparent use of AI as any form of “cheating.”
However, I cannot stand robotic, disembodied, overly-formal, downright-boring writing, and I grade it accordingly when I receive it, regardless of whether AI had anything to do with it — bad writing is bad writing.
We’re going to work together to create GOOD writing (there are a lot of ways to define that depending upon what the writing task is!). If AI can help us to do that, then great! If it makes it worse or just gets in the way, then we won’t bother ourselves with it. Much of learning to effectively work with an AI (or not) is practice and trial and error. Please consider this course a space in which to do that. I’m excited to work on it with you.
~Dr. Young 😊🤖❤️