Revising my homework policy

ChatGPT, Gemini, Copilot, and the like are all excellent tools to support learning. However, as these technologies become more widely available, we’re right back to the discussion about whether or not math teachers should allow calculators in the classroom.

Of course these tools are here to stay.

Yes, their environmental impact is huge. Yes, in a real way, they displace learning. Yes, their owners are sometimes using questionable data sets to train the models. Yes, we are handing control of our learning (and thinking) to a very small group of for-profit companies. But, in the end, it doesn’t matter. People will use the tools that are made available to them.

For the past few years, we’ve seen these tools becoming more and more powerful.

And, over that same period, we’ve seen students use them to replace learning. That should be cause for concern. IBM figured out a long time ago that computers cannot be held accountable and therefore should not be empowered to make decisions autonomously. That is true for learning as well. It only takes a few generations for human knowledge to be lost, and AI is rapidly sending us in that direction. Are we really moving toward a future where AI is too advanced for human oversight? Some think we’re already there.

What does that mean for me, as an instructor?

I have three types of students:

  1. Students who are very motivated to learn, and who are willing and able to put effort toward doing so. They will benefit hugely from AI tools. Using the tool as a sparring buddy rather than as a substitute will enable them to go far beyond students who had to learn “just” from textbooks and lectures. This is typically about 25% of the overall population. As course levels go up, this fraction increases.

  2. Students who don’t care about learning and just want to check a box. In the U.S. system of private higher education, they’ll make it through with minimal effort. Their transcripts often contain lots of withdrawn courses and low grades (Ds and Cs), with an occasional higher grade, but they’ll earn their degree. AI won’t make things better for them, but it also won’t make them worse. This is generally about 20% of the overall population. As course levels go up, this fraction decreases.

  3. The group in the middle: those who probably want to learn and are willing and able to put some effort toward it. This is the group that can go either way. If they choose to use AI to support learning, they’ll benefit from it. If they choose AI to replace learning, they’ll land squarely in group 2. This is typically 50–60% of the class, and that fraction is pretty stable.

Observing this has led me to an important realization: “A professor’s job is to encourage and support learning.”

The emphasis is on “support learning.” I cannot learn the materials for the students so they don’t have to. I also cannot force students to learn. Learning is much like therapy. It only works when you are open to it.

At best, I can force students to hand in homework. However, while most will do that, a good portion will not do the work themselves. Instead, they’ll turn to ChatGPT and friends to do the work for them, tweak it a bit, and hand that in.

That sounds adversarial, and that is not my intention. It is simply the reality I have observed. Academics would consider that an integrity violation, but that seems to be an outdated concept. Sometimes that lack of effort is a motivation issue; sometimes it is an economic issue; and sometimes it is a social issue. Many students work full-time to survive, or have significant family obligations. Learning takes a lot of time, and that isn’t always readily available!

As an instructor, I am expected to provide detailed and timely feedback on all work handed in by students. In reality, I often end up grading homework generated by AI sources. Again: that’s not true for all students! There is a core of motivated strong students whose goal is to learn and who are able and willing to put effort towards reaching that goal.

Effective this semester, I made a few changes:

  1. I expect each student to hand in an annotated outline of the topics we discussed in class. These outlines must be annotated with references to external sources, like a textbook. Each week’s content must fit on two pages. These notes can be brought to quizzes and exams. Notes are graded pass/fail/missing. Since they are primarily meant as the students’ own learning aids, I’ll provide feedback only when asked. Otherwise, the only thing that’s assessed is effort. The quality of the notes will become evident during exams and quizzes.

  2. I will administer short quizzes every second or third week. They’ll factor into the grade, but they’re really meant to help students gauge how well they are meeting my expectations for learning. A typical quiz consists of 3–5 multiple-choice questions to test knowledge (or the quality of the notes) and two or three short-answer questions to test understanding and application of concepts.

  3. There will be a written final exam or, where appropriate, a proctored coding activity in a controlled environment.

  4. There will be NO OTHER REQUIRED HOMEWORK. However, I will provide weekly exercises that are fully optional. If students hand in their solutions, I’ll provide timely feedback. If they don’t, that’s fine too. For the exercises, I encourage people to use any tool that supports their learning, AI included.

I justify this approach in a few ways: learning cannot be forced, only supported; students who want feedback still get it, quickly, on the optional exercises; and I no longer spend my time grading homework that AI wrote.

In the end, I’m curious to see what the results are and how they compare to previous years. We’re now about three-quarters through the semester, and so far I feel pretty good about the results. The amount of “stupid” work for me has been drastically reduced, and the outcomes seem to be pretty similar to previous years.