How AI Has Negatively Affected Asynchronous Learning

The emergence of tools like ChatGPT complicates the ability of instructors to assess genuine learning, raising concerns about the future of this educational model.

The COVID-19 pandemic accelerated the adoption of different classroom modalities, and one of the most popular was the asynchronous class. Asynchronous courses offered flexibility, reduced commuting, and made it easier for students with jobs or family obligations to complete their degrees. As long as the work was submitted on time, the model largely functioned.

But generative artificial intelligence (AI) has undermined the core assumption that made this model viable: that submitted work reflects a student’s independent thinking. In asynchronous courses, it has become increasingly difficult to distinguish students’ original work from work produced by ChatGPT or other AI tools.

This issue is not primarily a cheating problem. It is a challenge to the adequate assessment of students’ work and learning. When AI can reliably produce summaries, reflections, and even passable analyses, written assignments lose much of their value as evidence of understanding. And when students can simply ask AI for answers to quizzes and multiple-choice exams, little learning occurs.

The issue is especially pronounced in content-heavy courses, where instructors want students to understand specific ideas, concepts, or theories. Students may still be engaging with the material, but instructors can no longer be confident that the work they are grading demonstrates understanding and mastery of the subject matter.

Some instructors who teach asynchronous classes have modified how they assess students’ learning by assigning work that connects course content to personal experience. That approach can limit AI use, but it falls short when the goal is for students to master disciplinary content rather than reflect on it. Others have experimented with AI-permitted assignments, reframing coursework around prompting or critique. These strategies may be pedagogically interesting, but they do not solve the basic problem of evaluating individual learning in asynchronous environments.

The most reliable alternatives (e.g., oral exams or in-class tests) reintroduce exactly what asynchronous students thought they were avoiding. They also raise equity concerns for students who live out of state or overseas, or whose schedules make real-time participation difficult.

Higher education now faces a difficult tradeoff. Asynchronous courses expanded access and flexibility, but AI has exposed how weak their assessment models really are. Instructors and the institutions they work for will eventually have to choose between preserving convenience and enabling meaningful evaluation of learning. Asynchronous education is unlikely to be eliminated, but its use will probably become more limited.