
PEDAGOGICAL INNOVATIONS IN THE ERA OF AI

Pedagogy
By Jean MOUSSAVOU, Professeur HDR, 23 January 2024
Generative AI

For some months now, generative AI, such as ChatGPT, Google Bard and, more recently, Gemini, has been making the headlines. It is fuelling debate, sparking reactions (sometimes rather strong), and raising a whole host of questions.

 

Generative AI is transforming our societies and revolutionising all sectors of the economy. And the world of higher education and research is not immune. In our particular sector, there is a strong temptation to believe that only specialists in Education Sciences or Information Systems are qualified to help us understand the issues surrounding AI. However, AI concerns a whole spectrum of different stakeholders in higher education and research, provided they are interested in exploring the potential of this technology. In 2023, UNESCO published a report entitled ‘Guidance for generative AI in education and research’. One of the principal recommendations in this report is to enhance the skills of teachers and researchers in the proper use of generative AI for critical thinking and creativity in education and research, whilst mitigating the risks.

In fact, as early as 2019, in its objectives for achieving the ‘Education 2030’ agenda, UNESCO had already recognised the potential benefits of AI. In its report entitled ‘Beijing Consensus on Artificial Intelligence and Education’, UNESCO recommended, among other things, that AI be systematically incorporated into education in order ‘to address some of the biggest challenges in education today, innovate teaching and learning practices, and ultimately accelerate the progress towards SDG 4’. Although there are many arguments in favour of AI (as well as many detailing its risks), there is still very little practical guidance explaining how this technology can be applied in higher education and research. And yet, as was pointed out in a previous article (cf. Moussavou, 2023), AI has certain implications for academic research.

So, what are its implications for teaching? What potential does AI have for pedagogical innovation, and how should teachers leverage it?

 

1. AI applied to education… what does it mean?

First of all, AI can be considered as a set of techniques of varying degrees of complexity. We understand AI as ‘the ability of a machine to perform cognitive functions that we associate with the human mind, such as perceiving, reasoning, learning, [...] and even demonstrating creativity’ (Rai et al., 2019, p. iii). In the field of Education Sciences, Humble and Mozelius (2019) define AI by stressing its multidisciplinary nature that goes beyond computer sciences alone: AI in education is ‘an interdisciplinary field containing psychology, linguistics, neuroscience, pedagogy, anthropology and sociology with the goal of being a powerful tool for education and providing a deeper understanding of how learning occurs’ (p. 1).

Meanwhile, Popenici and Kerr (2017) define AI in education as ‘computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and the use of data for complex processing tasks’ (p. 4). This latter definition of AI in education is particularly interesting because it suggests complex interactions between the human intelligence of the teacher and AI.

 

2. How can AI be used in education?

To begin with, although most discussions today focus on AI, over the years a whole range of learning technology tools have been developed in line with technological advances. In his book ‘Teaching in a Digital Age’, Bates (2019) examines the role of technology in education, a debate he argues goes back at least 2,500 years. The book examines the principles of effective teaching in the age of digital technologies and provides a relatively comprehensive historical account of developments in technology for learning [cf. also Open College (2015)].

  • Among the technological tools listed are Learning Management Systems (LMS), which provide an online teaching experience whereby educational content can be uploaded/downloaded, with ‘spaces’ specifically for learning and learner activities;
  • Massive Open Online Courses (MOOCs), meanwhile, offer learners a ‘core learning experience on a given subject, generally consisting of videos and quizzes’;
  • educational robots allow ‘interaction with learners to support them in their learning process’;
  • finally, virtual reality systems enable learners to be immersed in 3D models of real-world activities, thereby making it possible to ‘carry out or simulate activities that would otherwise be costly, dangerous or simply impossible to perform’.

 

With the emergence of generative AI, learning technologies seem to be gaining new impetus thanks to the ongoing development of algorithms, and above all an exponential increase in computer processing power. Another major development concerns the vast amount of information available to teachers and learners. On this last point, designing a study course most often requires the compilation of a large amount of data in order to enrich personal knowledge or skills. In simple terms, this data, which is often collected from a variety of sources and is sometimes not readily accessible to teachers, is the knowledge or facts that the teacher will need in order to develop the course. Furthermore, the data is very often collated at a given point in time during the design of the course, or perhaps when the course is being updated at a later stage. With AI, and putting aside certain risks discussed below (cf. also Moussavou, 2023), the accumulation of large quantities of data, and the fact that it is constantly being updated, changes everything.

As noted by Extance (2023), experiments to leverage the use of generative AI in pedagogy are underway in many schools and universities. AI can help teachers and learners increase their ability to retrieve large amounts of data in real time in order to feed educational experiences. And AI can provide a personalised tutoring experience, available anytime, anywhere, and accessible to more learners than a single teacher would ever be able to reach. 

With regard to enhancing data/knowledge retrieval capabilities, one approach to creating an AI learning opportunity is to link the tool to external or targeted sources of knowledge - such as a textbook or a set of research articles - that have been rigorously vetted beforehand. The aim of such an approach, described by some authors as Retrieval-Augmented Generation (RAG) (Lewis et al., 2020), is to avoid the impossible task of verifying the billions of text sources that give AI its conversational power. For example, Arizona State University (ASU), one of the most progressive universities in terms of adopting learning technologies, has implemented a platform that allows faculty to use generative AI models, including OpenAI’s GPT-4 and Google’s Bard.


This platform uses Retrieval-Augmented Generation in ASU courses. ChatGPT and Bard are used as tools to answer learners’ questions by searching specific datasets, such as academic research articles or lecture notes. After an initial basic version for trial purposes, in October 2023 ASU launched a web user interface enabling its teachers to experiment with the tool. This will gradually enable teachers to create chatbots that learners will be able to interact with. Other educational institutions have also embraced generative AI, including Vanderbilt University in Nashville, Tennessee (USA), which offers learners certain courses via access to a paid version of ChatGPT, including specialist plug-in tools. Researchers at East China Normal University in Shanghai have also created a dedicated educational tool called EduChat that combines essay assessment, dialogue-based tutoring, and emotional support in one chatbot (Dan et al., 2023). Although EduChat is still in its infancy, its distinguishing feature is that it is a dedicated educational tool rather than an adaptation of an existing mainstream model, such as ChatGPT or Bard.
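
To make the idea more concrete, the principle behind RAG can be sketched in a few lines of Python. The sketch below is purely illustrative and rests on several assumptions: the vetted course material is a small list of strings, retrieval uses a simple TF-IDF similarity (scikit-learn) rather than the neural embeddings most production systems rely on, and call_llm is a hypothetical placeholder for whichever generative model an institution actually licenses. It is not a description of ASU’s platform.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Course material that the teacher has vetted beforehand.
corpus = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The mitochondrion is the main site of cellular respiration.",
    "Enzymes lower the activation energy of biochemical reactions.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)

def retrieve(question, k=2):
    """Return the k passages most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    best = scores.argsort()[::-1][:k]
    return [corpus[i] for i in best]

def call_llm(prompt):
    """Hypothetical wrapper around the generative model being used."""
    raise NotImplementedError("Plug in the model your institution has licensed.")

def answer(question):
    """Build a prompt that restricts the model to the retrieved passages."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the student's question using only the course material below.\n"
        "Course material:\n" + context + "\n\nQuestion: " + question
    )
    return call_llm(prompt)

if __name__ == "__main__":
    # The retrieval step runs on its own; generation requires a real model.
    print(retrieve("Where does cellular respiration take place?"))

The essential point is that the model is asked to answer only from passages retrieved from a corpus the teacher has already checked, rather than from the open web.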

 

With regard to personalised tutoring, it should firstly be noted that the idea of using technology for such an activity is nothing new in itself. Trumbore (2023) reports that as far back as 1972, a personalised learning system called PLATO (Programmed Logic for Automated Teaching Operations) made its debut. This was the first personalised learning system available to the general public. Developed by Don Bitzer, a Professor at the University of Illinois (USA), PLATO enabled learners to be simultaneously connected to a central computer and to follow various online courses as well as receiving feedback on assignments. PLATO is said to have enabled learners to achieve the same level of success as face-to-face courses in less time. Then, in 2007, the first AI chatbots made it possible to offer tutoring to learners. Research shows that these chatbots achieved results similar to those of human tutors (Trumbore, 2023). However, the use of this technology to provide tutoring was mainly experimental.

Today, generative AI has more advanced capabilities, enabling more complex conversations and more effective personalised tutoring. According to Extance (2023), some teachers see generative AI as potential ‘thinking partners who could cost less than a human tutor and, unlike humans, are always available’. One example is the Khanmigo tutor and teaching assistant, one of the first automated tutors using ChatGPT. The tool is the result of a partnership between OpenAI and the Khan Lab School, a private school based in California’s Silicon Valley (USA). Using GPT-4, Khanmigo provides students with advice as they complete an exercise, saving teachers and learners time, and enabling them to focus more on discussion and learning during face-to-face sessions. Another example is TAL Education Group, a Chinese tutoring company based in Beijing, which has created a tool called MathGPT. According to Extance (2023), MathGPT is more accurate than GPT-4 at responding to questions specifically about maths. MathGPT, like Khanmigo, also aims to help learners by explaining how to solve problems.
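
For readers curious about what sits behind such tutors, the basic mechanism can be illustrated with a short, purely hypothetical Python sketch: a fixed system prompt steers the model towards guiding questions rather than ready-made answers, and the running conversation history gives the exchange continuity. As in the earlier sketch, call_llm stands in for whichever chat-style model is actually used; the prompt wording and message format are assumptions for illustration, not a description of how Khanmigo or MathGPT are built.

# A minimal, illustrative tutoring loop (not any vendor's actual design).

SYSTEM_PROMPT = (
    "You are a patient maths tutor. Do not give the final answer directly; "
    "ask guiding questions and point out mistakes one step at a time."
)

def call_llm(messages):
    """Hypothetical wrapper around a chat-style generative model."""
    raise NotImplementedError("Plug in the model your institution has licensed.")

def tutoring_session():
    """Run a simple console dialogue that keeps the full conversation history."""
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        student = input("Student: ").strip()
        if student.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": student})
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        print("Tutor:", reply)

if __name__ == "__main__":
    tutoring_session()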

 

 

3. Risks and concerns

In spite of its potential, AI poses a number of risks or concerns, which we need to be aware of in order to anticipate them before embracing this technology. Researchers and designers of AI tools recognise a range of drawbacks, such as AI’s tendency to generate incorrect or nonsensical answers, bias in the data used for text generation, and the production of harmful content (OpenAI, 2023; Quinio and Bidan, 2023; Stokel-Walker and Van Noorden, 2023). Using AI for educational innovation without taking account of such limitations would be counter-productive.

Such limitations would render AI, at best, useless and, at worst, detrimental to a learner’s ability to learn. Some higher education institutions, such as ASU, are trying to reduce AI’s limitations, even seeking to turn them into strengths, for example, by using them to improve a learner’s critical thinking skills (Nature, 2023). In the same vein, Raphaël Suire (2023), Professor of Innovation Management at IAE Nantes, University of Nantes (France), explained how he assessed his students on a digital economy course: “I asked them to draft a strategic analysis at home using ChatGPT. The aim was to assess the relevance of shared reasoning, as well as the work carried out with ChatGPT. They had to explain the whys and the wherefores. On a personal level, my knowledge base is being enriched, enabling me to identify and characterise usage and appropriation contexts more effectively.” Based on this type of experiment, one recommendation would be to train learners to validate or question an answer provided by AI: this requires learners to go back to the source of the data to check its accuracy and reliability and, after careful consideration, to decide whether or not to accept the results of the conversation with the AI tool.

The other risk, more of a societal nature, that often emerges when the question of generative AI development is broached, is the potential erosion of human cognitive capacity. ‘If we are content to use AI algorithms without seeking to understand their main operating principles and their implications for our lives, we run the risk of reducing or even losing our individual and collective intelligence: we will rely on AI mechanisms, thinking less for ourselves and developing less critical thinking’ (Roux et al., 2020).

AI therefore raises concerns about the idea that learners will simply be able to ask the tool to do the work for them, or at the very least, that they could become dependent on AI for quick answers, without understanding the significance or the reasons. Nevertheless, history has shown time and again that education always adapts to new technologies. In the 1970s, the increased use of hand-held calculators caused maths teachers to worry about the future of their discipline (Rudnick and Krulik, 1976), but it is safe to say that mathematics has survived. Just as Wikipedia, Google and all the other digital educational tools that have gone before didn’t sound the death knell for teachers, AI certainly won’t either. New technologies simply lead to new and innovative ways of organising our work (Perez, 2009). And the same will certainly be true of generative AI. Teachers will need to take bold steps to ensure that they don’t miss the opportunity to innovate in their teaching practices and be vigilant in ensuring that AI is harnessed in a way that delivers enhanced, ethical, and responsible teaching.

 

Conclusion

AI technology, particularly generative AI, has emerged as a new phenomenon in the world of pedagogy, and as yet there is little knowledge of how to define the associated frameworks for use (Romero and Heiser, 2023). In this article, we have tried to provide some pointers in response to this issue. The potential of AI as a tool for educational innovation is considerable, whether for personalised tutoring or for teaching and learning. It is true that there are still risks that need to be considered before AI is fully adopted. However, as our understanding of the advantages and the limitations of AI improves, more initiatives from educational establishments will certainly emerge. Whilst there were concerns following the high-profile launch of ChatGPT some time ago, the innate resistance of the human mind to change is a well-described phenomenon, and one that is understandable from evolutionary and social psychology perspectives (Tobore, 2019). Our conviction is that the Human-AI combination can truly transform teaching, leading to innovation, whilst placing the teacher at the very heart of the process.

 

References

 
