Interactivity and Feedback
Introducing interactivity into eLearning courses has been shown to benefit training strategies. As one form of interactivity, feedback in eLearning is a motivational tool that increases the effectiveness of online training programs.
Traditional Feedback
Traditionally, interactive questions in asynchronous online learning have involved selecting from a list of multiple-choice options and receiving pre-written feedback. Good instructional feedback can be constructive, meaning that it explains not just whether the answer is correct but also why, and intrinsic, meaning that it explains the consequences of the answer. Prefabricated feedback, while effective in many ways, lacks personalization; it limits the learner’s imagination and ability to recall or arrive at answers on their own, and it cannot specifically address individualized learner responses or provide actionable guidance or consequences based on those responses.
Personalized AI-Generated Feedback
With the advent of easy-to-use generative artificial intelligence (AI) tools, it is now possible to provide robust, personalized feedback in real-time based on a user’s typed input. This case study sought to understand how AI-generated feedback would be received by instructional designers and subject matter experts in an eLearning module on the sensitive topic of suicide prevention.
Case Study Design
The eLearning module in this case study consisted of introductory materials on suicide prevention and on how to initiate a conversation with a coworker who you are concerned may be considering suicide. A brief scenario set the scene for the interaction. A question prompted the learner to consider how they might approach the coworker and instructed them to type their answer into an open-response field. On the next page, a box displayed immediate, personalized, AI-generated feedback in text format that addressed the learner’s input. A “try again” button cleared the interaction, allowing the learner to continue experimenting with different approaches.
Methodology
An open-source AI tool was integrated into an interactive online module, created using a popular eLearning authoring tool. The module was distributed to a group of suicide prevention subject matter experts and instructional designers with a broad array of expertise. The study included a survey containing Likert and open-response questions that addressed the participants’ experiences with and opinions of the interaction. Seventeen participants responded to the survey.
Overall, the responses to the Likert scale questions indicated that the AI-generated feedback was generally clear and consistent, appropriate for the subject matter, motivating, actionable, and personalized, and that it contributed meaningfully to the effectiveness of the module. The qualitative data reinforced these results and provided recommendations for improvements, such as incorporating positive reinforcement.
The Integration Process
Integrating artificial intelligence into an eLearning authoring tool for an online, asynchronous suicide prevention module presented both technical and content challenges. A popular eLearning authoring tool provided the interactivity, JavaScript handled learner input, and an integrated development environment (IDE) hosted the server that connected to an open-source AI tool.
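The wiring between these pieces can be sketched roughly as follows. This is a minimal, hypothetical illustration: the endpoint URL, field names, and function names are assumptions, not the study's actual code.

```javascript
// Hypothetical sketch: the authoring tool's page captures the learner's typed
// answer, sends it to the IDE-hosted server that relays it to the AI tool,
// and displays the returned feedback text.

// Build the JSON payload sent to the server relay.
function buildRequestBody(learnerInput) {
  return JSON.stringify({ answer: learnerInput.trim() });
}

// Send the learner's answer and return the AI-generated feedback text.
async function getFeedback(learnerInput) {
  const response = await fetch("https://example-host.app/feedback", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildRequestBody(learnerInput),
  });
  const data = await response.json();
  return data.feedback; // shown in the feedback box on the next page
}
```

Keeping the payload construction in its own small function makes the input handling easy to test independently of the network call.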
By systematically addressing the challenges that arose using the best practices found below, it was possible to create a realistic, safe and engaging learning experience for sensitive topics such as suicide prevention.
Best Practices
Ensure Clear and Consistent Feedback
Initial AI-generated responses were sometimes vague, inconsistent or generic. In other cases, the response didn’t align with the learner’s input. For example, the AI generator would respond “positively” to an incorrect answer or would contradict one of its previous responses. In other instances, the responses would refer to the learner in the third person, which detracted from the desired conversational tone.
Achieving consistent, sensitive feedback required iterative testing and subject matter expert reviews of the prompt given to the AI tool. Refining prompts by providing clear instructions and following standard prompt engineering practices improved the AI tool’s ability to deliver appropriate and actionable guidance.
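As an illustration of what such refinement can look like, a prompt might encode the desired tone and feedback types as explicit instructions. The wording below is a hypothetical sketch; the study's actual prompt is not published.

```javascript
// Hypothetical system prompt for the AI tool; these instructions are
// illustrative only, not the prompt used in the study.
function buildSystemPrompt() {
  return [
    "You are giving feedback to a learner practicing how to start a",
    "supportive conversation with a coworker they are worried about.",
    "Address the learner directly in the second person.",
    "Be constructive: explain why the learner's approach works or falls short.",
    "Be intrinsic: describe the likely consequence of the approach.",
    "Respond only to what the learner actually wrote, and stay consistent",
    "with accepted suicide prevention guidance.",
  ].join(" ");
}
```

Spelling out the second-person requirement directly addresses the third-person responses observed in early testing, and naming the constructive and intrinsic feedback types steers the tool toward the desired structure.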
Personalize Feedback to Enhance Engagement
Without appropriate prompting, the AI generator sometimes provided similar feedback for every answer, which lacked the desired personalization. Again, prompt engineering enabled specific responses that directly addressed the text that the learner submitted.
The eLearning module contained a “try again” button, which allowed the learner to enter a new answer and refresh the AI-generated response to receive new feedback. The ability to test new responses in a safe space without fear of failure or judgment enabled participants to experiment and obtain a deeper understanding of the module’s content and how they might apply what they learned to their own experiences.
Handle Sensitive Topics Appropriately
Suicide prevention is a sensitive topic with high stakes. Relying on a generative AI tool to develop feedback based on a user’s response could introduce risk if the tool were to provide instruction that is not generally accepted by suicide prevention best practices or that is considered harmful.
Overall, participants indicated that the feedback was sensitive to the topic and that the AI tool responded appropriately in almost all cases. One participant suggested that a closed-source AI tool, as opposed to the open-source one this module employed, could give the course designer more control and even pull from a specific curriculum, which would feel less risky for sensitive, high-stakes topics.
Incorporate Positive Reinforcement
The prompt in the module did not direct the AI tool to deliberately indicate whether the learner’s answer was correct, nor to provide praise or criticism. Instead, the AI tool was prompted to provide constructive feedback about the learner’s answer and how it could be improved.
Results from the survey indicated that training professionals felt that learners would want to know if they were on the right track, and that adding positive encouragement, such as “Great job!” or “Not bad!” would better inform the learner of their progress. Future iterations of this project may include prompting the AI tool to measure how correct or accurate the learner’s response is and to provide positive reinforcement based on that assessment, in addition to the constructive, instructional, and intrinsic feedback given in the current module.
Address Technical Issues
The module exhibited occasional technical issues, such as slow response times and a malfunction that prevented the “try again” button from resetting the text entry field. The former was resolved by upgrading to a premium tier of the integration’s IDE service, and the latter by troubleshooting the JavaScript code. Understanding how the tools, code, and AI generator work together was crucial for uncovering and fixing issues.
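A corrected “try again” handler might look like the following minimal sketch. The element IDs and function name are assumptions, since the module's actual code is not shown; the document object is passed in so the reset logic can be exercised in isolation.

```javascript
// Hypothetical "try again" handler: clears both the learner's previous answer
// and the displayed AI feedback, then returns focus to the entry field.
function resetInteraction(doc) {
  const field = doc.getElementById("learner-answer");   // assumed ID
  const feedbackBox = doc.getElementById("feedback-box"); // assumed ID
  field.value = "";             // clear the text entry field (the original malfunction)
  feedbackBox.textContent = ""; // clear the previous AI-generated feedback
  field.focus();                // return the cursor so the learner can retype
}
```

Explicitly clearing the input's `value` property, rather than relying on the authoring tool to reset the page state, is the kind of fix this sort of malfunction typically requires.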
Results
This case study showed that the integration of an open-source AI generator into an asynchronous online module developed using a popular eLearning authoring tool was well-received by both instructional designers and subject matter experts. High average scores across all Likert scale questions, as well as the positive sentiment of the open-ended responses, illustrated the enthusiastic reception of AI-generated, personalized, immediate feedback in online training, even for a sensitive topic.
Results from this case study point to the potential of such integrations to improve the engagement and motivation of learners and the effectiveness of interactivity — including feedback — in eLearning modules overall. Future iterations of this study will include testing a similar, AI-integrated online module on learners rather than solely on training experts. The study will aim to measure the effect of AI-generated feedback on self-efficacy in leaders taking online leadership training. Additional research on AI-generated feedback, including safety and ethics, engagement and motivation, and general effectiveness in reaching learning outcomes, would benefit many fields, including instructional design and online training.