Real-time learning analytics system for improvement of on-site lectures

Purpose
The goal of this research is to provide a real-time lecture assistance system. The study focuses on on-site classrooms where teachers give lectures and a large number of students listen to the explanations, do exercises, and so on.
Design/methodology/approach
The proposed system collects teaching and learning activities from a teacher and students in real time using an e-learning system and an e-book system. The collected data are processed immediately to provide feedback to the teacher both before and during the lecture. The preview achievement graph, for example, allows the teacher to see which pages were well previewed and which were not. During the lecture, real-time analytics graphs are shown on the teacher's PC, so the teacher can readily grasp the students' status and whether or not they are following the explanation.
Findings
The authors first verified the effectiveness of each tool developed in this research through a case study. The authors then carried out a large-scale experiment using the real-time analytics graph to see whether the proposed method could enhance teaching and learning in on-site classes. According to the findings, teachers could adjust the tempo of their lectures based on the real-time feedback system, which in turn encouraged students to bookmark and highlight keywords and phrases.
Originality/value
Real-time learning analytics helps teachers and students improve teaching and learning in lectures. Teachers can begin examining this new approach immediately to enhance their lectures.
Published by Emerald Group Publishing Limited. This article is licensed under the Creative Commons Attribution 4.0 International License. Anyone may copy, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), provided the original publication and authors are properly credited.
Learning analytics has received a great deal of interest in the technology-enhanced learning research arena. The Society for Learning Analytics Research describes learning analytics as "the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs." In the early phases of learning analytics, researchers debated strategies for monitoring a learning environment and collecting data. Virtual learning environments and learning management systems such as Blackboard (Bradford et al., 2007) and Moodle (Dougiamas and Taylor, 2003) have since made it possible to gather massive amounts of educational data (educational big data) in a straightforward manner. The most recent studies have concentrated on methods for analyzing educational big data and on reporting, that is, how to deliver feedback on analysis findings to teachers and students.
Khalil et al. (2016) conducted a survey on learning analytics and classified the methodologies into seven categories:
data mining – predicting students' academic performance (Asif et al., 2017), identifying students at risk using clicker answers (Choi et al., 2018), and forecasting the relationship between studying time and learning performance (Jo et al., 2014);
statistics and mathematics – building a grading system (Vogelsang and Ruppertz, 2015) and temporal discourse analysis of an online discussion (Lee and Tan, 2017);
text mining, semantics and linguistic analysis – summarization of students' learning journals (Taniguchi et al., 2017) and comprehension of students' self-reflections (Kovanović, 2018);
visualization – a comprehensive overview of students' learning from a learning management system (Poon et al., 2017), an awareness tool for teachers and learners (Martinez-Maldonado et al., 2015), and a learning analytics dashboard (Aljohani et al., 2018);
network analysis – association analysis between technology use and cognitive presence (Kovanović, 2017), classification of students' patterns into levels of engagement (Khalil and Ebner, 2016), and a network analysis of LAK (Learning Analytics and Knowledge) conference papers (Dawson et al., 2014);
qualitative analysis – an evaluation of MOOC discussion forums (Ezen-Can et al., 2015) and an analysis of instructors' comments (Gardner et al., 2016); and
gamification – a gamified e-assessment platform (Gañán et al., 2017), a gamified dashboard (Freitas et al., 2017), and a competence map (Grann and Bushway, 2014).
Learning analytics results can help teachers and students improve their teaching and learning. Gathering feedback for enhancing the learning environment and the learners themselves is therefore an essential topic in learning analytics. In terms of frequency, there are roughly three kinds of feedback loops: annual, weekly and real-time. The studies mentioned above fall into the annual or weekly feedback types, because the analysis findings are not immediately given back to the on-site teachers and students who supply the educational/learning records. The reason is obvious: learning analytics is mostly carried out after lessons, school terms or school years, so the feedback is delayed accordingly. However, real-time feedback can be highly valuable and beneficial for teachers and students in on-site classes.
Our research focuses on providing feedback – in particular, on how to deliver useful information to on-site classrooms even during lectures. The goal of this research is to realize real-time feedback, which has not been widely explored in on-site educational settings. Our target is on-site classrooms where teachers give lectures and a large number of students listen to the explanations, do exercises, and so on. In such a large classroom, it is difficult for teachers to grasp students' conditions and actions. We use both an e-learning system and an e-book system to capture learning activities during lectures in real time. We have created two major feedback mechanisms. One is useful for a teacher just before a lecture: the system generates summary reports of the previews of the provided materials and the quiz scores. Using the preview achievement graph, the teacher can determine which pages were well previewed and which were not. Furthermore, the teacher can see which quizzes were difficult for students and which pages of the material could be used in the lecture to help them. The other is a set of real-time analytics graphs, which allow the teacher to adjust the lecture pace during the lecture. The system successively gathers the e-book logs generated by students and performs real-time analytics to identify how many students are following the teacher's explanation. The remainder of this article describes the mechanics of our real-time feedback system and reports the experimental outcomes.
Review of the literature
In terms of frequency, there are roughly three kinds of feedback loops: annual, weekly and real-time. The evaluation and improvement of a course is a common example of a yearly (or term-by-term) feedback loop. Typically, students' grades, test results, class surveys and so on are analyzed and assessed. On the e-book system, the relationship between self-efficacy and learning behaviors was investigated (Yamada et al., 2015); student actions related to markers and annotations were found to be associated with students' self-efficacy and the intrinsic value of the learning materials. A yearly (or term-by-term) feedback loop might also incorporate an analysis of students' performance (Okubo et al., 2016) and a prediction of students' final grades (Mouri et al., 2016). In a yearly feedback loop, the feedback results are delivered the following year; in other words, the students and teachers who supplied the learning records do not receive the feedback directly. Feedback may also draw on an analysis of previous years' learning records.
A weekly feedback loop may propose relevant items depending on students' status, which can be assessed by predicting academic success from learning records such as attendance reports and quiz outcomes. For example, analytics of preview and review patterns (Oi et al., 2015) or learning behavior analytics (Yin et al., 2015) may be used to better assess students' weekly performance. Text analytics technology delivers summarized contents for preview and review (Shimada et al., 2015, 2016). Unlike a yearly feedback loop, the findings of the analysis are provided directly back to the students and teachers who supply the learning logs.
There are several works that deal with real-time learning analytics. Minovic et al. presented a visualization tool that lets instructors monitor students' learning progress in real time while the students play an educational game (Minovic and Milovanovic, 2013). Piech et al. gathered tens of thousands of program codes and used a machine learning method to detect students' "sink" states; students were given feedback just before they were about to enter such undesirable states (Piech et al., 2012). Freitas et al. addressed the efficacy of rapid feedback to students, particularly the effects of gamification in university education (Freitas et al., 2017). Fu et al. also offered real-time program code analysis (Fu et al., 2017); they created a learning dashboard that records student behavior in the classroom and highlights the various issues students encounter. Although these studies acquired real-time data, the analytics and feedback were intended for activities in virtual learning environments. In contrast, the goal of our research is to create real-time feedback loops in which analytical findings can be given back to on-site students and lecturers even during a lecture. An instructor can monitor what students are doing, such as whether they are following the explanation or doing something unrelated to the lecture. Rather than talking constantly, a teacher can control the tempo of the lecture and/or devote more time to exercises.
Cyber-physical educational system implementation
Three systems at our institution gather different types of educational/learning logs: e-learning (Moodle), e-portfolio (Mahara), and e-book (BookRoll). Students use these platforms to submit reports, take quizzes, access materials, and reflect on their learning activities. The e-book system collects more fine-grained learning records, such as when a learner accesses certain content or flips a page of the material. All students use their own computers to access these systems from anywhere, on or off campus.
The e-book logs were obtained using the "BookRoll" e-book system. Table I contains examples of e-book logs. The logs cover a wide variety of activities. For example, OPEN indicates that the student opened the e-book file, while NEXT indicates that the student moved to the next page by clicking the next button. The browsing duration for each page can be computed by subtracting successive timestamps. On the e-learning system, learning records such as attendance and quiz scores are gathered from tables in the Moodle database. By merging the linked data, the system evaluates quiz results and class attendance. To realize the proposed real-time learning analytics system, we primarily employ the e-learning system and the e-book system in this research.
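As a rough illustration, the per-page browsing duration can be derived from successive timestamps as described above. The record layout and field names below are hypothetical simplifications of the BookRoll logs in Table I, not the system's actual schema; the sketch also assumes the logs are already sorted by time for a single student:

```python
from datetime import datetime

# Hypothetical simplified log records: (user, operation, page, timestamp)
logs = [
    ("u1", "OPEN", 1, "2018-10-01 10:00:00"),
    ("u1", "NEXT", 2, "2018-10-01 10:01:30"),
    ("u1", "NEXT", 3, "2018-10-01 10:03:00"),
]

def browsing_durations(logs):
    """Compute the seconds spent on each page by subtracting successive timestamps."""
    times = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for _, _, _, t in logs]
    durations = {}  # (user, page) -> seconds spent on that page
    for i in range(len(logs) - 1):
        user, _, page, _ = logs[i]
        durations[(user, page)] = (times[i + 1] - times[i]).total_seconds()
    return durations

print(browsing_durations(logs))
# {('u1', 1): 90.0, ('u1', 2): 90.0}
```

In practice the same subtraction would be applied per student after grouping and sorting the raw event stream.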
On-site lecture assistance system
We give an example case study, as illustrated in Figure 1, that was used in a lecture at our institution. The timeline is separated into two phases: before and during class. In the previous lesson, the teacher provided students with summary materials that were generated automatically using the summarization approach (Shimada et al., 2017). Students previewed the assigned materials prior to class, and the system gathered the operation logs recorded during the previews. Students also completed the quizzes before the start of class, and the results were saved on the server.
Our system analyzed the learning logs just before the lecture began to provide a summary report of preview achievement and quiz results (details are given in Section 2.6). Furthermore, the system offered information on essential pages that should be explained thoroughly in the lecture. For example, the teacher should concentrate on quiz-related pages, particularly those associated with lower quiz scores. In advance, our system examined the link between quiz statements and the associated pages in the lecture material. Section 2.3 describes how we find significant pages automatically.
The teacher presented the contents of the materials during the lecture, while students browsed the materials on their own computers. Students at our institution were instructed to open and follow the same page as the teacher, adding highlights or memos to the relevant parts. Learning logs were gathered and saved successively throughout the lesson. The analytical findings were shown instantly and updated every minute on the web interface, so the teacher could monitor the most recent student activity. The visualization displayed real-time data such as how many students were following the lecture, how many students were browsing previous pages, and so on. Section 2.6 describes the web interface. The teacher adjusted the tempo of the lecture based on the students' status; for example, if several students were not keeping up and were still on a previous page, the teacher could slow down the lecture.
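The real-time graph described above essentially counts how many students are on the teacher's current page, behind it, or ahead of it. A minimal sketch of that classification follows; the snapshot structure and function name are illustrative assumptions, not the system's actual data model:

```python
def following_status(student_pages, teacher_page):
    """Classify each student relative to the teacher's current page.

    student_pages: mapping of student id -> the page that student has open.
    Returns counts of students following, behind, and ahead of the teacher.
    """
    status = {"following": 0, "behind": 0, "ahead": 0}
    for page in student_pages.values():
        if page == teacher_page:
            status["following"] += 1
        elif page < teacher_page:
            status["behind"] += 1
        else:
            status["ahead"] += 1
    return status

# Hypothetical snapshot of the latest page each student has open
snapshot = {"s1": 12, "s2": 12, "s3": 10, "s4": 13}
print(following_status(snapshot, teacher_page=12))
# {'following': 2, 'behind': 1, 'ahead': 1}
```

Recomputing these counts from the latest e-book logs once a minute would yield the graph the teacher watches during the lecture.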
Significant page mining
Because quizzes are often prepared utilizing the contents of lecture materials, there is a strong link between lecture materials and quizzes. Related pages are essential for comprehending the materials’ contents. However, lecture materials and quizzes are maintained independently or are very loosely linked in systems utilizing topic names. We can manually examine the connection between a quiz item and its linked pages, but this becomes difficult and unrealistic as the number of quiz items and/or pages grows. Furthermore, if the lecture material is altered, such as page numbering, the instructor must update the correspondence. As a result, we devised a technique for determining correspondences automatically.
Our approach is based on the assumption that a related page contains the same terms as the quiz statement. Each quiz statement is broken down into morphemes, and the nouns n (n = 1, ..., N) are extracted. A normalized histogram h_n is generated for each noun n. Each bin b_{u,n} of the histogram h_n reflects the number of times noun n appears on page u; the bins are normalized after counting the occurrences of noun n across all pages. To obtain the final result, we sum the histograms over all nouns and normalize. The related score of page u is defined as this normalized value r_u.
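The related-score computation above can be sketched as follows. This is a simplified illustration that assumes the pages and the quiz statement have already been tokenized and the nouns extracted; morphological analysis is omitted, and all names are hypothetical:

```python
from collections import Counter

def relate_scores(quiz_nouns, pages):
    """Compute the normalized related score r_u for each page u.

    quiz_nouns: nouns extracted from one quiz statement.
    pages: list of token lists, one per page of the lecture material.
    """
    counts = [Counter(p) for p in pages]   # per-page term frequencies
    scores = [0.0] * len(pages)
    for noun in quiz_nouns:
        total = sum(c[noun] for c in counts)   # occurrences across all pages
        if total == 0:
            continue
        for u, c in enumerate(counts):
            scores[u] += c[noun] / total       # normalized histogram bin b_{u,n}
    s = sum(scores)
    return [x / s for x in scores] if s else scores

pages = [["network", "graph"], ["network", "learning"], ["quiz"]]
print(relate_scores(["network", "learning"], pages))
# [0.25, 0.75, 0.0]
```

Pages sharing more of the quiz statement's nouns receive proportionally higher related scores.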
Although the mining approach identifies pages that are strongly relevant to a given quiz statement, it does not take relationships between pages into account. We therefore use a ranking mechanism that assigns a ranking score to each page, a concept inspired by VisualRank (Jing and Baluja, 2008). A ranking vector R is updated iteratively as R = αSR + (1 − α)B, where S is the column-normalized similarity matrix whose element S_{u,v} measures the similarity between pages u and v. In this work, we simply compare two feature vectors represented by bags of words using the L2 norm (Zhang et al., 2010). B denotes a bias vector whose elements are the related scores r_u. R is updated repeatedly until it converges. The parameter α (0 ≤ α ≤ 1) controls the balance between the similarity matrix and the bias vector; α = 0.8 is often employed in practice, according to the literature (Jing and Baluja, 2008). Once the ranking vector R converges, pages that are related to key pages receive higher ranking scores. We regard the top N ranked pages as the most significant.
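The iterative update described above can be sketched as follows. The toy similarity matrix and bias vector below stand in for the real bag-of-words similarities and related scores, so the specific numbers are illustrative assumptions only:

```python
import numpy as np

def rank_pages(S, b, alpha=0.8, tol=1e-9, max_iter=1000):
    """Iterate R = alpha * S @ R + (1 - alpha) * b until convergence.

    S: column-normalized page-similarity matrix.
    b: bias vector holding the related scores r_u.
    """
    r = np.full(len(b), 1.0 / len(b))  # start from a uniform ranking vector
    for _ in range(max_iter):
        r_new = alpha * S @ r + (1 - alpha) * b
        if np.abs(r_new - r).sum() < tol:  # converged
            return r_new
        r = r_new
    return r

# Toy 3-page example with a hypothetical similarity matrix and bias vector
S = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.5, 0.3],
              [0.2, 0.2, 0.5]], dtype=float)
S = S / S.sum(axis=0)              # column-normalize
b = np.array([0.25, 0.75, 0.0])    # related scores r_u used as the bias
r = rank_pages(S, b)
print(r.argsort()[::-1])           # page indices ordered by ranking score
```

Because α < 1 and S is column-stochastic, the iteration is a contraction and converges to a unique fixed point; pages similar to high-bias pages inherit part of their score.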