Formative feedback generation in a VR-based dental surgical skill training simulator
Abstract
Fine motor skill is indispensable for a dentist. As in many other medical fields, the traditional surgical master-apprentice model is widely adopted in dental education. Recently, virtual reality (VR) simulators have been employed as supplementary components to the traditional skill-training curriculum, and numerous dental VR systems have been developed academically and commercially. However, the full promise of such systems has yet to be realized due to the lack of sufficient support for formative feedback. Without such a mechanism, evaluation still demands the dedicated time of experts, who are in scarce supply. To fill this gap in formative assessment for VR-based dental skill training, we present a framework that objectively assesses surgical skill and generates formative feedback automatically. VR simulators enable collecting detailed data on relevant metrics throughout a procedure. Our approach to formative feedback is to correlate procedure metrics with the procedure outcome to identify the portions of a procedure that need to be improved. Specifically, for each error in the outcome, the responsible portions of the procedure are identified using the location of the error. Tutoring formative feedback is then provided through the video modality. The effectiveness of the feedback system is evaluated with dental students in randomized controlled trials. The findings show the feedback mechanisms to be effective and to have the potential to serve as valuable supplemental training resources.
Keywords: Virtual reality, Dental skill training simulator, Formative feedback, Objective feedback, Video-based feedback.
INTRODUCTION
Surgical skill training has traditionally been based on the Halstedian apprenticeship model, whereby the surgical trainee performs a task with guidance and close supervision from an expert surgeon [18], [58]. However, several factors, including patient safety concerns, shortened training programs, limitations on available operating room time, and the desire for standardization, have strained this model [36]. In dentistry, increasing enrollments and a shortage of experienced instructors have resulted in high student-to-tutor ratios [61], which in turn means that students often do not receive as much supervised training as would be desirable and can end up practicing unsupervised during training. Even when an assessment is carried out, it is by nature subjective and lacks sufficient standardization [39], [51]. As a result, the past two decades have seen an increase in the use of simulation-based training [33] to provide trainees with increased training time and the skills needed to perform complex operations before practicing them on live patients. Recent years have seen the proliferation of VR-based dental simulators, driven by enabling technological advancements combined with the concrete benefits of the approach [16], [24], [34], [55], [57]. VR simulators offer high-fidelity simulations that are reusable and can be configured to give trainees practice on a variety of different cases [5]. They can also record accurate data on individual performance, which allows trainees to practice independently and receive objective feedback [42]. While many existing VR surgical simulators provide feedback, the feedback typically concerns the outcome and/or procedural kinematic parameters, with no linkage between them.
While such feedback can explain the differences between student and expert performance and/or the distance to the ideal performance, it lacks the essential causal information about actions and their desirable or undesirable effects. This type of feedback is known to be essential in effective training of psychomotor skills [26], [47].
In this paper, we present a formative feedback system that objectively assesses surgical skill and generates feedback in a VR dental simulator. The feedback prototype was developed for the access opening procedure of endodontic root canal treatment. Endodontics is one of the most challenging areas of dental surgery, and it can be associated with unwanted or unforeseen procedural errors [50]. A variety of procedural errors could contribute to reported failure rates as high as 32.8% [64]. Alhekeir and colleagues [2], using a self-report design, found a 68% error rate among senior students. In terms of quality, fewer than 50% of root fillings are found to be of acceptable quality [28], [43]. A suboptimal standard of root canal preparation in undergraduate training can translate into poor treatment outcomes and inferior standards of care for actual patients [12]. Inadequately treated teeth commonly feature iatrogenic errors such as ledges, perforations, and apical transportation [28], [43]. Perforations in endodontics can occur during access cavity preparation and mechanical instrumentation of the root canals [14]. The healing rate in teeth with perforations was found to be 30% lower than in teeth without perforation [7]. While the majority of evidence has focused on repairing perforations using various materials [35], the evidence on prevention and training is limited [50]. Although this gap has long been recognized, it remains largely unaddressed. The first step towards increasing patient safety in endodontic treatment is for all clinicians to acquire the relevant knowledge and skills in the early stages of training. Such skills are best learned through deliberate practice with sufficient objective formative feedback.
Using a VR dental skill training simulator, we generated formative feedback by correlating the information concerning errors and the portions of the procedure responsible for them and then communicating the information using an augmented playback modality. The tutoring feedback enables students to learn to associate their actions with the resulting performance.
2. Methodology
We focused on the automatic generation of feedback for surgical skill training in a dental VR simulator. While we are interested in developing general techniques, a specific domain is required to demonstrate and evaluate the approach in a rigorous way. While many details of the system are specific to the chosen dental surgical procedure, fundamental elements of the framework generalize to other surgical domains.
2.1. Simulator and domain
We employed the VR dental simulator developed by Rhienmora et al. [41]. The simulator operates on a standard PC connected to two GeoMagic Touch haptic devices [1] which control the dental handpiece and dental mirror (Fig. 1). A monitor is placed at eye level, and the haptic device is positioned at the elbow level directly in front of the participant. A virtual high-speed handpiece with a tapered bur of diameter 1 mm and length 6 mm is employed. The tooth model is acquired using three-dimensional micro-CT (RmCT, Rigaku Co., Tokyo, Japan). In the simulator, the mandibular right molar tooth is stored in the form of a three-dimensional grid of voxels representing the density of the structure at each point using a value between 0 and 255, with 0 representing an empty voxel. When the bur collides with the tooth, the force transmitted through the haptic device is a function of the density values of the colliding tooth voxels, with higher density values producing larger counterforces. The operator receives different force feedback depending on the density value of the tissue while cutting the tooth. A study of the construct validity of the simulator showed the haptic force feedback to the operator to be similar to working in the real situation [49] and a second study demonstrated the transferability of learned skills [50].
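The density-dependent force model described above can be sketched as follows. This is an illustrative reconstruction, not the simulator's actual code; the function name, the scaling constant `k`, and the use of the mean density are assumptions.

```python
import numpy as np

def contact_force(colliding_densities: np.ndarray,
                  penetration_dir: np.ndarray,
                  k: float = 0.01) -> np.ndarray:
    """Return a counterforce vector opposing the bur's penetration.

    Denser tissue (e.g. enamel vs. dentin) yields a larger counterforce,
    so the operator feels the material difference through the haptic device.
    Densities follow the paper's 0-255 voxel scale (0 = empty voxel).
    """
    if colliding_densities.size == 0:
        return np.zeros(3)                      # no collision: free motion
    magnitude = k * colliding_densities.mean()  # scale with mean voxel density
    direction = -penetration_dir / np.linalg.norm(penetration_dir)
    return magnitude * direction                # force opposes penetration
```

With this sketch, drilling through enamel (high density) would produce a larger counterforce than drilling through dentin (lower density), matching the behavior the construct-validity study reports.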
We selected the access cavity preparation phase of the root canal treatment procedure on the mandibular right molar (Vertucci’s type VIII root canal configuration [54]) to demonstrate and evaluate our approach. This procedure was chosen because it exclusively involves drilling, which is supported by the simulator, and because fine motor skill is essential to achieve an optimal outcome. In this phase, the endodontist drills a small access hole through the surface of the tooth crown to gain access to the pulp chamber and root canals for treatment. The ideal shape of the opening is a function of the tooth shape, tooth size, and the number and location of the root canals. The number and location of the root canals can differ in the same tooth across different patients. The ideal result of access opening preparation is to create an unobstructed passageway to the pulp space and the apical portion of the root canals (Fig. 2, red-colored outline) without needlessly removing excess tissue.
During each procedure, data were gathered on elapsed time and on kinematic variables concerning:
- the position of the handpiece in x, y, and z-axes,
- the angulation of the handpiece with respect to x, y, and z-axes,
- drilling enabled/not enabled,
- the position of the mirror in x, y, and z-axes,
- the angulation of the mirror with respect to x, y, and z-axes,
- the force applied on the handpiece in x, y, and z-axes.
These variables were recorded in the kinematic procedure log from the beginning of the procedure to the end.
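One sample of the kinematic procedure log listed above could be represented as follows; the field names and the dataclass layout are illustrative assumptions, not the simulator's actual record format.

```python
from dataclasses import dataclass

@dataclass
class KinematicSample:
    t_ms: int                                    # elapsed time since start
    handpiece_pos: tuple[float, float, float]    # position in x, y, z
    handpiece_rot: tuple[float, float, float]    # angulation about x, y, z
    drilling: bool                               # drilling enabled/not enabled
    mirror_pos: tuple[float, float, float]       # mirror position in x, y, z
    mirror_rot: tuple[float, float, float]       # mirror angulation about x, y, z
    force: tuple[float, float, float]            # force on handpiece in x, y, z

# The kinematic procedure log is then simply the ordered list of samples
# appended from the beginning of the procedure to the end.
log: list[KinematicSample] = []
```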
Access preparation consists of three distinct stages (Fig. 3):
- Stage 1, initial drilling to shape the outline;
- Stage 2, extending the opening to the distal canal orifice;
- Stage 3, extending the opening to all the remaining canal orifices.
We approached feedback generation as a credit assignment problem [31], i.e., the problem of assigning credit or blame for the outcomes of a procedure to specific actions in that procedure. Our approach uses the spatial information of errors to obtain the associated temporal information of actions. The approach is robust and applicable to tasks involving both hard and soft tissue simulation. We evaluate the effectiveness of the feedback system using randomized controlled trials with dental students, measuring the learning gains across three groups of participants: a group trained without a training simulator, a group trained with the simulator without the feedback system, and a group trained with the simulator with the feedback system.
3. Formative feedback system
As shown in Fig. 4, our approach to formative feedback begins with an assessment of the procedure outcome to identify the location, type, and severity of errors. To determine the portions of the procedure responsible for the errors, the way the procedure was carried out by the student is assessed. The relation between procedure and outcome is then used to generate the feedback.
The major components of the feedback system are (i) Automated outcome scoring system to assess the outcome, (ii) Correlator to carry out the assessment of the procedure and perform the correlation between procedure and outcome, and (iii) Feedback generator to provide formative feedback.
3.1. Automated outcome scoring system
The automated outcome scoring system evaluates and assigns scores to the outcome to identify the types and locations of errors in the outcome. We use a general scoring algorithm [63] for outcome evaluation in dental procedures. The score-cube-based outcome scoring approach starts by creating three virtual templates that define the maximum, optimal, and minimum acceptable drilling regions. The virtual templates are applied to the score cube volume, where each voxel is assigned a score given by its proximity to the templates. The voxel scores are weighted based on their relative importance in defining the severity of errors. The student’s drilling area is then extracted, and the weighted scores are obtained by mapping the drilled area onto the score cube. Scores are computed for the four axial walls (Distal, Mesial, Lingual, and Buccal) and the pulp floor (Fig. 5).
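A minimal sketch of the score-cube idea follows, assuming illustrative score values of 1.0 inside the optimal template, 0.5 between the optimal and maximum templates, and 0 outside; the actual voxel weights in [63] depend on error severity and wall, so this is a simplification, not the published algorithm.

```python
import numpy as np

def build_score_cube(optimal: np.ndarray, maximum: np.ndarray) -> np.ndarray:
    """Build a per-voxel score volume from two boolean template masks.

    `optimal` and `maximum` are boolean masks over the tooth voxel grid;
    `maximum` is assumed to be a superset of `optimal`.
    """
    cube = np.zeros(optimal.shape)
    cube[maximum] = 0.5   # acceptable drilling beyond the optimal region
    cube[optimal] = 1.0   # optimal drilling region scores full marks
    return cube           # voxels outside the maximum template stay 0 (overcut)

def outcome_score(drilled: np.ndarray, cube: np.ndarray) -> float:
    """Mean per-voxel score over the student's drilled region."""
    return float(cube[drilled].mean()) if drilled.any() else 0.0
```

In this sketch a student who drills exactly the optimal region scores 1.0, while drilling into merely acceptable or forbidden voxels pulls the mean down, mirroring how the score cube penalizes deviation from the templates.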
The overall outcome score is computed as the average of the axial wall and pulp floor scores. An error log detailing the types and locations of errors in the outcome is also available from the outcome scoring system. Since providing feedback at the level of individual voxels is not useful, the voxel-level error information from the automated outcome scoring system is grouped into regions of under- and over-cutting. Based on discussions with an expert and on experiments, clusters of fewer than 50 voxels are considered minor errors that do not contribute to the performance and are discarded from further analysis.
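The grouping of voxel-level errors into regions, and the discarding of minor clusters below the 50-voxel threshold, can be sketched with a simple 6-connected flood fill; this stands in for whatever clustering the system actually uses, and the names are illustrative.

```python
from collections import deque

MIN_CLUSTER_VOXELS = 50   # clusters smaller than this are treated as minor

def error_clusters(error_voxels: set[tuple[int, int, int]]) -> list[set]:
    """Group error voxels into 6-connected clusters; drop minor ones."""
    remaining = set(error_voxels)
    clusters = []
    while remaining:
        seed = remaining.pop()
        queue, cluster = deque([seed]), {seed}
        while queue:                       # breadth-first flood fill
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (x + dx, y + dy, z + dz)
                if nb in remaining:
                    remaining.remove(nb)
                    cluster.add(nb)
                    queue.append(nb)
        if len(cluster) >= MIN_CLUSTER_VOXELS:   # keep only major errors
            clusters.append(cluster)
    return clusters
```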
3.2. Correlator
The formative feedback in our study addresses three aspects of errors: the type (what), the location (where), and the time they were committed (when). In practice, several mishaps or errors can arise during the access opening stage of root canal treatment, including unidentified root canals, damage to existing restoration, under-/over-extension, perforations, and crown fractures [21]. We limit our focus to the three most common types of errors:
- Undercut: the dentist drills a hole with too small a diameter, leaving the roots inaccessible
- Overcut: when the dentist removes more tooth mass than necessary
- Perforation: when the dentist accidentally drills a hole through the surface of the tooth.
Once the errors in the outcome are localized, the next step is to identify the actions in the procedure responsible for each error. However, there could be more than one underlying cause (action) that contributed to each error in the outcome. To provide formative feedback, it is necessary to understand the underlying actions that could lead to errors. For each operative error, we determine the possible causes as follows.
- Overcut error
As shown in Fig. 6, an overcut occurs when the student’s drilling reaches the area beyond the maximum region defined by the templates (Fig. 6(a)). In the resulting outcome, the overcut regions are recognized as an overcut error (the filled area in Fig. 6(b)). Although the filled area is labeled as an error, the actions taken in that region cannot immediately be labeled as wrong actions, because any of the following conditions could lead to overcut errors (Fig. 6(a–d)).
- An improper amount of force was exerted on the instrument.
- The instrument had an incorrect orientation.
- The right amount of force was applied, but the student did not recognize where to stop drilling and repeatedly drilled in the same region; repeated drilling in the same neighborhood eventually leads to an overcut error in that area.
- Undercut error
In contrast to overcut errors, undercut errors occur when the student did not clear the internal tooth anatomy entirely as required. The undercut regions can be determined using the optimal template, as shown in Fig. 7(a). As shown in Fig. 7(a–c), undercut errors can be caused by the following.
- An improper amount of force was exerted on the instrument.
- The instrument had an incorrect orientation.
- An insufficient number of passes over the drilled area prevented the student from reaching the optimal drill area.
- Perforation error
The perforation case occurs when the student’s drilling reaches beyond the maximum template, and the instrument punches through one of the tooth walls, resulting in an irreversible hole in the wall as shown in Fig. 8 (a–d). The causes of perforation errors are the same as for overcut errors.
Overcut errors occur from over drilling of the tooth (excessive drill actions), while undercut errors occur at the area of the tooth where the trainee did not drill as required (omitted drill actions). Consequently, drill actions associated with undercut errors cannot be identified in a straightforward manner. Therefore, we identify the nearest drilled regions of the undercut errors and perform the analysis on the drilling actions of the procedure in those regions to assign the blame for the undercut error.
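The nearest-drilled-region lookup for undercut errors can be sketched as a nearest-neighbor search over voxel coordinates; the function name and the brute-force search are illustrative assumptions.

```python
def nearest_drilled(undercut_voxel: tuple[int, int, int],
                    drilled_voxels: list[tuple[int, int, int]]):
    """Return the drilled voxel closest (Euclidean) to an undercut voxel.

    Since undercut regions were never drilled, blame is assigned by
    analysing the drill actions recorded at the nearest drilled voxels.
    """
    ux, uy, uz = undercut_voxel
    return min(drilled_voxels,
               key=lambda v: (v[0] - ux) ** 2 + (v[1] - uy) ** 2 + (v[2] - uz) ** 2)
```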
For some errors, the responsible portions of the procedure can be identified from the information obtained by correlating the outcome and the procedure. Other errors, however, are the consequence of more holistic characteristics of the procedure, such as incorrect tool angulation throughout a stage due to incorrect finger positions, or a misunderstanding of the sequence of stages; these are more challenging to identify and are not addressed in the current formative feedback system design.
3.3. Procedure and outcome correlation
For each error diagnosed in the outcome, the correlator needs to identify when and how the error was made during procedure execution. To determine when an error occurred, the spatial locations of its error voxels in the outcome are mapped to the collided voxel log to identify the portion(s) of the procedure responsible. Like the kinematic procedure log, the collided voxels are recorded every 1000 ms while the procedure is performed in the simulator. The log contains the timestamps and the locations of the voxels with which the instrument collided during the procedure.
Actions over multiple parts of a procedure may be responsible for a single error. Some error regions may also spread across more than one wall, and a single wall may contain more than one error. To map the errors with the portions of the procedure, the correlator must obtain the timestamps at which the error voxels are drilled. Using the log of collided voxels over time, the temporal information of each voxel is gathered, and the portions of the procedure associated with the errors are identified as shown in Fig. 9. Fig. 10 shows an example of mapping between the error voxels from error information from outcome scoring and the collided voxel log.
For the overcut and perforation error cluster types, the error-related timestamps are directly obtainable from the error voxels, since these errors are caused by drilling actions. For the undercut error clusters, the mapping cannot be done directly due to the absence of collided voxels. Therefore, for each undercut cluster, the correlator first looks up the nearest drilled area; from the voxels of that area, the timestamps are gathered from the collided voxel log for mapping. The large size of the log files can cause the mapping process to take significant time. To reduce the run time, in error clusters with more than 100 voxels (a threshold determined through experiments), the error voxels of a cluster are subsampled by taking every fifteenth voxel (also determined through experiments). The worst case for the mapping occurs when the error voxels are drilled out in different portions of the procedure; however, since drilling can proceed in only one direction, from the top occlusal surface towards the pulp floor, this issue is handled by indexing the error voxels.
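The error-to-timestamp mapping, including the subsampling of large clusters, can be sketched as follows. `collided_log` stands in for the collided voxel log, here assumed to be indexed by voxel coordinate; the names and constants mirror the thresholds stated above but the data structure is an assumption.

```python
SAMPLE_THRESHOLD = 100   # clusters above this size are subsampled
SAMPLE_STRIDE = 15       # take every fifteenth voxel

def error_timestamps(error_voxels, collided_log):
    """Map an error cluster's voxels to the procedure timestamps that drilled them.

    `collided_log` maps voxel coordinate -> list of collision timestamps (ms),
    built from the 1000 ms collided-voxel log.
    """
    voxels = sorted(error_voxels)            # index voxels for deterministic order
    if len(voxels) > SAMPLE_THRESHOLD:
        voxels = voxels[::SAMPLE_STRIDE]     # subsample large clusters
    stamps = set()
    for v in voxels:
        stamps.update(collided_log.get(v, ()))   # undrilled voxels contribute nothing
    return sorted(stamps)
```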
After mapping the error information with the procedure, the identified portions of the procedure are extracted to analyze the applied force, and the orientation on the instrument. In the absence of a standard amount of force and orientation, the challenge comes in determining whether the applied force and the tool orientation are correct. To get the baseline data, we had an expert perform the procedure three times. For each stage of the procedure, the average of applied force, and the mean orientation of the instrument were collected. The correlator compares force and tool orientation with the expert data for each stage.
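The per-stage comparison against the expert baseline can be sketched as a per-axis difference of means, under the assumption that each sample is an (x, y, z) force triple; the expert mean comes from the three expert runs mentioned above, and all names are illustrative.

```python
def stage_deviation(student_samples, expert_mean, axis_names=("x", "y", "z")):
    """Per-axis difference between the student's mean force and the expert's.

    `student_samples` is the list of (x, y, z) force triples recorded in the
    identified portion of the procedure; `expert_mean` is the expert's
    per-stage mean force, averaged over three expert performances.
    """
    n = len(student_samples)
    student_mean = [sum(s[i] for s in student_samples) / n for i in range(3)]
    return {axis: student_mean[i] - expert_mean[i]
            for i, axis in enumerate(axis_names)}
```

The same comparison would apply to tool orientation; the resulting per-axis deviations are what the correlator forwards to the feedback generator.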
The mapping information on each error cluster: the types of errors, their locations on the outcome, the timestamps during the procedure, the differences in applied force, and the orientation of the instrument relative to the expert in x-, y-, z-axes are combined and sent to the feedback generator component. The pseudo-code of the correlator component is shown in Fig. 11.
4. Feedback generator
The correlator component provides information on the types and locations of errors in the outcome, and the portions of the procedure identified as responsible for them. A further challenge is to decide how to convey this information effectively as tutoring feedback. The errors of different types are located in different regions of the outcome, and the portions of the procedure identified as their origins are temporally distributed across the procedure. We hypothesize that video is an appropriate modality for conveying the feedback: it allows the student to navigate easily through the procedure for review, and it can convey the task-relevant visual aspects of the internal anatomy as well as the errors in space and time. Hence, a video-based formative feedback generator was implemented in the dental VR skill-training simulator.
4.1. Video-based formative feedback generator
The feedback information is provided by replaying the procedure in the simulator while highlighting the error areas within the tooth volume at the identified point of time associated with the incorrect actions determined from the correlator component. The video playback interface consists of four main components: a video control panel, a mode control panel, the simulation panel, and a viewing aspect control panel (Fig. 12).
4.1.1. Video control and mode control panel
The video control panel offers several standard buttons, including Play, Stop, Skip Forward, and Skip Backward. The Play button toggles to Pause while the video is playing. The Skip Forward and Skip Backward buttons jump to the next/previous error in the video or to the beginning of the nearest stage, whichever comes first, allowing the user to move quickly and efficiently to the point of an error or stage. Fast-forwarding through portions of a procedure that do not contribute to the overall assessment reduces the time needed to complete the playback. Instructors can also benefit from this feature, using it as a supplement during assessment.
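The Skip Forward logic reduces to finding the earliest upcoming marker, whether an error start or a stage boundary; a hypothetical sketch, with timestamps in milliseconds and names that are assumptions about the player's internal state:

```python
def skip_forward(current_ms, error_starts, stage_starts):
    """Return the earliest upcoming error or stage-boundary timestamp.

    Implements "next error or beginning of the nearest stage, whichever
    comes first"; returns None when nothing lies ahead of the playhead.
    """
    upcoming = [t for t in list(error_starts) + list(stage_starts)
                if t > current_ms]
    return min(upcoming) if upcoming else None
```

Skip Backward would be the mirror image, taking the latest marker strictly before the current position.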
As the video is being played, a video progress bar is highlighted with red, blue, and yellow to denote the overcut, undercut, and perforation errors, respectively (Fig. 13). The color-coded error regions enable users to quickly focus on those portions of the procedure where errors were committed. Two vertical Stage Border bars appear on the progress bar to represent the three stages of the access opening procedure. They serve as reference points to indicate the current stage being played, the time spent in each stage, and the time and the type of errors that occurred in each stage.
The simulation panel (Fig. 14) hosts the video replay of the procedure, consisting of the tooth, the handpiece, and the mirror. In the default video replay mode, the original opaque tooth is displayed; however, for a better understanding of the feedback, students can switch to the transparent mode during playback. In the transparent mode, the error voxels are highlighted according to the error type (using the same colors as the progress bar) as they are being drilled.
The Mode control panel allows the user to switch between three modes: Train, Feedback, and Replay. In the Feedback mode, kinematic comparison graphs are displayed, plotting the force applied to the tooth and the orientation of the drill (along the x-, y-, and z-axes) for both the trainee and the expert during the video playback. In the Replay mode, the system allows the user to view his/her performance side by side with a video playback of the expert performance (Fig. 15). The teeth in both windows are displayed in transparent mode to allow the student to view the internal changes in the tooth as it is being drilled.
4.1.2. Viewing aspect control panel
Dentists commonly use the axial walls to communicate, and therefore perspectives from the four axial walls (Mesial, Lingual, Distal, and Buccal) are provided to the user. Upon selection, the camera rotates to the selected wall, and the user can view how the tooth is being drilled from that wall (Fig. 16). Users can also rotate the viewing angle step by step by tilting the walls and turning left/right. As shown in Fig. 17, the six rotation buttons are Buccal Tilt, Lingual Tilt, Mesial Tilt, Distal Tilt, Left, and Right. The zoom in/out functions allow the user to take a closer look at what is happening while the tooth is being drilled (Fig. 17).
In the default view, the tooth is positioned with the Buccal wall facing towards the user. As the tooth is being drilled, the Lingual and Mesial walls may be partially blocked from view by the drill and mirror. Therefore, functions are provided to remove the handpiece and the mirror from the playback scene.
5. Evaluation
We aim to evaluate two main hypotheses:
- Hypothesis I: Skill training using the simulator with video-based formative feedback is more effective than training using the simulator without feedback.
- Hypothesis II: Skill training using the simulator with video-based formative feedback is more effective than the traditional training approach.
To evaluate these hypotheses, a pre-test/post-test control group design is used with three groups:
- Experimental group I (G1): The participants in this group are trained with the VR simulator without feedback.
- Experimental group II (G2): The participants in this group are trained with the VR simulator with the video-based formative feedback and the overall outcome score obtained from the outcome scoring system.
- Control group (G3): The participants in this group are trained without the VR simulator in the traditional laboratory setting.
To test Hypothesis I, the learning gain of the training is defined as the difference between the pre- and post-test scores. Hypothesis I is confirmed if the student group trained with the simulator with video-based formative feedback achieves a higher learning gain than the control group of students trained with the simulator without feedback. The null and alternative hypotheses are:
- Null Hypothesis (H0): There will be no significant difference in learning gains between the participant group trained using the simulator with the video-based formative feedback system (G2) and the participant group trained using the simulator without feedback (G1).
- Alternative Hypothesis (HA): The learning gains of the participant group trained using the simulator with a video-based formative feedback system (G2) will be greater than that of the participant group trained using the simulator without feedback (G1).
Regarding Hypothesis II, we compared the learning gains of the participant group trained with the simulator with feedback against those of the participant group trained in the traditional laboratory setting. The null and alternative hypotheses are:
- Null Hypothesis (H0): There will be no significant difference in learning gains between the participant group trained using the simulator with the video-based formative feedback system (G2) and the participant group trained in the traditional laboratory setting (G3).
- Alternative Hypothesis (HA): The learning gains of the participant group trained using the simulator with the video-based formative feedback system (G2) will be greater than that of the participant group trained in the traditional laboratory setting (G3).
Ethical approval was obtained from the Institutional Review Boards of Mahidol University and Thammasat University. We recruited thirty dental students at Thammasat University School of Dentistry, Thailand. The inclusion criteria were fifth-year students at the dental school with no prior experience with haptic VR simulation. Students were excluded if any of the following criteria were present: left-hand dominance, prior experience with the simulation, or a score below 70 percent on a knowledge assessment of the endodontic access opening. No participant dropped out of the study. The flowchart of participants through the trials is shown in Fig. 18. At the end of the experiment, the authors held optional informal interviews with the participants.
6. Experimental setup
After consenting to participate in the experiment, each student was provided with a plastic typodont mandibular left molar and asked to prepare the access opening for the root canal treatment. The artificial plastic teeth are designed for endodontic training, with a simulated anatomical pulp cavity and canals, and can be imaged with x-rays. Similar to working with natural teeth, trainees can experience the difference in cutting feel between the enamel and the dentin material. The teeth were acquired from Nissin Dental Products Inc. (http://www.nissin-dental.net/). Examples of the drilled tooth before, during, and after preparation are shown in Fig. 19. We note the difference between the simulated tooth (lower right molar) and the plastic teeth (lower left molar): the lower left molar was used in the evaluation study because it was the only lower molar tooth available from the supplier, and its internal anatomy is similar to that of the tooth used in the simulation. Students were additionally provided with a tungsten carbide bur (330), a millimeter-graduated periodontal probe, a mouth mirror, and a sharp straight dental probe. All teeth were coded anonymously.
Data were collected in separate sessions between control and experimental groups after study hours. In the pre-training session, all participants performed access opening in the laboratory using plastic teeth. During the training session, participants from G1 were trained using the simulator without feedback; participants from G2 were trained using the simulator with formative feedback, and participants from G3 were trained in the traditional laboratory without the VR simulator.
Participants from G1 and G2 were briefly instructed on the use of the simulator, the experiment flow, and the requirements of the access opening. The participants received a verbal explanation of the system from the investigators and familiarized themselves with the system interface, but not with the task, for fifteen minutes. Participants from G2 were also informed that they were allowed to stop the video feedback once they felt that they understood the errors and their causes. During this familiarization or warm-up period, each participant was allowed to ask questions and receive further verbal explanations and suggestions from the investigators. After the familiarization, the participants continued to the training sessions. During the training stage, participants from G2 received scores on the outcome from the automated outcome scoring system and video-based formative feedback on their performance. They were allowed to navigate the video playback freely and to exit before the replay was over (and many did) if they felt that they had understood how and what led to the resulting performance score.
The primary outcome measure was the average of the outcome scores in each pre- and post-training session, assessed by a panel of two experts who were blinded to trainee and training status. The standard preparation is one (i) with straight-line access to the pulp chamber and root canal system without missing the orifice in any wall (Mesial, Distal, Buccal, Lingual, and Pulp Floor), (ii) without excessive removal of tooth mass, and (iii) without perforation. Based on this, all tooth surfaces (Mesial, Distal, Buccal, Lingual, and Pulp Floor) were evaluated and graded using as assessment criteria (i) straight-line access to the pulp chamber and root canal system without missing the orifice, and (ii) smoothness of the preparation (undercut, overcut, and perforation errors are penalized based on severity). The outcome score ranges from 1 to 100, with 70 being the clinically acceptable passing score. The outcome score was considered the primary dependent variable representing success in the learning outcome, while the scores on the axial walls and the pulp chamber floor were considered for detailed analysis of performance.
7. Results
The two experts evaluated the outcomes from the control and experimental groups in the pre- and post-training steps. The normality of the variables was confirmed using the Kolmogorov–Smirnov test. Since the outcome scores in this study were normally distributed, we computed the intraclass correlation coefficient (ICC) [30] to determine the degree of agreement between the scores of the two experts. ICC values range between 0.0 and 1.0, with values close to 1.0 indicating strong agreement between the scores given to each tooth by the raters. The high ICC values shown in Table 1 indicate strong inter-rater agreement in all categories (the axial walls, the floor, and the overall scores). All ICC coefficients were significant at the 0.05 level. The highest ICC (0.99) is observed in the overall score, while the lowest (0.91) is found in the floor scores.
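To illustrate the agreement statistic used above, the following is a minimal sketch (not the authors' analysis code) of ICC(A,1), the two-way random-effects, absolute-agreement, single-rater coefficient from McGraw and Wong [30], computed from an (n teeth × k raters) score matrix:

```python
import numpy as np

def icc_a1(scores):
    """ICC(A,1): two-way random effects, absolute agreement, single rater
    (McGraw & Wong, 1996). `scores` is an (n subjects x k raters) matrix."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)                  # per-subject (per-tooth) means
    col_means = scores.mean(axis=0)                  # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subject sum of squares
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater sum of squares
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                          # mean squares
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Two raters in perfect agreement yield an ICC of 1.0; disagreement in either ranking or absolute level pulls the value toward 0.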
The descriptive statistics of pre- and post-training scores in all groups are summarized in Table 2. The mean overall scores before training range between 61.30 and 65.80, while the means after training range between 66.30 and 91.60. The overall post-training scores showed a marked decrease in standard deviation compared to the pre-training scores in G2 (8.07 from 15.52) and G3 (7.70 from 15.71), indicating convergence in the performance of these two groups. The participant group trained using the simulator with feedback achieved a higher mean post-training overall outcome score (G2 Post-Mean = 91.60) than the control group trained with the simulator without feedback (G1 Post-Mean = 66.30) and the group trained in the traditional laboratory setting (G3 Post-Mean = 72.20).
Table 3 shows the mean learning gains for the three groups in the four axial walls, the floor, and overall. The learning gain of each student is computed as the difference between the post-training and pre-training scores. Independent-samples t-tests were used to compare the learning gains among the three groups. The student group trained with the simulator with feedback (G2) achieved the highest learning gain in overall score (mean 30.30 ± standard deviation 17.5) at the end of the training session. The statistically significant higher gain compared to group G1, trained with the simulator without feedback (mean 0.5 ± standard deviation 11.375), t(18) = −4.523, p < 0.001, confirms our hypothesis I that skill training using the simulator with video-based formative feedback is more effective than training using the simulator without feedback. Similarly, experimental group G2 had a statistically significantly higher learning gain at the end of the training session than control group G3 (mean 8.5 ± standard deviation 14.152), t(18) = −3.068, p = 0.007. This confirms our hypothesis II that skill training using the simulator with video-based formative feedback is more effective than the traditional training approach.
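The between-group comparison above can be sketched with SciPy's independent-samples t-test. The gain values below are hypothetical stand-ins chosen only to mirror the reported group sizes (ten students each, so df = 18), not the study data:

```python
from scipy import stats

# Hypothetical learning gains (post - pre) for ten students per group.
g1_gains = [-5, 8, 0, -12, 14, 3, -9, 11, -4, -1]    # simulator, no feedback
g2_gains = [42, 18, 35, 55, 12, 28, 40, 22, 31, 20]  # simulator with feedback

# Two-sided independent-samples t-test, df = 10 + 10 - 2 = 18.
t, p = stats.ttest_ind(g2_gains, g1_gains)
print(f"t(18) = {t:.3f}, p = {p:.4f}")
```

A positive t with a small p here indicates that the feedback group's mean gain exceeds the no-feedback group's beyond what sampling noise would explain.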
To better understand the difference in performance between groups, we analyzed the learning gains in the axial walls and the pulp floor of each group. Negative learning gains were observed in G1 for the Buccal and Lingual walls, as the scores of participants from G1 dropped significantly from their pre-training scores in these two walls (Table 3). In contrast, G2 and G3 had positive learning gains in the Buccal and Lingual walls, with higher gains observed in G2. One-way ANOVA with Tukey post hoc tests revealed that the mean learning gain of G2 is significantly higher in the Mesial and Distal wall scores than that of G1 (Mesial: p = 0.04; Distal: p = 0.00) and G3 (Distal: p = 0.02). Experts highlighted that even though the simulator did not include the dental probe tool, the positive learning gains indicate that the participants from G2 may have benefited from observing errors visualized in the axial walls in the video playback.
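The omnibus test and pairwise comparisons can be sketched in the same way (hypothetical Distal-wall gains, not the study data; `scipy.stats.tukey_hsd` requires SciPy >= 1.8):

```python
from scipy import stats

# Hypothetical Distal-wall learning gains for the three groups.
g1 = [-3, 2, -6, 1, -4, 0, -2, 3, -5, -1]
g2 = [25, 30, 18, 35, 22, 28, 31, 20, 26, 24]
g3 = [8, 12, 5, 15, 9, 11, 7, 13, 6, 10]

f, p_anova = stats.f_oneway(g1, g2, g3)  # omnibus one-way ANOVA
res = stats.tukey_hsd(g1, g2, g3)        # all pairwise comparisons
p_g2_vs_g1 = res.pvalue[1, 0]            # G2 vs G1 adjusted p-value
p_g2_vs_g3 = res.pvalue[1, 2]            # G2 vs G3 adjusted p-value
```

Tukey's procedure adjusts the pairwise p-values for the three simultaneous comparisons, so it is reported only when the omnibus ANOVA is significant.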
We next examine within-group individual learning gains. A paired t-test was used to compare the difference between the means of pre- and post-training scores. At p < 0.05, significant differences are found between pre- and post-training scores for all categories in the experimental group G2. In contrast, a significant difference is found for G3 only in the Distal wall, and no significant learning gains are found for G1.
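Unlike the between-group test, the within-group comparison pairs each participant's own pre- and post-training scores. A sketch with hypothetical scores (not the study data):

```python
from scipy import stats

# Hypothetical overall scores for ten participants, measured twice each.
pre  = [62, 55, 70, 48, 65, 60, 72, 58, 66, 52]
post = [90, 88, 94, 85, 92, 89, 96, 87, 93, 86]

# The paired t-test operates on within-subject differences (post - pre),
# removing between-subject variability from the comparison.
t, p = stats.ttest_rel(post, pre)
```

Because each difference is computed within one participant, this test is more sensitive to a consistent training effect than an unpaired comparison of the two score lists would be.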
A further important question is how the initial skill level affects learning gains. From Fig. 20 we can see that in group G2, learning gains are high for low initial scores, but there is little improvement for high initial scores. The same trend holds for group G3, though the effect of initial skill level is less pronounced. For group G1 we see little effect, if any.
8. Discussion
Dental students acquire pre-clinical knowledge from a range of media, including didactic lectures, seminars, online learning, and reading. In translating knowledge into skills, dental students practice using extracted human teeth, artificial teeth (and jaws) mounted in phantom heads, and computer-based training simulators. Artificial teeth allow students to develop hand-eye coordination, indirect vision, and dexterity, but tactile sensation is difficult to explain verbally [44]. Other drawbacks include a lack of anatomical structure and high cost. Extracted teeth have higher fidelity of physical properties than artificial teeth; however, standardization of training procedures is often problematic due to anatomical and pathological variations.
With traditional phantom head simulator training, students should ideally receive assessment and feedback on each stage of their work before moving on to the next stage of the procedure. However, tutors are often only able to inspect each student's outcome due to time constraints and high student-to-tutor ratios [18], [44], [61]. By only examining the result of the pre-clinical training task, instructors can rarely assess the actual procedure each student followed to achieve the desired outcome, and the feedback can be subjective, depending on the experience and opinion of the instructors [40]. In this setting, the feedback, when given, is often nonspecific, making it ineffective in providing learners with concrete strategies for improvement.
In recent years computer-based simulators have been widely adopted into the dental curriculum [10], [13], [62]. The operator of a computer-based simulator is usually presented with a 3D target area that they are instructed to remove using a dental handpiece. Typically, feedback is generated using a combination of the amount of the target shape removed, the damage done to the area outside the target, and the time taken to complete the task. Despite its limitations, this shape-agreement approach has been widely adopted in computer-based simulators [53], [55]. But knowing the percentage or volume of material removed inside or outside a target area might not help the endodontic trainee who is practicing to improve their skills. Since not all removed material is equally important, the trainee should be informed about which areas material has been removed from and how critical those areas are.
Metrics based on kinematic data from the user’s movement and force exertion have also commonly been used as the basis for comparison with an expert’s performance on the same exercise [52]. Rhienmora and colleagues [41] presented and evaluated such a feedback system using a haptic dental simulator. Comparing a student’s performance with an expert’s in this way uses more factors than the shape-agreement method; however, the information is limited to a particular exercise on a particular tooth. Although such comparison may not immediately lead to internalization of the skill in a form transferable to other contexts, it is still useful for the trainee to learn how the expert would perform in a given scenario.
Another commonly used metric in computer-based dental simulators is task completion time or task time [4], [20], [57]. While learning and developing a skill, receiving feedback on how much time was taken may not be particularly useful. It may be true that an expert can perform a procedure more quickly than a novice, but providing this metric simply informs the novice of this fact without offering any guidance on how to achieve mastery. Additionally, it has been shown that introducing time pressures can negatively impact a novice’s performance and impede their ability to concentrate on the factors that actually would lead to improved performance [9].
Central to effective learning in simulation-based skill training is the role of feedback on a learner’s performance [8], [29], [33]. The formative feedback in our study is constructed by forming a linkage between information about the outcome of the performance, known as knowledge of result (KR), and information about the quality of performance and movement characteristics, known as knowledge of performance (KP). The availability of KR feedback during simulated practice has been identified as one of the most important factors leading to differences in motor learning [19], [46], [48]. Provision of formative feedback from the simulator was found to result in significant performance improvements relative to training using the simulator without feedback (G1) and training in the conventional setting (G3). Learning gains were particularly strong for trainees with low pre-training scores in G2; gains were lower for similar students in G3 and only moderate in G1. This shows the benefit of the simulator with feedback over both traditional training and the simulator without feedback for students with low skill levels, and suggests the importance of high-quality formative feedback, particularly for students in the early stages of skill development. In [50], the authors showed that even when novices were provided with simple one-time summative feedback on the outcome score, training with the haptic VR simulator and with the conventional phantom head had equivalent effects on minimizing procedural errors in endodontic access cavity preparation. They also reported that the participants trained with the VR simulator tended to remove less tooth mass. The positive learning gains of G2 and G3 in our study indicate that participants trained with the haptic VR simulator with formative feedback and with the conventional phantom head had similar effects on minimizing procedural errors in endodontic access cavity preparation.
Although tooth mass was not measured in this study, given the positive learning gains, we expect that the participants in our study achieved a similar effect.
Feedback for the development of psychomotor skills can be classified into immediate and terminal, where immediate feedback occurs right after an action and terminal feedback occurs after procedure completion. The two have different strengths, and both are indispensable for skill training. Frequent immediate feedback is considered appropriate for early novices who are not familiar with the procedure (or the tools), at the cost of potentially interrupting the students, whereas terminal feedback is suitable for users who already have substantial knowledge about the procedure, including how to perform it, like the participants in our study. A clear benefit of immediate feedback in a traditional training environment is its ability to provide a link between action and outcome, which may be lost in terminal feedback. Our system occupies an interesting space between the two approaches. By providing feedback at the end of the procedure, we avoid interrupting the student’s flow of work. At the same time, by replaying the student’s actions and highlighting the errors made, we retain the linkage between action and outcome that is important in learning and refining psychomotor skills. We also permit students to re-try portions of the procedure they choose. This formative informational property of the feedback [45] directs the learner in how to correct the error on the next trial. The differences in the learning gains between G2 and G3 observed in our study indicate the potential benefits of terminal formative feedback in skill training relative to the traditional setting.
Procedure playback is commonly used as a terminal feedback modality in VR simulators. According to a survey of existing VR dental simulators for skill training by Wang et al. [56], the Forsslund simulator [17], the hapTEL simulator [53], and the Simodont simulator [6] have replay features which allow the student or instructor to watch a full replay upon completion of a procedure. All of these existing systems provide simple playback of the procedure carried out on the simulator, without any augmentation. In contrast, our approach augments the replay with information about the errors committed. With our playback feedback system, the trainee can deconstruct the actions and errors that unfold during the procedure and identify the information necessary to improve in subsequent practice. A few examples include reviewing how the drill/handpiece came into close contact with critical regions in the operating area, the amount of force used on the handpiece and how it could affect the outcome, and the speed and direction in which to move the handpiece to remediate the errors. Instructors often use debriefing to guide trainees to explore and understand the relationships among events, actions, thought and feeling processes, and performance outcomes of the simulation [23]. This video-based formative feedback could assist endodontic experts in debriefing with detailed feedback on procedural aspects that are usually excluded from post-procedure debriefing.
Practice in simulation-based learning environments may improve student decision-making and error management by providing a structured experience in which errors are explicitly characterized and used for training and feedback [38]. White et al. [60] noted that trainees’ knowledge is increased by making and learning from errors. Our system is designed to give students the freedom and autonomy to commit errors and then, in retrospective feedback, shows them where errors occurred and provides information concerning their causes. Trainees can develop an understanding of how their actions lead to correct or incorrect results, which is considered highly effective feedback for motor skill development [59].
Our feedback prototype was developed for the access opening stage of endodontic root canal treatment, one of the most challenging areas of dental surgery. Creating a proper access opening is critical to the success of the later stage involving instrumentation of the root canal system. The small pulp chamber is encapsulated deep inside the tooth, so working in the pulp area demands fine motor skills and experience. In the access opening stage of an endodontic root canal procedure, dental students tend to have difficulties in adequately deroofing the chamber, reaching the pulp chamber, and locating the orifices [32]. Overcutting errors may result in excessive loss of tooth structure and subsequently lead to brittle and fragile teeth with decreased fracture strength against loads [11]. Undercutting errors can lead to root canals being missed or to instrument breakage if the access to the canals is not adequately expanded and extended. Various studies have demonstrated that such procedural accidents have a negative effect on the prognosis of the overall treatment outcome [3], [22], [27], and correction of such errors is difficult, if not impossible [15], [25].
The shape of the access opening is dictated by the pulp morphology. A meticulous study of pulp morphology is essential to design any therapeutic intervention plan [37], [54]. In [15], the authors concluded that negligence, lack of planning, and unfamiliarity with the internal anatomy contribute significantly to the failure of root canal treatment. To date, cone-beam computed tomography (CBCT) and micro-computed tomography have been used in conjunction with digital radiography images for visualization, measurement, quantitative and qualitative analysis, three-dimensional assessment, and design in endodontic treatment planning. Endodontic treatment planning can be further improved through the use of training simulators like the one presented in this study. With a CBCT of the patient’s tooth, trainees could plan the treatment as well as repeatedly rehearse it to achieve the optimal straight-line access outcome. Each plan can be judiciously analyzed to keep procedural errors to a minimum. With the 3D video playback, sound restorative margins, the possibility of retaining natural dentin, and the amount of remaining tooth structure can be visually confirmed for each treatment plan. Undergraduate novices, who reportedly have low technical proficiency [17], and trainees who lack confidence in performing endodontic procedures [65] could benefit from rehearsing the plans with objective formative feedback. Access opening and cavity preparation is an important step of root canal treatment, as all subsequent steps depend on it. Keeping procedural errors to a minimum in this step improves the prognosis of the overall treatment.
9. Limitations and future work
In this study, only the three most common types of errors (undercutting, overcutting, and perforation) associated with the access opening stage of root canal treatment are taken into consideration. Details are provided in Section 3.2. Errors could be analyzed at a more detailed level; for example, we could distinguish between lateral and vertical perforation errors. Similarly, the undercut error could be separated from the incomplete error (an unfinished task outcome) by thresholding the drilling region below the minimum template.
A limitation of the evaluation is that the experimental group received two forms of feedback: the outcome score from the automated outcome scoring system and the video-based formative feedback. In designing feedback systems, it would be valuable to know to what extent each contributes to improvement separately. This could be addressed by adding further experimental groups trained with each specific type of feedback.
The system’s feedback component could be extended in many ways. The system could be extended with a feature to save the playback as a video, enabling students and instructors to keep records of student performance and use them to monitor progress. Instructors could also use it to review student skills and performance without having to be present during the training session. In determining the factors that contributed to errors, we focused only on the instrument during the procedure. However, the orientation variables associated with the mirror could indicate whether the trainee manipulates it properly whenever an indirect view of the operating tooth is needed. In addition, this study could be extended by distinguishing the sources of errors. Each error could be analyzed to determine whether it is caused by a lack of psychomotor ability, a lack of relevant knowledge, or a combination of both. This information could be instrumental in deriving directive feedback on correcting errors, inappropriate actions, or misconceptions. We plan to include the above-mentioned features in future work.
10. Conclusion
Simulation-based surgical skill training has been largely driven by the tenet that simulators facilitate deliberate practice without risking the patient. However, assessment and feedback are as yet underutilized. Our formative feedback system provides an objective feedback mechanism and could be incorporated into formal skill-training curricula. We would like to emphasize that virtual simulators cannot replace experts during training but rather complement them in the training process. While simulators are an excellent platform for deliberate practice, they can never fully replicate the clinical experience of experts or their ability to motivate students. As simulators provide assessment and feedback for each practice session, experts can focus on the qualitative aspects of skill training. When both the expert and the simulator actively engage in the training process, the benefits are multifold. Experts’ time and workload could be significantly reduced with the addition of VR simulators equipped with assessment and feedback features.
CRediT authorship contribution statement
Myat Su Yin: Conceptualization, Methodology, Writing – original draft, Writing – review & editing, Visualization, Software, Resources, Validation. Peter Haddawy: Conceptualization, Methodology, Writing – original draft, Writing – review & editing. Siriwan Suebnukarn: Conceptualization, Resources, Validation. Farin Kulapichitr: Visualization, Software. Phattanapon Rhienmora: Visualization, Software. Varistha Jatuwat: Visualization, Software. Nuttanun Uthaipattanacheep: Visualization, Software.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgement
This work was partially supported through a fellowship from the Hanse-Wissenschaftskolleg Institute for Advanced Study, Delmenhorst, Germany to Su Yin for collaborative work with the University of Bremen and Santander BISIP Scholarship to Kulapichitr.
References
[1] 3dsystems, Haptic Force feedback device. 3dsystems, 2016. https://www.3dsystems.com/haptics-devices/touch.
[2] D.F. Alhekeir, R.A. Al-Sarhan, H. Mokhlis, S. Al-Nazhan, Endodontic mishaps among undergraduate dental students attending King Saud University and Riyadh Colleges of Dentistry and Pharmacy, Saudi Endodontic J. 3 (1) (2013) 25.
[3] R.W. Alhussain, Y.K.A. Alhajimohammed, R.E. Almohammadi, R.W. Alhussain, E.J. Asmaa Ayed Alruwili, A.M.S. Almalki, S. Hanin Mohammed Basheer, M. Aljasem, A.K.A. Albaz, Evaluation of endodontic errors causes and management approach: literature review, Ann. Dental Specialty 8 (3) (2020) 18.
[4] L.M. Al-Saud, F. Mushtaq, M.J. Allsop, P.C. Culmer, I. Mirghani, E. Yates, A. Keeling, M.A. Mon-Williams, M. Manogue, Feedback and motor skill acquisition using a haptic dental simulator, Eur. J. Dental Educ.: Off. J. Assoc. Dental Educ. Europe 21 (4) (2017) 240–247.
[5] I. Badash, K. Burtt, C.A. Solorzano, J.N. Carey, Innovations in surgery simulation: a review of past, current and future techniques, Ann. Transl. Med. 4 (23) (2016). https://www.ncbi.nlm.nih.gov/pmc/articles/pmc5220028/.
[6] M. Bakr, W. Massey, H. Alexander, Academic evaluation of simodont® Haptic 3D virtual reality dental training simulator, Gold Coast Health Med. Res. Conf. (2012) 1574–1582.
[7] C. Barone, T.T. Dao, B.B. Basrani, N. Wang, S. Friedman, Treatment outcome in endodontics: the Toronto study-phases 3, 4, and 5: apical surgery, J. Endodontics 36 (1) (2010) 28–35.
[8] S. Barry Issenberg, W.C. McGaghie, E.R. Petrusa, D. Lee Gordon, R.J. Scalese, Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review, Med. Teach. 27 (1) (2005) 10–28.
[9] S.L. Beilock, B.I. Bertenthal, A.M. McCoy, T.H. Carr, Haste does not always make waste: expertise, direction of attention, and speed versus accuracy in performing sensorimotor skills, Psychon. Bull. Rev. 11 (2) (2004) 373–379.
[10] C.M. Bogdan, A.F. Dinca, D.M. Popovici, A brief survey of visuo-haptic simulators for dental procedures training, in: Proceedings of the 6th International Conference on Virtual Learning, 2011, pp. 28–29.
[11] F.K. Cobankara, N. Unlu, A.R. Cetin, H.B. Ozkan, The effect of different restoration techniques on the fracture resistance of endodontically-treated molars, Oper. Dentistry 33 (5) (2008) 526–533.
[12] D.A. Decurcio, E. Lim, G.S. Chaves, V. Nagendrababu, C. Estrela, G. Rossi-Fedele, Pre-clinical endodontic education outcomes between artificial versus extracted natural teeth: a systematic review, Int. Endod. J. 46 (2019) 1104.
[13] M. Duţă, C.I. Amariei, C.M. Bogdan, D.M. Popovici, N. Ionescu, C.I. Nuca, An overview of virtual and augmented reality in dental education, Oral Health Dental Manage. 10 (2011) 42–49.
[14] C. Estrela, D.A. Decurcio, G. Rossi-Fedele, J.A. Silva, O.A. Guedes, Á.H. Borges, Root perforations: a review of diagnosis, prognosis and materials, Braz. Oral Res. 32 (suppl 1) (2018).
[15] C. Estrela, J.D. Pécora, C.R.A. Estrela, O.A. Guedes, B.S.F. Silva, C.J. Soares, M.D. Sousa-Neto, Common operative procedural errors and clinical factors associated with root canal treatment, Braz. Dental J. 28 (2) (2017) 179–190.
[16] E.J. Eve, S. Koo, A.A. Alshihri, J. Cormier, M. Kozhenikov, R.B. Donoff, N.Y. Karimbux, Performance of dental students versus prosthodontics residents on a 3D immersive haptic simulator, J. Dent. Educ. 78 (4) (2014) 630–637.
[17] Forsslund Systems, Forsslund Systems. Forsslund Systems, 2018. https://www.forsslundsystems.com/.
[18] G.N. Glickman, A.H. Gluskin, W.T. Johnson, J. Lin, The crisis in endodontic education: current perspectives and strategies for change, J. Endodontia 31 (4) (2005) 255–261.
[19] J.C. Huegel, M.K. O’Malley, Progressive haptic and visual guidance for training in a virtual dynamic task, IEEE Haptics Symposium 2010 (2010) 343–350.
[20] IDEA International, Inc., IDEA: Individual Dental Education AssistantTM. IDEA: Individual Dental Education Assistant, 2020. http://www.ideadental.com/.
[21] J.I. Ingle, L.K. Bakland, J.C. Baumgartner (Eds.), Endodontics 6, PMPH-USA, 2008.
[22] K.C. Keine, M.C. Kuga, K.F. Pereira, A.C.S. Diniz, M.R. Tonetto, M.O.G. Galoza, M.G. Magro, Y.B.A.M. de Barros, M.C. Bandéca, M.F. de Andrade, Differential diagnosis and treatment proposal for acute endodontic infection, J. Contemp. Dent. Pract. 16 (12) (2015) 977–983.
[23] M. Kolbe, B. Grande, D.R. Spahn, Briefing and debriefing during simulation-based training and beyond: Content, structure, attitude and setting, Best Pract. Res. Clin. Anaesthesiol. 29 (1) (2015) 87–96.
[24] P. Koopman, J. Buis, P. Wesselink, M. Vervoorn, Simodont, a haptic dental training simulator combined with courseware, Bio-Algorithms Med-Syst. 6 (11) (2010) 117–122.
[25] H. Labbaf, G. Rezvani, S. Shahab, H. Assadian, F. Mirzazadeh Monfared, Retrospective evaluation of endodontic procedural errors by under-and postgraduate dental students using two radiographic systems, J. Islamic Dental Assoc. Iran 26 (4) (2014) 249–258.
[26] D. Lentz, A. Hardyk, Training for Speed, Agility, and Quickness. Human Kinetics, Champaign, IL, 2005, pp. 1–255.
[27] L.M. Lin, P.A. Rosenberg, J. Lin, Do procedural errors cause endodontic treatment failure? J. Am. Dental Assoc. 136 (2) (2005) 187–193, quiz 231.
[28] C.D. Lynch, F.M. Burke, Quality of root canal fillings performed by undergraduate dental students on single-rooted teeth, Eur. J. Dental Educ.: Off. J. Assoc. Dental Edu. Europe 10 (2) (2006) 67–72.
[29] W.C. McGaghie, S.B. Issenberg, E.R. Petrusa, R.J. Scalese, A critical review of simulation-based medical education research: 2003–2009, Med. Educ. 44 (1) (2010) 50–63, https://doi.org/10.1111/j.1365-2923.2009.03547.x.
[30] K.O. McGraw, S.P. Wong, Forming inferences about some intraclass correlation coefficients, Psychol. Methods 1 (1) (1996) 30–46.
[31] M. Minsky, Steps toward artificial intelligence, Proc. IRE 49 (1) (1961) 8–30.
[32] M.B. Mirza, Difficulties encountered during transition from preclinical to clinical endodontics among Salman bin Abdul Aziz university dental students, J. Int. Oral Health: JIOH 7 (Suppl 1) (2015) 22–27.
[33] I. Motola, L.A. Devine, H.S. Chung, J.E. Sullivan, S.B. Issenberg, Simulation in healthcare education: a best evidence practical guide AMEE Guide No. 82, Med. Teacher 35 (10) (2013) 1511–1530.
[34] C. Osnes, A. Keeling, Developing haptic caries simulation for dental education, J. Surg. Simul. 4 (2017) 29–34.
[35] R. Pace, V. Giuliani, G. Pagavino, Mineral trioxide aggregate as repair material for furcal perforation: case series, J. Endodontia 34 (9) (2008) 1130–1133.
[36] V.N. Palter, T.P. Grantcharov, Simulation in surgical education, CMAJ: Can. Med. Assoc. J. = Journal de l’Association Medicale Canadienne 182 (11) (2010) 1191–1196.
[37] J.D. Pécora, J.B. Woelfel, M.D. Sousa Neto, E.P. Issa, Morphologic study of the maxillary molars. Part II: Internal anatomy, Braz. Dental J. 3 (1) (1992) 53–57.
[38] C.M. Pugh, K.E. Law, E.R. Cohen, A.-L.D. D’Angelo, J.A. Greenberg, C.C. Greenberg, D.A. Wiegmann, Use of error management theory to quantify and characterize residents’ error recovery strategies, Am. J. Surg. 219 (2) (2020) 214–220.
[39] A. Qutieshat, Assessment of dental clinical simulation skills: recommendations for implementation, J. Dental Res. Rev. 5 (4) (2018) 116, https://doi.org/10.4103/jdrr.jdrr_56_18.
[40] A.S. Rad, The effects of a virtual reality simulator on formative and summative assessment methods for dental clinical skills, Bull. Group. Int. Rech. Sci. Stomatol. Odontol. 51 (3) (2012) 17–18.
[41] P. Rhienmora, P. Haddawy, S. Suebnukarn, M.N. Dailey, Intelligent dental training simulator with objective skill assessment and feedback, Artif. Intell. Med. 52 (2) (2011) 115–121.
[42] K.-E. Roberts, R.-L. Bell, A.-J. Duffy, Evolution of surgical skills training, World J. Gastroenterol.: WJG 12 (20) (2006) 3219–3224.
[43] S. Román-Richon, V. Faus-Matoses, T. Alegre-Domingo, V.-J. Faus-Llácer, Radiographic technical quality of root canal treatment performed ex vivo by dental students at Valencia University Medical and Dental School, Spain, Medicina Oral, Patologia Oral Y Cirugia Bucal 19 (1) (2014) e93–e97.
[44] E. Roy, M.M. Bakr, R. George, The need for virtual reality simulators in dental education: a review, Saudi Dental J. 29 (2) (2017) 41–47.
[45] R.A. Schmidt, Feedback for Skill Acquisition: Preliminaries to a Theory of Feedback, California Univ Los Angeles Dept of Psychology, 1997. https://apps.dtic.mil/sti/citations/ADA328695.
[46] C.H. Shea, G. Wulf, Enhancing motor learning through external-focus instructions and feedback, Hum. Mov. Sci. 18 (4) (1999) 553–571.
[47] V.J. Shute, Focus on formative feedback, Rev. Educ. Res. 78 (1) (2008) 153–189.
[48] R. Sigrist, G. Rauter, R. Riener, P. Wolf, Augmented visual, auditory, haptic, and multimodal feedback in motor learning: a review, Psychon. Bull. Rev. 20 (1) (2013) 21–53.
[49] S. Suebnukarn, M. Chaisombat, T. Kongpunwijit, P. Rhienmora, Construct validity and expert benchmarking of the haptic virtual reality dental simulator, J. Dent. Educ. 78 (10) (2014) 1442–1450.
[50] S. Suebnukarn, R. Hataidechadusadee, N. Suwannasri, N. Suprasert, P. Rhienmora, P. Haddawy, Access cavity preparation training using haptic virtual reality and microcomputed tomography tooth models, Int. Endod. J. 44 (11) (2011) 983–989.
[51] C.L. Taylor, N. Grey, J.D. Satterthwaite, Assessing the clinical skills of dental students: a review of the literature, J. Stat. Educ.: Int. J. Teach. Learn. Stat. 2 (1) (2013) 20–31.
[52] A. Towers, J. Field, C. Stokes, S. Maddock, N. Martin, A scoping review of the use and application of virtual reality in pre-clinical dental education, Br. Dent. J. 226 (5) (2019) 358–366.
[53] B. Tse, W. Harwin, A. Barrow, B. Quinn, J.S. Diego, M. Cox, Design and development of a haptic dental training system – hapTEL, Eurohaptics 1 (2010) 101–108.
[54] F.J. Vertucci, Root canal morphology and its relationship to endodontic procedures, Endodontic Top. 10 (1) (2005) 3–29.
[55] Voxel-Man. Voxel-Man. Voxel-Man Dental, 2020. http://voxel-man.de/.
[56] D. Wang, T. Li, Y. Zhang, J. Hou, Survey on multisensory feedback virtual reality dental training systems, Eur. J. Dental Educ.: Off. J. Assoc. Dental Educ. Europe 20 (4) (2016) 248–260.
[57] D. Wang, S. Zhao, T. Li, Y. Zhang, X. Wang, Preliminary evaluation of a virtual reality dental simulation system on drilling operation, Bio-Med. Mater. Eng. 26 (Suppl 1) (2015) S747–S756.
[58] K.R. Wanzel, M. Ward, R.K. Reznick, Teaching the surgical craft: from selection to certification, Curr. Probl. Surg. 39 (6) (2002) 583–659.
[59] D.L. Weeks, R.N. Kordus, Relative frequency of knowledge of performance and motor skill learning, Res. Q. Exerc. Sport 69 (3) (1998) 224–230.
[60] C. White, M.W.M. Rodger, T. Tang, Current understanding of learning psychomotor skills and the impact on teaching laparoscopic surgical skills, Obstet. Gynecol. 18 (1) (2016) 53–63.
[61] K. Woodmansey, L.G. Beck, T.E. Rodriguez, The landscape of predoctoral endodontic education in the united states and canada: results of a survey, J. Dent. Educ. 79 (8) (2015) 922–927.
[62] P. Xia, A.M. Lopes, M.T. Restivo, Virtual reality and haptics for dental surgery: a personal review, Visual Comput. 29 (5) (2012) 433–447.
[63] M.S. Yin, P. Haddawy, S. Suebnukarn, P. Rhienmora, Automated outcome scoring in a virtual reality simulator for endodontic surgery, Comput. Methods Programs Biomed. 153 (2018) 53–59.
[64] W. Yousuf, M. Khan, H. Mehdi, Endodontic procedural errors: frequency, type of error, and the most frequently treated tooth, Int. J. Dentistry 2015 (2015), 673914.
[65] J. Davey, S.T. Bryant, P.M.H. Dummer, The confidence of undergraduate dental students when performing root canal treatment and their perception of the quality of endodontic education, Eur. J. Dental Educ.: Off. J. Assoc. Dental Educ. Europe 19 (4) (2015) 229–234.