Original Article | Volume 134, Issue 6, P749-757, December 2022


Automatic visualization of the mandibular canal in relation to an impacted mandibular third molar on panoramic radiographs using deep learning segmentation and transfer learning techniques

Open Access | Published: June 04, 2022 | DOI: https://doi.org/10.1016/j.oooo.2022.05.014

      Objective

      The aim of this study was to create and assess a deep learning model using segmentation and transfer learning methods to visualize the proximity of the mandibular canal to an impacted third molar on panoramic radiographs.

      Study Design

Panoramic radiographs containing the mandibular canal and an impacted third molar were collected from 2 hospitals (Hospitals A and B). A total of 3200 image areas were used to create and evaluate the learning models. A source model was created using data from Hospital A, transferred to Hospital B in a simulated manner, and trained with various amounts of Hospital B data to create target models. Test data were then applied to the target models to calculate the Dice coefficient, Jaccard index, and sensitivity.

      Results

      The performance of target models trained using 200 or more data sets was equivalent to that of the source model tested using data obtained from the same hospital (Hospital A).

      Conclusions

      Sufficiently qualified models could delineate the mandibular canal in relation to an impacted third molar on panoramic radiographs using a segmentation technique. Transfer learning appears to be an effective method for creating such models using a relatively small number of data sets.
      Statement of Clinical Relevance
      A deep learning segmentation technique can delineate the mandibular canal and third molar on panoramic radiographs using a transfer learning method with a relatively small number of data sets.
Extraction is the main surgical intervention for an impacted mandibular third molar. Inferior alveolar nerve damage is one of several complications that may occur during or after surgery, and it causes temporary or sometimes permanent neurosensory impairment [1-3]. Therefore, preoperative evaluation of the relationship between an impacted third molar and the mandibular canal is fundamentally important [4,5]. Although panoramic radiography cannot determine their 3-dimensional relationship [4], it plays an essential role because of its convenience, low cost, and low patient radiation exposure compared with computed tomography (CT) or cone beam CT for dental use [4-6]. To evaluate this relationship on panoramic radiographs, previous studies have identified notable features [4,7-11], such as darkening of the root caused by canal radiolucency, bending and/or narrowing of the canal, and interruption of the white line (i.e., partial disappearance of the superior wall of the canal). However, some authors have emphasized the difficulty of predicting the actual 3-dimensional contact status from panoramic appearances alone [12-14]. If canal visibility were improved, prediction performance might also rise, because the identification of these characteristic appearances depends on the visibility of the mandibular canal.
A deep learning (DL) system is a type of artificial intelligence based on machine learning with a convolutional neural network (CNN), an architecture that mimics the neurons of the human brain. A DL system has multiple layers between the input and output layers and can automatically extract the characteristic features of target objects to create a learning model for various tasks, such as classification, object detection, and semantic segmentation. These DL techniques have been applied to panoramic radiographic detection and diagnosis of root morphology [15], root fractures [16], jaw cysts/tumors [17], mesiodens [18], mandibular condylar fractures [19], and maxillary sinusitis [20].
Although there have been several DL investigations of the spatial relationship between the mandibular third molar and mandibular canal [6,21-23], few have directly addressed their delineation using DL segmentation methods [6,23]. Vinayahalingam et al. [6] separately segmented the mandibular canal and third molar on 81 panoramic images using the U-Net CNN and reported high segmentation performance. A high-performance canal segmentation system could provide an effective educational tool to help inexperienced clinicians assess the above relationship.
A major obstacle to the development of a high-performance DL method is the difficulty of collecting large amounts of qualified data. Although a multi-institutional study can address this issue, the protection of personal patient information must be considered. This can be achieved using the transfer learning technique, which involves no transfer of patient data: a DL model (source model) is created at one institution and then transferred to a second institution, where it is further trained on that institution's own data to create a new DL model (target model) [24-26].
The purposes of the present study were to create a sufficiently qualified DL model using a segmentation function and to assess the ability of the transfer learning technique to visualize the mandibular canal near an impacted mandibular third molar on panoramic radiographs.

      MATERIALS AND METHODS

      This study was approved by the ethics committees of Aichi-Gakuin University School of Dentistry (No. 586) and Asahi University School of Dentistry (No. 31017) in accordance with the Helsinki Declaration.

      Patients

The patient group comprised individuals who visited Aichi Gakuin University Dental Hospital (Hospital A) or Asahi University Dental Hospital (Hospital B) between January 2019 and March 2021, underwent panoramic radiography, and had an impacted mandibular third molar. Cases with dentigerous cysts or marked bone resorption around the impacted teeth were excluded. A total of 3200 image areas containing the mandibular canal and an impacted mandibular third molar (1619 on the left side, 1581 on the right) were collected from 1380 (727 women, 653 men) and 881 (462 women, 419 men) panoramic radiographs taken at Hospitals A and B, respectively (Table 1). The canal and molar appeared to be in contact or superimposed in half of the 3200 images and separated in the rest. These determinations were made by 2 radiologists (Y.A. and E.A.), each with >30 years of experience in interpreting panoramic radiographs; when their evaluations differed, the final determination was reached by consensus after discussion.
Table 1. Patient image data analyzed in this study

| Hospital   | No. of patients | Age (y), mean ± SD (range) | Sex (male/female) | No. of areas containing the third molar and canal | No. of sides (left/right) | Superimposed*/separated |
|------------|-----------------|----------------------------|-------------------|---------------------------------------------------|---------------------------|-------------------------|
| Hospital A | 1380            | 32.3 ± 11.9 (17-78)        | 653/727           | 2000                                              | 1023/977                  | 1000/1000               |
| Hospital B | 881             | 38.2 ± 17.8 (17-85)        | 419/462           | 1200                                              | 596/604                   | 600/600                 |
| Total      | 2261            | 34.6 ± 14.8 (17-85)        | 1072/1189         | 3200                                              | 1619/1581                 | 1600/1600               |

* The mandibular canal and impacted mandibular third molar were clearly in contact or superimposed on the panoramic radiograph.

      Image preparation

      Radiography at Hospital A was performed using a Veraviewepocs (J. Morita Mfg Corp., Kyoto, Japan) panoramic X-ray unit (tube voltage, 75 kV; tube current, 8 mA; irradiation time, 16.2 s). The panoramic image files (2402 × 1352 pixels, 96 dpi) were downloaded from the imaging database of Hospital A in 24-bit JPEG format. Panoramic radiography at Hospital B also used a Veraviewepocs X-ray unit (tube voltage, 70 kV; tube current, 10 mA; irradiation time, 16.2 s). The panoramic image files (1976 × 976 pixels, 150 dpi) were downloaded from the imaging database of Hospital B in 24-bit JPEG format. The patches (512 × 512 pixels) centered on the impacted molar were then extracted from the panoramic images and saved in JPEG format (Figure 1).
Fig. 1. A square patch (512 × 512 pixels) cropped from a panoramic image for learning and inference.
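As an illustration of this preprocessing step, the following minimal Python sketch (not the authors' actual pipeline; the file name and molar-center coordinates are illustrative assumptions) crops a 512 × 512 patch around a given point and applies the horizontal flip used for left-side images, as described in the next subsection.

```python
# Minimal patch-extraction sketch; file name and coordinates are assumptions.
from PIL import Image

PATCH = 512  # patch edge length in pixels

def extract_patch(panoramic_path, center_x, center_y, left_side=False):
    """Crop a PATCH x PATCH region centered on (center_x, center_y)."""
    image = Image.open(panoramic_path)
    half = PATCH // 2
    # Clamp the crop box so it stays inside the panoramic image.
    left = min(max(center_x - half, 0), image.width - PATCH)
    top = min(max(center_y - half, 0), image.height - PATCH)
    patch = image.crop((left, top, left + PATCH, top + PATCH))
    if left_side:
        # Mirror left-side patches into right-side orientation.
        patch = patch.transpose(Image.FLIP_LEFT_RIGHT)
    return patch

# Example: a hypothetical right-side molar centered at (1700, 900).
extract_patch("panoramic_001.jpg", 1700, 900).save("patch_001.jpg")
```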

      Image allocation and annotation

Preliminary tests indicated that CNN performance improved when the left-side images were horizontally flipped and combined with the right-side images. Each image was then randomly assigned to the training, validation, or test data set (Table 2).
Table 2. No. of image patches used for training, validation, and testing

| Model         | Training data | Validation data | Test data |
|---------------|---------------|-----------------|-----------|
| Source model* | 1840          | 80              | 80        |
| T0†           | 0             | 0               | 50        |
| T50           | 50            | 4               | 50        |
| T100          | 100           | 6               | 50        |
| T200          | 200           | 12              | 50        |
| T400          | 400           | 20              | 50        |
| T600          | 600           | 30              | 50        |
| T800          | 800           | 40              | 50        |
| T1100         | 1100          | 50              | 50        |

* The source model was created using Hospital A data.
† The target models (Tn) were created by transferring the source model to Hospital B and training it with n image patches from Hospital B. T0 denotes a source model that was transferred but not trained. All target models were tested with the same 50 image patches from Hospital B.
For each image in the training and validation data sets, Adobe Photoshop software (Adobe Inc., San Jose, CA, USA) was used to mark the mandibular canal in blue and to trace the contour of the molar in yellow (Figure 2). The mandibular canal was contained within an area of at least 200 × 200 pixels, approximately centered on the closest point between the canal and the molar or on the central region of their superimposed area (Figure 2); the performance of the created model was assessed within this square area. Annotations were performed by one radiologist (M.M.) and confirmed by another (Y.A.), and the annotated images were saved with their original images for use in the learning process.
Fig. 2. An original image patch (left) and the corresponding annotated patch (right). The mandibular canal was colored blue across an area of at least 200 × 200 pixels (dotted line), centered on the closest point between the mandibular third molar and the canal or on the approximate midpoint of their superimposed structures.
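Because the learning process needs binary masks rather than colored overlays, an implementation would typically convert the blue canal marking into a mask. The following sketch assumes the canal was painted pure blue; the exact Photoshop color values and tolerance are assumptions.

```python
# Recover a binary canal mask from a color-annotated patch (assumed pure blue).
import numpy as np
from PIL import Image

def canal_mask(annotated_path, tol=60):
    rgb = np.asarray(Image.open(annotated_path).convert("RGB")).astype(int)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # "Blue" pixels: strong blue channel, weak red and green channels.
    mask = (b > 255 - tol) & (r < tol) & (g < tol)
    return mask.astype(np.uint8)  # 1 = mandibular canal, 0 = background
```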

      DL system

The DL system was implemented on a computer running the Windows 10 operating system (Microsoft Corporation, Redmond, WA, USA) with an NVIDIA GeForce RTX 2080 Ti graphics card (11 GB of GPU memory). DL was performed using the U-Net CNN built on the Neural Network Console (Sony Network Communications Inc., Tokyo, Japan) (Figure 3). U-Net is a widely recognized fully convolutional network architecture for semantic segmentation of medical images [27,28].
Fig. 3. Architecture of the U-Net convolutional neural network used in this study.
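The network was built in the GUI-based Neural Network Console rather than written as code. For readers who prefer code, the following compact PyTorch sketch shows an equivalent U-Net; the encoder depth and channel widths are assumptions, not the authors' exact configuration.

```python
# A compact U-Net sketch (illustrative, not the authors' exact network).
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, channels=(16, 32, 64, 128)):
        super().__init__()
        self.downs = nn.ModuleList()
        c_prev = 1  # grayscale input patch
        for c in channels:
            self.downs.append(block(c_prev, c))
            c_prev = c
        self.pool = nn.MaxPool2d(2)
        self.ups, self.dec = nn.ModuleList(), nn.ModuleList()
        for c in reversed(channels[:-1]):
            self.ups.append(nn.ConvTranspose2d(c_prev, c, 2, stride=2))
            self.dec.append(block(2 * c, c))  # 2*c after skip concatenation
            c_prev = c
        self.head = nn.Conv2d(c_prev, 1, 1)  # 1-channel canal probability map

    def forward(self, x):
        skips = []
        for i, down in enumerate(self.downs):
            x = down(x)
            if i < len(self.downs) - 1:
                skips.append(x)      # keep for the skip connection
                x = self.pool(x)     # halve the resolution
        for up, dec, skip in zip(self.ups, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return torch.sigmoid(self.head(x))

# A 512 x 512 grayscale patch in, a per-pixel canal probability map out.
print(UNet()(torch.zeros(1, 1, 512, 512)).shape)  # torch.Size([1, 1, 512, 512])
```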

      Source model creation

The learning process (200 epochs) was performed with the U-Net using the training and validation data sets from Hospital A to create the source model.
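In code form, this training step might look like the sketch below; the optimizer, loss function, and batch size are assumptions, since the original training was configured inside Neural Network Console.

```python
# Minimal training-loop sketch for the source model (200 epochs, as in the
# paper). Optimizer, loss, and batch size are assumptions.
import torch
from torch.utils.data import DataLoader

def train(model, train_ds, val_ds, epochs=200, lr=1e-3, batch_size=8):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCELoss()  # model outputs sigmoid probabilities
    loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_ds, batch_size=batch_size)
    for epoch in range(epochs):
        model.train()
        for patches, masks in loader:  # (B, 1, 512, 512) tensors
            opt.zero_grad()
            loss = loss_fn(model(patches), masks)
            loss.backward()
            opt.step()
        model.eval()  # track validation loss to monitor over-fitting
        with torch.no_grad():
            val = sum(loss_fn(model(p), m).item() for p, m in val_loader)
        print(f"epoch {epoch + 1}: validation loss {val:.4f}")
    return model
```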

      Transfer learning process for creation of the target models

The source model was transferred and trained using the image data from Hospital B (Table 2 and Figure 4). The transfer was simulated: the source model was moved from the machine that created it to a machine with the same architecture and specifications. We created the T0, T50, T100, T200, T400, T600, T800, and T1100 target models, where Tn denotes a source model transferred from Hospital A and trained using n image patches from Hospital B. The T0 target model received no training and was therefore equivalent to the source model. Transfer learning for all other target models (n > 0) was performed with 200 epochs.
Fig. 4. (A) Diagram of the study design. Boxes A and B identify the creation and assessment processes of the source and target models, respectively. (B) Creation and assessment of the source model (box A in panel A). A total of 1920 pairs of original and annotated image patches from Hospital A were inputted into the U-Net to create the source model with 200 epochs of learning. Eighty image patches of test data were used to assess the source model, which outputted the segmented images. (C) Creation and assessment of the target models (box B in panel A). The source model was transferred to Hospital B and trained using a variable number of original and annotated image patch pairs from Hospital B. Target models, including the untrained T0 model, were assessed using test data (50 image patches) from Hospital B and outputted the segmented images.
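A sketch of the simulated transfer step, reusing the UNet class and train() function from the previous sketches: only the model weights leave Hospital A, never patient images. The file name and the data-loading helpers named here are hypothetical.

```python
import torch

# Hospital A: persist the trained source model (weights only, no images).
torch.save(source_model.state_dict(), "source_model.pt")

# Hospital B: rebuild the same architecture and load the transferred weights.
target_model = UNet()
target_model.load_state_dict(torch.load("source_model.pt"))

# T0 is the transferred model without further training; a Tn model fine-tunes
# on n Hospital B patches for another 200 epochs. hospital_b_train(n) and
# hospital_b_val are hypothetical dataset helpers.
t200 = train(target_model, hospital_b_train(n=200), hospital_b_val, epochs=200)
```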

      Inference process and evaluation of the source and target models

The benchmark performance of the source model was evaluated using the test data set from Hospital A, which comprised 40 image patches in which the canal and molar were in contact or superimposed and 40 in which they were separated. The eight target models were evaluated using the test data set from Hospital B, which comprised 50 image patches.
To evaluate the similarity between the ground truth and the predicted canal areas, the Dice coefficient, Jaccard index, and sensitivity were calculated within the 200 × 200 pixel area (Figure 5). Images containing the predicted and ground truth canal areas were superimposed, and the number of overlapping pixels was measured using Adobe Photoshop software (Adobe Inc., San Jose, CA, USA) and used to calculate the indices:
Dice coefficient = 2S(P ∩ G) / (S(P) + S(G)) = 2TP / (2TP + FP + FN)    (1)

Jaccard index = S(P ∩ G) / S(P ∪ G) = TP / (TP + FP + FN)    (2)

Sensitivity = TP / (TP + FN)    (3)

where S denotes area, P is the model-predicted canal, G is the ground truth canal, TP is true positive, FP is false positive, and FN is false negative; S(P ∩ G) and S(P ∪ G) denote the overlapping and combined areas, respectively, of the prediction and the ground truth.
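Computed directly from binary masks, Equations (1) to (3) reduce to a few lines of NumPy; this sketch replaces the pixel counting that the authors performed in Photoshop.

```python
import numpy as np

def segmentation_indices(pred, truth):
    """pred, truth: boolean masks over the 200 x 200 evaluation area."""
    tp = np.sum(pred & truth)    # predicted canal overlapping ground truth
    fp = np.sum(pred & ~truth)   # predicted canal outside the ground truth
    fn = np.sum(~pred & truth)   # ground-truth canal that was missed
    dice = 2 * tp / (2 * tp + fp + fn)   # Eq. (1)
    jaccard = tp / (tp + fp + fn)        # Eq. (2)
    sensitivity = tp / (tp + fn)         # Eq. (3)
    return dice, jaccard, sensitivity
```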
Fig. 5. Calculation scheme of the Dice coefficient, Jaccard index, and sensitivity. The continuous red and dotted blue lines enclose the ground truth and predicted areas, respectively. FN, false negative; TP, true positive; FP, false positive.

RESULTS

      Time taken for deep learning

      It took approximately 10.6 hours to create the source model with 200 epochs, and from approximately 30 minutes to more than 10 hours to create the target models, depending on the number of image patches used (Table 3). For the inference process, it took from only 22 seconds to approximately 1 minute to test the 50 image patches from Hospital B.
Table 3. Time required for the learning and inference processes

| Model         | Learning       | Inference‡ |
|---------------|----------------|------------|
| Source model* | 10 h 37 m 25 s | 25 s       |
| T0†           | 0 s            | 23 s       |
| T50           | 29 m 29 s      | 23 s       |
| T100          | 58 m 04 s      | 23 s       |
| T200          | 1 h 55 m 21 s  | 23 s       |
| T400          | 3 h 49 m 33 s  | 25 s       |
| T600          | 5 h 44 m 40 s  | 22 s       |
| T800          | 7 h 39 m 04 s  | 1 m 02 s   |
| T1100         | 10 h 32 m 15 s | 1 m 04 s   |

* The source model was created and evaluated using Hospital A data.
† The target models (Tn) were created by transferring the source model to Hospital B and training it with n image patches from Hospital B. T0 denotes a source model that was transferred but not trained.
‡ The time required to evaluate the target models using test data from Hospital B.

      Performance of models

The Dice coefficient, Jaccard index, and sensitivity of the source model that was transferred from Hospital A but not trained (the T0 target model) were lower than those of the source model tested using Hospital A data. When ≥200 image patches from Hospital B were used for transfer learning, the mean values of these 3 indices were approximately equal to those of the source model tested using Hospital A data (Table 4). The results produced by the T0, T50, T200, and T600 target models (Figure 6) show that performance increased commensurately with the amount of training data.
Table 4. Model performance (mean ± SD)

| Model         | Test data  | Dice coefficient | Jaccard index | Sensitivity   |
|---------------|------------|------------------|---------------|---------------|
| Source model* | Hospital A | 0.831 ± 0.120    | 0.700 ± 0.126 | 0.840 ± 0.160 |
| T0†           | Hospital B | 0.749 ± 0.162    | 0.607 ± 0.150 | 0.756 ± 0.208 |
| T50           | Hospital B | 0.759 ± 0.170    | 0.640 ± 0.166 | 0.714 ± 0.195 |
| T100          | Hospital B | 0.786 ± 0.137    | 0.692 ± 0.148 | 0.756 ± 0.173 |
| T200          | Hospital B | 0.840 ± 0.079    | 0.742 ± 0.091 | 0.821 ± 0.101 |
| T400          | Hospital B | 0.836 ± 0.073    | 0.734 ± 0.087 | 0.841 ± 0.104 |
| T600          | Hospital B | 0.836 ± 0.080    | 0.731 ± 0.089 | 0.820 ± 0.089 |
| T800          | Hospital B | 0.851 ± 0.075    | 0.791 ± 0.087 | 0.871 ± 0.092 |
| T1100         | Hospital B | 0.857 ± 0.089    | 0.755 ± 0.103 | 0.839 ± 0.115 |

* The source model was created using Hospital A data.
† The target models (Tn) were created by transferring the source model to Hospital B and training it with n image patches from Hospital B. T0 denotes a source model that was transferred but not trained.
Fig. 6. Results predicted by the (A) T0, (B) T50, (C) T200, and (D) T600 target models. Performances were evaluated within an area of 200 × 200 pixels (dotted line). The corresponding values of the Dice coefficient, Jaccard index, and sensitivity are given in Table 4.

      DISCUSSION

The use of DL has enabled preoperative evaluation of the spatial relationship between the mandibular third molar and mandibular canal on panoramic radiographs [21,22]. Fukuda et al. [21] compared 3 CNN-based DL systems for classifying the separation of the canal from the third molar and reported high diagnostic accuracy and consistency. Yoo et al. [22] showed that a DL classification function could predict the extraction difficulty of the mandibular third molar based on the Pederson difficulty index [29]; however, the anatomic locations of the mandibular third molar and canal could not be identified or visualized on the radiographs. Segmentation techniques that enable designation of these structures have also been applied [30], but to date only Vinayahalingam et al. [6] have attempted to visualize the canal in relation to the third molar using a DL segmentation function. Although they used the same data for learning and inference, relatively high performance was achieved using only 81 panoramic radiographs: the mean values of the Dice coefficient, Jaccard index, and sensitivity were 0.805, 0.687, and 0.847, respectively. Such high values may have resulted from the use of a two-step segmentation procedure, applied before and after cropping the target areas on the panoramic images. In the present study, we used a simple one-step segmentation method with a larger data set. When the source model was tested using newly assigned data from the hospital where it was created (Hospital A), the mean values of the Dice coefficient, Jaccard index, and sensitivity were 0.831, 0.700, and 0.840, respectively, approximately equal to or slightly higher than those reported by Vinayahalingam et al. [6]
In the present study, specificities were not determined because they vary in proportion to the size of the evaluated area. The cropped image patches (512 × 512 pixels) were used for learning and inference because the canal could be identified easily on large images. Although annotations were performed in a narrower area (200 × 200 pixels) to reduce the workload, this size was large enough to reliably assess the relationship between the canal and the molar. The canal segmentation technique was verified to be effective, and it may also be useful for other treatment procedures, such as implant placement surgery, when they are planned mainly on the basis of panoramic radiographs.
Using the transfer learning procedure, an effective learning model (target model) was developed by transferring the source model from the institution where it was created (Hospital A) to another institution (Hospital B), without transferring personal patient information, and then training it with a relatively small amount of data from Hospital B. In this study, the target models created with ≥200 image patches from Hospital B performed as well as or better than the source model tested on data from its own hospital (Hospital A). Differences in image quality caused by the machines used and their exposure conditions might consequently be compensated for by the transfer learning technique, whose efficacy was therefore verified.
Some limitations of the present study are worth highlighting. First, the number of data sets was relatively small compared with that required to create a versatile model; more data should be collected from multiple institutions to improve performance. Second, the third molar and the canal were not segmented simultaneously. Although the molar was generally visible without segmentation, simultaneous segmentation of both structures might enable automatic classification of various features, such as their relationship [18] and the extraction difficulty [19]. Third, for clinical use, manual cropping following the method described herein may be required before segmentation. To create a fully automatic model, other DL techniques, such as object detection, should be used together with the segmentation technique.
      In conclusion, sufficiently qualified DL models were created to visualize the mandibular canal on panoramic radiographs using a segmentation technique. In addition, the transfer learning method was effective at creating such models using a relatively small number of data sets.

      Acknowledgments

      We thank Edanz (https://jp.edanz.com/ac) for editing a draft of this manuscript.

      Funding

      This work was supported in part by Grants-in-Aid for Scientific Research (KAKEN) issued by the Japan Society for the Promotion of Science (grant no. 20K10194) to Y. Ariji.

      References

1. Hasegawa T, Ri S, Umeda M, Komori T. Multivariate relationships among risk factors and hypoesthesia of the lower lip after extraction of the mandibular third molar. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2011;111:e1-e7.
2. Leung YY, Cheng LK. Risk factors of neurosensory deficits in lower third molar surgery. A literature review of prospective study. Int J Oral Maxillofac Surg. 2011;40:1-10.
3. Hatano Y, Kurita K, Kuroiwa Y, Yuasa H, Ariji E. Clinical evaluations of coronectomy (intentional partial odontectomy) for mandibular third molars using dental computed tomography: a case-control study. J Oral Maxillofac Surg. 2009;67:1806-1814.
4. Liu W, Yin W, Zhang R, Li J, Zheng Y. Diagnostic value of panoramic radiography in predicting inferior alveolar nerve injury after mandibular third molar extraction: a meta-analysis. Aust Dent J. 2015;60:233-239.
5. Orhan K, Bilgir E, Bayrakdar IS, Ezhov M, Gusarev M, Shumilov E. Evaluation of artificial intelligence for detecting impacted third molars on cone-beam computed tomography scans. J Stomatol Oral Maxillofac Surg. 2021;122:333-337.
6. Vinayahalingam S, Xi T, Bergé S, Maal T, de Jong G. Automated detection of third molars and mandibular nerve by deep learning. Sci Rep. 2019;9:9007.
7. Rood JP, Shehab BA. The radiological prediction of inferior alveolar nerve injury during third molar surgery. Br J Oral Maxillofac Surg. 1990;28:20-25.
8. Monaco G, Montevecchi M, Bonetti GA, Antonella MR, Checchi L. Reliability of panoramic radiography in evaluating the topographic relationship between the mandibular canal and impacted third molars. J Am Dent Assoc. 2004;135:312-318.
9. Szalma J, Lempel E, Jeges S, Olasz L. Darkening of third molar roots: panoramic radiographic associations with inferior alveolar nerve exposure. J Oral Maxillofac Surg. 2011;69:1544-1549.
10. Tantanapornkul W, Okochi K, Bhakdinaronk A, Ohbayashi N, Kurabayashi T. Correlation of darkening of impacted mandibular third molar root on digital panoramic images with cone beam computed tomography findings. Dentomaxillofac Radiol. 2009;38:11-16.
11. Liye Q, Zhongwei Z, Xiaojuan S, Min W, Pingping L, Kun C. Can narrowing of the mandibular canal on pre-operative panoramic radiography predict close anatomical contact of the mandibular canal with the mandibular third molar? A meta-analysis. Oral Radiol. 2020;36:121-128.
12. Bell GW, Rodgers JM, Grime RJ, et al. The accuracy of dental panoramic tomographs in determining the root morphology of mandibular third molar teeth before surgery. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2003;95:119-125.
13. Rodriguez y Baena R, Beltrami R, Tagliabo A, Rizzo S, Lupi SM. Differences between panoramic and cone beam-CT in the surgical evaluation of lower third molars. J Clin Exp Dent. 2017;9:e259-e265.
14. Shahidi S, Zamiri B, Bronoosh P. Comparison of panoramic radiography with cone beam CT in predicting the relationship of the mandibular third molar roots to the alveolar canal. Imaging Sci Dent. 2013;43:105-109.
15. Hiraiwa T, Ariji Y, Fukuda M, et al. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac Radiol. 2019;48:20180218.
16. Fukuda M, Inamoto K, Shibata N, et al. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiol. 2020;36:337-343.
17. Ariji Y, Yanashita Y, Kutsuna S, et al. Automatic detection and classification of radiolucent lesions in the mandible on panoramic radiographs using a deep learning object detection technique. Oral Surg Oral Med Oral Pathol Oral Radiol. 2019;128:424-430.
18. Kuwada C, Ariji Y, Fukuda M, et al. Deep learning systems for detecting and classifying the presence of impacted supernumerary teeth in the maxillary incisor region on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol. 2020;130:464-469.
19. Nishiyama M, Ishibashi K, Ariji Y, et al. Performance of deep learning models constructed using panoramic radiographs from two hospitals to diagnose fractures of the mandibular condyle. Dentomaxillofac Radiol. 2021;50:20200611.
20. Kuwana R, Ariji Y, Fukuda M, et al. Performance of deep learning object detection technology in the detection and diagnosis of maxillary sinus lesions on panoramic radiographs. Dentomaxillofac Radiol. 2021;50:20200171.
21. Fukuda M, Ariji Y, Kise Y, et al. Comparison of 3 deep learning neural networks for classifying the relationship between the mandibular third molar and the mandibular canal on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol. 2020;130:336-343.
22. Yoo JH, Yeom HG, Shin W, et al. Deep learning based prediction of extraction difficulty for mandibular third molars. Sci Rep. 2021;11:1954.
23. Cha JY, Yoon HI, Yeo IS, Huh KH, Han JS. Panoptic segmentation on panoramic radiographs: deep learning-based segmentation of various structures including maxillary sinus and mandibular canal. J Clin Med. 2021;10:2577.
24. Wiens J, Guttag J, Horvitz E. A study in transfer learning: leveraging data from multiple hospitals to enhance hospital-specific predictions. J Am Med Inform Assoc. 2014;21:699-706.
25. Mori M, Ariji Y, Katsumata A, et al. A deep transfer learning approach for the detection and diagnosis of maxillary sinusitis on panoramic radiographs. Odontology. 2021;109:941-948.
26. Ishibashi K, Ariji Y, Kuwada C, et al. Efficacy of a deep learning model created with the transfer learning method in detecting sialoliths of the submandibular gland on panoramic radiography. Oral Surg Oral Med Oral Pathol Oral Radiol. 2022;133:238-244.
27. Staar B, Bayrak S, Paulkowski D, Freitag M. A U-Net based approach for automating tribological experiments. Sensors (Basel). 2020;20:6703.
28. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. MICCAI. 2015:234-241.
29. Yuasa H, Kawai T, Sugiura M. Classification of surgical difficulty in extracting impacted third molars. Br J Oral Maxillofac Surg. 2002;40:26-31.
30. Merdietio Boedi R, Banar N, De Tobel J, Bertels J, Vandermeulen D, Thevissen PW. Effect of lower third molar segmentations on automated tooth development staging using a convolutional neural network. J Forensic Sci. 2020;65:481-486.