[21] proposed a transfer learning method. Based on the pretraining models of GoogLeNet and ResNet, they first classified each patch of one image and then used the majority voting method to obtain image-wise classification results. Vang et al. [22] first proposed using Google Inception-V3 to perform patch-wise classification. The patch-wise predictions were then passed through an ensemble fusion framework involving majority voting, a gradient boosting machine and logistic regression to obtain an image-wise prediction. Rakhlin et al. [23] proposed a different method named deep convolutional feature representation. In this method, pathological images are first encoded with a general convolutional neural network to obtain sparse descriptors of low dimensionality (1408 or 2048). Finally, they use gradient boosted trees to produce the final classification result. Awan et al. [24] utilized ResNet to obtain twelve 8192-dimensional feature vectors that represented twelve nonoverlapping patches of 512 × 512 pixels from one input image. To train a classifier with a larger context, they then trained an SVM classifier with the bound features of 2 × 2 overlapping blocks of patches, which is equivalent to training the classifier with features covering 1024 × 1024 pixels. The majority voting result on the classification of the 1024 × 1024 pixel overlapping blocks of patches was used as the final image-wise classification result.
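As a minimal sketch (not any cited author's implementation), the majority-voting fusion step common to these pipelines can be expressed as follows; the function name and example labels are illustrative:

```python
from collections import Counter

def image_wise_label(patch_labels):
    """Fuse patch-wise class predictions into one image-wise label
    by majority voting (hypothetical helper)."""
    return Counter(patch_labels).most_common(1)[0][0]

# Example: twelve patch-wise predictions from one pathological image
patches = ["benign", "invasive", "invasive", "benign", "invasive",
           "invasive", "normal", "invasive", "invasive", "benign",
           "invasive", "invasive"]
print(image_wise_label(patches))  # -> invasive
```

Ties are broken here by first occurrence, which is one of several possible conventions; the papers above do not all specify their tie-breaking rule.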
To summarize, the development of deep-learning-based pathological image classification methods, in chronological order, is as follows: (1) CNN + majority voting; (2) CNN + SVM; (3) CNN + transfer learning + majority voting or SVM; (4) CNN + transfer learning + patch-wise binding + majority voting or SVM. The SVM can also be replaced with other traditional machine learning methods. Although recently proposed methods have started to focus on patch-wise fusion to obtain the final image-wise classification results, these methods either directly use majority voting and SVMs or integrate only short-distance patch dependencies. They all ignore the important role of long-distance spatial dependence in histopathological images. Moreover, most of these recently proposed methods simply average pool the last convolutional layer of the CNN into a one-dimensional feature vector and use it as the feature representation of image patches. This feature representation is not sufficiently rich for patch-wise fusion.
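The patch-wise binding step in stage (4) can be sketched as below; this is an illustrative reconstruction under stated assumptions (a 3 × 4 grid of twelve 8192-dimensional patch features, as in the ResNet pipeline described above), not the authors' code:

```python
import numpy as np

# Assumed layout: twelve 512x512 patches arranged in a 3x4 grid,
# each represented by an 8192-dimensional CNN feature vector.
rng = np.random.default_rng(0)
patch_feats = rng.normal(size=(3, 4, 8192))  # rows x cols x feature_dim

# Bind each 2x2 overlapping block of patches by concatenating their
# features, i.e. one feature per 1024x1024-pixel region.
blocks = []
for r in range(patch_feats.shape[0] - 1):
    for c in range(patch_feats.shape[1] - 1):
        block = patch_feats[r:r + 2, c:c + 2].reshape(-1)  # 4 * 8192 = 32768-d
        blocks.append(block)
blocks = np.stack(blocks)
print(blocks.shape)  # (6, 32768): six overlapping 1024x1024 blocks
```

Each bound feature would then be fed to an SVM, and the block-wise predictions fused by majority voting. Note that this binding only captures short-distance dependencies between adjacent patches, which is exactly the limitation discussed above.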
    3. Dataset
One main characteristic of deep learning methods is that they can learn from large amounts of training data. Breakthrough results in the computer vision field were obtained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [25] based on the ImageNet dataset. In contrast, there are few publicly available large-scale image datasets in the medical image domain. Additionally, most of these datasets are not labeled. Labeling requires scarce medical experts and is therefore very expensive. Moreover, traditional methods of annotating natural images, such as crowdsourcing [26], cannot be transplanted to the medical image domain because these tasks are very complex and often require long-term professional training and extensive domain knowledge. Thus, most of the early research on breast cancer pathological image analysis was performed on small datasets, and other large datasets are usually not publicly available. Veta et al. [2] noted that the main obstacle in developing new analysis methods for pathological images is the lack of large, labeled and open datasets.
Open challenges in the medical image field have greatly contributed to the development of medical image analysis. Since 2007, medical imaging conferences and workshops, such as the International Conference on Image Analysis and Recognition (ICIAR), the International Symposium for Biomedical Imaging (ISBI), the International Conference on Pattern Recognition (ICPR) and Medical Image Computing and Computer-Assisted Intervention (MICCAI), have published a large number of medical image datasets for benchmark research, available at http://www.grand-challenge.org. The main advantage of these public benchmark datasets is that they provide a precise definition of tasks and assessment metrics to facilitate a fair comparison of the performance of various methods. For breast cancer pathological image classification, one of the largest open datasets, containing 249 images, was released by "Bioimaging2015: 4th International Symposium in Applied Bioimaging". The goal of this challenge was to provide an automatic and precise classification for each input breast cancer pathological image. The Grand Challenge on BreAst Cancer Histology images (BACH) [20] was organized as part of the ICIAR 2018 conference (15th International Conference on Image Analysis and Recognition). The organizer of the BACH challenge provided 400 pathological images that were consistent with the format of the Bioimaging2015 dataset. The pathological images were divided into 4 categories, each with 100 images. To the best of our knowledge, this is by far the largest dataset of breast cancer pathological images, but it was available only during the challenge for the competition. Although these open datasets have played a very significant role in improving the