
AutoLaparo: A New Dataset of Integrated Multi-tasks for Image-guided Surgical Automation in Laparoscopic Hysterectomy

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (MICCAI 2022)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13437))

Abstract

Computer-assisted minimally invasive surgery has great potential to benefit modern operating theatres. The video stream from the endoscope provides rich information to support context-awareness in next-generation intelligent surgical systems. Learning-based techniques are a promising way to achieve accurate perception and automatic manipulation during the procedure, as they have enabled advanced image analysis and scene understanding in recent years. However, training such models relies heavily on large-scale, high-quality, multi-task labelled data, which is currently a bottleneck for the topic, as public datasets in the field of computer-assisted intervention (CAI) remain extremely limited. In this paper, we present and release the first integrated dataset (named AutoLaparo) with multiple image-based perception tasks to facilitate learning-based automation in hysterectomy surgery. AutoLaparo is developed from full-length videos of entire hysterectomy procedures. Specifically, three different yet highly correlated tasks are formulated in the dataset: surgical workflow recognition, laparoscope motion prediction, and instrument and key anatomy segmentation. In addition, we provide experimental results with state-of-the-art models as reference benchmarks for further model development and evaluation on this dataset. The dataset is available at https://autolaparo.github.io.
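The third task above, instrument and key anatomy segmentation, is typically benchmarked with overlap metrics such as the Dice similarity coefficient. The following is a minimal NumPy sketch of that metric for binary masks; it is a generic illustration, not the AutoLaparo evaluation code.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: a 4x4 predicted mask vs. ground truth
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
print(dice_score(pred, gt))  # 2*3 / (4+3) ≈ 0.857
```

For multi-class segmentation (e.g. several instrument types plus anatomy), the score is usually computed per class and then averaged.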




Acknowledgement

This work is supported in part by the Shenzhen Portion of the Shenzhen-Hong Kong Science and Technology Innovation Cooperation Zone under HZQB-KCZYB-20200089, in part by the HK RGC under T42-409/18-R and 14202918, in part by the Multi-Scale Medical Robotics Centre, InnoHK, and in part by the VC Fund 4930745 of the CUHK T Stone Robotics Institute.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Yunhui Liu.

Editor information

Editors and Affiliations

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 43 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Z. et al. (2022). AutoLaparo: A New Dataset of Integrated Multi-tasks for Image-guided Surgical Automation in Laparoscopic Hysterectomy. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13437. Springer, Cham. https://doi.org/10.1007/978-3-031-16449-1_46


  • DOI: https://doi.org/10.1007/978-3-031-16449-1_46

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16448-4

  • Online ISBN: 978-3-031-16449-1

  • eBook Packages: Computer Science (R0)
