Reducing Risks when Using Machine Learning in Diagnosis of Bronchopulmonary Diseases
- Authors: Yusupova N.I., Bogdanov M.R., Smetanina O.N.
- Affiliation: Ufa State Aviation Technical University
- Issue: No. 1 (2023)
- Pages: 42-54
- Section: Machine Learning, Neural Networks
- URL: https://bakhtiniada.ru/2071-8594/article/view/269763
- DOI: https://doi.org/10.14357/20718594230105
- ID: 269763
Abstract
The article addresses risk reduction when using software based on machine learning methods for image classification, taking chest X-rays in the diagnosis of bronchopulmonary diseases as the example. A problem statement is formulated for reducing the risk of misdiagnosis by applying methods that counter malicious attacks. Based on experimental data, the machine learning methods for the classification problem, the most dangerous attacks that degrade recognition performance, and countermeasures that reduce the associated risks are identified; these methods were then applied in experimental studies. Defensive distillation, filtration, unlearning, and pruning were used as countermeasures. The results indicate that these methods can be applied to other kinds of images as well. The experimental results made it possible to formulate recommendations in the form of rules combining recognition methods, attacks, and countermeasures to reduce the risk of misdiagnosis.
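To make the attack/countermeasure pairing concrete, the sketch below crafts an FGSM adversarial example (Goodfellow et al., cited in the reference list) and applies the soft-label training step of defensive distillation (Hinton et al.; Papernot et al.), two of the techniques named above. This is a minimal PyTorch sketch under illustrative assumptions, not the authors' pipeline: the TinyXrayNet model, the 64x64 input size, and the eps and T values are hypothetical.

```python
# Minimal sketch (PyTorch assumed, not the paper's code): FGSM attack
# plus the soft-label training step of defensive distillation.
# TinyXrayNet, the 64x64 inputs, eps and T are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyXrayNet(nn.Module):
    """Toy stand-in for a chest X-ray classifier (hypothetical)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.head = nn.Linear(8 * 64 * 64, n_classes)

    def forward(self, x):
        return self.head(F.relu(self.conv(x)).flatten(1))

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: step each pixel in the sign of the
    loss gradient, then clip back to the valid intensity range."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def distillation_loss(student_logits, teacher_logits, T=20.0):
    """Cross-entropy against the teacher's temperature-softened outputs,
    the core training step of defensive distillation."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    return -(soft_targets * F.log_softmax(student_logits / T, dim=1)).sum(1).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.rand(4, 1, 64, 64)      # stand-in batch of X-ray crops
    y = torch.randint(0, 2, (4,))     # pathology / no-pathology labels

    teacher, student = TinyXrayNet(), TinyXrayNet()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    # One distillation step: fit the student to the teacher's soft labels.
    with torch.no_grad():
        teacher_logits = teacher(x)
    opt.zero_grad()
    distillation_loss(student(x), teacher_logits).backward()
    opt.step()

    # Attack the (distilled) student and compare predictions.
    x_adv = fgsm_attack(student, x, y)
    print("clean:", student(x).argmax(1).tolist(),
          "adversarial:", student(x_adv).argmax(1).tolist())
```

As Papernot et al. argue in the cited defense paper, training at a high softmax temperature flattens the gradients that FGSM-style attacks exploit; filtration, unlearning, and pruning act instead on the inputs or on the trained weights.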

About the authors
Nafisa Yusupova
Ufa State Aviation Technical University
Corresponding author.
Email: yusupova.ni@ugatu.su
Doctor of Technical Sciences, Professor
Russia, Ufa

Marat Bogdanov
Ufa State Aviation Technical University
Email: bogdanov_marat@mail.ru
Candidate of Biological Sciences, Associate Professor
Russia, Ufa

Olga Smetanina
Ufa State Aviation Technical University
Email: smoljushka@mail.ru
Doctor of Technical Sciences, Professor
Russia, Ufa

References
- Williams, P.A.H.; Woodward, A.J. Cybersecurity vulnerabilities in medical devices: a complex environment and multifaceted problem. Medical Devices: Evidence and Research 2015, 8.
- Makary, M.A.; Daniel, M. Medical error - the third leading cause of death in the US. BMJ 2016, 353, i2139.
- Beinfeld, M.T.; Gazelle, G.S. Diagnostic Imaging Costs: Are They Driving Up the Costs of Hospital Care? Radiology 2005, 235(3), 934-939.
- Apostolidis, K.D.; Papakostas, G.A. A Survey on Adversarial Deep Learning Robustness in Medical Image Analysis. Electronics 2021, 10, 2132.
- Maliamanis T.; Papakostas G.A. Machine Learning Vulnerability in Medical Imaging. In Machine Learning, Big Data, and IoT for Medical Informatics, 1st ed.; Elsevier: Amsterdam, The Netherlands, 2021. Available online: https://www.elsevier.com/books/machine-learning-big-data-and-iot-for-medical-informatics/xhafa/978-0-12-821777-1 (accessed on 4 June 2021).
- Tyukin I.Y., Higham D.J., Gorban A.N. On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–6.
- Mangaokar N., Pu, J., Bhattacharya P., Reddy C.K., Viswanath, B. Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models. In Proceedings of the 2020 IEEE European Symposium on Security and Privacy (EuroS&P), Genoa, Italy, 7–11 September 2020; pp. 139–157.
- Lin, X.Y.; Biggio, B. Vredonosnoe mashinnoe obuchenie: ataki iz laboratorij v real'nyi mir [Malicious Machine Learning: Attacks from Labs to the Real World]. Open Systems. DBMS 2021, No. 3.
- Ilyushin, E.A.; Namiot, D.E.; Chizhov, I.V. Ataki na sistemy mashinnogo obucheniya - obshchie problemy i metody [Attacks on machine learning systems - general problems and methods]. International Journal of Open Information Technologies 2022, 10(3), 17-21.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826.
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2015, arXiv:1412.6572. Available online: http://arxiv.org/abs/1412.6572 (accessed on 4 June 2021).
- Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z.B.; Swami, A. Practical Black-Box Attacks against Machine Learning. In Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security, Abu Dhabi, UAE, 2017.
- Moosavi-Dezfooli, S.-M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal Adversarial Perturbations. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 86–94.
- Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP); IEEE Computer Society, 2017.
- Chen, P.; Zhang, H.; Sharma, Y.; Yi, J.; Hsieh, C.-J. ZOO: Zeroth Order Optimization Based Black-Box Attacks to Deep Neural Networks without Training Substitute Models. In Proceedings of the ACM Workshop on Artificial Intelligence and Security (AISec@CCS); ACM, 2017.
- Huang, Y.; Würfl, T.; Breininger, K.; Liu, L.; Lauritsch, G.; Maier, A. Some Investigations on Robustness of Deep Learning in Limited Angle Tomography. In Medical Image Computing and Computer Assisted Intervention— MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; Volume 11070, pp. 145–153.
- Lu, J.; Issaranon, T.; Forsyth, D. SafetyNet: Detecting and Rejecting Adversarial Examples Robustly. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
- Metzen, J.H.; Genewein, T.; Fischer, V.; Bischoff, B. On Detecting Adversarial Perturbations. arXiv 2017, arXiv:1702.04267. Available online: http://arxiv.org/abs/1702.04267 (accessed on 4 June 2021).
- Gu, S.; Rigazio, L. Towards Deep Neural Network Architectures Robust to Adversarial Examples. arXiv 2014, arXiv:1412.5068.
- Wang, B.; Yao, Y.; Shan, S.; Li, H.; Viswanath, B.; Zheng, H.; Zhao, B.Y. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), 2019; pp. 707-723. doi:10.1109/SP.2019.00031.
- VinBigData Chest X-ray Abnormalities Detection. Available online: https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection.
- Hinton, G.; Vinyals, O.; Dean, J. Distilling the Knowledge in a Neural Network. In Deep Learning and Representation Learning Workshop at NIPS 2014. arXiv 2014, arXiv:1503.02531.
- Papernot, N.; McDaniel, P.; Wu, X.; Jha, S.; Swami, A. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. In Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP), 2016.
