REFERENCES

1. Kwekha-Rashid AS, Abduljabbar HN, Alhayani B. Coronavirus disease (COVID-19) cases analysis using machine-learning applications. Appl Nanosci 2021:1-13.

2. Choudhury O, Park Y, Salonidis T, et al. Predicting adverse drug reactions on distributed health data using federated learning. AMIA Annu Symp Proc 2019;2019:313-22.

3. Xu J, Glicksberg BS, Su C, et al. Federated learning for healthcare informatics. J Healthc Inform Res 2021;5:1-19.

4. Vaid A, Jaladanki SK, Xu J, et al. Federated learning of electronic health records to improve mortality prediction in hospitalized patients with COVID-19: machine learning approach. JMIR Med Inform 2021;9:e24207.

5. Fredrikson M, Jha S, Ristenpart T. Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. CCS '15. New York, NY, USA: Association for Computing Machinery; 2015. pp. 1322–33.

6. Shokri R, Stronati M, Song C, Shmatikov V. Membership inference attacks against machine learning models. In: 2017 IEEE Symposium on Security and Privacy (SP); 2017. pp. 3–18.

7. Choudhury O, Gkoulalas-Divanis A, Salonidis T, et al. Differential privacy-enabled federated learning for sensitive health data. arXiv preprint arXiv:1910.02578; 2020. Available from: https://arxiv.org/pdf/1910.02578. [Last accessed on 19 Dec 2023].

8. Choudhury O, Gkoulalas-Divanis A, Salonidis T, Sylla I. Anonymizing data for preserving privacy during use for federated machine learning. US Patent 11,188,791; 2021. Available from: https://patentimages.storage.googleapis.com/82/a7/42/f741ab230e217a/US11188791.pdf. [Last accessed on 19 Dec 2023].

9. Islam TU, Ghasemi R, Mohammed N. Privacy-preserving federated learning model for healthcare data. In: 2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC). IEEE; 2022. pp. 0281–87.

10. Liu Y, Kang Y, Zhang X, et al. A communication efficient collaborative learning framework for distributed features. arXiv preprint arXiv:1912.11187; 2020.

11. Chen T, Jin X, Sun Y, Yin W. VAFL: a method of vertical asynchronous federated learning. arXiv preprint arXiv:2007.06081; 2020.

12. Dwork C, Lei J. Differential privacy and robust statistics. In: Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing. STOC '09. New York, NY, USA: Association for Computing Machinery; 2009. pp. 371–80.

13. Hu Y, Niu D, Yang J, Zhou S. FDML: a collaborative machine learning framework for distributed features. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. KDD '19. New York, NY, USA: Association for Computing Machinery; 2019. pp. 2232–40.

14. Bishop CM. Neural networks and their applications. Rev Sci Instrum 1994;65:1803-32.

15. Gurney K. An introduction to neural networks. CRC Press; 2018.

16. Yu Y, Si X, Hu C, Zhang J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput 2019;31:1235-70.

17. Understanding LSTM networks. colah's blog; 2015. Available from: http://colah.github.io/posts/2015-08-Understanding-LSTMs/. [Last accessed on 19 Dec 2023].

18. Staudemeyer RC, Morris ER. Understanding LSTM – a tutorial into long short-term memory recurrent neural networks. arXiv preprint arXiv:1909.09586; 2019.

19. Wang Z, Song M, Zhang Z, et al. Beyond inferring class representatives: user-level privacy leakage from federated learning. In: IEEE INFOCOM 2019-IEEE Conference on Computer Communications. IEEE; 2019. pp. 2512–20.

20. Melis L, Song C, De Cristofaro E, Shmatikov V. Exploiting unintended feature leakage in collaborative learning. In: 2019 IEEE Symposium on Security and Privacy (SP); 2019. pp. 691–706.

21. Bagdasaryan E, Veit A, Hua Y, Estrin D, Shmatikov V. How to backdoor federated learning. In: International Conference on Artificial Intelligence and Statistics. PMLR; 2020. pp. 2938–48. Available from: https://proceedings.mlr.press/v108/bagdasaryan20a.html. [Last accessed on 19 Dec 2023].

22. Song M, Wang Z, Zhang Z, et al. Analyzing user-level privacy attack against federated learning. IEEE J Sel Areas Commun 2020;38:2430-44.

23. Rajkumar A, Agarwal S. A differentially private stochastic gradient descent algorithm for multiparty classification. In: Artificial Intelligence and Statistics. PMLR; 2012. pp. 933–41. Available from: https://proceedings.mlr.press/v22/rajkumar12.html. [Last accessed on 19 Dec 2023].

24. Kikuchi H, Hamanaga C, Yasunaga H, et al. Privacy-preserving multiple linear regression of vertically partitioned real medical datasets. J Inf Process 2018;26:638-47.

25. Fang H, Qian Q. Privacy preserving machine learning with homomorphic encryption and federated learning. Future Internet 2021;13:94.

26. Zhang C, Li S, Xia J, et al. BatchCrypt: efficient homomorphic encryption for cross-silo federated learning. In: 2020 USENIX Annual Technical Conference (USENIX ATC 20); 2020. pp. 493–506. Available from: https://www.usenix.org/conference/atc20/presentation/zhang-chengliang. [Last accessed on 19 Dec 2023].

27. Xu G, Li H, Liu S, Yang K, Lin X. VerifyNet: secure and verifiable federated learning. IEEE Trans Inf Forensics Secur 2019;15:911-26.

28. Zhu H, Goh RSM, Ng WK. Privacy-preserving weighted federated learning within the secret sharing framework. IEEE Access 2020;8:198275-84.

29. Zhang Y, Jia R, Pei H, et al. The secret revealer: generative model-inversion attacks against deep neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2020. pp. 253–61. Available from: https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.html. [Last accessed on 19 Dec 2023].

30. Geyer RC, Klein T, Nabi M. Differentially private federated learning: a client level perspective. arXiv preprint arXiv:1712.07557; 2017.

31. Cho H, Wu DJ, Berger B. Secure genome-wide association analysis using multiparty computation. Nat Biotechnol 2018;36:547-51.

32. Hu Y, Liu P, Kong L, Niu D. Learning privately over distributed features: an ADMM sharing approach. arXiv preprint arXiv:1907.07735; 2019. Available from: http://arxiv.org/abs/1907.07735. [Last accessed on 19 Dec 2023].

33. McMahan B, Moore E, Ramage D, Hampson S, y Arcas BA. Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics. PMLR; 2017. pp. 1273–82. Available from: https://proceedings.mlr.press/v54/mcmahan17a.html. [Last accessed on 19 Dec 2023].

34. Harutyunyan H, Khachatrian H, Kale DC, Ver Steeg G, Galstyan A. Multitask learning and benchmarking with clinical time series data. Sci Data 2019;6:96.

35. Ji Z, Jiang X, Wang S, Xiong L, Ohno-Machado L. Differentially private distributed logistic regression using private and public data. BMC Med Genomics 2014;7:S14.

36. Carlini N, Tramer F, Wallace E, et al. Extracting training data from large language models. In: 30th USENIX Security Symposium (USENIX Security 21); 2021. pp. 2633–50. Available from: https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting. [Last accessed on 19 Dec 2023].

37. Chaudhuri K, Monteleoni C, Sarwate AD. Differentially private empirical risk minimization. J Mach Learn Res 2011;12:1069-1109. Available from: https://www.jmlr.org/papers/volume12/chaudhuri11a/chaudhuri11a.pdf. [Last accessed on 19 Dec 2023].

38. Abadi M, Chu A, Goodfellow I, et al. Deep learning with differential privacy. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. Vienna Austria: ACM; 2016. pp. 308–18.

39. Yeom S, Giacomelli I, Fredrikson M, Jha S. Privacy risk in machine learning: analyzing the connection to overfitting. In: 2018 IEEE 31st Computer Security Foundations Symposium (CSF). IEEE; 2018. pp. 268–82.

40. Watson L, Guo C, Cormode G, Sablayrolles A. On the importance of difficulty calibration in membership inference attacks. arXiv preprint arXiv:2111.08440; 2021.

41. Papernot N, Abadi M, Erlingsson U, Goodfellow I, Talwar K. Semi-supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755; 2016. Available from: https://doi.org/10.48550/arXiv.1610.05755. [Last accessed on 19 Dec 2023].

42. Papernot N, Song S, Mironov I, et al. Scalable private learning with PATE. arXiv preprint arXiv:1802.08908; 2018. Available from: https://doi.org/10.48550/arXiv.1802.08908. [Last accessed on 19 Dec 2023].

43. Bagdasaryan E, Poursaeed O, Shmatikov V. Differential privacy has disparate impact on model accuracy. Adv Neural Inf Process Syst 2019;32. Available from: https://proceedings.neurips.cc/paper_files/paper/2019/hash/fc0de4e0396fff257ea362983c2dda5a-Abstract.html. [Last accessed on 19 Dec 2023].

44. Torfi A, Fox EA, Reddy CK. Differentially private synthetic medical data generation using convolutional GANs. Inf Sci 2022;586:485-500.

45. Islam TU. Privacy-preserving federated learning model for healthcare data; 2023. Available from: http://hdl.handle.net/1993/37192. [Last accessed on 19 Dec 2023].

Journal of Surveillance, Security and Safety
ISSN 2694-1015 (Online)

Portico

All published articles are preserved here permanently:

https://www.portico.org/publishers/oae/
