REFERENCES

1. Deuerlein C, Langer M, Seßner J, Heß P, Franke J. Human-robot-interaction using cloud-based speech recognition systems. Procedia CIRP 2021;97:130-5.

2. Cheng JR, Liu JX, Xu XB, Xia DW, Liu L, Sheng VS. A review of Chinese named entity recognition. KSII T Internet Info 2021;15:2012-30.

3. Yu J, Bohnet B, Poesio M. Named entity recognition as dependency parsing. In: Jurafsky D, Chai J, Schluter N, Tetreault J, editors. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics; 2020 Jul 5-10; Online: Association for Computational Linguistics; 2020. pp. 6470–6.

4. Lin H, Lu Y, Tang J, et al. A rigorous study on named entity recognition: can fine-tuning pretrained model lead to the promised land? In: Webber B, Cohn T, He Y, Liu Y, editors. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing; 2020 Nov 8-12; Online: Association for Computational Linguistics; 2020. pp. 7291–300.

5. Zhou G, Su J, Zhang J, Zhang M. Exploring various knowledge in relation extraction. In: Knight K, Ng HT, Oflazer K, editors. Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics; 2005 Jun 25-30; Ann Arbor, Michigan: Association for Computational Linguistics; 2005. pp. 427–34.

6. Cheng P, Erk K. Attending to entities for better text understanding. In: The Thirty-Fourth AAAI Conference on Artificial Intelligence; 2020 Feb 7-12; New York, USA: AAAI; 2020. pp. 7554–61.

7. Petkova D, Croft WB. Proximity-based document representation for named entity retrieval. In: Silva MJ, Laender AHF, Baeza-Yates R, McGuinness DL, Olstad B, Olsen ØH, Falcão AO, editors. Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management; 2007 Nov 6-10; Lisbon, Portugal: Association for Computing Machinery; 2007. pp. 731–40.

8. Virga P, Khudanpur S. Transliteration of proper names in cross-lingual information retrieval. In: Hinrichs EW, Roth D, editors. Proceedings of the ACL 2003 Workshop on Multilingual and Mixed-language Named Entity Recognition; 2003 Jul 7-12; Sapporo, Japan: Association for Computational Linguistics; 2003. pp. 57–64.

9. Chen HH, Yang C, Lin Y. Learning formulation and transformation rules for multilingual named entities. In: Hinrichs EW, Roth D, editors. Proceedings of the ACL 2003 Workshop on Multilingual and Mixed-language Named Entity Recognition; 2003 Jul 7-12; Sapporo, Japan: Association for Computational Linguistics; 2003. pp. 1–8.

10. Light M. Corpus processing for lexical acquisition. J Logic Lang Inf 1998;7:111-4.

11. Sun Z, Deng Z. Unsupervised neural word segmentation for Chinese via segmental language modeling. In: Riloff E, Chiang D, Hockenmaier J, Tsujii J, editors. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing; 2018 Nov 2-4; Brussels, Belgium: Association for Computational Linguistics; 2018. pp. 4915–20.

12. Shaalan K. A survey of Arabic named entity recognition and classification. Comput Linguist 2014;40:469-510.

13. Wang Y, Tong H, Zhu Z, Li Y. Nested named entity recognition: a survey. ACM T Knowl Discov D 2022;16:1-29.

14. He S, Sun D, Wang Z. Named entity recognition for Chinese marine text with knowledge-based self-attention. Multimed Tools Appl 2022;81:19135-49.

15. Wang TB, Huang RY, Hu N, Wang HS, Chu GH. Chinese named entity recognition method based on dictionary semantic knowledge enhancement. IEICE T Inf Syst 2023;E106D:1010-7.

16. Zhang H, Wang XY, Liu JX, Zhang L, Ji LX. Chinese named entity recognition method for the finance domain based on enhanced features and pretrained language models. Inf Sci 2023;625:385-400.

17. Goyal A, Gupta V, Kumar M. Recent named entity recognition and classification techniques: a systematic review. Comput Sci Rev 2018;29:21-43.

18. Zhu E, Li J. Boundary smoothing for named entity recognition. In: Muresan S, Nakov P, Villavicencio A, editors. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics; 2022 May 22-27; Dublin, Ireland: Association for Computational Linguistics; 2022. pp. 7096–108.

19. Zhang Y, Wang M, Huang Y, Gu Q. Improving Chinese segmentation-free word embedding with unsupervised association measure. [Preprint]. arXiv. July 5, 2020 [accessed 2023 Jul 21]. Available from: https://doi.org/10.48550/arXiv.2007.02342.

20. Mena G, Belanger D, Linderman S, Snoek J. Learning latent permutations with Gumbel-Sinkhorn networks. In: 6th International Conference on Learning Representations; 2018 Apr 30–May 3; Vancouver, BC, Canada: OpenReview.net; 2018. pp. 1–22. Available from: https://openreview.net/forum?id=Byt3oJ-0W.

21. Zhou G, Su J. Named entity recognition using an HMM-based chunk tagger. In: Isabelle P, editor. Proceedings of the 40th Annual Meeting on Association for Computational Linguistics; 2002 Jul 7-12; Philadelphia, Pennsylvania: Association for Computational Linguistics; 2002. pp. 473–80.

22. Dos Santos C, Guimarães V. Boosting named entity recognition with neural character embeddings. In: Duan X, Banchs RE, Zhang M, Li H, Kumaran A, editors. Proceedings of the Fifth Named Entity Workshop; 2015 Jul 31; Beijing, China: Association for Computational Linguistics; 2015. pp. 25–33.

23. Chiu J, Nichols E. Named entity recognition with bidirectional LSTM-CNNs. Trans Assoc Comput Linguist 2016;4:357-70.

24. Ma X, Hovy E. End-to-end sequence labeling via Bi-directional LSTM-CNNs-CRF. In: Erk K, Smith NA, editors. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics; 2016 Aug 7-12; Berlin, Germany: Association for Computational Linguistics; 2016. pp. 1064–74.

25. Dyer C, Ballesteros M, Ling W, Matthews A, Smith NA. Transition-based dependency parsing with stack long short-term memory. In: Zong C, Strube M, editors. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing; 2015 Jul 26-31; Beijing, China: Association for Computational Linguistics; 2015. pp. 334–43.

26. Tran Q, MacKinlay A, Jimeno Yepes A. Named entity recognition with stack residual LSTM and trainable bias decoding. In: Kondrak G, Watanabe T, editors. Proceedings of the Eighth International Joint Conference on Natural Language Processing; 2017 Nov 27–Dec 1; Taipei, Taiwan: Asian Federation of Natural Language Processing; 2017. pp. 566–75. Available from: https://aclanthology.org/I17-1057.

27. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition; 2016 Jun 27–30; Las Vegas, NV, USA: IEEE; 2016. pp. 770–8.

28. Zhang Y, Clark S. Chinese segmentation with a word-based perceptron algorithm. In: Zaenen A, Bosch A, editors. Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics; 2007 Jun 23–30; Prague, Czech Republic: Association for Computational Linguistics; 2007. pp. 840–7. Available from: https://aclanthology.org/P07-1106.

29. Collins M. Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms. In: Hajic J, Matsumoto Y, editors. Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing; 2002 Jul 6–7; Prague, Czech Republic: Association for Computational Linguistics; 2002. pp. 1–8.

30. Ma J, Hinrichs E. Accurate linear-time Chinese word segmentation via embedding matching. In: Zong C, Strube M, editors. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing; 2015 Jul 26–31; Beijing, China: Association for Computational Linguistics; 2015. pp. 1733–43.

31. Deng X, Sun Y. An improved embedding matching model for Chinese word segmentation. In: Wang X, Zhou J, editors. 2018 International Conference on Artificial Intelligence and Big Data; 2018 May 26–28; Chengdu, China: IEEE; 2018. pp. 195–200.

32. Zhang Q, Liu X, Fu J. Neural networks incorporating dictionaries for Chinese word segmentation. In: McIlraith SA, Weinberger KQ, editors. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence; 2018 Feb 2–7; New Orleans, Louisiana, USA: AAAI; 2018. pp. 5682–9.

33. Ye Y, Li W, Zhang Y, Qiu L, Sun J. Improving cross-domain Chinese word segmentation with word embeddings. In: Burstein J, Doran C, Solorio T, editors. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2019 Jun 2–7; Minneapolis, Minnesota: Association for Computational Linguistics; 2019. pp. 2726–35.

34. Tang X, Huang Y, Xia M, Long C. A multi-task BERT-BiLSTM-AM-CRF strategy for Chinese named entity recognition. Neural Process Lett 2023;55:1209-29.

35. Huang W, Cheng X, Chen K, Wang T, Chu W. Towards fast and accurate neural Chinese word segmentation with multi-criteria learning. In: Scott D, Bel N, Zong C, editors. Proceedings of the 28th International Conference on Computational Linguistics; 2020 Dec 8–13; Barcelona, Spain: International Committee on Computational Linguistics; 2020. pp. 2062–72.

36. Tian Y, Song Y, Xia F, Zhang T, Wang Y. Improving Chinese word segmentation with wordhood memory networks. In: Jurafsky D, Chai J, Schluter N, Tetreault J, editors. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics; 2020 Jul 5-10; Online: Association for Computational Linguistics; 2020. pp. 8274–85.

37. Liu A, Du J, Stoyanov V. Knowledge-augmented language model and its application to unsupervised named-entity recognition. In: Burstein J, Doran C, Solorio T, editors. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2019 Jun 2–7; Minneapolis, Minnesota: Association for Computational Linguistics; 2019. pp. 1142–50.

38. Jing L, Tian Y. Self-supervised visual feature learning with deep neural networks: a survey. IEEE Trans Pattern Anal Mach Intell 2021;43:4037-58.

39. Jaiswal A, Babu AR, Zadeh MZ, Banerjee D, Makedon F. A survey on contrastive self-supervised learning. Technologies 2021;9:2.

40. Liu X, Zhang F, Hou Z, et al. Self-supervised learning: generative or contrastive. IEEE Trans Knowl Data Eng 2023;35:857-76.

41. Bengio Y, LeCun Y. Scaling learning algorithms toward AI. In: Bottou L, Chapelle O, DeCoste D, Weston J, editors. Large-scale kernel machines. Cambridge: MIT Press; 2007. pp. 321–59. Available from: https://ieeexplore.ieee.org/servlet/opac?bknumber=6267226.

42. Bengio Y, Lamblin P, Popovici D, Larochelle H. Greedy layer-wise training of deep networks. In: Schölkopf B, Platt JC, Hoffman T, editors. Proceedings of the 19th International Conference on Neural Information Processing Systems; 2006 Dec 4-7; Vancouver, Canada: MIT Press; 2006. pp. 153–60. Available from: https://ieeexplore.ieee.org/document/6287632.

43. Hinton GE, Osindero S, Teh Y. A fast learning algorithm for deep belief nets. Neural Comput 2006;18:1527-54.

44. Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Bach F, Blei D, editors. Proceedings of the 32nd International Conference on International Conference on Machine Learning; 2015 Jul 6-11; Lille, France: JMLR.org; 2015. pp. 448–56.

45. Nair V, Hinton GE. Rectified linear units improve restricted Boltzmann machines. In: Fürnkranz J, Joachims T, editors. Proceedings of the 27th International Conference on International Conference on Machine Learning; 2010 Jun 21-24; Haifa, Israel: Omnipress; 2010. pp. 807–14.

46. Giorgi J, Nitski O, Wang B, Bader G. DeCLUTR: deep contrastive learning for unsupervised textual representations. In: Zong C, Xia F, Li W, Navigli R, editors. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing; 2021 Aug 1-8; Online: Association for Computational Linguistics; 2021. pp. 879–95.

47. Devlin J, Chang M, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding. In: Burstein J, Doran C, Solorio T, editors. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2019 Jun 2–7; Minneapolis, Minnesota: Association for Computational Linguistics; 2019. pp. 4171–86.

48. Fang H, Wang S, Zhou M, Ding J, Xie P. CERT: contrastive self-supervised learning for language understanding. [Preprint]. arXiv. June 18, 2020 [accessed 2023 Jul 21]. Available from: https://doi.org/10.48550/arXiv.2005.12766.

49. Yang B, Mitchell T. Leveraging knowledge bases in LSTMs for improving machine reading. In: Barzilay R, Kan MY, editors. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics; 2017 Jul 30–Aug 4; Vancouver, Canada: Association for Computational Linguistics; 2017. pp. 1436–46.

50. Lample G, Ballesteros M, Subramanian S, Kawakami K, Dyer C. Neural architectures for named entity recognition. In: Knight K, Nenkova A, Rambow O, editors. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2016 Jun 12–17; San Diego, California: Association for Computational Linguistics; 2016. pp. 260–70.

Intelligence & Robotics
ISSN 2770-3541 (Online)

Portico

All published articles are preserved here permanently:

https://www.portico.org/publishers/oae/