REFERENCES

1. Wang, W. Y.; Zhang, S.; Li, G.; et al. Artificial intelligence enabled smart design and manufacturing of advanced materials: the endless Frontier in AI+ era. Mater. Genome Eng. Adv. 2024, 2, e56.

2. Dagdelen, J.; Dunn, A.; Lee, S.; et al. Structured information extraction from scientific text with large language models. Nat. Commun. 2024, 15, 1418.

3. Ghafarollahi, A.; Buehler, M. J. SciAgents: automating scientific discovery through bioinspired multi-agent intelligent graph reasoning. Adv. Mater. 2025, 37, e2413523.

4. Chen, X.; Yi, H.; You, M.; et al. Enhancing diagnostic capability with multi-agents conversational large language models. npj Digit. Med. 2025, 8, 159.

5. Sendek, A. D.; Ransom, B.; Cubuk, E. D.; Pellouchoud, L. A.; Nanda, J.; Reed, E. J. Machine learning modeling for accelerated battery materials design in the small data regime. Adv. Energy Mater. 2022, 12, 2200553.

6. Chen, C.; Ong, S. P. A universal graph deep learning interatomic potential for the periodic table. Nat. Comput. Sci. 2022, 2, 718-28.

7. Chen, C.; Ye, W.; Zuo, Y.; Zheng, C.; Ong, S. P. Graph networks as a universal machine learning framework for molecules and crystals. Chem. Mater. 2019, 31, 3564-72.

8. Choudhary, K.; DeCost, B. Atomistic line graph neural network for improved materials property predictions. npj Comput. Mater. 2021, 7, 185.

9. Schütt, K.; Kindermans, P. J.; Sauceda, H. E.; Chmiela, S.; Tkatchenko, A.; Müller, K. R. SchNet: a continuous-filter convolutional neural network for modeling quantum interactions. arXiv 2017, arXiv:1706.08566. Available online: https://arxiv.org/abs/1706.08566 (accessed 12 December 2025).

10. Xie, T.; Grossman, J. C. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys. Rev. Lett. 2018, 120, 145301.

11. Kaba, S. O.; Ravanbakhsh, S. Equivariant networks for crystal structures. arXiv 2022, arXiv:2211.15420. Available online: https://arxiv.org/abs/2211.15420 (accessed 12 December 2025).

12. Yan, K.; Liu, Y.; Lin, Y.; Ji, S. Periodic graph transformers for crystal material property prediction. arXiv 2022, arXiv:2209.11807. Available online: https://arxiv.org/abs/2209.11807 (accessed 12 December 2025).

13. Zhang, Y.; Khan, S. A.; Mahmud, A.; et al. Exploring the role of large language models in the scientific method: from hypothesis to discovery. npj Artif. Intell. 2025, 1, 14.

14. Chen, Q.; Yang, M.; Qin, L.; et al. AI4Research: a survey of artificial intelligence for scientific research. arXiv 2025, arXiv:2507.01903. Available online: https://arxiv.org/abs/2507.01903 (accessed 12 December 2025).

15. Yao, T.; Yang, Y.; Cai, J.; et al. From LLM to Agent: a large-language-model-driven machine learning framework for catalyst design of MgH2 dehydrogenation. J. Magnes. Alloys 2025.

16. Robson, M. J.; Xu, S.; Wang, Z.; Chen, Q.; Ciucci, F. Multi-agent-network-based idea generator for zinc-ion battery electrolyte discovery: a case study on zinc tetrafluoroborate hydrate-based deep eutectic electrolytes. Adv. Mater. 2025, 37, e2502649.

17. Lohana Tharwani, K. K.; Kumar, R.; Sumita; Ahmed, N.; Tang, Y. Large language models transform organic synthesis from reaction prediction to automation. arXiv 2025, arXiv:2508.05427. Available online: https://arxiv.org/abs/2508.05427 (accessed 12 December 2025).

18. Niyongabo Rubungo, A.; Arnold, C.; Rand, B. P.; Dieng, A. B. LLM-Prop: predicting the properties of crystalline materials using large language models. npj Comput. Mater. 2025, 11, 186.

19. Liu, J.; Anderson, H.; Waxman, N. I.; Kovalev, V.; Fisher, B.; Li, E.; Guo, X. Thermodynamic prediction enabled by automatic dataset building and machine learning. arXiv 2025, arXiv:2507.07293. Available online: https://arxiv.org/abs/2507.07293 (accessed 12 December 2025).

20. Polak, M. P.; Morgan, D. Extracting accurate materials data from research papers with conversational language models and prompt engineering. Nat. Commun. 2024, 15, 1569.

21. Ma, Y.; Gou, Z.; Hao, J.; Xu, R.; Wang, S.; Pan, L.; Yang, Y.; Cao, Y.; Sun, A.; Awadalla, H.; Chen, W. SciAgent: tool-augmented language models for scientific reasoning. arXiv 2024, arXiv:2402.11451. Available online: https://arxiv.org/abs/2402.11451 (accessed 12 December 2025).

22. Skarlinski, M. D.; Cox, S.; Laurent, J. M.; et al. Language agents achieve superhuman synthesis of scientific knowledge. arXiv 2024, arXiv:2409.13740. Available online: https://arxiv.org/abs/2409.13740 (accessed 12 December 2025).

23. Zheng, M.; Feng, X.; Si, Q.; et al. Multimodal table understanding. arXiv 2024, arXiv:2406.08100. Available online: https://arxiv.org/abs/2406.08100 (accessed 12 December 2025).

24. Masry, A.; Long, D. X.; Tan, J. Q.; Joty, S.; Hoque, E. ChartQA: a benchmark for question answering about charts with visual and logical reasoning. arXiv 2022, arXiv:2203.10244. Available online: https://arxiv.org/abs/2203.10244 (accessed 12 December 2025).

25. Zhang, D.; Jia, X.; Hung, T. B.; et al. “DIVE” into hydrogen storage materials discovery with AI agents. arXiv 2025, arXiv:2508.13251. Available online: https://arxiv.org/abs/2508.13251 (accessed 12 December 2025).

26. Yang, F.; Sato, R.; Cheng, E. J.; et al. Data-driven viewpoint for developing next-generation Mg-ion solid-state electrolytes. J. Electrochem. 2024, 30, 3.

27. Wang, Q.; Yang, F.; Wang, Y.; et al. Unraveling the complexity of divalent hydride electrolytes in solid-state batteries via a data-driven framework with large language model. Angew. Chem. Int. Ed. Engl. 2025, 64, e202506573.

28. Beltagy, I.; Lo, K.; Cohan, A. SciBERT: a pretrained language model for scientific text. arXiv 2019, arXiv:1903.10676. Available online: https://arxiv.org/abs/1903.10676 (accessed 12 December 2025).

29. Gupta, T.; Zaki, M.; Krishnan, N. M. A.; Mausam. MatSciBERT: a materials domain language model for text mining and information extraction. npj Comput. Mater. 2022, 8, 102.

30. Huang, S.; Cole, J. M. BatteryBERT: a pretrained language model for battery database enhancement. J. Chem. Inf. Model. 2022, 62, 6365-77.

31. Trewartha, A.; Walker, N.; Huo, H.; et al. Quantifying the advantage of domain-specific pre-training on named entity recognition tasks in materials science. Patterns 2022, 3, 100488.

32. Song, Y.; Miret, S.; Liu, B. MatSci-NLP: evaluating scientific language models on materials science language tasks using text-to-schema modeling. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada, July 9-14, 2023; Rogers, A.; Boyd-Graber, J.; Okazaki, N., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2023; pp 3621-39.

33. Boiko, D. A.; MacKnight, R.; Kline, B.; Gomes, G. Autonomous chemical research with large language models. Nature 2023, 624, 570-8.

34. M. Bran, A.; Cox, S.; Schilter, O.; Baldassari, C.; White, A. D.; Schwaller, P. Augmenting large language models with chemistry tools. Nat. Mach. Intell. 2024, 6, 525-35.

35. Choi, J. Y.; Kim, D. E.; Kim, S. J.; Choi, H.; Yoo, T. K. Application of multimodal large language models for safety indicator calculation and contraindication prediction in laser vision correction. npj Digit. Med. 2025, 8, 82.

36. Kang, Y.; Kim, J. ChatMOF: an autonomous AI system for predicting and generating metal-organic frameworks. arXiv 2023, arXiv:2308.01423. Available online: https://arxiv.org/abs/2308.01423 (accessed 12 December 2025).

37. Zheng, Z.; Rong, Z.; Rampal, N.; Borgs, C.; Chayes, J. T.; Yaghi, O. M. A GPT-4 reticular chemist for guiding MOF discovery. Angew. Chem. 2023, 135, e202311983.

38. Ruan, Y.; Lu, C.; Xu, N.; et al. An automatic end-to-end chemical synthesis development platform powered by large language models. Nat. Commun. 2024, 15, 10160.

39. Liu, S.; Xu, H.; Ai, Y.; Li, H.; Bengio, Y.; Guo, H. Expert-guided LLM reasoning for battery discovery: from AI-driven hypothesis to synthesis and characterization. arXiv 2025, arXiv:2507.16110. Available online: https://arxiv.org/abs/2507.16110 (accessed 12 December 2025).

40. Wang, X. D.; Chen, Z. R.; Guo, P. J.; Gao, Z. F.; Mu, C.; Lu, Z. Y. Perovskite-R1: a domain-specialized LLM for intelligent discovery of precursor additives and experimental design. arXiv 2025, arXiv:2507.16307. Available online: https://arxiv.org/abs/2507.16307 (accessed 12 December 2025).

41. Liu, X.; Sun, P.; Chen, S.; Zhang, L.; Dong, P.; You, H.; et al. Perovskite-LLM: knowledge-enhanced large language models for perovskite solar cell research. arXiv 2025, arXiv:2502.12669. Available online: https://arxiv.org/abs/2502.12669 (accessed 12 December 2025).

42. Xie, T.; Wan, Y.; Zhou, Y.; et al. Creation of a structured solar cell material dataset and performance prediction using large language models. Patterns 2024, 5, 100955.

43. Oikawa, Y.; Deffrennes, G.; Abe, T.; Tamura, R.; Tsuda, K. aLLoyM: a large language model for alloy phase diagram prediction. arXiv 2025, arXiv:2507.22558. Available online: https://arxiv.org/abs/2507.22558 (accessed 12 December 2025).

44. Zaki, M.; Jayadeva; Mausam; Krishnan, N. M. A. MaScQA: a question answering dataset for investigating materials science knowledge of large language models. arXiv 2023, arXiv:2308.09115. Available online: https://arxiv.org/abs/2308.09115 (accessed 12 December 2025).

45. Ansari, M.; Watchorn, J.; Brown, C. E.; Brown, J. S. dZiner: rational inverse design of materials with AI agents. arXiv 2024, arXiv:2410.03963. Available online: https://arxiv.org/abs/2410.03963 (accessed 12 December 2025).

46. O’Neill, C.; Ghosal, T.; Răileanu, R.; Walmsley, M.; Bui, T.; Schawinski, K.; Ciucă, I. Sparks of science: hypothesis generation using structured paper data. arXiv 2025, arXiv:2504.12976. Available online: https://arxiv.org/abs/2504.12976 (accessed 12 December 2025).

47. Liu, Y.; Yang, Z.; Xie, T.; Ni, J.; Gao, B.; Li, Y.; Tang, S.; Ouyang, W.; Cambria, E.; Zhou, D. ResearchBench: benchmarking LLMs in scientific discovery via inspiration-based task decomposition. arXiv 2025, arXiv:2503.21248. Available online: https://arxiv.org/abs/2503.21248 (accessed 12 December 2025).

48. Pham, T. D.; Tanikanti, A.; Keçeli, M. ChemGraph: an agentic framework for computational chemistry workflows. arXiv 2025, arXiv:2506.06363. Available online: https://arxiv.org/abs/2506.06363 (accessed 12 December 2025).

49. Chiang, Y.; Hsieh, E.; Chou, C. H.; Riebesell, J. LLaMP: large language model made powerful for high-fidelity materials knowledge retrieval and distillation. arXiv 2024, arXiv:2401.17244. Available online: https://arxiv.org/abs/2401.17244 (accessed 12 December 2025).

50. Gottweis, J.; Weng, W. H.; Daryin, A.; et al. Towards an AI co-scientist. arXiv 2025, arXiv:2502.18864. Available online: https://arxiv.org/abs/2502.18864 (accessed 12 December 2025).

51. Oliveira, O. N., Jr.; Christino, L.; Oliveira, M. C. F.; Paulovich, F. V. Artificial intelligence agents for materials sciences. J. Chem. Inf. Model. 2023, 63, 7605-9.

52. Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; Cao, Y. ReAct: synergizing reasoning and acting in language models. arXiv 2022, arXiv:2210.03629. Available online: https://arxiv.org/abs/2210.03629 (accessed 12 December 2025).

53. Ghafarollahi, A.; Buehler, M. J. ProtAgents: protein discovery via large language model multi-agent collaborations combining physics and machine learning. Digit. Discov. 2024, 3, 1389-409.

54. Ghafarollahi, A.; Buehler, M. J. AtomAgents: alloy design and discovery through physics-aware multi-modal multi-agent artificial intelligence. arXiv 2024, arXiv:2407.10022. Available online: https://arxiv.org/abs/2407.10022 (accessed 12 December 2025).

55. Ni, B.; Buehler, M. J. MechAgents: large language model multi-agent collaborations can solve mechanics problems, generate new data, and integrate knowledge. Extreme Mech. Lett. 2024, 67, 102131.

56. Ding, K.; Yu, J.; Huang, J.; Yang, Y.; Zhang, Q.; Chen, H. SciToolAgent: a knowledge graph-driven scientific agent for multi-tool integration. arXiv 2025, arXiv:2507.20280. Available online: https://arxiv.org/abs/2507.20280 (accessed 12 December 2025).

57. Chen, Z.; Chen, S.; Ning, Y.; et al. ScienceAgentBench: toward rigorous assessment of language agents for data-driven scientific discovery. arXiv 2024, arXiv:2410.05080. Available online: https://arxiv.org/abs/2410.05080 (accessed 12 December 2025).

58. Majumder, B. P.; Surana, H.; Agarwal, D.; et al. DiscoveryBench: towards data-driven discovery with large language models. arXiv 2024, arXiv:2407.01725. Available online: https://arxiv.org/abs/2407.01725 (accessed 12 December 2025).

59. Lookman, T.; Balachandran, P. V.; Xue, D.; Yuan, R. Active learning in materials science with emphasis on adaptive sampling using uncertainties for targeted design. npj Comput. Mater. 2019, 5, 21.

60. Snoek, J.; Larochelle, H.; Adams, R. P. Practical Bayesian optimization of machine learning algorithms. arXiv 2012, arXiv:1206.2944. Available online: https://arxiv.org/abs/1206.2944 (accessed 12 December 2025).

61. Jones, D. R.; Schonlau, M.; Welch, W. J. Efficient global optimization of expensive black-box functions. J. Glob. Optim. 1998, 13, 455-92.

62. Daulton, S.; Balandat, M.; Bakshy, E. Differentiable expected hypervolume improvement for parallel multi-objective Bayesian optimization. arXiv 2020, arXiv:2006.05078. Available online: https://arxiv.org/abs/2006.05078 (accessed 12 December 2025).

63. Letham, B.; Karrer, B.; Ottoni, G.; Bakshy, E. Constrained Bayesian optimization with noisy experiments. arXiv 2017, arXiv:1706.07094. Available online: https://arxiv.org/abs/1706.07094 (accessed 12 December 2025).

64. Gardner, J. R.; Kusner, M. J.; Xu, Z. X.; Weinberger, K. Q.; Cunningham, J. P. Bayesian optimization with inequality constraints. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, June 21-26, 2014; JMLR.org: Online, 2014; Vol. 32, pp 937-45.

65. Zhang, J.; Lv, D.; Dai, Q.; Xin, F.; Dong, F. Noise-aware local model training mechanism for federated learning. ACM Trans. Intell. Syst. Technol. 2023, 14, 1-22.

66. Lu, C.; Lu, C.; Lange, R. T.; Foerster, J.; Clune, J.; Ha, D. The AI scientist: towards fully automated open-ended scientific discovery. arXiv 2024, arXiv:2408.06292. Available online: https://arxiv.org/abs/2408.06292 (accessed 12 December 2025).

67. Schmidgall, S.; Su, Y.; Wang, Z.; et al. Agent laboratory: using LLM agents as research assistants. arXiv 2025, arXiv:2501.04227. Available online: https://arxiv.org/abs/2501.04227 (accessed 12 December 2025).

68. Canty, R. B.; Bennett, J. A.; Brown, K. A.; et al. Science acceleration and accessibility with self-driving labs. Nat. Commun. 2025, 16, 3856.

69. Hatakeyama-Sato, K.; Nishida, T.; Kitamura, K.; Ushiku, Y.; Takahashi, K.; Nabae, Y.; Hayakawa, T. Perspective on utilizing foundation models for laboratory automation in materials research. arXiv 2025, arXiv:2506.12312. Available online: https://arxiv.org/abs/2506.12312 (accessed 12 December 2025).

70. Su, H.; Chen, R.; Tang, S.; et al. Many heads are better than one: improved scientific idea generation by a LLM-based multi-agent system. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vienna, Austria, July 27-August 1, 2025; Che, W.; Nabende, J.; Shutova, E.; Pilehvar, M. T., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2025; pp 28201-40.

71. Li, L.; Xu, W.; Guo, J.; et al. Chain of ideas: revolutionizing research via novel idea development with LLM agents. In Findings of the Association for Computational Linguistics: EMNLP 2025, Suzhou, China, November 4-9, 2025; Christodoulopoulos, C.; Chakraborty, T.; Rose, C.; Peng, V., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2025; pp 8971-9004.

72. Kang, Y.; Kim, J. ChatMOF: an artificial intelligence system for predicting and generating metal-organic frameworks using large language models. Nat. Commun. 2024, 15, 4705.

73. Yang, Z.; Liu, W.; Gao, B.; et al. MOOSE-Chem: large language models for rediscovering unseen chemistry scientific hypotheses. arXiv 2024, arXiv:2410.07076. Available online: https://arxiv.org/abs/2410.07076 (accessed 12 December 2025).

74. Buehler, M. J. Accelerating scientific discovery with generative knowledge extraction, graph-based representation, and multimodal intelligent graph reasoning. Mach. Learn. Sci. Technol. 2024, 5, 035083.

75. Zhang, Q.; Hu, Y.; Yan, J.; et al. Large-language-model-based AI agent for organic semiconductor device research. Adv. Mater. 2024, 36, e2405163.

76. Van, M. H.; Verma, P.; Zhao, C.; Wu, X. A survey of AI for materials science: foundation models, LLM agents, datasets, and tools. arXiv 2025, arXiv:2506.20743. Available online: https://arxiv.org/abs/2506.20743 (accessed 12 December 2025).

77. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Ichter, B.; Xia, F.; Chi, E.; Le, Q.; Zhou, D. Chain-of-thought prompting elicits reasoning in large language models. arXiv 2022, arXiv:2201.11903. Available online: https://arxiv.org/abs/2201.11903 (accessed 12 December 2025).

78. White, J.; Fu, Q.; Hays, S.; et al. A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv 2023, arXiv:2302.11382. Available online: https://arxiv.org/abs/2302.11382 (accessed 12 December 2025).

79. Zhou, Y.; Muresanu, A. I.; Han, Z.; et al. Large language models are human-level prompt engineers. arXiv 2022, arXiv:2211.01910. Available online: https://arxiv.org/abs/2211.01910 (accessed 12 December 2025).

80. Alberts, M.; Schilter, O.; Zipoli, F.; Hartrampf, N.; Laino, T. Unraveling molecular structure: a multimodal spectroscopic dataset for chemistry. arXiv 2024, arXiv:2407.17492. Available online: https://arxiv.org/abs/2407.17492 (accessed 12 December 2025).

81. Nguyen, E.; Poli, M.; Durrant, M. G.; et al. Sequence modeling and design from molecular to genome scale with Evo. Science 2024, 386, eado9336.

82. Chithrananda, S.; Grand, G.; Ramsundar, B. ChemBERTa: large-scale self-supervised pretraining for molecular property prediction. arXiv 2020, arXiv:2010.09885. Available online: https://arxiv.org/abs/2010.09885 (accessed 12 December 2025).

83. Raissi, M.; Perdikaris, P.; Karniadakis, G. E. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686-707.

84. Batzner, S.; Musaelian, A.; Sun, L.; et al. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nat. Commun. 2022, 13, 2453.

85. Lakshminarayanan, B.; Pritzel, A.; Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv 2016, arXiv:1612.01474. Available online: https://arxiv.org/abs/1612.01474 (accessed 12 December 2025).

86. Angelopoulos, A. N.; Bates, S. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv 2021, arXiv:2107.07511. Available online: https://arxiv.org/abs/2107.07511 (accessed 12 December 2025).

87. Yu, S.; Ran, N.; Liu, J. Large-language models: the game-changers for materials science research. Artif. Intell. Chem. 2024, 2, 100076.

88. Yanai, I.; Lercher, M. What is the question? Genome Biol. 2019, 20, 289.

89. Berglund, L.; Tong, M.; Kaufmann, M.; et al. The reversal curse: LLMs trained on “A is B” fail to learn “B is A”. arXiv 2023, arXiv:2309.12288. Available online: https://arxiv.org/abs/2309.12288 (accessed 12 December 2025).

90. Kambhampati, S.; Valmeekam, K.; Guan, L.; et al. LLMs can’t plan, but can help planning in LLM-modulo frameworks. arXiv 2024, arXiv:2402.01817. Available online: https://arxiv.org/abs/2402.01817 (accessed 12 December 2025).

91. Dutta, S.; Leal De Freitas, I.; Maciel Xavier, P.; Miceli De Farias, C.; Bernal Neira, D. E. Federated learning in chemical engineering: a tutorial on a framework for privacy-preserving collaboration across distributed data sources. Ind. Eng. Chem. Res. 2025, 64, 7767-83.

92. Ranga, S.; Mao, R.; Cambria, E.; Chattopadhyay, A. The plagiarism singularity conjecture. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Albuquerque, New Mexico, April 29-May 4, 2025; Association for Computational Linguistics: Stroudsburg, PA, USA, 2025; pp 10245-55. https://aclanthology.org/2025.naacl-long.514.pdf (accessed 12 December 2025).

93. Zhao, H.; Tang, X.; Yang, Z.; et al. ChemSafetyBench: benchmarking LLM safety on chemistry domain. arXiv 2024, arXiv:2411.16736. Available online: https://arxiv.org/abs/2411.16736 (accessed 12 December 2025).

94. Zhou, J.; Wang, L.; Yang, X. GUARDIAN: safeguarding LLM multi-agent collaborations with temporal graph modeling. arXiv 2025, arXiv:2505.19234. Available online: https://arxiv.org/abs/2505.19234 (accessed 12 December 2025).

95. Zhang, Y.; Ling, S.; Chen, W.; Buehler, M. J.; Kaplan, D. L. Exploring nature’s toolbox: the role of biopolymers in sustainable materials science. Adv. Mater. 2025, 37, e2507822.

96. Zhang, H.; Song, Y.; Hou, Z.; Miret, S.; Liu, B. HoneyComb: a flexible LLM-based agent system for materials science. arXiv 2024, arXiv:2409.00135. Available online: https://arxiv.org/abs/2409.00135 (accessed 12 December 2025).

97. Bik, E. M. Publishing negative results is good for science. Access Microbiol. 2024, 6, 000792.

98. Echevarría, L.; Malerba, A.; Arechavala-Gomeza, V. Researcher’s perceptions on publishing “negative” results and open access. Nucleic Acid Ther. 2021, 31, 185-9.

99. Taragin, M. I. Learning from negative findings. Isr. J. Health Policy Res. 2019, 8, 38.

100. Urbina, F.; Lentzos, F.; Invernizzi, C.; Ekins, S. Dual use of artificial intelligence-powered drug discovery. Nat. Mach. Intell. 2022, 4, 189-91.