REFERENCES

1. Bin, T.; Yan, H.; Wang, N.; Nikolić, M. N.; Yao, J.; Zhang, T. A survey on the visual perception of humanoid robot. Biomim. Intell. Robot. 2025, 5, 100197.

2. Wang, J.; Wang, C.; Chen, W.; Dou, Q.; Chi, W. Embracing the future: the rise of humanoid robots and embodied AI. Intell. Robot. 2024, 4, 196-9.

3. Kaneko, K.; Kaminaga, H.; Sakaguchi, T.; et al. Humanoid robot HRP-5P: an electrically actuated humanoid robot with high-power and wide-range joints. IEEE. Robot. Autom. Lett. 2019, 4, 1431-8.

4. Kakiuchi, Y.; Kojima, K.; Kuroiwa, E.; et al. Development of humanoid robot system for disaster response through team NEDO-JSK's approach to DARPA Robotics Challenge Finals. In 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea. Nov 03-05, 2015. IEEE; 2015. pp. 805–10.

5. Radford, N. A.; Strawser, P.; Hambuchen, K.; et al. Valkyrie: NASA's first bipedal humanoid robot. J. Field. Robot. 2015, 32, 397-419.

6. Faraji, S.; Razavi, H.; Ijspeert, A. J. Bipedal walking and push recovery with a stepping strategy based on time-projection control. Int. J. Robot. Res. 2019, 38, 587-611.

7. Boston Dynamics. Picking up momentum. https://bostondynamics.com/blog/picking-up-momentum/. (accessed 14 Jul 2025).

8. Hirose, M.; Ogawa, K. Honda humanoid robots development. Philos. Trans. R. Soc. A. 2007, 365, 11-9.

9. Shamsuddoha, M.; Nasir, T.; Fawaaz, M. S. Humanoid robots like Tesla Optimus and the future of supply chains: enhancing efficiency, sustainability, and workforce dynamics. Automation 2025, 6, 9.

10. Piperakis, S.; Orfanoudakis, E.; Lagoudakis, M. G. Predictive control for dynamic locomotion of real humanoid robots. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, USA. Sep 14-18, 2014. IEEE; 2014. pp. 4036–43.

11. Kajita, S.; Kanehiro, F.; Kaneko, K.; et al. Biped walking pattern generation by using preview control of zero-moment point. In 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), Taipei, Taiwan. Sep 14-19, 2003. IEEE; 2003. pp. 1620-6.

12. Kajita, S.; Tani, K. Study of dynamic biped locomotion on rugged terrain-derivation and application of the linear inverted pendulum mode. In Proceedings. 1991 IEEE International Conference on Robotics and Automation, Sacramento, USA. Apr 09-11, 1991. IEEE; 1991. pp. 1405-11.

13. Kajita, S.; Kanehiro, F.; Kaneko, K.; Yokoi, K.; Hirukawa, H. The 3D linear inverted pendulum mode: a simple modeling for a biped walking pattern generation. In Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the the Next Millennium (Cat. No. 01CH37180), Maui, USA. Oct 29 - Nov 03, 2001. IEEE; 2001. pp. 239–46.

14. Shafii, N.; Lau, N.; Reis, L. P. Learning a fast walk based on ZMP control and hip height movement. In 2014 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Espinho, Portugal. May 14-15, 2014. IEEE; 2014. pp. 181–6.

15. Urbann, O.; Schwarz, I.; Hofmann, M. Flexible linear inverted pendulum model for cost-effective biped robots. In 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea. Nov 03-05, 2015. IEEE; 2015. pp. 128–31.

16. Bae, H.; Jeong, H.; Oh, J.; Lee, K.; Oh, J. -H. Humanoid robot COM kinematics estimation based on compliant inverted pendulum model and robust state estimator. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain. Oct 01-05, 2018. IEEE; 2018. pp. 747–53.

17. Pratt, J.; Carff, J.; Drakunov, S.; Goswami, A. Capture point: a step toward humanoid push recovery. In 2006 6th IEEE-RAS International Conference on Humanoid Robots, Genova, Italy. Dec 04-06, 2006. IEEE; 2006. pp. 200–7.

18. Kashyap, A. K.; Parhi, D. R. Dynamic walking of humanoid robot on flat surface using amplified LIPM plus flywheel model. Int. J. Intell. Unmanned. Syst. 2022, 10, 316-29.

19. Kashyap, A. K.; Parhi, D. R. Optimization of stability of humanoid robot NAO using ant colony optimization tuned MPC controller for uneven path. Soft. Comput. 2021, 25, 5131-50.

20. Kasaei, M.; Lau, N.; Pereira, A. An optimal closed-loop framework to develop stable walking for humanoid robot. In 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Torres Vedras, Portugal. Apr 25-27, 2018. IEEE; 2018. pp. 30–5.

21. Kasaei, S. M.; Lau, N.; Pereira, A.; Shahri, E. A reliable model-based walking engine with push recovery capability. In 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Coimbra, Portugal. Apr 26-28, 2017. IEEE; 2017. pp. 122–7.

22. Blickhan, R.; Full, R. J. Similarity in multilegged locomotion: bouncing like a monopode. J. Comp. Physiol. A. 1993, 173, 509-17.

23. Wensing, P. M.; Orin, D. E. High-speed humanoid running through control with a 3D-SLIP model. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan. Nov 03-07, 2013. IEEE; 2013. pp. 5134–40.

24. Xiong, X.; Ames, A. D. Dynamic and versatile humanoid walking via embedding 3D actuated SLIP model with hybrid LIP based stepping. IEEE. Robot. Autom. Lett. 2020, 5, 6286-93.

25. Shahbazi, M.; Babuška, R.; Lopes, G. A. D. Unified modeling and control of walking and running on the spring-loaded inverted pendulum. IEEE. Trans. Robot. 2016, 32, 1178-95.

26. Kuo, A. D.; Donelan, J. M.; Ruina, A. Energetic consequences of walking like an inverted pendulum: step-to-step transitions. Exerc. Sport. Sci. Rev. 2005, 33, 88-97.

27. Zhang, C.; Liu, T.; Song, S.; Wang, J.; Meng, M. Q. H. Dynamic wheeled motion control of wheel-biped transformable robots. Biomim. Intell. Robot. 2022, 2, 100027.

28. Kwon, S.; Oh, Y. Real-time estimation algorithm for the center of mass of a bipedal robot with flexible inverted pendulum model. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, USA. Oct 10-15, 2009. IEEE; 2009. pp. 5463–8.

29. Jo, J.; Park, G.; Oh, Y. Robust walking stabilization strategy of humanoid robots on uneven terrain via QP-based impedance/admittance control. Robot. Auton. Syst. 2022, 154, 104148.

30. Mahapatro, A.; Dhal, P. R.; Parhi, D. R.; Muni, M. K.; Sahu, C.; Patra, S. K. Towards stabilization and navigational analysis of humanoids in complex arena using a hybridized fuzzy embedded PID controller approach. Expert. Syst. Appl. 2023, 213, 119251.

31. Zhou, Y.; Sun, Z.; Chen, B.; Huang, G.; Wu, X.; Wang, T. Human gait tracking for rehabilitation exoskeleton: adaptive fractional order sliding mode control approach. Intell. Robot. 2023, 3, 95-112.

32. Erez, T.; Lowrey, K.; Tassa, Y.; Kumar, V.; Kolev, S.; Todorov, E. An integrated system for real-time model predictive control of humanoid robots. In 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Atlanta, USA. Oct 15-17, 2013. IEEE; 2013. pp. 292–9.

33. Ishihara, K.; Morimoto, J. MPC for humanoid control. In Robotics Retrospectives-Workshop at RSS 2020; 2020. https://openreview.net/forum?id=-xj822-1KE. (accessed 14 Jul 2025).

34. Scianca, N.; Cognetti, M.; De Simone, D.; Lanari, L.; Oriolo, G. Intrinsically stable MPC for humanoid gait generation. In 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico. Nov 15-17, 2016. IEEE; 2016. pp. 601–6.

35. García, G.; Griffin, R.; Pratt, J. MPC-based locomotion control of bipedal robots with line-feet contact using centroidal dynamics. In 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids), Munich, Germany. Jul 19-21, 2021. IEEE; 2021. pp. 276–82.

36. Kim, D. W.; Kim, N.-H.; Park, G.-T. ZMP based neural network inspired humanoid robot control. Nonlinear. Dyn. 2012, 67, 793-806.

37. Kagami, S.; Nishiwaki, K.; Kitagawa, T.; Sugihara, T.; Inaba, M.; Inoue, H. A fast generation method of a dynamically stable humanoid robot trajectory with enhanced ZMP constraint. In Proceedings of the IEEE International Conference on Humanoid Robotics (Humanoid2000). 2000. http://www.humanoids2000.org/31.pdf. (accessed 14 Jul 2025).

38. Smaldone, F. M.; Scianca, N.; Modugno, V.; Lanari, L.; Oriolo, G. ZMP constraint restriction for robust gait generation in humanoids. In 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France. May 31 - Aug 31, 2020. IEEE; 2020. pp. 8739–45.

39. Lloyd, S.; Irani, R. A.; Ahmadi, M. Fast and robust inverse kinematics of serial robots using Halley's method. IEEE. Trans. Robot. 2022, 38, 2768-80.

40. Dou, R.; Yu, S.; Li, W.; et al. Inverse kinematics for a 7-DOF humanoid robotic arm with joint limit and end pose coupling. Mech. Mach. Theory. 2022, 169, 104637.

41. Ma, H.; Song, A.; Li, J.; Ge, L.; Fu, C.; Zhang, G. Legged odometry based on fusion of leg kinematics and IMU information in a humanoid robot. Biomim. Intell. Robot. 2025, 5, 100196.

42. Ferrolho, H.; Ivan, V.; Merkt, W.; Havoutis, I.; Vijayakumar, S. Inverse dynamics vs. forward dynamics in direct transcription formulations for trajectory optimization. In 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China. May 30 - Jun 05, 2021. IEEE; 2021. pp. 12752–8.

43. Reher, J.; Ames, A. D. Inverse dynamics control of compliant hybrid zero dynamic walking. In 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China. May 30 - Jun 05, 2021. IEEE; 2021. pp. 2040–7.

44. Tang, H. A fuzzy PID control system. Electr. Mach. Control. 2005.

45. Guo, Q.; Jiang, D. Moving process PID control in robots' field. In 2012 International Conference on Control Engineering and Communication Technology, Shenyang, China. Dec 07-09, 2012. IEEE; 2012. pp. 386–9.

46. Zeng, J.; Wang, L. G.; Ye, M. J.; Hu, C. H.; Ye, T. F. Research of several PID algorithms based on MATLAB. Adv. Mater. Res. 2013, 760, 1075-9.

47. Tong, L.; Cui, D.; Wang, C.; Peng, L. A novel zero-force control framework for post-stroke rehabilitation training based on fuzzy-PID method. Intell. Robot. 2024, 4, 125-45.

48. Alasiry, A. H.; Satria, N. F.; Sugiarto, A. Balance control of humanoid dancing robot ERISA while walking on sloped surface using PID. In 2018 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia. Nov 21-22, 2018. IEEE; 2018. pp. 577–81.

49. Wang, H.; Chen, Q. Adaptive robust control for biped walking under uncertain external forces. Intell. Robot. 2023, 3, 479-94.

50. Sreenath, K.; Park, H. -W.; Poulakakis, I.; Grizzle, J. W. A compliant hybrid zero dynamics controller for stable, efficient and fast bipedal walking on MABEL. Int. J. Robot. Res. 2011, 30, 1170–93.

51. Reher, J.; Ma, W. -L.; Ames, A. D. Dynamic walking with compliance on a cassie bipedal robot. In 2019 18th European Control Conference (ECC), Naples, Italy. Jun 25-28, 2019. IEEE; 2019. pp. 2589–95.

52. Chevallereau, C.; Abba, G.; Aoustin, Y.; et al. Rabbit: a testbed for advanced control theory. IEEE. Control. Syst. Mag. 2003, 23, 57-79.

53. Hereid, A.; Cousineau, E. A.; Hubicki, C. M.; Ames, A. D. 3D dynamic walking with underactuated humanoid robots: a direct collocation framework for optimizing hybrid zero dynamics. In 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden. May 16-21, 2016. IEEE; 2016. pp. 1447–54.

54. Li, J.; Nguyen, Q. Force-and-moment-based model predictive control for achieving highly dynamic locomotion on bipedal robots. In 2021 60th IEEE Conference on Decision and Control (CDC), Austin, USA. Dec 14-17, 2021. IEEE; 2021. pp. 1024–30.

55. Khatib, O.; Sentis, L.; Park, J.; Warren, J. Whole-body dynamic behavior and control of human-like robots. Int. J. Humanoid. Robot. 2004, 01, 29-43.

56. Sentis, L.; Khatib, O. Synthesis of whole-body behaviors through hierarchical control of behavioral primitives. Int. J. Humanoid. Robot. 2005, 02, 505-18.

57. Moro, F. L.; Sentis, L. Whole-body control of humanoid robots. In: Goswami, A.; Vadakkepat, P.; editors. Humanoid robotics: a reference. Springer, Dordrecht; 2018. pp. 1–23.

58. Kim, D.; Di Carlo, J.; Katz, B.; Bledt, G.; Kim, S. Highly dynamic quadruped locomotion via whole-body impulse control and model predictive control. arXiv 2019, arXiv: 1909.06586. https://doi.org/10.48550/arXiv.1909.06586. (accessed 14 Jul 2025).

59. Zhu, Z.; Ding, W.; Zhu, W.; et al. NP-MBO: a newton predictor-based momentum observer for interaction force estimation of legged robots. Biomim. Intell. Robot. 2024, 4, 100160.

60. Zhang, H.; He, L.; Wang, D. Deep reinforcement learning for real-world quadrupedal locomotion: a comprehensive review. Intell. Robot. 2022, 2, 275-97.

61. Peters, J.; Vijayakumar, S.; Schaal, S. Reinforcement learning for humanoid robotics. In Proceedings of the Third IEEE-RAS International Conference on Humanoid Robots. 2003. pp. 1–20. https://www.ias.informatik.tu-darmstadt.de/uploads/Team/JanPeters/peters-ICHR2003.pdf. (accessed 14 Jul 2025).

62. Hester, T.; Quinlan, M.; Stone, P. Generalized model learning for Reinforcement Learning on a humanoid robot. In 2010 IEEE International Conference on Robotics and Automation, Anchorage, USA. May 03-07, 2010. IEEE; 2010. pp. 2369–74.

63. Danel, M. Reinforcement learning for humanoid robot control. 2017. https://poster.fel.cvut.cz/poster2017/proceedings/Poster_2017/Section_IC/IC_021_Danel.pdf. (accessed 14 Jul 2025).

64. Li, Z.; Cheng, X.; Peng, X. B.; et al. Reinforcement learning for robust parameterized locomotion control of bipedal robots. In 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China. May 30 - Jun 05, 2021. IEEE; 2021. pp. 2811–7.

65. Le, T. D.; Le, A. T.; Nguyen, D. T. Model-based Q-learning for humanoid robots. In 2017 18th International Conference on Advanced Robotics (ICAR), Hong Kong, China. Jul 10-12, 2017. IEEE; 2017. pp. 608–13.

66. Liu, Y.; Zhou, M.; Guo, X. An improved Q-learning algorithm for human-robot collaboration two-sided disassembly line balancing problems. In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic. Oct 09-12, 2022. IEEE; 2022. pp. 568–73.

67. Tai, L.; Liu, M. A robot exploration strategy based on Q-learning network. In 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), Angkor Wat, Cambodia. Jun 06-10, 2016. IEEE; 2016. pp. 57–62.

68. Tavakoli, F.; Derhami, V.; Kamalinejad, A. Control of humanoid robot walking by Fuzzy Sarsa Learning. In 2015 3rd RSI International Conference on Robotics and Mechatronics (ICROM), Tehran, Iran. Oct 07-09, 2015. IEEE; 2015. pp. 234–9.

69. Xu, W.; Yu, B.; Cheng, L.; Li, Y.; Cao, X. Multi-fuzzy Sarsa learning-based sit-to-stand motion control for walking-support assistive robot. Int. J. Adv. Robot. Syst. 2021, 18.

70. Anas, H.; Ong, W. H.; Malik, O. A. Comparison of deep Q-learning, Q-learning and SARSA reinforced learning for robot local navigation. In Robot Intelligence Technology and Applications 6. Cham: Springer International Publishing; 2022. pp. 443–54.

71. Huang, J.; Zhang, Z.; Ruan, X. An improved Dyna-Q algorithm inspired by the forward prediction mechanism in the rat brain for mobile robot path planning. Biomimetics 2024, 9, 315.

72. Pei, M.; An, H.; Liu, B.; Wang, C. An improved Dyna-Q algorithm for mobile robot path planning in unknown dynamic environment. IEEE. Trans. Syst. Man. Cybern. Syst. 2022, 52, 4415-25.

73. García, J.; Shafie, D. Teaching a humanoid robot to walk faster through Safe Reinforcement Learning. Eng. Appl. Artif. Intell. 2020, 88, 103360.

74. Zhao, X.; Han, S.; Tao, B.; Yin, Z.; Ding, H. Model-based actor-critic learning of robotic impedance control in complex interactive environment. IEEE. Trans. Ind. Electron. 2022, 69, 13225-35.

75. Wu, R.; Yao, Z.; Si, J.; Huang, H. H. Robotic knee tracking control to mimic the intact human knee profile based on actor-critic reinforcement learning. IEEE/CAA. J. Autom. Sinica. 2022, 9, 19-30.

76. Zhao, X.; Tao, B.; Qian, L.; Ding, H. Model-based actor-critic learning for optimal tracking control of robots with input saturation. IEEE. Trans. Ind. Electron. 2021, 68, 5046-56.

77. Gu, Y.; Zhu, Z.; Lv, J.; Shi, L.; Hou, Z.; Xu, S. DM-DQN: dueling munchausen deep Q network for robot path planning. Complex. Intell. Syst. 2023, 9, 4287-300.

78. da Silva, I. J.; Perico, D. H.; Homem, T. P. D.; da Costa Bianchi, R. A. Deep reinforcement learning for a humanoid robot soccer player. J. Intell. Robot. Syst. 2021, 102, 69.

79. Kuo, P.-H.; Hu, J.; Lin, S.-T.; Hsu, P.-W. Fuzzy deep deterministic policy gradient-based motion controller for humanoid robot. Int. J. Fuzzy. Syst. 2022, 24, 2476-92.

80. Tiong, T.; Saad, I.; Teo, K. T. K.; Lago, H. Deep reinforcement learning with robust deep deterministic policy gradient. In 2020 2nd International Conference on Electrical, Control and Instrumentation Engineering (ICECIE), Kuala Lumpur, Malaysia. Nov 28, 2020. IEEE; 2020. pp. 1–5.

81. Kuo, P.-H.; Yang, W.-C.; Hsu, P.-W.; Chen, K.-L. Intelligent proximal-policy-optimization-based decision-making system for humanoid robots. Adv. Eng. Inform. 2023, 56, 102009.

82. Melo, L. C.; Melo, D. C.; Maximo, M. R. Learning humanoid robot running motions with symmetry incentive through proximal policy optimization. J. Intell. Robot. Syst. 2021, 102, 54.

83. Liu, R.; Wang, J.; Chen, Y.; Liu, Y.; Wang, Y.; Gu, J. Proximal policy optimization with time-varying muscle synergy for the control of an upper limb musculoskeletal system. IEEE. Trans. Autom. Sci. Eng. 2024, 21, 1929-40.

84. Hayamizu, Y.; Amiri, S.; Chandan, K.; Zhang, S.; Takadama, K. Guided Dyna-Q for mobile robot exploration and navigation. arXiv 2020, arXiv: 2004.11456. https://doi.org/10.48550/arXiv.2004.11456. (accessed 14 Jul 2025).

85. Zhao, Z.; Huang, H.; Sun, S.; Jin, J.; Lu, W. Reinforcement learning for dynamic task execution: a robotic arm for playing table tennis. In 2024 IEEE International Conference on Robotics and Biomimetics (ROBIO), Bangkok, Thailand. Dec 10-14, 2024. IEEE; 2024. pp. 608–13.

86. Zhao, Z.; Huang, H.; Sun, S.; Li, C.; Xu, W. Fusing dynamics and reinforcement learning for control strategy: achieving precise gait and high robustness in humanoid robot locomotion. In 2024 IEEE-RAS 23rd International Conference on Humanoid Robots (Humanoids), Nancy, France. Nov 22-24, 2024. IEEE; 2024. pp. 1072–9.

87. Zhao, Z.; Sun, S.; Li, C.; Huang, H.; Xu, W. Design and control of continuous gait for humanoid robots: jumping, walking, and running based on reinforcement learning and adaptive motion functions. In Intelligent Robotics and Applications. ICIRA 2024. Singapore: Springer Nature Singapore; 2025. pp. 159–73.

88. Zhao, Z.; Sun, S.; Huang, H.; Gao, Q.; Xu, W. Design and control of continuous jumping gaits for humanoid robots based on motion function and reinforcement learning. Procedia. Comput. Sci. 2024, 250, 51-7.

89. Huang, H.; Sun, S.; Zhao, Z.; Huang, H.; Shen, C.; Xu, W. PTRL: prior transfer deep reinforcement learning for legged robots locomotion. arXiv 2025, arXiv: 2504.05629. https://doi.org/10.48550/arXiv.2504.05629. (accessed 14 Jul 2025).

90. Liu, Y.; Palmieri, L.; Georgievski, I.; Aiello, M. Human-flow-aware long-term mobile robot task planning based on hierarchical reinforcement learning. IEEE. Robot. Autom. Lett. 2023, 8, 4068-75.

91. Yang, X.; Ji, Z.; Wu, J.; et al. Hierarchical reinforcement learning with universal policies for multistep robotic manipulation. IEEE. Trans. Neural. Netw. Learn. Syst. 2022, 33, 4727-41.

92. Eppe, M.; Gumbsch, C.; Kerzel, M.; Nguyen, P. D.; Butz, M. V.; Wermter, S. Intelligent problem-solving as integrated hierarchical reinforcement learning. Nat. Mach. Intell. 2022, 4, 11-20.

93. Haarnoja, T.; Moran, B.; Lever, G.; et al. Learning agile soccer skills for a bipedal robot with deep reinforcement learning. Sci. Robot. 2024, 9, eadi8022.

94. Wei, W.; Wang, Z.; Xie, A.; Wu, J.; Xiong, R.; Zhu, Q. Learning gait-conditioned bipedal locomotion with motor adaptation. In 2023 IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids), Austin, USA. Dec 12-14, 2023. IEEE; 2023. pp. 1–7.

95. Han, L.; Zhu, Q.; Sheng, J.; et al. Lifelike agility and play in quadrupedal robots using reinforcement learning and generative pre-trained models. Nat. Mach. Intell. 2024, 6, 787-98.

96. Lee, J.; Hwangbo, J.; Wellhausen, L.; Koltun, V.; Hutter, M. Learning quadrupedal locomotion over challenging terrain. Sci. Robot. 2020, 5, eabc5986.

97. Peng, T.; Bao, L.; Humphreys, J.; Delfaki, A. M.; Kanoulas, D.; Zhou, C. Learning bipedal walking on a quadruped robot via adversarial motion priors. In Towards Autonomous Robotic Systems. Cham: Springer Nature Switzerland; 2025. pp. 118–29.

98. Hua, J.; Zeng, L.; Li, G.; Ju, Z. Learning for a robot: deep reinforcement learning, imitation learning, transfer learning. Sensors 2021, 21, 1278.

99. Hussein, A.; Gaber, M. M.; Elyan, E.; Jayne, C. Imitation learning: a survey of learning methods. ACM. Comput. Surv. 2017, 50, 1-35.

100. Argall, B. D.; Chernova, S.; Veloso, M.; Browning, B. A survey of robot learning from demonstration. Robot. Auton. Syst. 2009, 57, 469-83.

101. Ng, A. Y.; Russell, S. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000); 2000. https://www.cl.cam.ac.uk/ey204/teaching/ACS/R244_2022_2023/papers/NG_ICML_2000.pdf. (accessed 14 Jul 2025).

102. Ho, J.; Ermon, S. Generative adversarial imitation learning. In 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 2016. https://proceedings.neurips.cc/paper_files/paper/2016/file/cc7e2b878868cbae992d1fb743995d8f-Paper.pdf. (accessed 14 Jul 2025).

103. Bong, J. H.; Jung, S.; Kim, J.; Park, S. Standing balance control of a bipedal robot based on behavior cloning. Biomimetics 2022, 7, 232.

104. Florence, P.; Lynch, C.; Zeng, A.; et al. Implicit behavioral cloning. In Proceedings of the 5th Conference on Robot Learning. PMLR; 2022. pp. 158–68. https://proceedings.mlr.press/v164/florence22a.html. (accessed 14 Jul 2025).

105. Yang, C.; Yuan, K.; Heng, S.; Komura, T.; Li, Z. Learning natural locomotion behaviors for humanoid robots using human bias. IEEE. Robot. Autom. Lett. 2020, 5, 2610-7.

106. Gan, L.; Grizzle, J. W.; Eustice, R. M.; Ghaffari, M. Energy-based legged robots terrain traversability modeling via deep inverse reinforcement learning. IEEE. Robot. Autom. Lett. 2022, 7, 8807-14.

107. Liu, W.; Zhong, J.; Wu, R.; Fylstra, B. L.; Si, J.; Huang, H. H. Inferring human-robot performance objectives during locomotion using inverse reinforcement learning and inverse optimal control. IEEE. Robot. Autom. Lett. 2022, 7, 2549-56.

108. Kubík, J.; Čížek, P.; Szadkowski, R.; Faigl, J. Experimental leg inverse dynamics learning of multi-legged walking robot. In Modelling and Simulation for Autonomous Systems. Cham: Springer International Publishing; 2021. pp. 154–68.

109. Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; et al. Generative adversarial nets. In Advances in Neural Information Processing Systems. 2014. https://proceedings.neurips.cc/paper_files/paper/2014/file/f033ed80deb0234979a61f95710dbe25-Paper.pdf. (accessed 14 Jul 2025).

110. Peng, X. B.; Ma, Z.; Abbeel, P.; Levine, S.; Kanazawa, A. AMP: adversarial motion priors for stylized physics-based character control. ACM. Trans. Graph. 2021, 40, 1-20.

111. Peng, X. B.; Guo, Y.; Halper, L.; Levine, S.; Fidler, S. ASE: large-scale reusable adversarial skill embeddings for physically simulated characters. ACM. Trans. Graph. 2022, 41, 1-17.

112. Dai, H.; Tedrake, R. Planning robust walking motion on uneven terrain via convex optimization. In 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico. Nov 15-17, 2016. IEEE; 2016. pp. 579–86.

113. Escontrela, A.; Peng, X. B.; Yu, W.; et al. Adversarial motion priors make good substitutes for complex reward functions. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan. Oct 23-27, 2022. IEEE; 2022. pp. 25–32.

114. Zhang, Q.; Cui, P.; Yan, D.; et al. Whole-body humanoid robot locomotion with human reference. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates. Oct 14-18, 2024. IEEE; 2024. pp. 11225–31.

115. Cheng, X.; Ji, Y.; Chen, J.; Yang, R.; Yang, G.; Wang, X. Expressive whole-body control for humanoid robots. arXiv 2024, arXiv: 2402.16796. https://doi.org/10.48550/arXiv.2402.16796. (accessed 14 Jul 2025).

116. Lai, C. -M.; Wang, H. -C.; Hsieh, P. -C.; Wang, Y. -C. F.; Chen, M. H.; Sun, S. H. Diffusion-reward adversarial imitation learning. In 38th Conference on Neural Information Processing Systems (NeurIPS 2024). Curran Associates, Inc.; 2024. https://proceedings.neurips.cc/paper_files/paper/2024/file/ad47b1801557e4be37d30baf623de426-Paper-Conference.pdf. (accessed 14 Jul 2025).

117. Li, C.; Blaes, S.; Kolev, P.; Vlastelica, M.; Frey, J.; Martius, G. Versatile skill control via self-supervised adversarial imitation of unlabeled mixed motions. In 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK. May 29 - Jun 02, 2023. IEEE; 2023. pp. 2944–50.

118. Peng, X. B.; Abbeel, P.; Levine, S.; van de Panne, M. DeepMimic: example-guided deep reinforcement learning of physics-based character skills. ACM. Trans. Graph. 2018, 37, 1-14.

119. Bergamin, K.; Clavet, S.; Holden, D.; Forbes, J. R. DReCon: data-driven responsive control of physics-based characters. ACM. Trans. Graph. 2019, 38, 1-11.

120. Finn, C.; Tan, X. Y.; Duan, Y.; Darrell, T.; Levine, S.; Abbeel, P. Deep spatial autoencoders for visuomotor learning. In 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden. May 16-21, 2016. IEEE; 2016. pp. 512–9.

121. Lee, Y.; Wampler, K.; Bernstein, G.; Popović, J.; Popović, Z. Motion fields for interactive character locomotion. In ACM SIGGRAPH Asia 2010 Papers, New York, USA. Association for Computing Machinery; 2010. pp. 1–8.

122. Levine, S.; Wang, J. M.; Haraux, A.; Popović, Z.; Koltun, V. Continuous character control with low-dimensional embeddings. ACM. Trans. Graph. 2012, 31, 1-10.

123. Starke, P.; Starke, S.; Komura, T.; Steinicke, F. Motion in-betweening with phase manifolds. Proc. ACM. Comput. Graph. Interact. Tech. 2023, 6, 1-17.

124. Shafiee, M.; Bellegarda, G.; Ijspeert, A. Viability leads to the emergence of gait transitions in learning agile quadrupedal locomotion on challenging terrains. Nat. Commun. 2024, 15, 3073.

125. Aswin Nahrendra, I. M.; Yu, B.; Myung, H. DreamWaQ: learning robust quadrupedal locomotion with implicit terrain imagination via deep reinforcement learning. In 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK. May 29 - Jun 02, 2023. IEEE; 2023. pp. 5078–84.

126. Berseth, G.; Golemo, F.; Pal, C. Towards learning to imitate from a single video demonstration. J. Mach. Learn. Res. 2023, 24, 1-26.

127. Kim, N. H.; Xie, Z.; van de Panne, M. Learning to correspond dynamical systems. In Proceedings of the 2nd Conference on Learning for Dynamics and Control. PMLR; 2020. pp. 105–17. https://proceedings.mlr.press/v120/kim20a.html. (accessed 14 Jul 2025).

128. Starke, S.; Mason, I.; Komura, T. DeepPhase: periodic autoencoders for learning motion phase manifolds. ACM. Trans. Graph. 2022, 41, 1-13.

129. Li, C.; Stanger-Jones, E.; Heim, S.; bae Kim, S. FLD: Fourier latent dynamics for structured motion representation and learning. In The Twelfth International Conference on Learning Representations. 2024. https://openreview.net/forum?id=xsd2llWYSA. (accessed 14 Jul 2025).

130. Firoozi, R.; Tucker, J.; Tian, S.; et al. Foundation models in robotics: applications, challenges, and the future. arXiv 2023, arXiv: 2312.07843. https://doi.org/10.48550/arXiv.2312.07843. (accessed 14 Jul 2025).

131. Devlin, J.; Chang, M. -W.; Lee, K.; Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. Association for Computational Linguistics; 2019. pp. 4171–86.

132. Brown, T.; Mann, B.; Ryder, N.; et al. Language models are few-shot learners. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Curran Associates, Inc.; 2020. pp. 1877–901. https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf. (accessed 14 Jul 2025).

133. Xu, F. F.; Alon, U.; Neubig, G.; Hellendoorn, V. J. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, New York, USA. Association for Computing Machinery; 2022. pp. 1–10.

134. Egli, A. ChatGPT, GPT-4, and other large language models: the next revolution for clinical microbiology? Clin. Infect. Dis. 2023, 77, 1322-8.

135. Kirillov, A.; Mintun, E.; Ravi, N.; et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023. pp. 4015–26. https://openaccess.thecvf.com/content/ICCV2023/html/Kirillov_Segment_Anything_ICCV_2023_paper.html. (accessed 14 Jul 2025).

136. Jhajaria, S.; Kaur, D. Study and comparative analysis of ChatGPT, GPT and DALL-E2. In 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), Delhi, India. Jul 06-08, 2023. IEEE; 2023. pp. 1–5.

137. Radford, A.; Kim, J. W.; Hallacy, C.; et al. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning. PMLR; 2021. pp. 8748–63. https://proceedings.mlr.press/v139/radford21a.html. (accessed 14 Jul 2025).

138. Wang, J.; Shi, E.; Hu, H.; et al. Large language models for robotics: opportunities, challenges, and perspectives. J. Autom. Intell. 2025, 4, 52-64.

139. Kumar, K. N.; Essa, I.; Ha, S. Words into action: learning diverse humanoid robot behaviors using language guided iterative motion refinement. In 2nd Workshop on Language and Robot Learning: Language as Grounding. 2023. https://openreview.net/forum?id=K62hUwNqCn. (accessed 14 Jul 2025).

140. Chen, L.-H.; Lu, S.; Zeng, A.; et al. MotionLLM: understanding human behaviors from human motions and videos. CoRR 2024. https://openreview.net/forum?id=B7IGpMVPZU. (accessed 14 Jul 2025).

141. Shek, C. L.; Wu, X.; Suttle, W. A.; et al. LANCAR: leveraging language for context-aware robot locomotion in unstructured environments. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates. Oct 14-18, 2024. IEEE; 2024. pp. 9612–9.

142. Wang, Y. -J.; Zhang, B.; Chen, J.; Sreenath, K. Prompt a robot to walk with large language models. In 2024 IEEE 63rd Conference on Decision and Control (CDC), Milan, Italy. Dec 16-19, 2024. IEEE; 2024. pp. 1531–8.

143. Ma, Y. J.; Liang, W.; Wang, G.; et al. Eureka: human-level reward design via coding large language models. In The Twelfth International Conference on Learning Representations. 2024. https://openreview.net/forum?id=IEduRUO55F. (accessed 14 Jul 2025).

144. Yu, W.; Gileadi, N.; Fu, C.; et al. Language to rewards for robotic skill synthesis. In Proceedings of The 7th Conference on Robot Learning. PMLR; 2023. pp. 374–404. https://proceedings.mlr.press/v229/yu23a.html. (accessed 14 Jul 2025).

145. Song, J.; Zhou, Z.; Liu, J.; Fang, C.; Shu, Z.; Ma, L. Self-refined large language model as automated reward function designer for deep reinforcement learning in robotics. CoRR 2023.

146. Sun, S.; Li, C.; Zhao, Z.; Huang, H.; Xu, W. Leveraging large language models for comprehensive locomotion control in humanoid robots design. Biomim. Intell. Robot. 2024, 4, 100187.

147. Ingelhag, N.; Munkeby, J.; van Haastregt, J.; Varava, A.; Welle, M. C.; Kragic, D. A robotic skill learning system built upon diffusion policies and foundation models. In 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Pasadena, USA. Aug 26-30, 2024. IEEE; 2024. pp. 748–54.

148. Lykov, A.; Litvinov, M.; Konenkov, M.; et al. CognitiveDog: large multimodal model based system to translate vision and language into action of quadruped robot. In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, New York, USA. Association for Computing Machinery; 2024. pp. 712–6.

149. Li, Y.; Li, J.; Fu, W.; Wu, Y. Learning agile bipedal motions on a quadrupedal robot. In 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan. May 13-17, 2024. IEEE; 2024. pp. 9735–42.

150. Xiao, Z.; Wang, T.; Wang, J.; et al. Unified human-scene interaction via prompted chain-of-contacts. In The Twelfth International Conference on Learning Representations. 2024. https://openreview.net/forum?id=1vCnDyQkjg. (accessed 14 Jul 2025).

151. Ouyang, Y.; Li, J.; Li, Y.; et al. Long-horizon locomotion and manipulation on a quadrupedal robot with large language models. arXiv 2024, arXiv: 2404.05291. https://doi.org/10.48550/arXiv.2404.05291. (accessed 14 Jul 2025).

152. Bärmann, L.; Kartmann, R.; Peller-Konrad, F.; Niehues, J.; Waibel, A.; Asfour, T. Incremental learning of humanoid robot behavior from natural interaction and large language models. Front. Robot. AI. 2024, 11, 1455375.

153. Luan, Z.; Lai, Y.; Huang, R.; et al. Automatic robotic development through collaborative framework by large language models. In 2023 China Automation Congress (CAC), Chongqing, China. Nov 17-19, 2023. IEEE; 2023. pp. 7736–41.

154. Tang, Y.; Yu, W.; Tan, J.; et al. SayTap: language to quadrupedal locomotion. In Proceedings of The 7th Conference on Robot Learning. PMLR; 2023. pp. 3556–70. https://proceedings.mlr.press/v229/tang23a.html. (accessed 14 Jul 2025).

155. Xu, M.; Huang, P.; Yu, W.; et al. Creative robot tool use with large language models. arXiv 2023, arXiv: 2310.13065. https://doi.org/10.48550/arXiv.2310.13065. (accessed 14 Jul 2025).

156. Yoshida, T.; Masumori, A.; Ikegami, T. From text to motion: grounding GPT-4 in a humanoid robot "Alter3". Front. Robot. AI. 2025, 12, 1581110.

157. Chen, Y.; Ye, Y.; Chen, Z.; Zhang, C.; Ang, M. H. ARO: large language model supervised robotics Text2Skill autonomous learning. arXiv 2024, arXiv: 2403.15834. https://doi.org/10.48550/arXiv.2403.15834. (accessed 14 Jul 2025).

158. Chu, K.; Zhao, X.; Weber, C.; Li, M.; Lu, W.; Wermter, S. Large language models for orchestrating bimanual robots. In 2024 IEEE-RAS 23rd International Conference on Humanoid Robots (Humanoids), Nancy, France. Nov 22-24, 2024. IEEE; 2024. pp. 328–34.

159. Liang, J.; Xia, F.; Yu, W.; et al. Learning to learn faster from human feedback with language model predictive control. In First Workshop on Vision-Language Models for Navigation and Manipulation at ICRA 2024. 2024. https://openreview.net/forum?id=BdEnrtBlms. (accessed 14 Jul 2025).

160. Bien, S.; Skerlj, J.; Thiel, P.; et al. Human-inspired audiovisual inducement of whole-body responses. In 2023 IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids), Austin, USA. Dec 12-14, 2023. IEEE; 2023. pp. 1–8.

161. Huang, J.; Yong, S.; Ma, X.; et al. An embodied generalist agent in 3D world. In Proceedings of the 41st International Conference on Machine Learning. PMLR; 2024. pp. 20413–51. https://proceedings.mlr.press/v235/huang24ae.html. (accessed 14 Jul 2025).

162. Habekost, J.-G.; Gäde, C.; Allgeuer, P.; Wermter, S. Inverse kinematics for neuro-robotic grasping with humanoid embodied agents. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates. Oct 14-18, 2024. IEEE; 2024. pp. 7315–22.

163. Kim, C. Y.; Lee, C. P.; Mutlu, B. Understanding large-language model (LLM)-powered human-robot interaction. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, New York, USA. Association for Computing Machinery; 2024. pp. 371–80.

164. Wang, H.; Chen, J.; Huang, W.; et al. GRUtopia: dream general robots in a city at scale. CoRR 2024. https://openreview.net/forum?id=T8d6Cd4p6X. (accessed 14 Jul 2025).

165. Miyake, T.; Wang, Y.; Yang, P.; Sugano, S. Feasibility study on parameter adjustment for a humanoid using LLM tailoring physical care. In Social Robotics, Singapore. Springer Nature Singapore; 2024. pp. 230–43.

166. Sun, J.; Zhang, Q.; Duan, Y.; Jiang, X.; Cheng, C.; Xu, R. Prompt, plan, perform: LLM-based humanoid control via quantized imitation learning. In 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan. May 13-17, 2024. IEEE; 2024. pp. 16236–42.

167. Sovukluk, S.; Englsberger, J.; Ott, C. Whole body control formulation for humanoid robots with closed/parallel kinematic chains: kangaroo case study. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, USA. Oct 01-05, 2023. IEEE; 2023. pp. 10390–6.

168. Lee, Y.; Hwang, S.; Park, J. Balancing of humanoid robot using contact force/moment control by task-oriented whole body control framework. Auton. Robot. 2016, 40, 457-72.

169. Yamamoto, K.; Ishigaki, T.; Nakamura, Y. Humanoid motion control by compliance optimization explicitly considering its positive definiteness. IEEE. Trans. Robot. 2022, 38, 1973-89.

170. Wensing, P. M.; Posa, M.; Hu, Y.; Escande, A.; Mansard, N.; Del Prete, A. Optimization-based control for dynamic legged robots. IEEE. Trans. Robot. 2024, 40, 43-63.

171. Li, Q.; Meng, F.; Yu, Z.; Chen, X.; Huang, Q. Dynamic torso compliance control for standing and walking balance of position-controlled humanoid robots. IEEE/ASME. Trans. Mechatron. 2021, 26, 679-88.

172. Onishi, Y.; Kajita, S. Understanding how a 3-dimensional ZMP exactly decouples the horizontal and vertical dynamics of the CoM-ZMP model. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates. Oct 14-18, 2024. IEEE; 2024. pp. 9036–42.

173. Elobaid, M.; Turrisi, G.; Rapetti, L.; et al. Adaptive non-linear centroidal MPC with stability guarantees for robust locomotion of legged robots. IEEE. Robot. Autom. Lett. 2025, 10, 2806-13.

174. Zhu, Z.; Zhang, G.; Li, Y.; et al. Observer-based state feedback model predictive control framework for legged robots. IEEE/ASME. Trans. Mechatron. 2025, 30, 1096-106.

175. Radosavovic, I.; Xiao, T.; Zhang, B.; Darrell, T.; Malik, J.; Sreenath, K. Real-world humanoid locomotion with reinforcement learning. Sci. Robot. 2024, 9, eadi9579.

176. Chen, Y.; Nguyen, Q. Learning agile locomotion and adaptive behaviors via RL-augmented MPC. In 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan. May 13-17, 2024. IEEE; 2024. pp. 11436–42.

177. Liu, L.; Wang, X.; Yang, X.; Liu, H.; Li, J.; Wang, P. Path planning techniques for mobile robots: review and prospect. Expert. Syst. Appl. 2023, 227, 120254.

178. Zhu, H.; Gu, S.; He, L.; Guan, Y.; Zhang, H. Transition analysis and its application to global path determination for a biped climbing robot. Appl. Sci. 2018, 8, 122.

179. Chen, W.; Chi, W.; Ji, S.; et al. A survey of autonomous robots and multi-robot navigation: perception, planning and collaboration. Biomim. Intell. Robot. 2025, 5, 100203.

180. Khairuddin, A. R.; Talib, M. S.; Haron, H. Review on simultaneous localization and mapping (SLAM). In 2015 IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Penang, Malaysia. Nov 27-29, 2015. IEEE; 2015. pp. 85–90.

181. Xu, X.; Zhang, L.; Yang, J.; et al. A review of multi-sensor fusion SLAM systems based on 3D LIDAR. Remote. Sens. 2022, 14, 2835.

182. Mur-Artal, R.; Tardós, J. D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE. Trans. Robot. 2017, 33, 1255-62.

183. Shan, T.; Englot, B.; Ratti, C.; Rus, D. LVI-SAM: tightly-coupled lidar-visual-inertial odometry via smoothing and mapping. In 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China. May 30 - Jun 05, 2021. IEEE; 2021. pp. 5692–8.

184. Cormen, T. H.; Leiserson, C. E.; Rivest, R. L.; Stein, C. Introduction to algorithms. MIT Press; 2022. https://mitpress.mit.edu/9780262046305/introduction-to-algorithms/. (accessed 14 Jul 2025).

185. Yao, J.; Lin, C.; Xie, X.; Wang, A. J.; Hung, C.-C. Path planning for virtual human motion using improved A* star algorithm. In 2010 Seventh International Conference on Information Technology: New Generations, Las Vegas, USA. Apr 12-14, 2010. IEEE; 2010. pp. 1154–8.

186. Vonásek, V.; Faigl, J.; Krajník, T.; Přeučil, L. RRT-path - a guided rapidly exploring random tree. In Robot Motion and Control 2009. Springer London; 2009. pp. 307–16. https://doi.org/10.1007/978-1-84882-985-5_28.

187. Kavraki, L. E.; Svestka, P.; Latombe, J. C.; Overmars, M. H. Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE. Trans. Robot. Autom. 1996, 12, 566-80.

188. Kadry, S.; Alferov, G.; Fedorov, V.; Khokhriakova, A. Path optimization for D-star algorithm modification. AIP. Conf. Proc. 2022, 2425, 080002.

189. Liu, C.; Lee, S.; Varnhagen, S.; Tseng, H. E. Path planning for autonomous vehicles using model predictive control. In 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, USA. Jun 11-14, 2017. IEEE; 2017. pp. 174–9.

190. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. In Proceedings. 1985 IEEE International Conference on Robotics and Automation, St. Louis, USA. Mar 25-28, 1985. IEEE; 1985. pp. 500–5.

191. Molinos, E. J.; Llamazares, Á.; Ocaña, M. Dynamic window based approaches for avoiding obstacles in moving. Robot. Auton. Syst. 2019, 118, 112-30.

192. Ni, J.; Chen, Y.; Tang, G.; Shi, J.; Cao, W.; Shi, P. Deep learning-based scene understanding for autonomous robots: a survey. Intell. Robot. 2023, 3, 374-401.

193. Kuindersma, S.; Deits, R.; Fallon, M.; et al. Optimization-based locomotion planning, estimation, and control design for the atlas humanoid robot. Auton. Robot. 2016, 40, 429-55.

194. Boston Dynamics. https://www.bostondynamics.com. (accessed 14 Jul 2025).

195. Haynes, G. C.; Stager, D.; Stentz, A.; et al. Developing a robust disaster response robot: CHIMP and the robotics challenge. J. Field. Robot. 2017, 34, 281-304.

196. Chestnutt, J.; Lau, M.; Cheung, G.; Kuffner, J.; Hodgins, J.; Kanade, T. Footstep planning for the Honda ASIMO humanoid. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain. Apr 18-22, 2005. IEEE; 2005. pp. 629–34.

197. Delfin, J.; Becerra, H. M.; Arechavaleta, G. Humanoid navigation using a visual memory with obstacle avoidance. Robot. Auton. Syst. 2018, 109, 109-24.

198. Sahu, C.; Kumar, P. B.; Parhi, D. R. An intelligent path planning approach for humanoid robots using adaptive particle swarm optimization. Int. J. Artif. Intell. Tools. 2018, 27, 1850015.

199. Kusuma, M.; Riyanto; Machbub, C. Humanoid robot path planning and rerouting using A-star search algorithm. In 2019 IEEE International Conference on Signals and Systems (ICSigSys), Bandung, Indonesia. Jul 16-18, 2019. IEEE; 2019. pp. 110–5.

200. Kuffner, J.; Nishiwaki, K.; Kagami, S.; Inaba, M.; Inoue, H. Motion planning for humanoid robots. In Robotics Research. The Eleventh International Symposium. Springer Berlin Heidelberg; 2005. pp. 365–74. https://link.springer.com/chapter/10.1007/11008941_39#citeas. (accessed 26 Jun 2025).

201. Wang, J.; Chen, W.; Xiao, X.; et al. A survey of the development of biomimetic intelligence and robotics. Biomim. Intell. Robot. 2021, 1, 100001.

202. Castillo, G. A.; Weng, B.; Yang, S.; Zhang, W.; Hereid, A. Template model inspired task space learning for robust bipedal locomotion. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, USA. Oct 01-05, 2023. IEEE; 2023. pp. 8582–9.

203. Makoviychuk, V.; Wawrzyniak, L.; Guo, Y.; et al. Isaac Gym: high performance GPU based physics simulation for robot learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). 2021. https://openreview.net/forum?id=fgFBtYgJQX_. (accessed 14 Jul 2025).

204. Mittal, M.; Yu, C.; Yu, Q.; et al. Orbit: a unified simulation framework for interactive robot learning environments. IEEE. Robot. Autom. Lett. 2023, 8, 3740-7.

205. Foxman, M. United we stand: platforms, tools and innovation with the unity game engine. Soc. Media. Soc. 2019, 5, 2056305119880177.

206. Todorov, E.; Erez, T.; Tassa, Y. MuJoCo: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal. Oct 07-12, 2012. IEEE; 2012. pp. 5026–33.

207. Zakka, K.; Tabanpour, B.; Liao, Q.; et al. Demonstrating MuJoCo playground. In Robotics: Science and Systems 2025. 2025. https://www.roboticsproceedings.org/rss21/p020.pdf. (accessed 14 Jul 2025).

208. Geng, H.; Wang, F.; Wei, S.; et al. RoboVerse: towards a unified platform, dataset and benchmark for scalable and generalizable robot learning. https://github.com/RoboVerseOrg/RoboVerse. (accessed 14 Jul 2025).

Intelligence & Robotics
ISSN 2770-3541 (Online)

Portico

All published articles are preserved here permanently:

https://www.portico.org/publishers/oae/