Submitted: 23 May 2024
Posted: 30 May 2024
Abstract
Keywords:
1. Introduction
2. Literature Review
- Text communications: The human user uses text to program the cobot and to send it commands and information [11] (Jacomini et al. 2023); the cobot may also use text to communicate messages, explanations, cautionary warnings, and other types of information [12,13] (Scalise et al. 2017; Kontogiorgos, 2023).
- Visual communications: Graphical displays, augmented reality (AR), and vision systems are key components of visual interfaces that enable workers to comprehend and interpret information from cobots efficiently [14,15] (Zieliński et al. 2021; Pascher et al. 2022). Graphical displays can provide real-time feedback on cobot actions, task progress, and system status, enhancing transparency and situational awareness [16] (Eimontaite et al. 2022). Augmented reality overlays digital information onto the physical workspace, offering intuitive guidance for tasks and aiding in error prevention [17] (Carriero et al. 2023). Vision systems, equipped with cameras and sensors, enable cobots to recognize and respond to human gestures, further fostering natural and fluid interaction [18] (Sauer et al. 2021).
- Auditory communications: Human-cobot vocal communication has been a topic of intensive research in recent years [19] (Ionescu & Schlund, 2023). Auditory cues are valuable in environments where visual attention may be divided or compromised [20] (Turri et al. 2021). Sound alerts, spoken instructions, and auditory feedback mechanisms contribute to effective communication between human workers and cobots [21] (Su et al. 2023). For instance, audible signals can indicate the initiation or completion of a task, providing workers with real-time information without requiring constant visual focus [22] (Tran, 2020). Speech recognition technology enables cobots to understand verbal commands, fostering a more intuitive and dynamic interaction [23] (Telkes et al. 2024). Thoughtful use of auditory interfaces between humans and cobots helps create a collaborative environment where information is conveyed promptly, enhancing overall responsiveness and coordination [24] (Salehzadeh et al. 2022). Several recent papers have proposed novel interfaces and platforms to facilitate this type of interaction. Rusan and Mocanu (2022) [25] introduced a framework that detects and recognizes speech, converting spoken messages into operating system commands. Carr, Wang, and Wang (2023) [26] proposed a network-independent verbal communication platform for multi-robot systems, which can function in environments lacking network infrastructure. McMillan et al. (2023) [27] highlighted the importance of conversation as a natural way of communicating between humans and robots, promoting inclusivity in human-robot interaction. Lee et al. (2023) [28] conducted a user study on the impact of robot attributes on team dynamics and collaboration performance, finding that vocalizing robot intentions can decrease team performance and perceived safety. Ionescu & Schlund (2021) [29] found that voice-activated cobot programming is more efficient than typing and other programming techniques. These papers collectively contribute to the development of human-robot vocal communication systems and highlight the challenges and opportunities in this field; a minimal sketch of the command-mapping step that such interfaces share is given after this list.
- Tactile communications: The incorporation of tactile feedback mechanisms enhances the haptic dimension of human-cobot collaboration [30] (Sorgini et al. 2020). Tactile interfaces, such as force sensors and haptic feedback devices, enable cobots to perceive and respond to variations in physical interactions [31] (Guda et al. 2022). Force sensors can detect unexpected resistance, triggering immediate cessation of movement to prevent collisions or accidents [32] (Zurlo et al. 2023); a minimal force-monitoring sketch is also given after this list. Haptic feedback devices provide physical sensations to human operators, conveying information about the cobot's state or impending actions [33] (Costes & Lécuyer, 2023). This tactile dimension contributes to a more nuanced and sophisticated collaboration, allowing for a greater degree of trust and coordination between human workers and cobots.
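As an illustration of the command-mapping step that voice interfaces of this kind typically include, the sketch below matches an already-transcribed utterance against a shared action and object vocabulary. It is a minimal sketch only; the names (CobotCommand, VOCABULARY, KNOWN_OBJECTS) are hypothetical and are not taken from the cited systems.

```python
from dataclasses import dataclass

@dataclass
class CobotCommand:
    action: str   # e.g. "fetch", "return", "hold"
    target: str   # tool/part name agreed on by worker and cobot

# Hypothetical shared vocabulary (see the "define names for mutual use" strategy).
VOCABULARY = {"fetch", "return", "dispose", "hold", "turn"}
KNOWN_OBJECTS = {"torque wrench", "cover plate", "screwdriver"}

def parse_utterance(text: str):
    """Match an already-transcribed utterance against the shared names."""
    words = text.lower()
    action = next((a for a in VOCABULARY if a in words), None)
    target = next((o for o in KNOWN_OBJECTS if o in words), None)
    if action and target:
        return CobotCommand(action, target)
    return None  # ambiguous: ask the worker to repeat or rephrase

print(parse_utterance("please fetch the torque wrench"))
# CobotCommand(action='fetch', target='torque wrench')
```

In practice the vocabulary would be drawn from the mutually agreed names defined in Section 4, so the worker and the cobot resolve the same words to the same tools and locations.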
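The force-threshold behaviour described for tactile interfaces can be sketched as a simple monitoring loop. This is an illustrative outline under stated assumptions: read_wrench() and stop_motion() are hypothetical placeholders for whatever calls a real cobot SDK provides, and the 25 N limit is an assumed example value.

```python
import time

FORCE_LIMIT_N = 25.0   # assumed contact-force limit in newtons (example value)

def monitor_contact_force(read_wrench, stop_motion, period_s=0.01):
    """Poll the wrist force sensor; trigger a protective stop above the limit.

    read_wrench() and stop_motion() are hypothetical placeholders for the
    calls a real cobot SDK would provide.
    """
    while True:
        fx, fy, fz = read_wrench()                   # latest force reading (N)
        magnitude = (fx ** 2 + fy ** 2 + fz ** 2) ** 0.5
        if magnitude > FORCE_LIMIT_N:
            stop_motion()                            # immediate cessation of movement
            return
        time.sleep(period_s)
```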
3. Main Assistive Scenarios
- Standard pick and place (known locations): there may be several different such tasks, and their names and locations should be clearly defined for the human and cobot.
- Fetch distinct tool/part/material: visual search may be needed to identify the object and select a corresponding grasping strategy.
- Return tool/part/material: visual search may be needed to identify the placing location and plan a corresponding reach-and-align strategy.
- Dispose of defective tool/part/material: Identifying the correct disposal location is necessary.
- Standard “Turn” object: Identifying the object and pre-defining the meaning of “Turn” may be necessary.
- Turn object a given number of degrees clockwise/counterclockwise: Identifying the object is needed for planning the reach, grasp, and turn trajectory.
- Move and align
- Drill: location must be pre-defined
- Screw: location must be pre-defined
- Solder/glue-point: location must be pre-defined
- Inspect: location must be pre-defined
- Push/Press: location must be pre-defined
- Pull/Detach: location must be pre-defined
- Hold (reach and grasp): Identifying the object for planning the reach-and-grasp trajectory. (A sketch of how these scenarios and their required parameters could be encoded for a voice interface follows this list.)
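A minimal, illustrative way to encode the scenarios above for a voice interface is to list, for each scenario, the parameters ("slots") the worker's command must supply before the cobot can plan its motion. The scenario names and slot lists below are assumptions chosen for illustration, not a standard vocabulary.

```python
# Minimal sketch (illustrative, not a standard vocabulary): each assistive
# scenario lists the slots a worker's voice command must supply.
SCENARIOS = {
    "pick_and_place":  {"slots": ["object", "target_location"]},
    "fetch":           {"slots": ["object"]},
    "return":          {"slots": ["object"]},
    "dispose":         {"slots": ["object"]},
    "turn_standard":   {"slots": ["object"]},              # meaning of "turn" pre-defined
    "turn_by_degrees": {"slots": ["object", "degrees", "direction"]},
    "drill":           {"slots": ["location"]},
    "screw":           {"slots": ["location"]},
    "hold":            {"slots": ["object"]},
}

def missing_slots(scenario: str, command: dict) -> list[str]:
    """Return the parameters the cobot still needs to ask the worker for."""
    return [s for s in SCENARIOS[scenario]["slots"] if s not in command]

# e.g. the worker says "turn the valve clockwise" but omits the angle:
print(missing_slots("turn_by_degrees", {"object": "valve", "direction": "cw"}))
# -> ['degrees']
```

When a slot is missing, the cobot can ask a short clarification question instead of guessing, which keeps the vocal exchange brief and predictable.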
4. Main Strategies for Optimizing Vocal Communication
- Workstation map: generate a map of the workstation with location-identifying labels for the various important points that the cobot arm may need to reach. Store the coordinates of each point in the cobot’s control system and hang the map in front of the worker.
- Dedicated space for placing tools and parts (for both the worker and the cobot): Dedicate a convenient place, close to the worker, for the cobot to place or take the tools, parts, or materials that the worker asked for. This place can hold several tools or parts, arranged from left to right and top to bottom. Store the coordinates of the place in the cobot’s control system; the worker must be informed about this place and understand the trajectory the cobot is expected to follow to reach it.
- Dedicated storage place for tool/part: Dedicate a unique storage place for each tool and for the part supply that is easy to reach for both the human worker and the cobot.
- Define names for mutual use: Make sure both the cobot and the workers use the same name for each tool and for each part.
- Define predefined trajectories of the cobot from the tool/part areas to the placement area, so that the worker knows what movements to expect from the cobot. (A sketch combining the map, placing space, and trajectory strategies follows this list.)
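The first, second, and fifth strategies can be combined in a single lookup structure, sketched below with assumed, illustrative labels and coordinates; a real deployment would store whatever points and via-points the workstation actually uses.

```python
# Minimal sketch (illustrative labels and coordinates): named workstation points
# (strategy 1), slots in the dedicated placing space (strategy 2), and fixed
# via-point trajectories (strategy 5) that the voice interface can resolve
# without the worker ever dictating coordinates.
WORKSTATION_MAP = {                 # label on the printed map -> (x, y, z) in metres
    "placing_space_slot_1": (0.35, -0.20, 0.05),
    "placing_space_slot_2": (0.45, -0.20, 0.05),
    "storage_screwdriver":  (0.60,  0.30, 0.10),
    "storage_cover_plate":  (0.60,  0.45, 0.10),
    "disposal_bin":         (0.10,  0.55, 0.00),
}

# Predefined trajectories: fixed via-points from a storage place to a placing
# slot, so the worker always sees the same motion for the same request.
TRAJECTORIES = {
    ("storage_screwdriver", "placing_space_slot_1"): [
        (0.60, 0.30, 0.25),   # lift clear of the storage area
        (0.45, 0.00, 0.25),   # travel above the shared workspace
        (0.35, -0.20, 0.10),  # descend toward the placing slot
    ],
}

def resolve(label: str) -> tuple[float, float, float]:
    """Translate an agreed location name (strategy 4) into stored coordinates."""
    return WORKSTATION_MAP[label]
```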
5. Discussion
6. Conclusions
References
- Faccio, M.; Cohen, Y. Intelligent cobot systems: human-cobot collaboration in manufacturing. Journal of Intelligent Manufacturing 2023, 1–3. [Google Scholar] [CrossRef]
- Faccio, M.; Granata, I.; Menini, A.; Milanese, M.; Rossato, C.; Bottin, M.; Rosati, G. Human factors in cobot era: A review of modern production systems features. Journal of Intelligent Manufacturing 2023, 34, 85–106. [Google Scholar] [CrossRef]
- Liu, L.; Schoen, A.J.; Henrichs, C.; Li, J.; Mutlu, B.; Zhang, Y.; Radwin, R.G. Human robot collaboration for enhancing work activities. Human Factors 2024, 66, 158–179. [Google Scholar] [CrossRef] [PubMed]
- Papetti, A.; Ciccarelli, M.; Scoccia, C.; Palmieri, G.; Germani, M. A human-oriented design process for collaborative robotics. International Journal of Computer Integrated Manufacturing 2023, 36, 1760–1782. [Google Scholar] [CrossRef]
- Gross, S.; Krenn, B. A Communicative Perspective on Human–Robot Collaboration in Industry: Mapping Communicative Modes on Collaborative Scenarios. International Journal of Social Robotics 2023, 1–18. [Google Scholar] [CrossRef]
- Moore, B.A.; Urakami, J. The impact of the physical and social embodiment of voice user interfaces on user distraction. International Journal of Human-Computer Studies 2022, 161, 102784. [Google Scholar] [CrossRef]
- Marklin, R.W., Jr.; Toll, A.M.; Bauman, E.H.; Simmins, J.J.; LaDisa, J.F., Jr.; Cooper, R. Do Head-Mounted Augmented Reality Devices Affect Muscle Activity and Eye Strain of Utility Workers Who Do Procedural Work? Studies of Operators and Manhole Workers. Human Factors 2022, 64, 305–323. [Google Scholar] [PubMed]
- Heydaryan, S.; Suaza Bedolla, J.; Belingardi, G. Safety design and development of a human-robot collaboration assembly process in the automotive industry. Applied Sciences 2018, 8, 344. [Google Scholar] [CrossRef]
- Petzoldt, C.; Harms, M.; Freitag, M. Review of task allocation for human-robot collaboration in assembly. International Journal of Computer Integrated Manufacturing 2023, 1–41. [Google Scholar] [CrossRef]
- Schmidbauer, C.; Zafari, S.; Hader, B.; Schlund, S. An Empirical Study on Workers' Preferences in Human–Robot Task Assignment in Industrial Assembly Systems. IEEE Transactions on Human-Machine Systems 2023, 53, 293–302. [Google Scholar] [CrossRef]
- Jacomini Prioli, J.P.; Liu, S.; Shen, Y.; Huynh, V.T.; Rickli, J.L.; Yang, H.J.; Kim, K.Y. Empirical study for human engagement in collaborative robot programming. Journal of Integrated Design and Process Science 2023, 1–23. [Google Scholar] [CrossRef]
- Scalise, R.; Rosenthal, S.; Srinivasa, S. Natural language explanations in human-collaborative systems. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction; 2017; pp. 377–378. [Google Scholar]
- Kontogiorgos, D. Utilising Explanations to Mitigate Robot Conversational Failures. arXiv 2023, arXiv:2307.04462. [Google Scholar]
- Zieliński, K.; Walas, K.; Heredia, J.; Kjærgaard, M.B. A Study of Cobot Practitioners Needs for Augmented Reality Interfaces in the Context of Current Technologies. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN); 2021; pp. 292–298. [Google Scholar]
- Pascher, M.; Kronhardt, K.; Franzen, T.; Gruenefeld, U.; Schneegass, S.; Gerken, J. My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments. Sensors 2022, 22, 755. [Google Scholar] [CrossRef] [PubMed]
- Eimontaite, I.; Cameron, D.; Rolph, J.; Mokaram, S.; Aitken, J.M.; Gwilt, I.; Law, J. Dynamic graphical instructions result in improved attitudes and decreased task completion time in human–robot co-working: an experimental manufacturing study. Sustainability 2022, 14, 3289. [Google Scholar] [CrossRef]
- Carriero, G.; Calzone, N.; Sileo, M.; Pierri, F.; Caccavale, F.; Mozzillo, R. Human-Robot Collaboration: An Augmented Reality Toolkit for Bi-Directional Interaction. Applied Sciences 2023, 13, 11295. [Google Scholar] [CrossRef]
- Sauer, V.; Sauer, A.; Mertens, A. Zoomorphic gestures for communicating cobot states. IEEE Robotics and Automation Letters 2021, 6, 2179–2185. [Google Scholar] [CrossRef]
- Ionescu, T.B.; Schlund, S. Programming cobots by voice: a pragmatic, web-based approach. International Journal of Computer Integrated Manufacturing 2023, 36, 86–109. [Google Scholar] [CrossRef]
- Turri, S.; Rizvi, M.; Rabini, G.; Melonio, A.; Gennari, R.; Pavani, F. Orienting auditory attention through vision: the impact of monaural listening. Multisensory Research 2021, 35, 1–28. [Google Scholar] [CrossRef] [PubMed]
- Su, H.; Qi, W.; Chen, J.; Yang, C.; Sandoval, J.; Laribi, M.A. Recent advancements in multimodal human–robot interaction. Frontiers in Neurorobotics 2023, 17, 1084000. [Google Scholar] [CrossRef]
- Tran, N. Exploring mixed reality robot communication under different types of mental workload. Mines Theses & Dissertations, 2020. [Google Scholar]
- Telkes, P.; Angleraud, A.; Pieters, R. Instructing Hierarchical Tasks to Robots by Verbal Commands. In Proceedings of the 2024 IEEE/SICE International Symposium on System Integration (SII); 2024; pp. 1139–1145. [Google Scholar]
- Salehzadeh, R.; Gong, J.; Jalili, N. Purposeful Communication in Human–Robot Collaboration: A Review of Modern Approaches in Manufacturing. IEEE Access 2022, 10, 129344–129361. [Google Scholar] [CrossRef]
- Rusan, H.A.; Mocanu, B. Human-Computer Interaction Through Voice Commands Recognition. In Proceedings of the 2022 International Symposium on Electronics and Telecommunications (ISETC); 2022; pp. 1–4. [Google Scholar]
- Carr, C.; Wang, P.; Wang, S. A Human-friendly Verbal Communication Platform for Multi-Robot Systems: Design and Principles. In UK Workshop on Computational Intelligence; Springer Nature: Cham, Switzerland, 2023; pp. 580–594. [Google Scholar]
- McMillan, D.; Jaber, R.; Cowan, B.R.; Fischer, J.E.; Irfan, B.; Cumbal, R.; Lee, M. Human-Robot Conversational Interaction (HRCI). In Proceedings of the Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction; 2023; pp. 923–925. [Google Scholar]
- Lee, K.M.; Krishna, A.; Zaidi, Z.; Paleja, R.; Chen, L.; Hedlund-Botti, E.; Gombolay, M. The effect of robot skill level and communication in rapid, proximate human-robot collaboration. In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction; 2023; pp. 261–270. [Google Scholar]
- Ionescu, T.B.; Schlund, S. Programming cobots by voice: A human-centered, web-based approach. Procedia CIRP 2021, 97, 123–129. [Google Scholar] [CrossRef]
- Sorgini, F.; Farulla, G.A.; Lukic, N.; Danilov, I.; Roveda, L.; Milivojevic, M.; Bojovic, B. Tactile sensing with gesture-controlled collaborative robot. In Proceedings of the 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT; 2020; pp. 364–368. [Google Scholar]
- Guda, V.; Mugisha, S.; Chevallereau, C.; Zoppi, M.; Molfino, R.; Chablat, D. Motion strategies for a cobot in a context of intermittent haptic interface. Journal of Mechanisms and Robotics 2022, 14, 041012. [Google Scholar] [CrossRef]
- Zurlo, D.; Heitmann, T.; Morlock, M.; De Luca, A. Collision Detection and Contact Point Estimation Using Virtual Joint Torque Sensing Applied to a Cobot. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA); 2023; pp. 7533–7539. [Google Scholar]
- Costes, A.; Lécuyer, A. Inducing Self-Motion Sensations with Haptic Feedback: State-of-the-Art and Perspectives on “Haptic Motion”. IEEE Transactions on Haptics 2023. [Google Scholar]
- Keshvarparast, A.; Battini, D.; Battaia, O.; Pirayesh, A. Collaborative robots in manufacturing and assembly systems: literature review and future research agenda. Journal of Intelligent Manufacturing 2023, 1–54. [Google Scholar] [CrossRef]
- Liu, L.; Guo, F.; Zou, Z.; Duffy, V.G. Application, development and future opportunities of collaborative robots (cobots) in manufacturing: A literature review. International Journal of Human–Computer Interaction 2024, 40, 915–932. [Google Scholar] [CrossRef]
- Schreiter, T.; Morillo-Mendez, L.; Chadalavada, R.T.; Rudenko, A.; Billing, E.; Magnusson, M.; Lilienthal, A.J. Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver. In Proceedings of the 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN); 2023; pp. 293–300. [Google Scholar]
- Rautiainen, S.; Pantano, M.; Traganos, K.; Ahmadi, S.; Saenz, J.; Mohammed, W.M.; Martinez Lastra, J.L. Multimodal interface for human–robot collaboration. Machines 2022, 10, 957. [Google Scholar] [CrossRef]
- Urakami, J.; Seaborn, K. Nonverbal Cues in Human–Robot Interaction: A Communication Studies Perspective. ACM Transactions on Human-Robot Interaction 2023, 12, 1–21. [Google Scholar] [CrossRef]
- Park, K.B.; Choi, S.H.; Lee, J.Y.; Ghasemi, Y.; Mohammed, M.; Jeong, H. Hands-free human–robot interaction using multimodal gestures and deep learning in wearable mixed reality. IEEE Access 2021, 9, 55448–55464. [Google Scholar] [CrossRef]
- Nagrani, A.; Yang, S.; Arnab, A.; Jansen, A.; Schmid, C.; Sun, C. Attention bottlenecks for multimodal fusion. Advances in neural information processing systems 2021, 34, 14200–14213. [Google Scholar]
- Javaid, M.; Haleem, A.; Singh, R.P.; Rab, S.; Suman, R. Significant applications of Cobots in the field of manufacturing. Cognitive Robotics 2022, 2, 222–233. [Google Scholar] [CrossRef]
Related strategies for each main scenario (V = the strategy applies):

| Main Scenarios | 1. Map | 2. Placing Space | 3. Tool/Part Storage | 4. Defined Names | 5. Path |
|---|---|---|---|---|---|
| Pick & place | V | V | V | | |
| Fetch | V | V | V | V | V |
| Return | V | V | V | V | V |
| Dispose | V | V | V | V | |
| Turn predefined | V | V | | | |
| Turn degree | V | V | | | |
| Align | V | V | V | | |
| Drill | V | V | V | | |
| Screw | V | V | V | V | |
| Solder | V | V | V | | |
| Inspect | V | V | V | | |
| Push/press | V | V | V | | |
| Pull/detach | V | V | V | | |
| Hold | V | V | | | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
