Research Repository

Outputs (44)

An RGB-D based social behavior interpretation system for a humanoid social robot (2014)
Presentation / Conference Contribution
Zaraki, A., Giuliani, M., Dehkordi, M. B., Mazzei, D., D'Ursi, A., & De Rossi, D. (2014, October). An RGB-D based social behavior interpretation system for a humanoid social robot. Paper presented at 2nd RSI International Conference on Robotics and Mechatronics (ICRoM 2014), Tehran, Iran

We used a new method called “Ghost-in-the-Machine” (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intenti...

Combining unsupervised learning and discrimination for 3D action recognition (2014)
Journal Article
Chen, G., Clarke, D., Giuliani, M., Gaschler, A., & Knoll, A. (2015). Combining unsupervised learning and discrimination for 3D action recognition. Signal Processing, 110, 67-81. https://doi.org/10.1016/j.sigpro.2014.08.024

© 2014 Elsevier B.V. Previous work on 3D action recognition has focused on using hand-designed features, either from depth videos or 2D videos. In this work, we present an effective way to combine unsupervised feature learning with discriminative fea...

Task-based evaluation of context-sensitive referring expressions in human–robot dialogue (2014)
Journal Article
Foster, M. E., Giuliani, M., & Isard, A. (2014). Task-based evaluation of context-sensitive referring expressions in human–robot dialogue. Language Cognition and Neuroscience, 29(8), 1018-1034. https://doi.org/10.1080/01690965.2013.855802

© 2013 Taylor & Francis. The standard referring-expression generation task involves creating stand-alone descriptions intended solely to distinguish a target object from its context. However, when an artificial system refers to objects in the cours...

Designing and evaluating a social gaze-control system for a humanoid robot (2014)
Journal Article
Zaraki, A., Mazzei, D., Giuliani, M., & De Rossi, D. (2014). Designing and evaluating a social gaze-control system for a humanoid robot. IEEE Transactions on Human-Machine Systems, 44(2), 157-168. https://doi.org/10.1109/THMS.2014.2303083

This paper describes a context-dependent social gaze-control system implemented as part of a humanoid social robot. The system enables the robot to direct its gaze at multiple humans who are interacting with each other and with the robot. The attenti...

Action recognition using ensemble weighted multi-instance learning (2014)
Presentation / Conference Contribution
Chen, G., Giuliani, M., Clarke, D., Gaschler, A., & Knoll, A. (2014). Action recognition using ensemble weighted multi-instance learning. IEEE International Conference on Robotics and Automation, 4520-4525. https://doi.org/10.1109/ICRA.2014.6907519

© 2014 IEEE. This paper deals with recognizing human actions in depth video data. Current state-of-the-art action recognition methods use hand-designed features, which are difficult to produce and time-consuming to extend to new modalities. In this p...

How can I help you? Comparing engagement classification strategies for a robot bartender (2013)
Presentation / Conference Contribution
Foster, M. E., Gaschler, A., & Giuliani, M. (2013, December). How can I help you? Comparing engagement classification strategies for a robot bartender. Paper presented at 15th International Conference on Multimodal Interfaces (ICMI 2013), Sydney, Australia

A robot agent existing in the physical world must be able to understand the social states of the human users it interacts with in order to respond appropriately. We compared two implemented methods for estimating the engagement state of customers for...

Unsupervised learning spatio-temporal features for human activity recognition from RGB-D video data (2013)
Presentation / Conference Contribution
Chen, G., Zhang, F., Giuliani, M., Buckl, C., & Knoll, A. (2013). Unsupervised learning spatio-temporal features for human activity recognition from RGB-D video data. Lecture Notes in Artificial Intelligence, 8239 LNAI, 341-350. https://doi.org/10.1007/978-3-319-02675-6_34

Being able to recognize human activities is essential for several applications, including social robotics. The recently developed commodity depth sensors open up new possibilities of dealing with this problem. Existing techniques extract hand-tuned fea...

Using Embodied Multimodal Fusion to Perform Supportive and Instructive Robot Roles in Human-Robot Interaction (2013)
Journal Article
Giuliani, M., & Knoll, A. (2013). Using Embodied Multimodal Fusion to Perform Supportive and Instructive Robot Roles in Human-Robot Interaction. International Journal of Social Robotics, 5(3), 345-356. https://doi.org/10.1007/s12369-013-0194-y

We present a robot that is working with humans on a common construction task. In this kind of interaction, it is important that the robot can perform different roles in order to realise an efficient collaboration. For this, we introduce embodied mult...

Two people walk into a bar: Dynamic multi-party social interaction with a robot agent (2012)
Presentation / Conference Contribution
Foster, M. E., Gaschler, A., Giuliani, M., Isard, A., Pateraki, M., & Petrick, R. P. A. (2012, October). Two people walk into a bar: Dynamic multi-party social interaction with a robot agent. Paper presented at 14th ACM International Conference on Multimodal Interaction (ICMI 2012), California, USA

We introduce a humanoid robot bartender that is capable of dealing with multiple customers in a dynamic, multi-party social setting. The robot system incorporates state-of-the-art components for computer vision, linguistic processing, state managemen...

Modelling state of interaction from head poses for social human-robot interaction (2012)
Presentation / Conference Contribution
Gaschler, A., Huth, K., Giuliani, M., Kessler, I., de Ruiter, J., & Knoll, A. (2012, March). Modelling state of interaction from head poses for social human-robot interaction. Paper presented at Gaze in Human-Robot Interaction Workshop held at the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2012), Boston, MA, USA

In this publication, we analyse how humans use head pose in various states of an interaction, in both human-human and human-robot observations. Our scenario is the short-term, everyday interaction of a customer ordering a drink from a bartender. To...