The 12th International Conference on Ubiquitous Robots and Ambient Intelligence [URAI 2015]
October 28~30, 2015 / 3F, Convention Halls, Exhibition Center II, KINTEX, Goyang city, Korea
Martial Hebert (Director, Robotics Institute, Carnegie Mellon University, USA)
Presentation Title : Robots, autonomy, and computer vision
Abstract : Computer vision has made considerable progress over the past decade. Yet, it is still far from the capabilities needed in intelligent, autonomous robots.
In this presentation, I will discuss aspects of scene understanding, object recognition, and 3D reconstruction in the context of robotics. Of particular interest are the fundamental challenges that must be addressed before these capabilities can be migrated to dependable, trustworthy robots operating alongside people.
I will identify research directions that need further exploration in the area of integrating vision and mobile robotics.
Biography : Martial Hebert is a Professor of Robotics at Carnegie Mellon University and Director of the Robotics Institute. His interests include computer vision, especially recognition and scene understanding in images and video data; model building and object recognition from 3D data; and perception for mobile robots and intelligent vehicles. His group has developed approaches for object recognition and scene analysis in images, 3D point clouds, and video sequences. In the area of machine perception for robotics, his group has developed techniques for people detection, tracking, and prediction, and for understanding the environment of ground vehicles from sensor data.
Yasushi Yagi (Director, The Institute of Scientific and Industrial Research, Osaka University, Japan)
Presentation Title : Gait Video Analysis and Its Applications
Abstract : We have been studying human gait analysis for more than ten years. Since everyone's walking style is unique, human gait can be used for person authentication. Indeed, our gait analysis technologies are now being used in real criminal investigations.
We have constructed a large-scale gait database and proposed several methods of gait analysis. The appearance of gait patterns is influenced by changes in viewpoint, walking direction, speed, clothing, and shoes. To overcome these problems, we have proposed several approaches: a part-based method, an appearance-based view transformation model, a periodic temporal super-resolution method, a manifold-based method, and score-level fusion. We demonstrate the effectiveness of these approaches through evaluation on our large gait database.
Furthermore, I will introduce "Behavior Understanding based on Intention-Gait Model", a research project supported by JST-CREST since 2010. In this project, we focus on a new aspect of gait: that a person's gait pattern is influenced by his or her emotion, the purpose of the activity, physical and mental condition, and surrounding people. In this talk, I will briefly present an overview of the project and some studies on medical applications.
Biography : Yasushi Yagi is a Professor of Computer Science and the Director of the Institute of Scientific and Industrial Research, Osaka University. The studies in his laboratory focus on computer vision and media processing, spanning basic technologies such as sensor design and applications such as intelligent systems with visual processing functions. His major research projects include the development of novel vision sensors such as omnidirectional catadioptric systems; biomedical image processing, including endoscope and microscope images; person authentication and intention and emotion estimation from human gait, with applications to forensic and medical fields; photometric analysis and its application to computer graphics; an anticrime system using a wearable camera; and 3D shape and human measurement using infrared light.
Young-Su Moon (Samsung Electronics (SEC), Republic of Korea)
Presentation Title : Next Generation of Context-Aware Consumer Electronics
Abstract : Our world is on the verge of an inflection point toward ambient intelligence. Across all the domains of our lives, including home, mobile, automobile, office, and industry, consumer devices are expected to be connected more seamlessly and to be capable of sensing and computing huge amounts of signals and data. Recently, the technology concepts of the Internet of Things (IoT) and context-awareness have emerged as a new mega-trend across IT products, including TVs, cameras, printers, home appliances, smartphones, wearable devices, VR/AR devices, smart drones, self-driving cars, and smart homes.
In other words, the vision-based machine intelligence technologies that have been actively studied in the areas of intelligent robots, vehicles, and factory automation are now being rapidly deployed, together with various cutting-edge context-sensing and high-performance computing technologies, to make conventional consumer devices more responsive and more context-aware with respect to users and their environments. Ultimately, this mega-trend will lead to a next generation of intelligent consumer devices offering more context-adaptive user experiences.
Biography : Young-Su Moon works in the Visual Processing Team of the Digital Media & Communication R&D Center, Samsung Electronics (SEC), in Suwon, Republic of Korea. His research interests include DTV image post-processing, camera image signal processing (ISP), computational photography, printer/copier image enhancement, augmented reality, content-based video indexing and retrieval, depth perception improvement in 2D TV, vision-based intelligent traffic systems, pattern recognition, embedded image signal processing, and embedded image system architecture. He is the author or co-author of over 30 academic papers and 120 patents, and he was selected as an SEC Master in 2015.