Foreign-Language Translation Material: Design of a LeapMotion-Based PPT Gesture Control System




Original text:

Multi-LeapMotion sensor based demonstration for robotic refine tabletop

Haiyang Jin a,b,c, Qing Chen a,b, Zhixian Chen a,b, Ying Hu a,b,*, Jianwei Zhang c

a Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
b Chinese University of Hong Kong, Hong Kong, China
c University of Hamburg, Hamburg, Germany

Available online 2 June 2016

Abstract

In some complicated tabletop object manipulation tasks for robotic systems, demonstration-based control is an efficient way to enhance the stability of execution. In this paper, we use a new optical hand-tracking sensor, the LeapMotion, to perform non-contact demonstration for robotic systems. A Multi-LeapMotion hand tracking system is developed. The setup of the two sensors is analyzed to obtain an optimal configuration for efficiently using the information from both sensors. Meanwhile, the coordinate systems of the Multi-LeapMotion hand-tracking device and the robotic demonstration system are established. With the recognition of element actions and the delay calibration, fusion principles are developed to obtain improved and corrected gesture recognition. Gesture recognition and scenario experiments are carried out, and they indicate the improvement achieved by the proposed Multi-LeapMotion hand-tracking system in tabletop object manipulation tasks for robotic demonstration. Copyright © 2016, Chongqing University of Technology. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Keywords: LeapMotion sensor; Multi-sensor fusion; Tele-operative demonstration; Gesture recognition; Tabletop object manipulation

1. Introduction

For intelligent robots, tabletop object manipulation is one of the most common tasks. It combines the robot's capabilities in vision, image processing, object recognition, hand-arm manipulation, etc. However, a real indoor environment is much more complicated than experimental scenarios. The robot's vision can sometimes hardly provide enough information for successfully executing difficult tasks, such as picking, placing, or assembling small objects [1]. In these cases, if two objects are too close to each other, it is difficult to segment them correctly; moreover, occlusion often occurs in real indoor environments. Teleoperative demonstration is therefore an efficient way to overcome these problems [2,3].

Such demonstration methods have been used on industrial robots for some years. For instance, a controller with buttons or a six-dimensional mouse is used to control the robot and teach it the key positions and orientations, so that the robot can plan the trajectory, correctly reach each key position with the desired orientation, and perform a smooth movement [4]. However, the interface of this kind of demonstration method is not efficient for an intelligent robotic system, and in most such systems the robot only records


This research is funded by the National Natural Science Foundation of China under Project no. 61210013 and the Science and Technology Planning Project of Guangdong Province under no. 2014A020215027.

* Corresponding author. Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Xueyuan Avenue 1068, Shenzhen, 518055, China. Tel.: +86 755 86392182.

E-mail address: ying.hu@siat.ac.cn (Y. Hu).

Peer review under responsibility of Chongqing University of Technology.

positions and orientations without interpreting gestures, so these systems are not applicable to more complex tabletop object manipulation tasks. A more natural method based on a kinesthetic interface is also used for demonstration: one can drag the robotic arm to follow one's own actions, as in the research on humanoid robots by Hersch et al. [5] and Hwang et al. [6]. However, this method also aims at trajectory tracking

http://dx.doi.org/10.1016/j.trit.2016.03.010

2468-2322/Copyright © 2016, Chongqing University of Technology. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

rather than at gesture recognition. Furthermore, it is a typical contact control method in which a human works within the same environment as the robot, so it can hardly be used in human-unfriendly environments. For this reason, non-contact tele-control methods are more appropriate for such situations. For example, mechanical-based [7–9] as well as optical-tracking-based or vision-based master-slave devices and tele-operation systems [10–12] have been developed for robotic systems. Compared with mechanical devices, optical and vision tracking systems are lower in cost and easier to mount in different environments.

For hand gesture recognition, a highly efficient approach is to use a data glove that records the motion of each finger [13,14]; some data gloves can even measure the contact force of a grasping or pinching action [15]. However, besides their high cost, data gloves lack the capability to track the position of the hand. Therefore, extra approaches such as infrared optical tracking [18] are added to track hand positions [16,17], which also increases the complexity of the system.
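The glove-plus-tracker combination described above amounts to composing the tracker's hand position with the glove's orientation into a single hand pose. A minimal sketch of this composition (illustrative only, not the paper's method; all function names and values are assumptions), where the glove supplies a rotation matrix and the optical tracker a 3-D position:

```python
import numpy as np

def compose_hand_pose(R_glove, p_tracker):
    """Build a 4x4 homogeneous transform for the hand frame from a
    glove-reported rotation matrix and a tracker-reported position."""
    T = np.eye(4)
    T[:3, :3] = R_glove   # orientation from the data glove
    T[:3, 3] = p_tracker  # position from the optical tracker
    return T

def fingertip_in_world(T_hand, offset_in_hand):
    """Map a fingertip offset expressed in the hand frame into the
    tracker's world frame."""
    p = np.append(offset_in_hand, 1.0)  # homogeneous coordinates
    return (T_hand @ p)[:3]

# Example: hand rotated 90 degrees about the world z-axis,
# located 0.5 m along the world x-axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = compose_hand_pose(R, np.array([0.5, 0.0, 0.0]))
tip = fingertip_in_world(T, np.array([0.1, 0.0, 0.0]))  # 10 cm along hand x
```

A fingertip 10 cm along the hand's x-axis then lands at (0.5, 0.1, 0.0) in the world frame, combining both sensors' readings in one transform.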

Some scholars use only vision-based methods for both hand tracking and gesture recognition, but the performance of such gesture recognition is greatly affected by lighting and background conditions [19–21]. Thus, aiding methods such as skin color and pure-color backgrounds are used to improve recognition accuracy [22,23]. Other scholars use RGB-D data from Kinect for gesture recognition [24]. However,
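The skin-color aiding method mentioned above can be sketched as a per-pixel rule on RGB values; the thresholds below follow a common Kovac-style heuristic for uniform daylight and are a rough illustration, not the classifier used in the cited works:

```python
import numpy as np

def skin_mask(rgb):
    """Boolean skin mask for an (H, W, 3) uint8 RGB image using
    rule-based RGB thresholds (Kovac-style heuristic)."""
    rgb = rgb.astype(np.int32)  # avoid uint8 overflow in differences
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return ((r > 95) & (g > 40) & (b > 20)
            & (rgb.max(axis=-1) - rgb.min(axis=-1) > 15)  # enough chroma
            & (np.abs(r - g) > 15) & (r > g) & (r > b))   # reddish tone

# Example: one skin-like pixel and one dark-gray background pixel.
img = np.array([[[200, 140, 110], [50, 50, 50]]], dtype=np.uint8)
mask = skin_mask(img)  # [[True, False]]
```

Such a mask is typically used to gate the hand-segmentation stage before contour or gesture analysis, which is exactly why it fails under colored lighting — the fixed thresholds no longer match the shifted skin tones.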
