B. M. M. Buddhika
Department of Electrical Engineering, University of Moratuwa, Colombo, Sri Lanka
E-mail: [email protected], 0717201764

Abstract—This paper presents image-based visual servoing of a 3 degrees-of-freedom (D.O.F) manipulator using 2D information. Visual servoing and robot manipulator control are constructed as a single algorithm with coupled operation. In this research, image information obtained from a camera is transformed into angle information using forward and inverse kinematics, and the results are transferred to the control module in order to manipulate the robot arm.
Experimental results prove its success.

Keywords—visual servoing, degree of freedom, human-robot interaction

I. Introduction

At present, a large number of older and disabled people with vision problems and movement issues in hands and legs are expecting modern technological solutions from robotics and image processing. Those solutions must provide advanced maneuverability that is safe, smooth, accurate and comfortable. Robots should be developed to uplift their living standard, with robot arms like the human hand that have the ability to coincide with an object in front of the eyes.
Those robots should have various capabilities such as object manipulation, navigation, etc. The ASIMO humanoid robot has many human-like behaviors for convenient human-robot (H-R) interaction. Among those, the ability of object manipulation plays a major role in human-robot interaction. In the Romeo robot, object grasping [18] illustrates the basics of object manipulation combined with visual servoing in H-R interaction.
Therefore, at present, object manipulation with comfortable human-robot interaction is a popular research topic worldwide. But there are considerable drawbacks in modern techniques. The handover procedure is independent of the pose of the hand [7], the delivering process is not always comfortable, the location error increases when the object position is far from the calibration area [7], different types of task errors have different error magnitudes, which provides the greatest challenge [8], and fully specifying tasks requires many actions by the user, while choosing the exact geometric constraints [8] and the D.O.F of the robot is not always straightforward. Visual servo control techniques [17] allow the guidance of a robotic system using visual information, which makes them a natural fit for object manipulation and handover applications. Visual servoing also controls a robot manipulator to track desired image trajectories [17], taking the robot dynamics explicitly into account.
Visual servoing has been a very active research subject for the past three decades [1]. The term "visual servoing" appears to have been introduced by Hill and Park in 1979 to distinguish their approach from earlier experiments where the system alternated between picture taking and moving. With the progress in electronic hardware, the requirements of machine vision systems have been realized [6]. The scope of visual servoing applications is spreading from simple "pick and place" robots to advanced manufacturing robot teams [7][8].
Visual servoing is the fusion of many active research areas, including high-speed image processing, kinematics, dynamics, control theory and real-time computation. There are many kinds of robotic systems, but the robot arm is the one most used, for example in car assembly plants, humanoid robots, etc. The robot arm is an important tool in the manufacturing process. Robot arms must be controlled according to the target positions and designed to achieve stability and precision [1]. As recognition technology has improved in a variety of ways, robots have become more human-like [2]. Robots now offer valuable assistance for humans in their everyday life. Finally [8], the current state of visual servoing HRI is not perfectly feasible for every user; it is biased toward one or a few people.
The objective of this study is to develop an algorithm to detect an object, pick it up, and hand it over to a human according to the pose of the hand. During this process, image processing detects the objects and visual servoing achieves the task movement, i.e., the controller communicates with the arm and allows it to move to the desired position [17].
The solution will be a valuable step toward resolving the existing trade-offs in visual servoing, and it creates a broad interdisciplinary project with critical analysis based on user-oriented design and the consequences of adopting advanced new technology in visual servoing.

II. SYSTEM OVERVIEW

The system contains three major modules: the Visual Information Extraction Module (VIEM), the manipulator (servo) controller, and the 3 D.O.F manipulator [16]. The output of a webcam is used to extract the position and orientation of the hand, which is performed by the VIEM.
A. Visual Information Extraction Module (VIEM)

The VIEM consists of two major parts: software and hardware. The software part [16] consists of an OpenCV C++ program, and the hardware is a 5 MP webcam (manual focus).

Fig. 1: Overview of the hardware and software modules of the system, including an OpenCV-installed computer connected to a USB webcam for image extraction. The interaction manager, written in OpenCV C++, is also included as a software platform; it decides the behavior of the gripper from the position and posture of the human hand using fuzzy logic, and maintains a feature database storing the trained Haar classifier files.
B. Interaction Manager (IM)

The servo controller (Arduino Mega board), the servo manipulator and the power supply are included in the Interaction Manager. The IM manages the interaction between the human user and the robot [16].
The data set from the VIEM is fed to the IM. The Interaction Manager (IM) uses these data to understand the information in the user's commands [16]. The Action Manager (AM) manages high-level control of the robot and guides the Manipulation Manager to handle the placement of the object on the table [16]. Low-level control of the manipulator is handled by the robot controller and the manipulator controller, respectively.
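The module pipeline described above (VIEM feeding the IM, which the AM guides toward a manipulation action) can be sketched as a minimal data flow. All class and method names here are illustrative assumptions, not taken from the system's code:

```python
# Hypothetical sketch of the VIEM -> IM -> AM data flow described above.
# Names and the open/closed-palm rule are illustrative assumptions.

class VIEM:
    """Visual Information Extraction Module: yields hand pose data."""
    def extract(self, frame):
        # In the real system this runs OpenCV detection on a webcam frame;
        # here we just unpack a pre-computed (x, y, posture) tuple.
        return {"x": frame[0], "y": frame[1], "posture": frame[2]}

class InteractionManager:
    """Interprets VIEM data as a user command."""
    def interpret(self, data):
        # Open palm -> user is ready to receive the object.
        return "deliver" if data["posture"] == "open" else "hold"

class ActionManager:
    """High-level control: maps a command to a manipulator action."""
    def act(self, command, data):
        if command == "deliver":
            return ("move_to", data["x"], data["y"])
        return ("wait",)

viem, im, am = VIEM(), InteractionManager(), ActionManager()
data = viem.extract((5.0, 3.0, "open"))
action = am.act(im.interpret(data), data)
print(action)  # -> ('move_to', 5.0, 3.0)
```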
III. Control Algorithm

Fig. 2: System control algorithm.

After the system is powered on, the object (a small bottle) is first detected by the Haar classifier and picked up by the manipulator [9]. Then the arm moves to the hold position of the task.
It waits until a hand appears in the relevant frame (11 cm x 11 cm) to deliver the object according to the pose of the palm. Palm detection can be carried out efficiently with the convex hull method [11], but for convenient communication with the robot controller, a classifier has been retained for palm detection [15].
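The control flow above can be summarized as a small finite state machine. The state names and transition conditions are an assumed reading of the description, not the paper's code:

```python
# Minimal sketch (assumed states/transitions, inferred from the control-flow
# description: detect object -> pick -> hold -> wait for hand -> deliver).

def step(state, object_seen, hand_seen):
    """Advance the handover state machine by one control cycle."""
    if state == "DETECT":
        return "PICK" if object_seen else "DETECT"   # Haar detects bottle
    if state == "PICK":
        return "HOLD"                                # manipulator grasps
    if state == "HOLD":
        return "DELIVER" if hand_seen else "HOLD"    # wait for palm in frame
    return "DONE"                                    # object released

state = "DETECT"
for object_seen, hand_seen in [(False, False), (True, False),
                               (False, False), (False, True), (False, False)]:
    state = step(state, object_seen, hand_seen)
print(state)  # -> DONE
```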
IV. Hand Posture and Position Detection

There are methods able to detect hands, track them in real time and recognize gestures [10]. This has to be performed with image processing on images obtained from a regular web camera. The coding is time-consuming, and threshold values in the code, including those of the Canny filter, need to be fine-tuned [5].
Thisisn’t well perform with changing background in intensity and color.A. A. Haar Cascade Classifier Haar feature-based objectdetection is machine learning fast and accurate method where a cascade function is trained from a lot of positive and negative images15.
It is then used to detect the same objects in other images. We have to combine hand and palm detection. Initially, the algorithm needs many positive images (images with a hand) and negative images (images without a hand) to train the classifier [15]. Then we need to extract features from them. For this, the Haar features shown in the images below are used.
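Each Haar feature is a difference between pixel sums under black and white rectangles, which an integral image evaluates in constant time per rectangle. The following is an illustrative sketch only, not the trained cascade used in this work:

```python
# Integral image (summed-area table) and a two-rectangle Haar feature.
# Illustrative sketch; real detectors (e.g. OpenCV's Haar cascades) combine
# many such features inside a trained cascade.

def integral_image(img):
    """ii[r][c] = sum of img[0..r-1][0..c-1] (extra zero row/column)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def rect_sum(ii, r, c, hgt, wid):
    """Sum over the rectangle with top-left (r, c), using 4 lookups."""
    return (ii[r + hgt][c + wid] - ii[r][c + wid]
            - ii[r + hgt][c] + ii[r][c])

def haar_edge_feature(ii, r, c, hgt, wid):
    """Two-rectangle edge feature: black (right) sum minus white (left) sum,
    following the subtraction order described in the text."""
    half = wid // 2
    white = rect_sum(ii, r, c, hgt, half)
    black = rect_sum(ii, r, c + half, hgt, half)
    return black - white

# A 4x4 image whose left half is bright (9) and right half dark (1).
img = [[9, 9, 1, 1]] * 4
ii = integral_image(img)
print(haar_edge_feature(ii, 0, 0, 4, 4))  # -> -64  (1*8 - 9*8)
```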
Each feature is a single value obtained by subtracting the sum of pixels under the white rectangle from the sum of pixels under the black rectangle.

B. Forward Kinematics of the 3 D.O.F Manipulator

Fig. 3(a): Angles and lengths of the robot manipulator.
Fig. 3(b): Mathematical view of the robot manipulator.

Figures 3(a) and 3(b) illustrate the mathematical model parameters of the 3 D.O.F manipulator arm used to calculate the necessary joint angles and end-effector position. With base rotation q1, shoulder angle q2, elbow angle q3, base height L1, link lengths L2 and L3, and horizontal reach r:

r = L2 cos(q2) + L3 cos(q2 + q3)        (1)
h = z - L1                              (2)
x = r cos(q1)                           (3)
y = r sin(q1)                           (4)
z = L1 + L2 sin(q2) + L3 sin(q2 + q3)   (5)

C. Inverse Kinematics of the 3 D.O.F Manipulator

h = z - L1                                                    (6)
q1 = atan2(y, x)                                              (7)
D = (r^2 + h^2 - L2^2 - L3^2) / (2 L2 L3)                     (8)
q3 = atan2(±sqrt(1 - D^2), D)                                 (9)
q2 = atan2(z - L1, r) - atan2(L3 sin(q3), L2 + L3 cos(q3))    (10)

Equations (7), (9) and (10) determine the joint angles required by the robot controller for the present position of the end-effector.
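Assuming the standard articulated-arm model with base height L1 and link lengths L2 and L3 (illustrative symbols and dimensions; the actual values are defined by Fig. 3(a)), the forward and inverse kinematics can be cross-checked with a round trip:

```python
import math

# Hypothetical link dimensions (cm); the real values come from Fig. 3(a).
L1, L2, L3 = 10.0, 12.0, 12.0

def forward(q1, q2, q3):
    """Eqs. (1)-(5): joint angles (rad) -> end-effector (x, y, z)."""
    r = L2 * math.cos(q2) + L3 * math.cos(q2 + q3)
    z = L1 + L2 * math.sin(q2) + L3 * math.sin(q2 + q3)
    return r * math.cos(q1), r * math.sin(q1), z

def inverse(x, y, z):
    """Eqs. (6)-(10): end-effector position -> joint angles (one elbow branch)."""
    q1 = math.atan2(y, x)
    r, h = math.hypot(x, y), z - L1
    D = (r * r + h * h - L2 * L2 - L3 * L3) / (2 * L2 * L3)
    q3 = math.atan2(-math.sqrt(max(0.0, 1 - D * D)), D)
    q2 = math.atan2(h, r) - math.atan2(L3 * math.sin(q3),
                                       L2 + L3 * math.cos(q3))
    return q1, q2, q3

# Round trip: IK of an FK result should recover the same reachable pose.
x, y, z = forward(0.4, 0.6, -0.8)
q1, q2, q3 = inverse(x, y, z)
x2, y2, z2 = forward(q1, q2, q3)
print(abs(x - x2) < 1e-9, abs(y - y2) < 1e-9, abs(z - z2) < 1e-9)
# -> True True True
```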
In this case, r = sqrt(x^2 + y^2) and h = z - L1.

D. Theoretical Background on Image-Based Visual Servoing

e(t) = s(m(t), a) - s*            (11)

The vector s* contains the desired values and s contains the actual values of the image features [3][4].
E. IBVS Control Law

v_c = -λ Le^+ e                     (12)
v_c = (vc, ωc)                      (13)
Le^+ = (Le^T Le)^-1 Le^T            (14)
ė = Le v_c                          (15)

where the interaction matrix Le has full rank 6 [1]. For a single image point with normalized coordinates (x, y) at depth Z, Le = Lx with

Lx = [ -1/Z    0     x/Z    xy       -(1+x^2)    y
        0     -1/Z   y/Z    1+y^2     -xy       -x ]    (16)

Equations (12) to (16) describe the IBVS control law. At least three points are needed for accurate IBVS; stacking their interaction matrices gives

Le = [ Lx1 ; Lx2 ; Lx3 ]            (17)

and the commanded camera velocity becomes

v_c = (vc, ωc) = -λ Le^+ e          (18)

Here one point is the center of the rectangle contour and the other two are its vertices.
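As a numerical sketch of equations (12)-(18), with assumed unit depth Z and gain λ (not the paper's calibration), the camera velocity for three tracked points can be computed from the stacked interaction matrix:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Eq. (16): 2x6 interaction matrix of a point feature at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, desired, Z=1.0, lam=0.5):
    """Eqs. (12)/(18): v_c = -lambda * Le^+ e for stacked point features."""
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in points])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Rectangle center plus two vertices, slightly offset from their goals.
points = [(0.05, 0.05), (0.25, 0.15), (-0.15, 0.15)]
desired = [(0.0, 0.0), (0.2, 0.1), (-0.2, 0.1)]
v = ibvs_velocity(points, desired)
print(v.shape)  # -> (6,)
# With zero error the commanded velocity vanishes:
print(np.allclose(ibvs_velocity(desired, desired), 0))  # -> True
```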
V. Experimental Results

If the Haar-classifier-based detection becomes unstable, the detections jump around the image. The detection level depends on the quality of the classifier. With enough positive and negative samples, say 5000 positive and 7000 negative, the results should already be quite robust [15]. In our experiments, 700 positive hand-gesture samples and 1000 negative samples were used, and the results were sufficient to some extent. Each palm posture needs a different XML file (Haar cascade).
This means that achieving one-degree resolution of detection with one Haar cascade per posture is not practical. But this method can communicate with the manipulator controller easily and accurately [15], which is why this algorithm is applied here. The convex hull and feature detection scheme has advantages: the continuous rotation angle of the palm can be calculated, and palm posture identification by contour-area analysis works well [12].
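For comparison, the continuous palm rotation angle obtainable from the convex-hull route can be sketched without OpenCV using a monotone-chain hull. The centroid-to-farthest-vertex heuristic below is illustrative only, not the exact method of [11]:

```python
import math

# Illustrative only: the paper ultimately uses Haar cascades, but the
# convex-hull approach it compares against can yield a continuous palm
# rotation angle, e.g. from the centroid to the farthest hull vertex.

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    return lower[:-1] + upper[:-1]

def palm_angle(contour):
    """Angle (deg) from hull centroid to the farthest hull vertex."""
    hull = convex_hull(contour)
    cx = sum(p[0] for p in hull) / len(hull)
    cy = sum(p[1] for p in hull) / len(hull)
    tip = max(hull, key=lambda p: (p[0]-cx)**2 + (p[1]-cy)**2)
    return math.degrees(math.atan2(tip[1]-cy, tip[0]-cx))

# A blob with a single finger-like spike pointing straight up.
contour = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 10)]
print(round(palm_angle(contour)))  # -> 90
```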
It also has some defects: complex coding, sensitivity of detection to background conditions, and a higher error percentage. It outputs a large data stream, and the manipulator controller and servo motors do not have the capacity to respond to that amount of data in the available time frame. Considering these results and conclusions, the Haar classifier method was chosen for this research.

Fig. 4(a): Open-palm object robot-to-human handover (still image).
Fig. 4(b): Closed-palm object robot-to-human handover (still image).

Figures 4(a) and 4(b) show object delivery with H-R interaction for two distinct postures of the palm. They illustrate the hand and gripper postures and the behavior of the object between them.

Fig. 5(a): Open-palm object robot-to-human handover (video stream frames).
Fig. 5(b): Closed-palm robot-to-human object handover (video stream frames).

Figures 5(a) and 5(b) are frames of video streams that show object handover with H-R interaction for user comfort.
They show object detection, object pick-up, hand detection and delivery of the object to the human hand in different manners depending on the hand position and posture.

VI. Simulation

Fig. 6(a), Fig. 6(b): Image-based visual servoing between two points. Fig. 6(c): End-effector linear velocity and joint angle rates over task steps.

Figures 6(a) and 6(b) illustrate image-based visual servoing between two points, and Fig. 6(c) shows the end-effector linear velocity and joint angle rates in the task-step domain. Simulations are carried out in the ViSP environment. ViSP, standing for Visual Servoing Platform, is a modular cross-platform library that allows prototyping and developing applications using visual tracking and visual servoing techniques, developed by the Inria Lagadic team [1]. ViSP has the ability to compute control laws in robotic systems.
It has tracking abilities with real-time image processing and computer vision algorithms [1]. The simulation shows that the joint angle rates and velocity decrease together with the error between the desired and actual features.

VII. Conclusion and Future Work

We have developed a user interface for HRI that facilitates a semi-autonomous robot manipulator [8]. The user describes versatile high-level actions using visual task specification. We have conducted experiments illustrating the performed actions with visual servoing. They also prove that the system is capable of executing a range of tasks spanning both coarse and fine manipulation. The visual task specification system also has some drawbacks.
Choosing the correct geometric constraints and D.O.F of the robot is not always straightforward. Although the current state of the system is not perfectly feasible for every user, it is a step toward better human-robot interaction with visual servoing. This research focuses on analyzing human arm postures based on human hand characteristics (e.g., palm posture, position).
More sensitive and accurate schemes are needed to detect the hand posture and position precisely. A Kinect sensor may bring advantages over the webcam for those drawbacks. DC servo motors are not 100% accurate position drivers.
They produce many errors during operation and coding. Stepper motors or DC gear motors with optical encoders would be better for the position-drive problem.

REFERENCES

[1] E. Marchand, F. Spindler, F. Chaumette. ViSP for visual servoing: a generic software platform with a wide class of robot control skills.
IEEE Robotics and Automation Magazine, Special Issue on "Software Packages for Vision-Based Control of Motion", P. Oh, D. Burschka (Eds.), 12(4):40-52, December 2005.

[2] E. Marchand. ViSP: A Software Environment for Eye-in-Hand Visual Servoing. In IEEE Int. Conf. on Robotics and Automation, ICRA'99, Volume 4, pages 3224-3229, Detroit, Michigan, May 1999.

[3] F. Chaumette, S. Hutchinson. Visual servo control, Part I: Basic approaches. IEEE Robotics and Automation Magazine, 13(4):82-90, December 2006.

[4] F. Chaumette, S. Hutchinson. Visual servo control, Part II: Advanced approaches. IEEE Robotics and Automation Magazine, 14(1):109-118, March 2007.

[5] D. Kuang, C. Yang, Wang, G. Peng. An Improved Approach for Gesture Recognition. Chinese Automation Congress (CAC), pages 4856-4861, October 2017.

[6] B. Espiau, F. Chaumette, P. Rives. A new approach to visual servoing in robotics. IEEE Trans. on Robotics and Automation, 8(3):313-326, June 1992.

[7] A. J. Sanchez and J. M. Martinez. Robot-arm Pick and Place Behavior Programming System Using Visual Perception. Proceedings 15th International Conference on Pattern Recognition, pages 507-510, September 2000.

[8] M. Gridseth, O. Ramirez, C. P. Quintero and M. Jagersand. ViTa: Visual Task Specification Interface for Manipulation with Uncalibrated Visual Servoing. 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 3434-3440, May 2016.

[9] E. Marchand, F. Chaumette. Feature tracking for visual servoing purposes. Robotics and Autonomous Systems, Special issue on Advances in Robot Vision, D. Kraig, H. Christensen (Eds.), 52(1):53-70, July 2005.

[10] A. Dame, E. Marchand. Video mosaicing using a Mutual Information-based Motion Estimation Process. In IEEE Int. Conf. on Image Processing, ICIP'11, pages 1525-1528, Brussels, Belgium, September 2011.

[11] A. Dame, E. Marchand. Accurate real-time tracking using mutual information. In IEEE Int. Symp. on Mixed and Augmented Reality, ISMAR'10, pages 47-56, Seoul, Korea, October 2010.

[12] K. Kadbe. Real-time Finger Tracking and Contour Detection for Gesture Recognition using OpenCV. International Conference on Industrial Instrumentation and Control (ICIC), pages 974-977, May 2015.

[13] M. F. Zaman, S. T. Monserrat, F. I. and D. Karmaker. Real-Time Hand Detection and Tracking with Depth Values. Proceedings of 3rd International Conference on Advances in Electrical Engineering, pages 129-132, Dhaka, Bangladesh, December 2015.

[14] I. Hussain, A. K. Talukdar, K. K. Sarma. Hand Gesture Recognition System with Real-Time Palm Tracking. Annual IEEE India Conference (INDICON), India, pages 1-6, December 2014.

[15] G. Mao, Y. W. M. Hor, C. Y. Tang. Real-Time Hand Detection and Tracking against Complex Background. Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pages 906-908, Kyoto, Japan, November 2009.

[16] P. H. D. Arjuna, S. Srimal and A. G. Buddhika P. Jayasekara. A Multi-modal Approach for Enhancing Object Placement. 6th National Conference on Technology and Management (NCTM), pages 17-22, Malabe, Sri Lanka, January 2017.

[17] H. Wu, T. T. Andersen, N. A. Andersen, O. Ravn. Visual Servoing for Object Manipulation: A Case Study in Slaughterhouse. 14th International Conference on Control, Automation, Robotics & Vision, Phuket, Thailand, November 2016.

[18] https://www.youtube.com/watch?v=6yB5pQm4s_c