These are my study notes on the HALCON example program pick_and_place_with_2d_matching_stationary_cam.hdev.
It shows how to apply the results of a HALCON hand-eye calibration to pick parts with a robot.
After working through HALCON's hand-eye calibration examples, the first question that probably comes to mind is how to apply the results.
For instance: can you specify a pixel on the screen, convert it to robot coordinates, and have the robot move to that point?
With the older nine-point calibration, you could simply pass a pixel to affine_trans_point_2d to obtain the corresponding machine coordinate.
If you want to know how to achieve the same thing with a hand-eye calibration, this example program has the answer.
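For comparison, the math behind the nine-point approach mentioned above is just a 2D affine map. Below is a minimal pure-Python sketch; the function name and matrix layout are mine (illustrative only), mirroring what HALCON's affine_trans_point_2d computes, not its actual API:

```python
# Sketch of the 2D affine mapping used in nine-point calibration:
# a 2x3 matrix (fitted from pixel/machine point pairs) maps a pixel
# coordinate to a machine coordinate.
def apply_affine_2d(hom_mat_2d, row, col):
    (a, b, tx), (c, d, ty) = hom_mat_2d
    return (a * row + b * col + tx, c * row + d * col + ty)

# Example: a pure translation by (10, -5) applied to pixel (100, 200).
mat = [[1.0, 0.0, 10.0],
       [0.0, 1.0, -5.0]]
print(apply_affine_2d(mat, 100.0, 200.0))  # (110.0, 195.0)
```

The hand-eye approach in this example replaces this flat 2D map with full 3D poses, which is what the rest of these notes walk through.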
In this example program, a stationary camera setup is used to find objects in 2D with shape-based matching; the found objects are then grasped with a robot.

Offline steps:
1) Perform the hand-eye calibration for the camera/robot setup. (This is done in a separate example, because the calibration data is independent of the objects to be grasped.)
2) Create a 2D shape model of the object.
3) Optionally: teach the robot how to approach, grasp, and move the object.

Online step:
1) Acquire an image of the object.

You then obtain the following results:
a) the pose of the object relative to the robot, and
b) if step 3) was performed, the poses to approach, grasp, and move the object.

initialize_program (WindowHandle, ImageDir, DataDir)
dev_disp_introduction (WindowHandle, WindowHandleGraphics)
stop ()
In the figure below, from top to bottom: the stationary camera, the robot tool, the calibration plate, the object to be grasped, and Base (the robot base coordinate system).
1) Specify the results of the hand-eye calibration in the code. You can use the HDevelop example program calibrate_hand_eye_stationary_cam_approx.hdev to obtain the calibration data.

* ** 1) Read the results of hand-eye calibration.
dev_disp_read_hand_eye_calib_data_instructions (WindowHandle, WindowHandleGraphics)
read_hand_eye_calib_data (DataDir, HandEyeCalibData)
dev_set_window (WindowHandleGraphics)
dev_close_window ()
The procedure read_hand_eye_calib_data looks like this:
* For an example of how to perform this calibration,
* see the program calibrate_hand_eye_stationary_cam_approx.hdev.
read_cam_par (DataDir + 'campar_hand_eye_stationary_cam.dat', CamParam)
read_pose (DataDir + 'plane_in_cam_pose_hand_eye_stationary_cam.dat', PlaneInCamPose)
read_pose (DataDir + 'base_in_cam_pose_hand_eye_stationary_cam.dat', BaseInCamPose)
stop ()
*
* Create output dict.
create_dict (HandEyeCalibData)
set_dict_tuple (HandEyeCalibData, 'CamParam', CamParam)
set_dict_tuple (HandEyeCalibData, 'PlaneInCamPose0', PlaneInCamPose)
set_dict_tuple (HandEyeCalibData, 'BaseInCamPose', BaseInCamPose)
return ()
BaseInCamPose — the pose of the robot base coordinate system in the camera coordinate system
CamParam — the internal camera parameters
PlaneInCamPose — the pose of the calibration plate in the camera coordinate system
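Each of these poses is a HALCON pose tuple [tx, ty, tz, rx, ry, rz, type], which is equivalent to a 4x4 homogeneous transformation matrix. A pure-Python sketch of that conversion, assuming the default 'Rp+T'/'gba'/'point' convention composes the rotation as Rx·Ry·Rz with angles in degrees (verify against the create_pose reference for your HALCON version):

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat3_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def pose_to_hom_mat3d(pose):
    """pose = [tx, ty, tz, rx, ry, rz], angles in degrees,
    assuming R = Rx(rx) * Ry(ry) * Rz(rz) ('gba' order)."""
    tx, ty, tz = pose[0:3]
    rx, ry, rz = (math.radians(a) for a in pose[3:6])
    R = mat3_mul(rot_x(rx), mat3_mul(rot_y(ry), rot_z(rz)))
    return [R[0] + [tx], R[1] + [ty], R[2] + [tz], [0, 0, 0, 1]]

# A pose rotated 90 degrees about z, translated by (0.1, 0.2, 0.3):
T = pose_to_hom_mat3d([0.1, 0.2, 0.3, 0, 0, 90])
```

HALCON's pose_to_hom_mat3d operator performs this conversion internally; the sketch is only to make the composition order explicit.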
2) You are asked to do two things:
- read an image suitable for creating the 2D matching model, and
- specify the object height in meters (here: 0.032 m).

read_image (Image2DMatching, ImageDir + 'corner_bracket_model_stationary_cam')
* Specify the height of the object, in meters.
ObjectHeight := 0.032
dev_disp_2d_matching_instructions (Image2DMatching, ObjectHeight)
stop ()
Next, the program asks you to draw a rectangle by hand to create the shape model of the part.
dev_disp_draw_region_of_interest_instructions (ImageRectified, WindowHandle)
draw_rectangle1 (WindowHandle, Row1, Column1, Row2, Column2)
gen_rectangle1 (Rectangle, Row1, Column1, Row2, Column2)
reduce_domain (ImageRectified, Rectangle, ImageReduced)
median_image (ImageReduced, ImageReduced, 'circle', 3, 'mirrored')
create_shape_model (ImageReduced, 'auto', rad(0), rad(360), 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelID)
dev_disp_check_matching_contours_instructions (ImageRectified, Rectangle, ModelID, WindowHandle)
stop ()
Next, some information is shown in a 3D visualization: the initial positions of the robot and of the matched object. Check that this visualization matches your actual setup.
3.0) If needed, you can implement basic path planning for the robot. For example, you can define how the robot should approach, grasp, and move the found object. Set the variable NumRobotPathSteps := 0 to skip this step. Set it to 1 to teach the robot only how to grasp the object. Here it is set to 2 in order to also specify an approach pose that keeps the object from tipping over.

NumRobotPathSteps := 2
dev_disp_robot_path_steps_instructions (WindowHandle, WindowHandleGraphics)
stop ()
if (NumRobotPathSteps > 0)
    train_robot_paths_steps (NumRobotPathSteps, ModelID, ImageDir, DataDir, RectificationData, HandEyeCalibData, Poses, ObjectModels3DData, WindowHandle, ToolInModelRobotPathPoses)
endif
dev_set_window (WindowHandleGraphics)
dev_close_window ()
Continuing…
3.1) Acquire an image of the object that can be used to teach the steps of the grasping path.

read_image (Image, ImageDir + 'corner_bracket_model_stationary_cam')
dev_disp_acquire_image_for_teaching_path_steps_instructions (Image)
stop ()
rectify_image (Image, ImageRectified, RectificationData)
3.2) Move the robot to positions 1 and 2 of the grasping sequence and save each robot pose in ToolInBaseRobotPathPoses.

for Index := 1 to NumRobotPathSteps by 1
    read_pose (DataDir + 'tool_in_base_pose_stationary_cam_robot_path_0' + Index + '.dat', ToolInBaseRobotPathPose)
    dev_disp_read_robot_path_poses (ImageRectified, Index, NumRobotPathSteps, ModelID)
    stop ()
    ToolInBaseRobotPathPoses.at(Index-1) := ToolInBaseRobotPathPose
endfor
Continuing…
Next, the tool poses relative to the part to be grasped are computed (ToolInModelRobotPathPoses):

* Calculate ToolInModelRobotPathPoses.
find_shape_model (Image, ModelID, 0, rad(360), 0.5, 1, 0.5, 'least_squares', 0, 0.9, Row, Column, Angle, Score)
obtain_3d_pose_of_match_stationary_cam (Row, Column, Angle, HandEyeCalibData, Poses, RectificationData, ModelInBaseRobotPathPose)
calculate_tool_in_model_robot_path_poses (ModelInBaseRobotPathPose, ToolInBaseRobotPathPoses, NumRobotPathSteps, ToolInModelRobotPathPoses)
*
* Visualize the robot poses taught to approach, grasp, ... the object.
visualize_robot_path_poses (ToolInBaseRobotPathPoses, ModelInBaseRobotPathPose, WindowHandle, HandEyeCalibData, ObjectModels3DData)
*
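Conceptually, this teaching step re-expresses each taught tool pose relative to the matched model: T(model→tool) = inv(T(base→model)) · T(base→tool). A pure-Python sketch with made-up illustrative numbers (HALCON's pose_invert and pose_compose do the same on pose tuples directly):

```python
def mat4_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(T):
    """Invert a rigid 4x4 transform: inv = [R^T | -R^T t]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [T[0][3], T[1][3], T[2][3]]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [ti[0]], Rt[1] + [ti[1]], Rt[2] + [ti[2]],
            [0.0, 0.0, 0.0, 1.0]]

# Teach time (illustrative numbers): model pose and taught tool pose,
# both expressed in robot base coordinates.
T_base_model = [[1, 0, 0, 0.40], [0, 1, 0, 0.10],
                [0, 0, 1, 0.00], [0, 0, 0, 1]]
T_base_tool  = [[1, 0, 0, 0.40], [0, 1, 0, 0.10],
                [0, 0, 1, 0.05], [0, 0, 0, 1]]
# Tool pose relative to the model -- this is what gets stored:
T_model_tool = mat4_mul(invert_rigid(T_base_model), T_base_tool)
print(T_model_tool[2][3])  # 0.05: the tool was taught 5 cm above the model
```

Because the stored pose is relative to the model, it stays valid no matter where the part later appears in the image.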
The camera now takes a picture of the part the robot is to grasp.
After template matching, ModelInBasePose is obtained: the pose of the model in the robot base coordinate system. There are two entries in ToolInBaseRobotPathPoses in the pose inspection window, because our planned path contains two points.
The program then continues with the remaining parts, image by image, until it exits.
The main code is as follows:
* In this example program, a stationary camera setup is used
* to find objects in 2D with shape-based matching.
* Then, the found object is grasped with a robot.
*
initialize_program (WindowHandle, ImageDir, DataDir)
dev_disp_introduction (WindowHandle, WindowHandleGraphics)
stop ()
*
* ** OFFLINE ***
*
* ** 1) Read the results of hand-eye calibration.
dev_disp_read_hand_eye_calib_data_instructions (WindowHandle, WindowHandleGraphics)
read_hand_eye_calib_data (DataDir, HandEyeCalibData)
dev_set_window (WindowHandleGraphics)
dev_close_window ()
*
* ** 2) Create a matching object with 2D shape-based matching.
*
* Read an image of the object you want to grasp.
read_image (Image2DMatching, ImageDir + 'corner_bracket_model_stationary_cam')
* Specify the height of the object, in meters.
ObjectHeight := 0.032
dev_disp_2d_matching_instructions (Image2DMatching, ObjectHeight)
stop ()
*
* Prepare the needed poses to match and grasp
* and compute the rectification map.
prepare_poses_and_rectification_data_stationary_cam (ObjectHeight, 'true', HandEyeCalibData, Poses, RectificationData)
rectify_image (Image2DMatching, ImageRectified, RectificationData)
*
* Draw the region that defines the model you want to match.
* Depending on the shape of your object, you may want to draw a
* different shape or use the ROI dialog in the Graphics Window.
dev_disp_draw_region_of_interest_instructions (ImageRectified, WindowHandle)
draw_rectangle1 (WindowHandle, Row1, Column1, Row2, Column2)
gen_rectangle1 (Rectangle, Row1, Column1, Row2, Column2)
reduce_domain (ImageRectified, Rectangle, ImageReduced)
median_image (ImageReduced, ImageReduced, 'circle', 3, 'mirrored')
create_shape_model (ImageReduced, 'auto', rad(0), rad(360), 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelID)
dev_disp_check_matching_contours_instructions (ImageRectified, Rectangle, ModelID, WindowHandle)
stop ()
*
* Visualize the setup with camera, robot base, and the object in 3D.
visualize_setup_in_3d (ImageRectified, ModelID, ObjectHeight, WindowHandle, HandEyeCalibData, Poses, RectificationData, ObjectModels3DData)
*
* ** 3) Optionally: Teach the robot how to approach, grasp, and move
* the object.
NumRobotPathSteps := 2
dev_disp_robot_path_steps_instructions (WindowHandle, WindowHandleGraphics)
stop ()
if (NumRobotPathSteps > 0)
    train_robot_paths_steps (NumRobotPathSteps, ModelID, ImageDir, DataDir, RectificationData, HandEyeCalibData, Poses, ObjectModels3DData, WindowHandle, ToolInModelRobotPathPoses)
endif
dev_set_window (WindowHandleGraphics)
dev_close_window ()
*
* ** ONLINE ***
NumImages := 4
create_pose (0.089, 0.5, 20, 106, 0.65, 93.5, 'Rp+T', 'gba', 'point', PoseIn)
dev_open_window_for_3d_visualization (WindowHandle, WindowHandle3D)
for Index := 1 to NumImages by 1
    *
    * ** 1) Read an image of your object.
    read_image (Image, ImageDir + 'corner_bracket_stationary_cam_' + Index$'02')
    dev_clear_window ()
    dev_disp_read_image_instructions (Image, WindowHandle)
    stop ()
    *
    * Find the object with 2D shape-based matching.
    rectify_image (Image, ImageRectified, RectificationData)
    find_shape_model (ImageRectified, ModelID, 0, rad(360), 0.1, 1, 0.5, ['least_squares','max_deformation 5'], 0, 0.9, RowMatch, ColumnMatch, AngleMatch, ScoreMatch)
    * Get the pose of the object relative to the robot (ModelInBasePose).
    obtain_3d_pose_of_match_stationary_cam (RowMatch, ColumnMatch, AngleMatch, HandEyeCalibData, Poses, RectificationData, ModelInBasePose)
    *
    dev_disp_match_model_in_base_pose (ImageRectified, ModelID, RowMatch, ColumnMatch, AngleMatch, ModelInBasePose, WindowHandle)
    if (NumRobotPathSteps > 0)
        * Get poses of the robot to approach, grasp, ...
        * the found match (ToolInBaseGraspPoses).
        calculate_tool_in_base_robot_path_poses (ToolInModelRobotPathPoses, ModelInBasePose, Poses, ToolInBaseRobotPathPoses)
        *
        dev_inspect_ctrl (ToolInBaseRobotPathPoses)
        visualize_match_and_robot_path_poses_in_3d (ModelInBasePose, ToolInBaseRobotPathPoses, WindowHandle3D, PoseIn, ObjectModels3DData, PoseIn)
        dev_close_inspect_ctrl (ToolInBaseRobotPathPoses)
    else
        visualize_match_in_3d (ObjectModels3DData, WindowHandle3D, PoseIn, PoseIn)
    endif
endfor
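In the online loop, calculate_tool_in_base_robot_path_poses is conceptually the inverse of the teaching step: given a new match pose ModelInBasePose, the stored tool-in-model poses are mapped back into base coordinates as T(base→tool) = T(base→model) · T(model→tool). A pure-Python sketch with illustrative numbers:

```python
def mat4_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Stored at teach time: tool 5 cm above the model origin.
T_model_tool = [[1, 0, 0, 0], [0, 1, 0, 0],
                [0, 0, 1, 0.05], [0, 0, 0, 1]]
# New match: the part reappears rotated 90 degrees about z,
# at (0.60, -0.20) in base coordinates.
T_base_model = [[0, -1, 0, 0.60], [1, 0, 0, -0.20],
                [0, 0, 1, 0.00], [0, 0, 0, 1]]
# Grasp target for the robot, in base coordinates:
T_base_tool = mat4_mul(T_base_model, T_model_tool)
print(T_base_tool[0][3], T_base_tool[1][3], T_base_tool[2][3])
# 0.6 -0.2 0.05: the grasp pose follows the part automatically
```

This is why a single taught grasp generalizes to every later match: only ModelInBasePose changes from image to image.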
The procedure obtain_3d_pose_of_match_stationary_cam converts the model position (pixel coordinates) into a position in the robot base coordinate system.
Its source code is as follows:
* This procedure obtains the 3D pose from the model to the base of
* the robot.
read_dict_tuple (HandEyeCalibData, 'CamParam', CamParam)
read_dict_tuple (HandEyeCalibData, 'BaseInCamPose', BaseInCamPose)
read_dict_tuple (Poses, 'PlaneInModelPose', PlaneInModelPose)
read_dict_tuple (Poses, 'MatchingPlaneInCamPose', MatchingPlaneInCamPose)
read_dict_tuple (RectificationData, 'RectifyImage', RectifyImage)
if (RectifyImage == 'true')
    read_dict_tuple (RectificationData, 'ScaleRectification', ScaleRectification)
endif
*
* Keep track of the pose type used by the robot.
get_pose_type (PlaneInModelPose, OrderOfTransform, OrderOfRotation, ViewOfTransform)
* Convert to default pose type.
convert_pose_type (MatchingPlaneInCamPose, 'Rp+T', 'gba', 'point', MatchingPlaneInCamPose)
convert_pose_type (PlaneInModelPose, 'Rp+T', 'gba', 'point', PlaneInModelPose)
if (|Row| == 1 and |Column| == 1 and |Angle| == 1)
    vector_angle_to_rigid (0, 0, 0, Row, Column, Angle, HomMat2DObject)
    * col = x, row = y
    if (RectifyImage == 'false')
        affine_trans_pixel (HomMat2DObject, 0, 0, RowObject, ColObject)
        image_points_to_world_plane (CamParam, MatchingPlaneInCamPose, RowObject, ColObject, 'm', PXM, PYM)
        HomMat3DObject := [HomMat2DObject[4],HomMat2DObject[3],0,PXM,HomMat2DObject[1],HomMat2DObject[0],0,PYM,0,0,1,0]
        hom_mat3d_to_pose (HomMat3DObject, ModelToMatchInPlanePose)
        pose_compose (ModelToMatchInPlanePose, PlaneInModelPose, ModelInPlanePose)
        pose_compose (MatchingPlaneInCamPose, ModelInPlanePose, ModelInCamPose)
    elseif (RectifyImage == 'true')
        HomMat3DObject := [HomMat2DObject[4],HomMat2DObject[3],0,HomMat2DObject[5] * ScaleRectification,HomMat2DObject[1],HomMat2DObject[0],0,HomMat2DObject[2] * ScaleRectification,0,0,1,0]
        hom_mat3d_to_pose (HomMat3DObject, ModelToMatchInPlanePartRectPose)
        pose_compose (ModelToMatchInPlanePartRectPose, PlaneInModelPose, ModelInPlanePartRectPose)
        pose_compose (MatchingPlaneInCamPose, ModelInPlanePartRectPose, ModelInCamPose)
    else
        throw ('Please set the parameter RectifyImage correctly')
    endif
    pose_invert (BaseInCamPose, CamInBasePose)
    pose_compose (CamInBasePose, ModelInCamPose, ModelInBasePose)
    *
    convert_pose_type (ModelInBasePose, OrderOfTransform, OrderOfRotation, ViewOfTransform, ModelInBasePose)
else
    throw ('Exactly one match should be given as input')
endif
return ()
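The geometric core of this procedure is image_points_to_world_plane: back-project the pixel through the pinhole model and intersect the viewing ray with the z = 0 plane of the measurement plane's coordinate frame. A simplified pure-Python sketch (no lens distortion; plane_in_cam is the plane's pose in camera coordinates as a 4x4 matrix; function names are mine, not the HALCON API):

```python
def invert_rigid(T):
    """Invert a rigid 4x4 transform: inv = [R^T | -R^T t]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [T[0][3], T[1][3], T[2][3]]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [ti[0]], Rt[1] + [ti[1]], Rt[2] + [ti[2]],
            [0.0, 0.0, 0.0, 1.0]]

def pixel_to_plane(row, col, fx, fy, cx, cy, plane_in_cam):
    """Intersect the viewing ray of pixel (row, col) with the z = 0
    plane of the measurement plane's coordinate frame."""
    cam_in_plane = invert_rigid(plane_in_cam)
    # Camera center and ray direction, expressed in plane coordinates.
    o = [cam_in_plane[i][3] for i in range(3)]
    d_cam = [(col - cx) / fx, (row - cy) / fy, 1.0]
    d = [sum(cam_in_plane[i][j] * d_cam[j] for j in range(3))
         for i in range(3)]
    s = -o[2] / d[2]  # ray parameter where the ray hits z = 0
    return (o[0] + s * d[0], o[1] + s * d[1])

# Camera looking straight down at the plane from 1 m away:
plane_in_cam = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1.0], [0, 0, 0, 1]]
# A pixel one focal length right of the principal point maps to x = 1 m:
print(pixel_to_plane(240, 320 + 800, 800, 800, 320, 240, plane_in_cam))
# (1.0, 0.0)
```

The rest of the procedure is pure pose bookkeeping: compose the resulting in-plane pose with PlaneInModelPose and MatchingPlaneInCamPose to get ModelInCamPose, then left-multiply by the inverted BaseInCamPose to land in the robot base frame.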
---------------------
Author: hackpig
Source: www.skcircle.com
Copyright notice: this is an original article by the author; please include a link to the original post when reposting.

