Graduation Project (Thesis) Foreign Literature Translation
School/Department: __  Major: __  Student Name: __  Class: __  Student ID: __
Source of foreign text: Machine Vision and Applications
Supervisor's comments: __  Supervisor's signature: __  Date: __

A Visual-Sensor Model for Mobile Robot Localisation
Matthias Fichtner, Axel Großmann
Artificial Intelligence Institute, Department of Computer Science, Technische Universität Dresden
Technical Report WV-03-03/CL-2003-02

Abstract

We present a probabilistic sensor model for camera-pose estimation in hallways and cluttered office environments. The model is based on the comparison of features obtained from a given 3D geometrical model of the environment with features present in the camera image. The techniques involved are simpler than state-of-the-art photogrammetric approaches. This allows the model to be used in probabilistic robot localisation methods. Moreover, it is very well suited for sensor fusion. The sensor model has been used with Monte Carlo localisation to track the position of a mobile robot in a hallway navigation task. Empirical results are presented for this application.

1 Introduction

The problem of accurate localisation is fundamental to mobile robotics. To solve complex tasks successfully, an autonomous mobile robot has to estimate its current pose correctly and reliably. The choice of the localisation
method generally depends on the kind and number of sensors, the prior knowledge about the operating environment, and the computing resources available. Recently, vision-based navigation techniques have become increasingly popular [3]. Among the techniques for indoor robots, we can distinguish methods that were developed in the field of photogrammetry and computer vision, and methods that have their origin in AI robotics.

An important technical contribution to the development of vision-based navigation techniques was the work in [10] on the recognition of 3D objects from unknown viewpoints in single
images using scale-invariant features. Later, this technique was extended to global localisation and simultaneous map building [11].

The FINALE system [8] performed position tracking by using a geometrical model of the environment and a statistical model of the uncertainty in the robot's pose given the commanded motion. The robot's position is represented by a Gaussian distribution and updated by Kalman filtering. The search for corresponding features in the camera image and the world model is optimised by projecting the pose uncertainty into the camera image.

Monte Carlo localisation (MCL) based on the condensation algorithm has been applied successfully to tour-guide robots [1]. This vision-based Bayesian filtering technique uses a sampling-based density representation. In contrast to FINALE, it can represent multi-modal probability distributions. Given a visual map of the ceiling, it localises the robot globally using a scalar brightness measure. In [4], a vision-based MCL approach was presented that combines visual distance features and visual landmarks in a RoboCup application. As that approach depends on artificial landmarks, it is not applicable in office environments.

The aim of our work is to develop a probabilistic sensor model for camera-pose estimation. Given a 3D geometrical map of the environment, we want to find an approximate measure of the probability that the current camera image has been obtained at a certain place in the robot's operating environment. We use this sensor model with MCL to track
the position of a mobile robot navigating in a hallway. Possibly, it can also be used for localisation in cluttered office environments and for shape-based object detection.

On the one hand, we combine photogrammetric techniques for map-based feature projection with the flexibility and robustness of MCL, such as the capability to deal with localisation ambiguities. On the other hand, the feature-matching operation should be sufficiently fast to allow sensor fusion. In addition to the visual input, we want to use the distance readings obtained from sonars and laser to improve localisation accuracy. The
paper is organised as follows. In Section 2, we discuss previous work. In Section 3, we describe the components of the visual sensor model. In Section 4, we present experimental results for position tracking using MCL. We conclude in Section 5.

2 Related Work

In classical approaches to model-based pose determination, we can distinguish two interrelated problems. The correspondence problem is concerned with finding pairs of corresponding model and image features. Before this mapping takes place, the model features are generated from the world model using a given camera pose. Features are said to match if they are located close to each other. The pose problem, by contrast, consists of finding the 3D camera coordinates with respect to the origin of the world model, given the pairs of corresponding features [2]. Apparently, the one problem requires the other to be solved before
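The predict–weight–resample cycle of MCL described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the `sensor_likelihood` stub stands in for the paper's feature-comparison model with a simple Gaussian on position error, and the noise magnitudes, particle count, and 2D pose representation are all assumptions chosen for the sketch.

```python
import math
import random

def sensor_likelihood(pose, observation):
    """Stand-in for p(observation | pose): a Gaussian (sigma = 0.5 m)
    on the distance between the hypothesised pose and the observed
    position. The real model would compare projected map features
    with image features instead."""
    dx = pose[0] - observation[0]
    dy = pose[1] - observation[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * 0.5 ** 2))

def mcl_step(particles, control, observation):
    """One predict-weight-resample cycle of Monte Carlo localisation."""
    # 1. Prediction: apply the commanded motion with additive noise.
    moved = [(x + control[0] + random.gauss(0, 0.1),
              y + control[1] + random.gauss(0, 0.1))
             for (x, y) in particles]
    # 2. Weighting: score each particle with the sensor model.
    weights = [sensor_likelihood(p, observation) for p in moved]
    total = sum(weights)
    if total == 0:  # degenerate case: keep the predictions unweighted
        return moved
    weights = [w / total for w in weights]
    # 3. Resampling: draw particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Usage: track a robot commanded to move 1 m per step along x in a
# 10 m x 3 m hallway, starting from a uniform (global) particle set.
random.seed(0)
particles = [(random.uniform(0, 10), random.uniform(0, 3))
             for _ in range(500)]
for t in range(1, 6):
    particles = mcl_step(particles, control=(1.0, 0.0),
                         observation=(float(t), 1.5))
est_x = sum(p[0] for p in particles) / len(particles)
est_y = sum(p[1] for p in particles) / len(particles)
print(round(est_x, 1), round(est_y, 1))
```

Because the density is represented by samples rather than a single Gaussian, the same loop handles the multi-modal ambiguities mentioned above: before the first observation, the particle set covers the whole hallway, and it only collapses onto one hypothesis as the sensor model concentrates the weights.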
