Robotics (Robot Technology) PPT


Smart Home Technologies: Automation and Robotics

Motivation
- Intelligent Environments are aimed at improving the inhabitants' experience and task performance
  - Automate functions in the home
  - Provide services to the inhabitants
- Decisions coming from the decision maker(s) in the environment have to be executed
  - Decisions require actions to be performed on devices
  - Decisions are frequently not elementary device interactions but rather relatively complex commands
  - Decisions define set points or results that have to be achieved
  - Decisions can require entire tasks to be performed

Automation and Robotics in Intelligent Environments
- Control of the physical environment
  - Automated blinds
  - Thermostats and heating ducts
  - Automatic doors
  - Automatic room partitioning
- Personal service robots
  - House cleaning
  - Lawn mowing
  - Assistance to the elderly and handicapped
  - Office assistants
  - Security services

Robots
- Robota (Czech) = a worker of forced labor
  - From Czech playwright Karel Capek's 1921 play "R.U.R." ("Rossum's Universal Robots")
- Japanese Industrial Robot Association (JIRA): "A device with degrees of freedom that can be controlled."
  - Class 1: Manual handling device
  - Class 2: Fixed sequence robot
  - Class 3: Variable sequence robot
  - Class 4: Playback robot
  - Class 5: Numerical control robot
  - Class 6: Intelligent robot

A Brief History of Robotics
- Mechanical automata
  - Ancient Greece & Egypt: water powered, for ceremonies
  - 14th-19th century Europe: clockwork driven, for entertainment
- Motor-driven robots
  - 1928: first motor-driven automata
  - 1961: Unimate, the first industrial robot
  - 1967: Shakey, an autonomous mobile research robot
  - 1969: Stanford Arm, a dextrous, electric-motor-driven robot arm
[Images: Maillardet's Automaton, Unimate]

Robots
- Robot manipulators
- Mobile robots

Robots
- Walking robots
- Humanoid robots

Autonomous Robots
- The control of autonomous robots involves a number of subtasks
  - Understanding and modeling of the mechanism: kinematics, dynamics, and odometry
  - Reliable control of the actuators: closed-loop control
  - Generation of task-specific motions: path planning
  - Integration of sensors: selection and interfacing of various types of sensors
  - Coping with noise and uncertainty: filtering of sensor noise and actuator uncertainty
  - Creation of flexible control policies: control has to deal with new situations

Traditional Industrial Robots
- Traditional industrial robot control uses robot arms and largely pre-computed motions
  1. Programming using a "teach box"
  2. Repetitive tasks
  3. High speed
  4. Few sensing operations
  5. High-precision movements
  6. Pre-planned trajectories and task policies
  7. No interaction with humans

Problems
- Traditional programming techniques for industrial robots lack key capabilities necessary in intelligent environments
  1. Only limited on-line sensing
  2. No incorporation of uncertainty
  3. No interaction with humans
  4. Reliance on perfect task information
  5. Complete re-programming for new tasks

Requirements for Robots in Intelligent Environments
- Autonomy
  - Robots have to be capable of achieving task objectives without human input
  - Robots have to be able to make and execute their own decisions based on sensor information
- Intuitive human-robot interfaces
  - Use of robots in smart homes cannot require extensive user training
  - Commands to robots should be natural for inhabitants
- Adaptation
  - Robots have to be able to adjust to changes in the environment

Robots for Intelligent Environments
- Service robots
  - Security guard
  - Delivery
  - Cleaning
  - Mowing
- Assistance robots
  - Mobility
  - Services for the elderly and people with disabilities

Autonomous Robot Control
- To control robots to perform tasks autonomously, a number of tasks have to be addressed:
  - Modeling of robot mechanisms: kinematics, dynamics
  - Robot sensor selection: active and passive proximity sensors
  - Low-level control of actuators: closed-loop control
  - Control architectures: traditional planning architectures, behavior-based control architectures, hybrid architectures

Modeling the Robot Mechanism
- Forward kinematics describes how the robot's joint angle configurations translate to locations in the world
- Inverse kinematics computes the joint angle configuration necessary to reach a particular point in space
- Jacobians calculate how the speed and configuration of the actuators translate into velocity of the robot (a kinematics sketch follows below)
[Figure: a two-link arm with joint angles θ1, θ2 reaching a point (x, y, z); a mobile robot at pose (x, y, θ)]
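The slides stop at the definitions, so here is a minimal Python sketch of forward and inverse kinematics for a planar two-link arm like the one pictured; the link lengths and the elbow-down solution branch are illustrative assumptions rather than values from the deck.

```python
import math

def forward_kinematics(theta1, theta2, l1=0.5, l2=0.3):
    """End-effector position of a planar two-link arm (link lengths in meters)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y, l1=0.5, l2=0.3):
    """One (elbow-down) solution of the inverse problem for the same arm."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))          # clamp for numerical safety
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

if __name__ == "__main__":
    t1, t2 = math.radians(30), math.radians(45)
    print(forward_kinematics(t1, t2))                        # where the arm tip ends up
    print(inverse_kinematics(*forward_kinematics(t1, t2)))   # recovers approximately (t1, t2)
```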

Mobile Robot Odometry
- In mobile robots, the same configuration in terms of joint angles does not identify a unique location
- To keep track of the robot it is necessary to incrementally update the location (this process is called odometry or dead reckoning); a sketch follows below
- Example: a differential drive robot
[Figure: differential drive robot at pose (x, y, θ) with left (L) and right (R) wheels]
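A minimal sketch of the dead-reckoning update for the differential drive example; the wheel base and the assumption that encoder counts have already been converted to wheel travel distances are illustrative.

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base=0.4):
    """Dead-reckoning update from the distances travelled by the two wheels."""
    d_center = (d_left + d_right) / 2.0        # distance moved by the robot centre
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta

# Example: drive straight for 1 m, then curve gently to the left.
pose = (0.0, 0.0, 0.0)
for d_l, d_r in [(1.0, 1.0), (0.45, 0.55)]:
    pose = odometry_step(*pose, d_l, d_r)
print(pose)
```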

Actuator Control
- To get a particular robot actuator to a particular location it is important to apply the correct amount of force or torque to it
- Requires knowledge of the dynamics of the robot: mass, inertia, friction
- For a simplistic mobile robot: F = m a + B v
- Frequently actuators are treated as if they were independent (i.e. as if moving one joint would not affect any of the other joints)
- The most common control approach is PD control (proportional, differential control), e.g. for the simplistic mobile robot moving in the x direction (see the sketch below)
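The deck does not spell out the control law, so the sketch below combines a textbook PD law with the simplistic model F = m a + B v from the slide; the gains (kp, kd), mass, friction coefficient, and time step are illustrative assumptions.

```python
def pd_force(x_desired, x, v_desired, v, kp=20.0, kd=5.0):
    """Standard PD law: force proportional to the position and velocity errors."""
    return kp * (x_desired - x) + kd * (v_desired - v)

def simulate(mass=1.0, friction=0.5, dt=0.01, steps=500):
    """Drive the simplistic model F = m*a + B*v from the slide toward x = 1."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        f = pd_force(1.0, x, 0.0, v)
        a = (f - friction * v) / mass   # rearranged model: a = (F - B*v) / m
        v += a * dt
        x += v * dt
    return x, v

print(simulate())   # settles near (1.0, 0.0)
```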

Robot Navigation
- Path planning addresses the task of computing a trajectory for the robot such that it reaches the desired goal without colliding with obstacles (a simple grid-based sketch follows below)
- Optimal paths are hard to compute, in particular for robots that cannot move in arbitrary directions (i.e. nonholonomic robots)
- Shortest-distance paths can be dangerous since they always graze obstacles
- Paths for robot arms have to take into account the entire robot (not only the end effector)
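The slides do not prescribe a planning algorithm; a breadth-first search over a small occupancy grid is one of the simplest ways to compute a collision-free path and serves as an illustrative sketch (the grid and the start and goal cells are made up).

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid.

    grid[r][c] == 1 marks an obstacle cell; returns a list of cells from
    start to goal, or None if the goal cannot be reached.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk the parent links backwards
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))   # detours through the single gap in row 1
```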

Sensor-Driven Robot Control
- To accurately achieve a task in an intelligent environment, a robot has to be able to react dynamically to changes in its surroundings
- Robots need sensors to perceive the environment
- Most robots use a set of different sensors
- Different sensors serve different purposes
- Information from sensors has to be integrated into the control of the robot

Robot Sensors
- Internal sensors measure the robot configuration
  - Encoders measure the rotation angle of a joint
  - Limit switches detect when the joint has reached its limit

Robot Sensors
- Proximity sensors are used to measure the distance or location of objects in the environment; this can then be used to determine the location of the robot (a range-sensing sketch follows below)
  - Infrared sensors determine the distance to an object by measuring the amount of infrared light the object reflects back to the robot
  - Ultrasonic sensors (sonars) measure the time that an ultrasonic signal takes until it returns to the robot
  - Laser range finders determine distance by measuring either the time it takes for a laser beam to be reflected back to the robot or where the laser hits the object
- Computer vision provides robots with the capability to passively observe the environment
  - Stereo vision systems provide complete location information using triangulation
  - However, computer vision is very complex
  - The correspondence problem makes stereo vision even more difficult

Robot Sensors
[Image slide]
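A tiny sketch of the time-of-flight idea behind sonars and laser range finders: distance follows from the round-trip echo time. The speed constants are standard physical values; the helper names are illustrative.

```python
SPEED_OF_SOUND = 343.0    # m/s in air at roughly 20 degrees C
SPEED_OF_LIGHT = 2.998e8  # m/s

def sonar_distance(echo_time_s):
    """Ultrasonic range: the pulse travels to the object and back, so halve it."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

def laser_tof_distance(echo_time_s):
    """Time-of-flight laser range finders use the same round-trip idea."""
    return SPEED_OF_LIGHT * echo_time_s / 2.0

print(sonar_distance(0.01))      # ~1.7 m
print(laser_tof_distance(2e-8))  # ~3 m
```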

Uncertainty in Robot Systems
- Robot systems in intelligent environments have to deal with sensor noise and uncertainty
- Sensor uncertainty: sensor readings are imprecise and unreliable
- Non-observability: various aspects of the environment cannot be observed; the environment is initially unknown
- Action uncertainty: actions can fail; actions have nondeterministic outcomes

Probabilistic Robot Localization
- Explicit reasoning about uncertainty using Bayes filters (a minimal sketch follows below)
- Used for: localization, mapping, model building
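A minimal sketch of one predict/correct cycle of a discrete Bayes filter for a corridor-localization example of the kind usually paired with this material; the four-cell world, the noise-free motion model, and the door-likelihood values are illustrative assumptions.

```python
def bayes_filter_update(belief, motion, p_sense):
    """One predict/correct cycle of a discrete Bayes filter on a 1-D circular corridor.

    belief  -- probability of being in each cell
    motion  -- number of cells the robot commanded itself to move
    p_sense -- likelihood of the current sensor reading for each cell
    """
    n = len(belief)
    # Prediction: shift the belief by the commanded motion (here noise-free for brevity).
    predicted = [belief[(i - motion) % n] for i in range(n)]
    # Correction: weight by the measurement likelihood and renormalise.
    posterior = [predicted[i] * p_sense[i] for i in range(n)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Four-cell corridor with doors in cells 0 and 2; the robot moves one cell and senses "door".
belief = [0.25, 0.25, 0.25, 0.25]
p_door = [0.8, 0.1, 0.8, 0.1]
belief = bayes_filter_update(belief, motion=1, p_sense=p_door)
print(belief)   # probability mass concentrates on the door cells
```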

Deliberative Robot Control Architectures
- In a deliberative control architecture the robot first plans a solution for the task by reasoning about the outcome of its actions and then executes it
- The control process goes through a sequence of sensing, model update, and planning steps

Deliberative Control Architectures
- Advantages
  - Reasons about contingencies
  - Computes solutions to the given task
  - Goal-directed strategies
- Problems
  - Solutions tend to be fragile in the presence of uncertainty
  - Requires frequent replanning
  - Reacts relatively slowly to changes and unexpected occurrences

Behavior-Based Robot Control Architectures
- In a behavior-based control architecture the robot's actions are determined by a set of parallel, reactive behaviors which map sensory input and state to actions

Behavior-Based Robot Control Architectures
- Reactive, behavior-based control combines relatively simple behaviors, each of which achieves a particular subtask, to achieve the overall task
- The robot can react fast to changes
- The system does not depend on complete knowledge of the environment
- Emergent behavior (resulting from combining initial behaviors) can make it difficult to predict exact behavior
- Difficult to assure that the overall task is achieved

Complex Behavior from Simple Elements: Braitenberg Vehicles
- Complex behavior can be achieved using very simple control mechanisms
- Braitenberg vehicles: differential drive mobile robots with two light sensors (a wiring sketch follows below)
- Complex external behavior does not necessarily require a complex reasoning mechanism
[Figure: sensor-to-motor wiring diagrams for the "Coward" and "Aggressive" vehicles (excitatory, +) and the "Love" and "Explore" vehicles (inhibitory, -)]
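A sketch of how the "Coward" and "Aggressive" vehicles map their two light sensors to wheel speeds (ipsilateral versus crossed excitatory wiring); the function and value ranges are illustrative, and the inhibitory "Love"/"Explore" variants are omitted for brevity.

```python
def braitenberg(left_light, right_light, vehicle="coward"):
    """Map two light-sensor readings (0..1) to (left wheel, right wheel) speeds.

    "coward": each sensor excites the wheel on its own side, so the robot
    turns away from the light; "aggressive": the sensors are cross-wired, so
    the robot turns toward the light and speeds up as it approaches.
    """
    if vehicle == "coward":
        return left_light, right_light
    if vehicle == "aggressive":
        return right_light, left_light
    raise ValueError(vehicle)

# Light source off to the right: the coward veers left, the aggressor veers right.
print(braitenberg(0.2, 0.9, "coward"))      # right wheel faster -> turn away
print(braitenberg(0.2, 0.9, "aggressive"))  # left wheel faster -> turn toward
```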

Behavior-Based Architectures: Subsumption Example
- The subsumption architecture is one of the earliest behavior-based architectures
- Behaviors are arranged in a strict priority order where higher-priority behaviors subsume lower-priority ones as long as they are not inhibited (a priority-arbitration sketch follows below)

Subsumption Example
- A variety of tasks can be robustly performed from a small number of behavioral elements
- MIT AI Lab; http://www-robotics.usc.edu/maja
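A much-simplified sketch of subsumption-style arbitration: behaviors are tried from highest to lowest priority, and the first one that produces a command suppresses the rest. The behaviors, sensor layout, and thresholds are invented for illustration; a real subsumption system wires inhibition and suppression between concurrently running finite-state machines rather than calling functions in a loop.

```python
def avoid_obstacle(sensors):
    """Highest priority: back away when something is very close."""
    if min(sensors["sonar"]) < 0.3:
        return ("backup", 0.2)
    return None

def follow_wall(sensors):
    """Medium priority: keep a constant distance to a wall on the right."""
    if sensors["sonar"][-1] < 1.0:
        return ("steer", 0.1 * (0.5 - sensors["sonar"][-1]))
    return None

def wander(sensors):
    """Lowest priority: default behavior when nothing else fires."""
    return ("forward", 0.3)

# Behaviors listed from highest to lowest priority: the first one that
# produces an output subsumes (suppresses) all behaviors below it.
BEHAVIOURS = [avoid_obstacle, follow_wall, wander]

def arbitrate(sensors):
    for behaviour in BEHAVIOURS:
        command = behaviour(sensors)
        if command is not None:
            return command

print(arbitrate({"sonar": [2.0, 1.5, 0.8]}))   # wall on the right -> steer
print(arbitrate({"sonar": [0.2, 1.5, 0.8]}))   # obstacle ahead -> backup
```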

Reactive, Behavior-Based Control Architectures
- Advantages
  - Reacts fast to changes
  - Does not rely on accurate models ("the world is its own best model")
  - No need for replanning
- Problems
  - Difficult to anticipate what effect combinations of behaviors will have
  - Difficult to construct strategies that will achieve complex, novel tasks
  - Requires redesign of the control system for new tasks

Hybrid Control Architectures
- Hybrid architectures combine reactive control with abstract task planning
- Abstract task planning layer
  - Deliberative decisions
  - Plans goal-directed policies
- Reactive behavior layer
  - Provides reactive actions
  - Handles sensors and actuators

Hybrid Control Policies
- Example task: changing a light bulb
[Figure: a task plan and the corresponding behavioral strategy]

Hybrid Control Architectures
- Advantages
  - Permits goal-based strategies
  - Ensures fast reactions to unexpected changes
  - Reduces complexity of planning
- Problems
  - Choice of behaviors limits the range of possible tasks
  - Behavior interactions have to be well modeled to be able to form plans

Traditional Human-Robot Interface: Teleoperation
- Remote teleoperation: direct operation of the robot by the user
- The user uses a 3-D joystick or an exoskeleton to drive the robot
- Simple to install
- Removes the user from dangerous areas
- Problems:
  - Requires insight into the mechanism
  - Can be exhausting
  - Easily leads to operation errors

Human-Robot Interaction in Intelligent Environments
- Personal service robot
  - Controlled and used by untrained users
  - Intuitive, easy to use interface
  - Interface has to "filter" user input: eliminate dangerous instructions; find the closest possible action
- Receives only intermittent commands
  - Robot requires autonomous capabilities
  - User commands can be at various levels of complexity
  - Control system merges instructions and autonomous operation
- Interacts with a variety of humans
  - Humans have to feel "comfortable" around robots
  - Robots have to communicate intentions in a natural way

Example: Minerva the Tour Guide Robot (CMU/Bonn)
- CMU Robotics Institute; http://www.cs.cmu.edu/thrun/movies/minerva.mpg

Intuitive Robot Interfaces: Command Input
- Graphical programming interfaces
  - Users construct policies from elemental blocks
  - Problems: requires substantial understanding of the robot
- Deictic (pointing) interfaces
  - Humans point at desired targets in the world, or targets are specified on a computer screen
  - Problems: how to interpret human gestures?
- Voice recognition
  - Humans instruct the robot verbally
  - Problems: speech recognition is very difficult; robot actions corresponding to words have to be defined

Intuitive Robot Interfaces: Robot-Human Interaction
- The robot has to be able to communicate its intentions to the human
- Output has to be easy for humans to understand
- The robot has to be able to encode its intention
- The interface has to keep the human's attention without annoying her
- Robot communication devices:
  - Easy to understand computer screens
  - Speech synthesis
  - Robot "gestures"

Example: The Nursebot Project
- CMU Robotics Institute; http://www.cs.cmu.edu/thrun

Human-Robot Interfaces
- Existing technologies
  - Simple voice recognition and speech synthesis
  - Gesture recognition systems
  - On-screen, text-based interaction
- Research challenges
  - How to convey robot intentions?
  - How to infer user intent from visual observation (how can a robot imitate a human)?
  - How to keep the attention of a human on the robot?
  - How to integrate human input with autonomous operation?

Integration of Commands and Autonomous Operation
- Adjustable autonomy: the robot can operate at varying levels of autonomy
- Operational modes:
  - Autonomous operation
  - User operation / teleoperation
  - Behavioral programming
  - Following user instructions
  - Imitation
- Types of user commands:
  - Continuous, low-level instructions (teleoperation)
  - Goal specifications
  - Task demonstrations

Example System
[Image slide]

Social Robot Interactions
- To make robots acceptable to average users they should appear and behave "natural"
- Attentional robots
  - Robot focuses on the user or the task
  - Attention forms the first step to imitation
- Emotional robots
  - Robot exhibits "emotional" responses
  - Robot follows human social norms for behavior
  - Better acceptance by the user (users are more forgiving)
  - Human-machine interaction appears more "natural"
  - Robot can influence how the human reacts

Social Robot Example: Kismet
- MIT AI Lab; http://www.ai.mit.edu

Social Robot Interactions
- Advantages:
  - Robots that look human and that show "emotions" can make interactions more "natural"
  - Humans tend to focus more attention on people than on objects
  - Humans tend to be more forgiving when a mistake is made if it looks "human"
  - Robots showing "emotions" can modify the way in which humans interact with them
- Problems:
  - How can robots determine the right emotion?
  - How can "emotions" be expressed by a robot?

Human-Robot Interfaces for Intelligent Environments
- Robot interfaces have to be easy to use
  - Robots have to be controllable by untrained users
  - Robots have to be able to interact not only with their owner but also with other people
- Robot interfaces have to be usable at the human's discretion
  - Human-robot interaction occurs on an irregular basis
  - Frequently the robot has to operate autonomously
  - Whenever user input is provided the robot has to react to it
- Interfaces have to be designed human-centric
  - The role of the robot is to make the human's life easier and more comfortable (it is not just a tech toy)

Adaptation and Learning for Robots in Smart Homes
- Intelligent Environments are non-stationary and change frequently, requiring robots to adapt
  - Adaptation to changes in the environment
  - Learning to address changes in inhabitant preferences
- Robots in intelligent environments can frequently not be pre-programmed
  - The environment is unknown
  - The list of tasks that the robot should perform might not be known beforehand
  - No proliferation of robots in the home
  - Different users have different preferences

Adaptation and Learning in Autonomous Robots
- Learning to interpret sensor information
  - Recognizing objects in the environment is difficult
  - Sensors provide prohibitively large amounts of data
  - Programming of all required objects is generally not possible
- Learning new strategies and tasks
  - New tasks have to be learned on-line in the home
  - Different inhabitants require new strategies even for existing tasks
- Adaptation of existing control policies
  - User preferences can change dynamically
  - Changes in the environment have to be reflected

Learning Approaches for Robot Systems
- Supervised learning by teaching
  - Robots can learn from direct feedback from the user that indicates the correct strategy
  - The robot learns the exact strategy provided by the user
- Learning from demonstration (imitation)
  - Robots learn by observing a human or a robot perform the required task
  - The robot has to be able to "understand" what it observes and map it onto its own capabilities
- Learning by exploration
  - Robots can learn autonomously by trying different actions and observing their results
  - The robot learns a strategy that optimizes reward

Learning Sensory Patterns
[Image: a chair]
- Learning to identify objects
- How can a particular object be recognized?
- Programming recognition strategies is difficult because we do not fully understand how we perform recognition
- Learning techniques permit the robot system to form its own recognition strategy
- Supervised learning can be used by giving the robot a set of pictures and the corresponding classification (a toy sketch follows below)
  - Neural networks
  - Decision trees
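A toy illustration of the supervised-learning idea on this slide, using a scikit-learn decision tree (assumed to be installed); the hand-crafted "features" merely stand in for whatever a real vision pipeline would extract from the pictures.

```python
# Each "picture" is reduced to an illustrative feature vector
# [height_m, width_m, number_of_legs]; a decision tree learns to
# separate chairs from tables from a handful of labeled examples.
from sklearn.tree import DecisionTreeClassifier

features = [
    [0.90, 0.5, 4],   # chair
    [1.00, 0.4, 4],   # chair
    [0.80, 1.6, 4],   # table
    [0.75, 2.0, 4],   # table
]
labels = ["chair", "chair", "table", "table"]

classifier = DecisionTreeClassifier().fit(features, labels)
print(classifier.predict([[0.95, 0.45, 4]]))   # -> ['chair']
```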

Learning Task Strategies by Experimentation
- Autonomous robots have to be able to learn new tasks even without input from the user
- Learning to perform a task in order to optimize the reward the robot obtains (reinforcement learning; a tabular sketch follows below)
- Reward has to be provided either by the user or the environment
  - Intermittent user feedback
  - Generic rewards indicating unsafe or inconvenient actions or occurrences
- The robot has to explore its actions to determine what their effects are
  - Actions change the state of the environment
  - Actions achieve different amounts of reward
- During learning the robot has to maintain a level of safety
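A minimal tabular Q-learning sketch in the spirit of this slide: the robot explores a toy five-state corridor and learns, from reward alone, that moving right pays off. The environment, learning rate, discount factor, and exploration rate are illustrative assumptions.

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a toy corridor: action 1 moves right, action 0
    moves left, and only reaching the right-most state yields a reward."""
    actions = (0, 1)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        state = 0
        while state != n_states - 1:
            if random.random() < epsilon:                 # explore occasionally
                action = random.choice(actions)
            else:                                         # exploit, breaking ties randomly
                best = max(q[state])
                action = random.choice([a for a in actions if q[state][a] == best])
            next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # Temporal-difference update toward reward plus discounted future value.
            q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
            state = next_state
    return q

for state, values in enumerate(q_learning()):
    print(state, [round(v, 2) for v in values])   # "move right" (action 1) dominates
```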

Example: Reinforcement Learning in a Hybrid Architecture
- Policy acquisition layer: learning tasks without supervision
- Abstract plan layer: learning a system model; basic state space compression
- Reactive behavior layer: initial competence and reactivity
- Example task: learning to walk

Scaling Up: Learning Complex Tasks from Simpler Tasks
- Complex tasks are hard to learn since they involve long sequences of actions that have to be correct in order for reward to be obtained
- Complex tasks can be learned as shorter sequences of simpler tasks
- Control strategies that are expressed in terms of subgoals are more compact and simpler
  - Fewer conditions have to be considered if simpler tasks are already solved
  - New tasks can be learned faster
- Hierarchical reinforcement learning
  - Learning with abstract actions
  - Acquisition of abstract task knowledge
- Example: learning to walk

Conclusions
- Robots are an important component in Intelligent Environments
  - Automate devices
  - Provide physical services
- Robot systems in these environments need particular capabilities
  - Autonomous control systems
  - Simple and natural human-robot interfaces
  - Adaptive and learning capabilities
  - Robots have to maintain safety during operation
- While a number of techniques to address these requirements exist, no functional, satisfactory solutions have yet been developed
- Only very simple robots for single tasks in intelligent environments exist
