Robotics Channel

Robotics is a multidisciplinary field that combines electrical and mechanical engineering with sensing, control, context recognition, and decision making into a single system. In essence, robotics is embedded design made obviously visible to the end user. This series focuses on the tools, technologies, issues, and opportunities facing designers who specify, design, build, and evolve autonomous and robotic systems.

Robotics and autonomous systems

Tuesday, July 27th, 2010 by Robert Cravotta

Robotics is embedded design made visible. It is one of the few ways that users and designers can see and understand the rate of change in embedded technology. The various sensing and actuating subsystems are not the end-system, nor does the user much care how they are implemented, but both user and designer can recognize how each of those subsystems contributes, at a high level of abstraction, to the behavior of the end-system.

The state of the art for robotic systems keeps improving. Robots are no longer limited to military applications; they are entrenched in the consumer market in the form of toys and cleaning robots. Aqua Products and iRobot are two companies that sell consumer robots that clean pools, carpets, roof gutters, and hard floors.

A recent video from the GRASP (General Robotics Automation Sensing and Perception) Laboratory at the University of Pennsylvania demonstrates aggressive maneuvers by an autonomous flying quadrotor (or quadrocopter). The video shows the quadrotor autonomously sensing and adjusting for obstacles, as well as executing and recovering from complex flight maneuvers.

An even more exciting capability is groups of autonomous robots that are able to work together toward a single goal. A recent video demonstrates multiple quadrotors flying together to carry a rigid structure. At this point, the demonstration only involves rigid structures, and I have not yet been able to confirm whether the cooperative control mechanism can extend to carrying non-rigid structures.

Building robots that can autonomously work together in groups is a long-term goal. There are robot soccer competitions that groups such as FIRA and RoboCup sponsor throughout the year to promote interest and research into cooperative robots. However, building packs of cooperating robots is not limited to games. Six development teams were recently announced as finalists for the inaugural MAGIC (Multi Autonomous Ground-Robotic International Challenge) event.

Robotics relies on the integration of software, electronics, and mechanical systems. Robotic systems need to coordinate sensing of the external world with their own internal self-state in order to navigate through the real world and accomplish a task or goal. As robotic systems continue to mature, they are incorporating more context recognition of their surroundings, self-state, and goals, so that they can perform effective planning. Lastly, multiprocessing concepts are being put to a practical test, not only within a single robot but also across packs of robots. Understanding what does and does not work with robots may strongly influence the next round of innovations within embedded designs as they adopt and implement more multiprocessing concepts.
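To make that coordination concrete, here is a minimal sense-plan-act loop sketched in C. All of the types, function names, and sensor values are hypothetical stand-ins invented for this sketch, not taken from any particular robot; the point is only the shape of the loop, which fuses an external world view with internal self-state before commanding the actuators.

```c
/* Minimal sense-plan-act loop sketch; all names and values are
 * hypothetical placeholders, not a real robot's API. */
#include <stdio.h>

typedef struct { float obstacle_range_m; } world_view_t;  /* external world */
typedef struct { float wheel_speed_mps; } self_state_t;   /* internal state */

/* Stubbed sensors: a real system would read LIDAR/cameras and encoders. */
static world_view_t read_external_sensors(void) { return (world_view_t){ 2.0f }; }
static self_state_t estimate_self_state(void)   { return (self_state_t){ 0.5f }; }

static float plan_speed(world_view_t w, self_state_t s)
{
    /* Slow to a stop near obstacles; otherwise ramp up gently from the
     * current measured speed rather than jumping to the target. */
    float target = (w.obstacle_range_m > 1.0f) ? 1.0f : 0.0f;
    float ramp   = 0.1f;
    if (target > s.wheel_speed_mps + ramp)
        target = s.wheel_speed_mps + ramp;
    return target;
}

int main(void)
{
    for (int tick = 0; tick < 5; ++tick) {
        world_view_t world = read_external_sensors();
        self_state_t self  = estimate_self_state();
        printf("tick %d: commanded speed %.2f m/s\n",
               tick, plan_speed(world, self));
    }
    return 0;
}
```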

Question of the Week: Is there a fundamental difference in the skills required to build visibly moving robots versus autonomous embedded control subsystems?

Wednesday, May 5th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

A few weeks ago, I asked whether robotics engineering is different enough from embedded engineering to warrant being treated as a separate discipline. I asked this question because I think a general engineering curriculum is valuable – especially at the bachelor level. Engineers should at least have exposure to and experience with a broad set of skills and tools, because you never know which one will be the right one for a given problem, and it seems a shame to confine multidiscipline skills and tools to a narrow topic when they really apply to any embedded system that deals with real-world interfaces and control.

I offered three examples in the original post: my clothes-washing machine and the braking and stability-control systems resident in many high-end automobiles. Differential braking systems are fairly sophisticated, and they involve much more than just a software control system: they must account for the complex interactions of friction, steering, engine force, and inertia in order to accomplish their coordinated task correctly. The same can be doubly said for stability-control systems that work to prevent vehicles from skidding and rolling over. The humble washing machine is a modern marvel. I have watched my washing machine get into an unbalanced condition; instead of walking and banging around like its predecessor, it performs a set of gyrations that actually corrects the imbalance and allows it to work at higher efficiency even with heavy, bulky loads. Each of these examples is a collection of interconnected embedded systems working together to affect the overall system in some mechanical fashion.
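As an illustration of the kind of closed-loop correction the washing machine performs, consider the speculative sketch below. The vibration threshold, the accelerometer reading, and the redistribution routine are all invented for the example; actual washer firmware is certainly more involved.

```c
/* Hypothetical spin-cycle imbalance correction; threshold and
 * redistribution behavior are invented for illustration. */
#include <stdio.h>

#define VIBRATION_LIMIT_G 1.5f  /* assumed threshold, not a real spec */

/* Simulated accelerometer: vibration drops after each redistribution pass. */
static float g_vibration = 2.4f;
static float read_drum_vibration_g(void)  { return g_vibration; }
static void  tumble_to_redistribute(void) { g_vibration -= 0.6f; }

int main(void)
{
    /* Before ramping to full spin speed, check balance and correct it
     * rather than letting the drum walk and bang around. */
    while (read_drum_vibration_g() > VIBRATION_LIMIT_G) {
        printf("vibration %.1f g too high; tumbling to redistribute load\n",
               read_drum_vibration_g());
        tumble_to_redistribute();
    }
    printf("balanced (%.1f g); ramping to full spin speed\n",
           read_drum_vibration_g());
    return 0;
}
```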

In thinking about this question, I remembered spending time with the Velodyne brothers, David and Bruce Hall, during the earlier DARPA Grand Challenges and their TeamDAD autonomous automobile. I felt like a kindred spirit with these gentlemen, as I had also worked on autonomous vehicles almost 15 years earlier. In addition to their innovative LIDAR sensor system, I remember their answer to how they got involved in autonomous automobiles when their background was premium subwoofers: they use motion-feedback technology in their subwoofers, and it is not a large leap to apply those concepts to motion control for robots.
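The shared idea is a servo loop that compares measured motion against commanded motion and corrects the drive signal. The toy PD controller below illustrates that loop on a simplified double-integrator plant; the gains, time step, and plant model are assumptions made for the sketch, and Velodyne's actual servo design is not described here.

```c
/* Toy motion-feedback (PD) loop: measure position, compare to the
 * command, and correct the drive signal. Gains and plant are invented. */
#include <stdio.h>

int main(void)
{
    const float kp = 100.0f, kd = 20.0f;  /* assumed, roughly critically damped */
    const float dt = 0.01f;               /* 100 Hz control loop */
    float pos = 0.0f, vel = 0.0f;
    float target = 1.0f, prev_err = target;

    for (int step = 0; step < 100; ++step) {
        float err   = target - pos;  /* commanded vs. measured motion */
        float drive = kp * err + kd * (err - prev_err) / dt;
        prev_err = err;
        vel += drive * dt;           /* toy plant: drive acts as acceleration */
        pos += vel * dt;
    }
    printf("settled near target: pos = %.3f\n", pos);
    return 0;
}
```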

In a similar fashion, none of the examples above looks like a robot that exhibits obvious motion, and yet they all use motion feedback to accomplish their primary tasks. The relevance here is that robots likewise use a collection of interconnected embedded systems working together to achieve mechanical degrees of freedom of motion in the real world, a trait that, based on multiple online conversations I have observed, many people treat as a necessity for an autonomous system to be considered a robot. None of the examples is limited to just embedded software; they are all firmly entrenched in multidiscipline mechanical control and manipulation.

Is there a fundamental difference in the skills required to build visibly moving robots versus autonomous embedded control subsystems? My experience says they are the same set of skills.

Question of the Week: Is robotics engineering different enough from embedded engineering to warrant being treated as a separate discipline?

Wednesday, April 7th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

Robotics engineering seems to be gaining momentum as a formal engineering discipline. Worcester Polytechnic Institute just became the third U.S. university to offer a doctoral program in Robotics Engineering; the university also offers Bachelor's and Master's programs in Robotics Engineering. The interdisciplinary programs draw on faculty from the Computer Science, Electrical and Computer Engineering, and Mechanical Engineering departments. I fear, though, that there is ambiguity about the type of engineering that goes into building robots versus “smart” embedded systems.

When I worked on a hands-on robotics project, I noticed parallels between the issues designers have to address regardless of whether they are working on robotic designs or embedded semi-autonomous subsystems. Additionally, relying on interdisciplinary skills is not unique to robotics – many embedded systems rely on the same sets of interdisciplinary skills.

From my own experience working with autonomous vehicles, I know that these systems can sense the world in multiple ways (for example, inertially and visually); they have a set of goals to accomplish; they have the means to move through, interact with, and affect the world around them; and they are “smart enough” to adjust their behavior based on how the environment changes. We never referred to these systems as robots, and I never thought to apply the word robot to them until I worked on this hands-on project.
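One common way such a vehicle might combine inertial and visual sensing is a complementary filter: integrate the gyro for fast response, then blend toward a drift-free visual fix. The sketch below shows the idea; the update rate, blend weights, and simulated measurements are assumptions made for illustration, not details from the vehicles I worked on.

```c
/* Complementary-filter sketch fusing a (simulated) gyro rate with a
 * (simulated) vision-derived heading; tuning values are assumed. */
#include <stdio.h>

int main(void)
{
    const float dt = 0.02f;     /* assumed 50 Hz update */
    float heading  = 0.0f;      /* fused estimate, radians */

    for (int step = 0; step < 50; ++step) {
        float gyro_rate  = 0.10f;                    /* rad/s, from an IMU */
        float vision_fix = 0.10f * (step + 1) * dt;  /* landmark-based heading */

        /* Gyro integration tracks fast changes; the small blend toward the
         * visual estimate pulls out slow drift. */
        heading += gyro_rate * dt;
        heading  = 0.98f * heading + 0.02f * vision_fix;
    }
    printf("fused heading after 1 s: %.3f rad\n", heading);
    return 0;
}
```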

Defining what makes something a robot is not clearly established. I found a description of robots from the Robot Institute of America (1979) that says a robot is “A reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks.” Our autonomous vehicles met that description: they were reprogrammable, and they could maneuver through six degrees of freedom to accomplish a variety of tasks. Despite this, I think it would still be difficult to get people to call them robots.

Additionally, it seems there are many embedded subsystems, such as the braking systems or stability-control systems resident in many high-end automobiles, that might also fit this description—but we do not call them robots either. Even my clothes-washing machine can sense and change its behavior based on how the cleaning cycle is or is not proceeding according to a predicted plan; the system can compensate for many anomalous behaviors. These systems can sense the world in multiple ways, they make increasingly complex decisions as their designs continue to mature, they meaningfully interact with the physical world, and they adjust their behavior based on arbitrary changes in the environment.

It seems to me that the principles identified as fundamental to a robotics curriculum should be taught to all engineers and embedded developers – not just robotics majors. Do you think this is a valid concern? Are there any skills that are unique to robotics engineering that warrant a new specialized curriculum versus being part of an embedded engineering curriculum?