[Editor's Note: This was originally posted on the Embedded Master]
A few weeks ago, I asked whether robotics engineering is different enough from embedded engineering to warrant treatment as a separate discipline. I asked because I believe a general engineering curriculum is valuable, especially at the bachelor's level. Engineers should have exposure to, and experience with, a broad set of skills and tools because you never know which one will be the right one for a given problem, and it seems a shame to confine multidiscipline skills and tools to a narrow topic when they really apply to any embedded system that interfaces with and controls the real world.
I offered three examples in the original post: my clothes-washing machine, and the braking and stability-control systems resident in many high-end automobiles. Differential braking systems are fairly sophisticated, and they involve much more than a software control system: they must model the complex interactions of friction, steering, engine force, and inertia to accomplish their coordinated task correctly. The same is doubly true of stability-control systems, which work to keep vehicles from skidding and rolling over. Even the humble washing machine is a modern marvel. I have watched mine get into an unbalanced condition; instead of walking and banging around like its predecessor, it performs a set of gyrations that actually corrects the imbalance, letting it work at higher efficiency even with heavy, bulky loads. Each of these examples is a collection of interconnected embedded systems working together to affect the overall system in some mechanical fashion.
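That imbalance-correcting gyration is, at heart, a closed feedback loop: measure the imbalance, redistribute the load, and repeat until a high-speed spin is safe. Here is a minimal sketch of that loop. The scalar imbalance measurement, the `redistribute` model, and all thresholds are invented for illustration; a real washer senses drum vibration with accelerometers or motor-current ripple and commands actual drum motions.

```python
# Hypothetical sketch of a washing machine's imbalance-correction loop.
# The sensing and redistribution models here are invented for illustration.

def redistribute(imbalance_kg: float, correction_gain: float = 0.4) -> float:
    """Model one 'gyration': a slow tumble that shifts part of the
    off-center mass back toward an even distribution."""
    return imbalance_kg * (1.0 - correction_gain)

def balance_before_spin(imbalance_kg: float,
                        threshold_kg: float = 0.1,
                        max_attempts: int = 20) -> tuple:
    """Closed-loop control: measure the imbalance, tumble to
    redistribute the load, and repeat until the load is balanced
    enough to permit a high-speed spin (or we give up)."""
    attempts = 0
    while imbalance_kg > threshold_kg and attempts < max_attempts:
        imbalance_kg = redistribute(imbalance_kg)
        attempts += 1
    return imbalance_kg, attempts

# Starting 2 kg off-center, the loop converges below the 0.1 kg threshold.
residual, tries = balance_before_spin(2.0)
```

The point is not the arithmetic but the structure: sense, actuate, re-sense. That loop is the same whether the plant is a drum of wet towels or a skidding car.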
In thinking about this question, I remembered spending time with the Velodyne brothers, David and Bruce Hall, during the earlier DARPA Grand Challenges and their TeamDAD autonomous automobile. I felt a kinship with these gentlemen, as I had worked on autonomous vehicles almost 15 years earlier. In addition to their innovative LIDAR sensor system, I remember their answer when asked how they got involved in autonomous automobiles when their background was premium subwoofers: they use motion-feedback technology in their subwoofers, and it is not a large leap to apply those concepts to motion-control technology for robots.
In a similar fashion, none of the examples above looks like a robot exhibiting obvious motion, and yet they all use motion feedback to accomplish their primary tasks. The relevance here is that robots, too, are collections of interconnected embedded systems working together to achieve mechanical degrees of freedom in the real world. In multiple online conversations I have observed a bias toward treating such visible motion as a necessity for an autonomous system to be considered a robot. None of the examples is limited to embedded software; all are firmly entrenched in multidiscipline mechanical control and manipulation.
Is there a fundamental difference in the skills required to build visibly moving robots versus autonomous embedded control subsystems? My experience says they are the same set of skills.