Articles by Robert Cravotta

As a former Technical Editor covering Embedded Processing at EDN, Robert has been following and commenting on the embedded processing space since 2001 (see article index). His expertise includes software development and system design using microprocessors, microcontrollers, digital signal processors (DSPs), multiprocessor architectures, processor fabrics, coprocessors, and accelerators, plus embedded cores in FPGAs, SOCs, and ASICs. Robert's embedded engineering background includes 16 years as a Member of the Technical Staff at Boeing and Rockwell International working on path-finding avionics, power and laser control systems, autonomous vehicles, and vision sensing systems.

Travelling the Road of Natural Interfaces

Thursday, July 28th, 2011 by Robert Cravotta

The forms for interfacing between humans and machines are constantly evolving, and the rate at which new forms of human-machine interfacing appear seems to be increasing. Long gone are the days of using punch cards and card readers to tell a computer what to do. Most contemporary users are unaware of what a command-line prompt and optional arguments are. Contemporary touch, gesture, stylus, and spoken-language interfaces threaten to make the traditional hand-shaped mouse a quaint and obsolete idea.

The road from idea, to experimental implementations, to production forms for human interfaces usually spans many attempts over years. For example, the first computer mouse prototype was made by Douglas Engelbart, with the assistance of Bill English, at the Stanford Research Institute in 1963. The computer mouse became a public term and concept around 1965, when it was associated with a pointing device in Bill English's publication "Computer-Aided Display Control." Even though the mouse was available as a pointing device for decades, it did not become ubiquitous until the release of Microsoft Windows 95. The sensing mechanisms for the mouse evolved through mechanical methods using wheels or balls to detect when and how the user moved the mouse. The mechanical methods have been widely replaced by optical implementations based on LEDs and lasers.

3D pointing devices started to appear in the market in the early 1990s, and they have continued to evolve and grow in their usefulness. 3D pointing devices provide positional data along at least 3 axes, with contemporary devices often supporting 6 degrees of freedom (3 positional and 3 angular axes). Newer 9-degrees-of-freedom sensors (the additional 3 axes are magnetic compass axes), such as those from Atmel, are approaching integration levels and price points that practically ensure they will find their way into future pointing devices. Additional measures of sensitivity for these types of devices may include temperature and pressure sensors. 3D pointing devices like Nintendo's Wii remote combine spatial and inertial sensors with vision sensing in the infrared spectrum, relying on a light bar with two infrared light sources spaced at a known distance from each other.

Touch Interfaces

The release of Apple's iPhone marked the tipping point for touch screen interfaces. However, the IBM Simon smartphone predates the iPhone by nearly 14 years, and it sported similar, if primitive, support for a touchscreen interface. Like many early versions of human-machine interfaces released before the tipping point of market acceptance, the Simon did not enjoy the same market-wide adoption as the iPhone.

Touchscreen interfaces span a variety of technologies including capacitive, resistive, inductive, and visual sensing. Capacitive touch sensing technologies, along with the software necessary to support them, are offered by many semiconductor companies. The capacitive touch market has not yet undergone the culling that so many other technologies experience as they mature. Resistive touch sensing technology has been in production use for decades, and many semiconductor companies still offer resistive touch solutions; resistive technologies may stay competitive with capacitive touch by harnessing larger and more expensive processors to deliver better signal-to-noise performance. Vision-based touch sensing is still a relatively young technology that exists in higher-end implementations, such as the Microsoft Surface, but as the price of the sensors and the compute performance needed for vision-based sensing continues to drop, it may move into direct competition with the aforementioned touch sensing technologies.

Touch interfaces have evolved from the simple drop, lift, drag, and tap model of touch pads to supporting complex multi-touch gestures such as pinch, swipe, and rotate. However, the number and types of gestures that touch interface systems can support will explode in the near future as touch solutions continue to ride Moore's law, pushing more compute processing and larger gesture databases into the system at negligible additional cost and energy consumption. In addition to gestures that touch a surface, touch commands are beginning to incorporate proximity or hover processing for capacitive touch.

Examples of these expanded gestures include using more than two touch points, such as placing multiple fingers from one or both hands on the touch surface and performing a personalized motion. A gesture can consist of nearly any repeatable motion, including time-sensitive swipes and pauses, and it can be tailored to each individual user. As the market moves closer to a cloud computing and storage model, this type of individual tailoring becomes even more valuable because the cloud will let users untether themselves from a specific device and access their personal gesture database on many different devices.

Feedback latency to the user is an important measurement and a strong limiter on the adoption rate of expanded human interface options that include more complex gestures and/or speech processing. A latency target of about 100ms has consistently been the basic advice for feedback responses for decades (Miller, 1968; Myers 1985; Card et al. 1991) for user interfaces; however, according to the Nokia Forum, for tactile responses the latency should be kept under 20ms or the user will start to notice the delay between a user interface event and the feedback. Staying within these response-time limits constrains how complex a gesture a system can handle while still providing satisfactory response times to the user. Some touch sensing systems can handle single-touch events satisfactorily but can, under the right circumstances, cross the latency threshold and become inadequate for handling two-touch gestures.
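As a concrete illustration of budgeting for these limits, the hedged sketch below checks the elapsed time between a touch event and its feedback against the 100ms and 20ms figures cited above. The POSIX clock_gettime() call and the function names are illustrative assumptions; a bare-metal design would substitute its own timer source.

```c
/* Sketch of checking a feedback-latency budget in a touch event loop.
   The 100 ms / 20 ms figures come from the article text; clock_gettime()
   is POSIX, so a bare-metal target would use its own hardware timer.    */
#include <stdio.h>
#include <time.h>

#define UI_BUDGET_MS      100L   /* visual/UI feedback budget             */
#define TACTILE_BUDGET_MS  20L   /* tactile feedback budget (Nokia Forum) */

static long elapsed_ms(const struct timespec *start, const struct timespec *end)
{
    return (end->tv_sec - start->tv_sec) * 1000L +
           (end->tv_nsec - start->tv_nsec) / 1000000L;
}

/* Call when the feedback for a touch event has been issued. */
void check_feedback_latency(const struct timespec *touch_event_time, int is_tactile)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    long latency = elapsed_ms(touch_event_time, &now);
    long budget  = is_tactile ? TACTILE_BUDGET_MS : UI_BUDGET_MS;

    if (latency > budget)
        fprintf(stderr, "feedback late: %ld ms (budget %ld ms)\n", latency, budget);
}
```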

Haptic feedback provides a tactile sensation, such as a slight vibration, to give the user immediate acknowledgement that the system has registered an event. This type of feedback is useful in noisy environments where a sound or beep is insufficient, and it can allow the user to operate the device without relying on visual feedback. An example is when a user taps a button on the touch screen and the system signals the tap with a vibration. The Nokia Forum goes on to recommend that tactile feedback be short (less than 50ms) and not exaggerated, to keep the sensations pleasant and meaningful to the user. Vibrating the system too much or too often makes the feedback meaningless to the user and risks draining any batteries in the system. Tactile feedback should also be coupled with visual feedback.

Emerging Interface Options

An emerging form of tactile feedback involves simulating texture on the user's fingertip (Figure 1). Tesla Touch is currently demonstrating this technology, which does not rely on the mechanical actuators typically used in haptic feedback approaches. The technology simulates textures by applying and modulating a periodic electrostatic charge across the touch surface. By varying the sign (and possibly the magnitude) of the charge, the electrons in the user's fingertip are drawn toward or away from the surface – effectively creating a change in friction on the touch surface. Current prototypes are able to use signals as low as 8V to generate tactile sensations. No electric charge passes through the user.

Pranav Mistry of the Fluid Interfaces Group at the MIT Media Lab has demonstrated a wearable gesture interface setup that combines digital information with the physical world through hand gestures and a camera sensor. The project is built with commercially available parts consisting of a pocket projector, a mirror, and a camera. The current prototype costs approximately $350 to build. The projector projects visual information onto surfaces, such as walls and physical objects within the immediate environment. The camera tracks the user's hand gestures and physical objects. The software processes the camera video stream and tracks the locations of colored markers on the tips of the user's fingers. Interpreted hand gestures act as commands for the projector and digital information interfaces.

Another researcher/designer is Fabian Hemmert, whose projects explore emerging haptic feedback techniques, including shape-changing and weight-shifting devices. His latest public projects include adding friction to a touch screen stylus; the feedback works through the stylus rather than through the user's fingers as in the Tesla Touch approach. The idea is that this reflective tactile feedback can prioritize displayed information, provide inherent confirmation of a selection by making the movement of the stylus heavier or lighter, and take advantage of the user's manual dexterity by providing friction similar to writing on a surface – something the user is already familiar with.

The Human Media Lab recently unveiled and is demonstrating a "paper bending" interface that takes advantage of E Ink's flexible display technology (Figure 2). The research team suggests that bending a display, such as to page forward, shows promise as an interaction mechanism. The team identified six simple bend gestures, out of 87 possible, that users preferred, based around bending forward or backward at two corners or the outside edge of the display. The research team identifies potential uses for bend gestures when the user is wearing gloves that inhibit interacting with a touch screen. Bend gestures may also prove useful to users with motor-skill limitations that inhibit the use of other input mechanisms. Bend gestures may also offer a means to engage a device without requiring visual confirmation of an action.

In addition to supporting commands that are issued via bending the display, the approach allows a single display to operate in multiple modes. The Snaplet project is a paper computer that can act as a watch and media player when wrapped like a bracelet on the user’s arm. It can function as a PDA with notepad functionality when held flat, and it can operate as a phone when held in a concave shape. The demonstrated paper computer can accept, recognize, and process touch, stylus, and bend gestures.

If the experiences of the computer mouse and touch screens are any indication of what these emerging interface technologies are in for, each of these approaches will go through a number of iterations before it evolves into something else or happens upon the proper mix of technology, low-cost and low-power parts, sufficient command expression, and acceptable feedback latency to hit the tipping point of market adoption.

What tools do you use to program multiple processor cores?

Wednesday, July 27th, 2011 by Robert Cravotta

Developers have been designing and building multi-processor systems for decades. New multicore processors are entering the market on a regular basis. However, it seems that the market for new development tools that help designers analyze, specify, code, test, and maintain software targeting multi-processor systems is lagging further and further behind the hardware offerings.

A key function of development tools is to help abstract the complexity that developers must deal with to build the systems they are working on. The humble assembler abstracted the zeros and ones of machine code into more easily remembered mnemonics that enabled developers to build larger and more complex programs. Likewise, compilers have been evolving to provide yet another important level of abstraction for programmers and have all but replaced assemblers for the vast majority of software projects. A key value of operating systems is that they abstract the configuration, access, and scheduling of the growing number of hardware resources in a system away from the developer.

If multicore and multi-processor designs are to experience an explosion in use in the embedded and computing markets, it seems that development tools should provide more abstractions to simplify the complexity of building with these significantly more complex processor configurations.

In general, programming languages do not understand the concept of concurrency, and the extensions that do exist usually require the developer to explicitly identify where and when such concurrency exists. Developing software as a set of threads is one approach for abstracting concurrency; however, it is not clear how a threading design method will scale as systems approach ever larger numbers of cores. How do you design a system with enough threads to occupy more than a thousand cores – or is that even the right question?
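As an illustration of how explicit that identification typically is, here is a minimal POSIX-threads sketch that splits a trivial summation across a fixed number of worker threads. The work partitioning, the thread count, and the joining of results are all spelled out by the developer rather than inferred by the language; the names and the toy workload are mine, not from any particular tool.

```c
/* Sketch: with POSIX threads the developer must explicitly carve the work
   into parallel units -- the language itself has no notion of concurrency. */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define N 1000000

static double partial_sum[NUM_WORKERS];

static void *worker(void *arg)
{
    int id = *(int *)arg;
    double sum = 0.0;
    /* Each thread is handed an explicit slice of the index space. */
    for (int i = id; i < N; i += NUM_WORKERS)
        sum += (double)i;
    partial_sum[id] = sum;
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];
    int ids[NUM_WORKERS];

    /* The developer decides how many threads exist and what each one does. */
    for (int i = 0; i < NUM_WORKERS; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }

    double total = 0.0;
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_join(threads[i], NULL);
        total += partial_sum[i];
    }
    printf("total = %.0f\n", total);
    return 0;
}
```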

What tools do you use when programming a multicore or multi-processor system? Does your choice of programming language and compiler reduce your complexity in such designs or does it require you to actively engage more complexity by explicitly identifying areas for parallelism? Do your debugging tools provide you with adequate visibility and control of a multicore/multi-processor system to be able to understand what is going on within the system without requiring you to spend ever more time at the debugging bench with each new design? Does using a hypervisor help you, and if so, what are the most important functions you look for in a hypervisor?

Will flying cars start showing up on the road?

Wednesday, July 20th, 2011 by Robert Cravotta

People have been dreaming of flying cars for decades. The Aerocar made its first flight in 1949; however, it never entered production. The Terrafugia Transition recently passed a significant milestone when it was cleared for takeoff by the U.S. National Highway Traffic Safety Administration. Does this mean flying cars will soon start appearing on the roads? To clarify, these vehicles are not so much flying cars as roadable light sport aircraft – in essence, aircraft that could be considered legal to drive on the streets. The approximately $230,000 price tag is also more indicative of an aircraft than an automobile.

The Transition incorporates automotive safety features such as a purpose-built energy absorbing crumple zone, a rigid carbon fiber occupant safety cage, and automotive-style driver and passenger airbags. According to the company, the Transition can take off or land at any public use general aviation airport with at least 2,500′ of runway. On the ground, the Transition can be driven on any road and parked in a standard parking space or household garage. The wings can fold and be stowed vertically on the sides of the vehicle in less than 30 seconds. Pilots will need a Sport Pilot license to fly the vehicle, which requires a minimum of 20 hours of flight time and passing a simple practical test in the aircraft. Drivers will also need a valid driver’s license for use on the ground.

So what makes this vehicle different from the many earlier, and unsuccessful, attempts at bringing a flying car or roadable aircraft to market? In addition to relying on modern engines and composite materials, this vehicle benefits from computer-based avionics. Are modern embedded systems sufficiently advanced and powerful to finally push the dream of a roadable aircraft into reality within the next few years? Or will such a dual-mode vehicle make more sense only after automobiles are better able to drive themselves around on the ground? While the $230,000 price tag will limit how many people can gain access to one of these vehicles (if they make it to production), I wonder whether aircraft flying into homes will become an issue. Is this just another pipe dream, or are things different enough this time around that such a vehicle may start appearing on our roads?

Will the Internet become obsolete?

Wednesday, July 13th, 2011 by Robert Cravotta

I saw an interesting question posed in a video the other day: “How much money would someone have to pay you to give up the internet for the rest of your life?” A professor in the video points out the huge gap between the value of using the Internet and the cost to use it. An implied assumption in the question is that the Internet will remain relevant throughout your entire lifetime, but the more I thought about the question, the more I began to wonder if that assumption is reasonable.

While there are many new technologies, devices, and services available today that did not exist a few decades ago, there is no guarantee that any of them will exist a few decades hence. I recently discovered a company that makes custom tables, and their comment on not integrating technology into their table designs illustrates an important point.

“We are determined to give you a table that will withstand the test of time. For example, if you wanted a music player in your table in the 1970s, you wanted an 8-track tape deck, 1980s a cassette tape deck, 1990s a CD player, 2000s an iPod docking station, 2010s a streaming device, and 2020s small spike that you impale into the listener’s tympanic bone, which is now the only way to listen to music, rendering the installation of any of the previous technology a useless scar upon your beautiful table. (No, we don’t actually know if that last one is where music is heading, but if it does, you heard it here first.) The same goes for laptop electrical cords. We can install attachments to deal with power cords, but at the rate battery technology is changing, like your cellular phone or mp3 player, you may just have a docking station you set it on at night, rendering the need for cords obsolete.”

I have seen a number of electronic technologies disappear from my own home and work office over the past few years. When I first set up a home office, I needed a fax machine and a dedicated phone line for it. Both are gone today. I watched as my VHS tape collection became worthless, and as a result my DVD collection is a bit more modest – thank goodness, because now I hardly ever watch DVDs anymore; I can stream almost anything I want to watch on demand. While we still have the expensive and beautiful cameras my wife and I bought, we never use them because our devices with integrated digital cameras offer good enough quality and are much easier and cheaper to use. My children would rather text their friends than actually talk to each other.

So, will the Internet become obsolete in a few decades time as something with more or better functions and is cheaper and easier to use replaces it? I am not sure because the Internet seems to embody a different concept than all of those other technologies that have become obsolete. The Internet is not tied to a specific technology, form factor, access method, or function other than connecting computing devices together.

In a sense, the Internet may be the ultimate embedded system because nearly everyone who uses it does not care about how it is implemented. Abstracting the function of connecting two sites from the underlying implementation may allow the Internet to avoid becoming obsolete and being replaced. Or does it? Some smartphones differentiate themselves by how they access the Internet – 3G or 4G. Those smartphones will certainly become obsolete in a few years because the underlying technology of the Internet will keep changing.

Will the Internet be replaced by something else? If so, what is your guess as to what will replace it? If not, how will it evolve to encompass new functions that currently do not exist? As more people and devices attach to the Internet, will it make sense to have separate infrastructures to support data for human and machine consumption?

What does the last Space Shuttle flight mean?

Wednesday, July 6th, 2011 by Robert Cravotta

The final Space Shuttle launch is scheduled for July 8, 2011. This upcoming event is a bittersweet moment for me and, I suspect, for many other people. I spent many years working in aerospace on projects that included supporting the Space Shuttle Main Engines as well as a payload that was cancelled for political (rather than technical) reasons after two years of pre-launch effort.

Similar to the tip of an iceberg, the Space Shuttle is just the visible face of the launch and mission infrastructure that made up the Space Shuttle program. As with many embedded systems contained within end systems, a huge amount of ground equipment and many technical teams work behind the scenes to make the Space Shuttle a successful endeavor. So one question is – what is the future of that infrastructure once the Space Shuttle program is completely closed down?

While the United States space program has been a largely publicly funded effort for many decades, the door is now opening for private entities to step up and take the stage. I am hopeful this type of shift will enable a resurgence in the space program because more ideas will be able to compete on how best to deliver space-based services, rather than relying on a central group driving the vast majority of the direction that the space program could take. The flurry of aerospace activity and innovation that the Orteig Prize spawned demonstrated that private groups of individuals can accomplish Herculean feats – in this case, flying non-stop across the Atlantic Ocean, in either direction, between New York and Paris.

However, I am not sure that a public prize is necessary to spawn a resurgence in aerospace innovation. There are a number of private space ventures already underway, including Virgin Galactic, SpaceX, as well as those companies in the list of private spaceflight companies on Wikipedia.

Does the end of the Space Shuttle program as it has been for the past few decades mean the space program will change? If so, how will it change – especially the hidden (or embedded) infrastructure? Is space just an academic exercise or are there any private/commercial ventures that you think will crack open the potential of space services that become self-sustaining in a private world?

What game(s) do you recommend?

Thursday, June 30th, 2011 by Robert Cravotta

I have been thinking about how games and puzzles can help teach concepts and strengthen a person's thought patterns for specific types of problem solving. However, there are literally thousands of games available across a multitude of forms, whether card, board, or computer-based. The large number of options can make it challenging to know when one might be particularly well suited to helping you train your mind for a type of design project. Discussion forums like this one can collect lessons learned and make you aware of games or puzzles that others have found useful in exercising their minds – as well as being entertaining.

I have a handful of games that I could suggest, but I will start by offering only one recommendation in the hopes that other people will share their finds and thoughts about when and why the recommendation would be worthwhile to someone else.

For anyone that needs to do deep thinking while taking into account a wide range of conditions from a system perspective, I recommend checking out the ancient game of Go. It is a perfect-information game played between two players, and it has a ranking and handicap system that makes it possible for two players of slightly different strengths to play a game that is challenging for both. Rather than explaining the specifics of the game here, I would instead like to focus on what the game forces you to do in order to play competently.

The rules are very simple – the players alternate turns placing stones on a grid board. The goal of the game is to surround and capture the most territory. The grid is of sufficient size (19×19 points) that your moves have both a short-term and a long-term impact. Understanding the subtlety and depth of the long-term impact of your moves grows in richness with experience and practice – not unlike designing a system in such a way as to avoid shooting yourself in the foot during troubleshooting. If you are too cautious, your opponent will capture too much of the board for your excellent long-term planning to matter. If you play too aggressively – trying to capture as much territory as directly or as quickly as possible – you risk defending what you have laid claim to with a structure that is too weak to withstand any stress from your opponent.

The more I play Go, the more easily I can see how the relationships between decisions and trade-offs affect how well the game – or a project – will turn out. Finding an adequate balance between building a strong structure and progressing at an appropriate pace is a constant exercise in reading your environment and adjusting to changing conditions.

I would recommend Go to anyone that needs to consider the system level impacts of their design decisions. Do you have a game you would recommend for embedded developers? If so, what is it and why might an embedded developer be interested in trying it out?

Touch me (too) tender

Tuesday, June 28th, 2011 by Robert Cravotta

A recent video of a fly activating commands on a touchscreen provides an excellent example of a touchscreen implementation that is too sensitive. In the video, you can see the computing system interpreting the fly’s movements as finger taps and drags. Several times the fly’s movement causes sections of text to be selected and another time you can see selected text that is targeted for a drag and drop command. Even when the fly just momentarily bounces off the touchscreen surface, the system detects and recognizes that brief contact as a touch command.

For obvious reasons, such oversensitivity in a touchscreen application is undesirable in most cases – unless, that is, the application is meant to detect and track the behavior of flies making contact with a surface. The idea that a fly could accidentally delete your important files or even send sensitive files to the wrong person (thanks to field auto-fill technology) is unpleasant at best.

Touchscreens have been available as an input device for decades, so why is the example of a fly issuing commands only surfacing now? First, the fly in the video is walking and landing on a capacitive touchscreen. Capacitive touch screens became more prevalent in consumer products after the launch of the Apple iPhone in 2007. Because capacitive touch screens rely on the conductive properties of the human finger, a touch command does not necessarily require a minimum amount of physical force to activate.

This contrasts with resistive touch screens which do require a minimum amount of physical force to cause two layers on the touch screen surface to make physical contact with each other. If the touch sensor in the video was a screen with a resistive touch sensor layered over it, the fly would most likely never be able to cause the two layers to make contact with each other by walking across the sensor surface; however, it might be able to make the surfaces contact each other if it forcefully collided into the screen area.

Touchscreens that are too sensitive are analogous to keyboards that do not implement an adequate debounce function for the keys. Just as key presses can be debounced, there are ways that capacitive touch sensors can mitigate spurious inputs such as flies landing on the sensor surface. There are two areas within the sensing system that a designer can work with to filter out unintended touches.

The first area to address in the system is to properly set the gain levels so that noise spikes and small conductive objects (like the feet and body of a fly) do not cross the count threshold that would be interpreted as a touch. Another symptom of an oversensitive capacitive touch sensor is that it may classify a finger hovering over the touch surface as a touch before it makes contact with the surface. Many design specifications for touch systems explicitly state an acceptable distance above the touch surface that can be recognized as a touch (on the order of a fraction of a millimeter). I would share a template for specifying the sensitivity of a touch screen, but the sources I checked with consider that template proprietary information.
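As a rough sketch of what that first area can look like in firmware, the following hypothetical per-node filter compares each raw reading against a baseline, applies separate touch and release thresholds for hysteresis, and slowly tracks baseline drift. The threshold values and structure names are made-up placeholders; real values come from characterizing the sensor's gain against measured noise.

```c
/* Hypothetical thresholds for illustration only -- real values come from
   tuning the sensor's gain against noise measurements.                   */
#include <stdint.h>
#include <stdbool.h>

#define TOUCH_THRESHOLD   120u   /* counts above baseline to accept a touch */
#define RELEASE_THRESHOLD  80u   /* lower threshold adds hysteresis         */

typedef struct {
    uint16_t baseline;   /* long-running average of the untouched sensor   */
    bool     touched;    /* current debounced state of this sensor node    */
} touch_node_t;

/* Classify one raw capacitance reading for one node. Small deltas -- noise
   spikes or a fly's foot -- never cross the touch threshold.              */
bool touch_update(touch_node_t *node, uint16_t raw)
{
    uint16_t delta = (raw > node->baseline) ? (uint16_t)(raw - node->baseline) : 0u;

    if (!node->touched && delta > TOUCH_THRESHOLD)
        node->touched = true;     /* finger-sized signal: register touch    */
    else if (node->touched && delta < RELEASE_THRESHOLD)
        node->touched = false;    /* signal collapsed: register release     */

    /* Slowly track drift in the untouched baseline (temperature, humidity). */
    if (!node->touched)
        node->baseline = (uint16_t)((node->baseline * 15u + raw) / 16u);

    return node->touched;
}
```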

One reason a touch system might be too sensitive is that the gain is set high enough to let the system recognize when the user is using a stylus with a small conductive material within its tip. A stylus tip is sufficiently smaller than a human finger that, without the extra sensitivity in the touch sensor, the sensor will fail to detect the stylus tip near the display surface and the user will not be able to use a stylus. Another reason a touch system could be too sensitive is to accommodate a use case that involves the user wearing gloves. In essence, the user's finger never actually makes contact with the surface (the glove does), and the sensor system must be able to detect the finger through the glove even though it is hovering over the touch surface.

The other area of the system a designer should address to mitigate spurious and unintended touches is shape processing. Capacitive touch sensing is similar to image or vision processing in that the raw data consists of a reading for each "pixel" in the touch area for each cycle or frame of input processing. In addition to looking for peaks or valleys in the pixel values, the shape processing can examine the pixels around the peak/valley to confirm that they form a shape and size consistent with what it expects. Shapes outside the expected set, such as six tiny spots close to each other in the pattern of a fly's body, can be flagged and ignored by the system.
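A hedged sketch of that kind of shape check appears below: each detected blob is rejected if its pixel count or aspect ratio falls outside what a fingertip would plausibly produce. The limits are illustrative only; a real controller would derive them from the sensor's pixel pitch and the expected fingertip contact patch.

```c
/* Hypothetical blob filter: reject detected blobs whose pixel count or
   bounding box falls outside the range expected for a fingertip.       */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t pixel_count;   /* number of sensor pixels above threshold  */
    uint16_t width;         /* bounding-box width in pixels             */
    uint16_t height;        /* bounding-box height in pixels            */
} blob_t;

/* Illustrative limits only; not taken from any particular controller.  */
#define MIN_FINGER_PIXELS   6u
#define MAX_FINGER_PIXELS  60u
#define MAX_ASPECT_X10     25u   /* reject blobs more than 2.5x elongated */

bool blob_is_finger(const blob_t *b)
{
    if (b->pixel_count < MIN_FINGER_PIXELS || b->pixel_count > MAX_FINGER_PIXELS)
        return false;   /* too small (fly, noise) or too large (palm, cheek) */

    uint16_t longer  = (b->width > b->height) ? b->width  : b->height;
    uint16_t shorter = (b->width > b->height) ? b->height : b->width;
    if (shorter == 0u || (longer * 10u) / shorter > MAX_ASPECT_X10)
        return false;   /* elongated shape: edge grip or smeared contact */

    return true;        /* plausible fingertip: pass on to gesture tracking */
}
```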

This also suggests that the shape processing should track context: it needs to remember information between data frames and follow the behavior of each blob of pixels in order to recognize gestures such as pinch and swipe. This is the basis of cheek and palm rejection processing, as well as of ignoring a user's fingers gripping the edge of the touch display on handheld devices.
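One simple way to carry that context between frames is to associate each blob in the current frame with the nearest blob from the previous frame, as in the hedged sketch below. The distance limit and structure fields are assumptions for illustration; production controllers use considerably more sophisticated correlation.

```c
/* Sketch of frame-to-frame blob association using nearest centroids.
   Blob IDs that persist across frames are what gesture recognizers
   (pinch, swipe) and palm/edge-grip rejection build on. Field names
   and limits are illustrative, not from any particular controller.   */
#include <stdint.h>

#define MAX_MATCH_DIST2  400   /* max squared centroid movement per frame */

typedef struct {
    int16_t x, y;   /* centroid in sensor-pixel coordinates      */
    int     id;     /* persistent ID, or -1 if not yet assigned  */
} tracked_blob_t;

/* Give each blob in the new frame the ID of the nearest blob from the
   previous frame (within a distance limit); otherwise assign a new ID. */
void track_blobs(const tracked_blob_t *prev, int n_prev,
                 tracked_blob_t *curr, int n_curr, int *next_id)
{
    for (int i = 0; i < n_curr; i++) {
        int best = -1;
        int32_t best_d2 = MAX_MATCH_DIST2;
        for (int j = 0; j < n_prev; j++) {
            int32_t dx = curr[i].x - prev[j].x;
            int32_t dy = curr[i].y - prev[j].y;
            int32_t d2 = dx * dx + dy * dy;
            if (d2 <= best_d2) {
                best_d2 = d2;
                best = j;
            }
        }
        curr[i].id = (best >= 0) ? prev[best].id : (*next_id)++;
    }
}
```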

One reason a contemporary system, such as the one in the video, might not properly filter out touches from a fly is that the processing bandwidth available to the shape processing algorithm cannot perform the more complex filtering within the allotted time frame. In addition to implementing additional code to handle more complex tracking and filtering, the system has to allocate enough processing resources to complete those tasks. As the number of touches that the controller can detect and track increases, the amount of processing required to resolve all of those touches grows faster than linearly. Part of the additional complexity of shape processing comes from determining which blobs are associated with other blobs and which are independent. This correlation function requires multi-frame tracking.

This video is a good reminder that what is good enough in the lab might be completely insufficient in the field.

How is embedded debugging different?

Wednesday, June 22nd, 2011 by Robert Cravotta

Of all the different embedded designs I have worked on, the one that stands out the most is the first embedded project I worked on – despite the fact that I already had ten years of experience programming computers before that. I had been paid to write simulators, database engines, an assembler, a time-share system, and several automation tools for production systems. All of these projects executed on mainframe systems or desktop computers. None of them quite prepared me for how different working on an embedded design is.

My first embedded design was a simple box that would reside in a ground equipment test rack supporting the flight system we were building and demonstrating. There was nothing particularly special about this box – it had a number of input and select lines and a few output lines. What surprised me most when putting it through its first checkout tests was how clueless I was about how to troubleshoot the problems that did arise.

While I was aware of keyboard debounce routines from using my desktop system, I had never had to so completely understand the characteristics of different types of switches. I had never before had to be aware of the wiring within the system, nor had I ever considered doing an end-to-end check on every wire in a system. While putting this simple box together, I became aware of many new ways a design could go wrong that I had never had to consider in my earlier designs.

On top of the new ways the system could behave incorrectly, the system had no file system, no display, and no way to print out a trace log or memory dump. This made debugging a very different experience. Printf statements would be of no use, and there was no single-step debugger available. Worse yet, running the target program on my desktop computer to simulate the code was mostly useless because I could not bring the real-world inputs and outputs that the box worked with into the desktop system.
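One technique many embedded developers fall back on in that situation is to pulse a spare output pin and watch it with an oscilloscope or logic analyzer. The sketch below illustrates the idea; the register address and pin assignment are invented for illustration and would come from the target's own hardware definitions.

```c
/* Sketch of a GPIO "heartbeat" used in place of printf on a target with
   no console. The register address and bit are made up for illustration;
   a real design would use the part's own GPIO register definitions.     */
#include <stdint.h>

#define DEBUG_PORT  (*(volatile uint32_t *)0x40020014u)  /* hypothetical output register */
#define DEBUG_PIN   (1u << 5)                            /* hypothetical debug pin       */

static inline void debug_pulse(unsigned count)
{
    /* Emit 'count' pulses that an oscilloscope or logic analyzer can read
       as a crude event code: 1 pulse = entered ISR, 2 = buffer full, ...  */
    while (count--) {
        DEBUG_PORT |=  DEBUG_PIN;
        DEBUG_PORT &= ~DEBUG_PIN;
    }
}
```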

As I tackled each debugging issue, I went from a befuddled state of having no idea how to proceed to a state where I adopted new ways of thinking that let me gain the insights I needed to infer how the system was (or was not) working and what needed to change. I worked on that project alone, and it welcomed me into the world of embedded design and working with real world signals with wide open arms.

How did your introduction to embedded systems go? What insights can you share to warn those that are entering the embedded design community about how designing, debugging, and integrating embedded components is different from writing application-level software?

What techniques do you use to protect information?

Thursday, June 16th, 2011 by Robert Cravotta

The question of how to protect information on computers and networks has been receiving a lot of visibility with the public disclosures of more networks being hacked over the past few weeks. The latest victims of hacking in the last week include the United States CIA site, the United States Senate site, and Citibank. Based on conversations about my own and other people's experiences with having account and personal information compromised, I suspect each of us uses a number of techniques that would be worth sharing with each other on how to improve the protection of our data.

Two techniques that I have started to adopt in specific situations involve the use of secure tokens and the use of dummy email addresses. The secure token option is not available for every system, and it does add an extra layer of passwords to the login process. The secure token approach that I use generates a new temporary passcode every 30 seconds. Options for generating the temporary passcode include using a hardware key-fob or a software program that runs on your computer or even your cell phone. The secure token approach is far from transparent, and there is some cost in setting up the token.
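For readers curious how such 30-second tokens typically work, the sketch below implements the widely used time-based one-time-password algorithm (TOTP, RFC 6238) using OpenSSL's HMAC. This is an assumption about the general class of token described here, not a description of any specific vendor's product, and the shared secret is a placeholder.

```c
/* Sketch of the TOTP (RFC 6238) scheme commonly behind 30-second tokens.
   Illustrative only; the secret below is a placeholder.                  */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <openssl/hmac.h>
#include <openssl/evp.h>

uint32_t totp_now(const uint8_t *secret, size_t secret_len)
{
    /* 30-second time step, packed as a big-endian 64-bit counter. */
    uint64_t counter = (uint64_t)(time(NULL) / 30);
    uint8_t msg[8];
    for (int i = 7; i >= 0; i--) {
        msg[i] = (uint8_t)(counter & 0xff);
        counter >>= 8;
    }

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;
    HMAC(EVP_sha1(), secret, (int)secret_len, msg, sizeof msg, digest, &digest_len);

    /* Dynamic truncation (RFC 4226) down to a 6-digit code. */
    unsigned offset = digest[digest_len - 1] & 0x0f;
    uint32_t code = ((uint32_t)(digest[offset]     & 0x7f) << 24) |
                    ((uint32_t)(digest[offset + 1] & 0xff) << 16) |
                    ((uint32_t)(digest[offset + 2] & 0xff) <<  8) |
                     (uint32_t)(digest[offset + 3] & 0xff);
    return code % 1000000u;
}

int main(void)
{
    const uint8_t secret[] = "shared-secret-for-illustration";
    printf("%06u\n", (unsigned)totp_now(secret, sizeof secret - 1));
    return 0;
}
```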

I have only just started playing with the idea of using temporary or dummy email addresses to provide a level of indirection between my login information and my email account. In this case, my ISP allows up to 500 temporary email IDs that I can create, manage, and destroy at a moment's notice. I can create a separate email address for each service. What makes these email addresses interesting is that there is no way to actually log into the email account with those names; they are merely aliases for my real account, which remains private. I'm not positive this is better than just using a real email address, but I know I was worried the one time I had a service hacked, because I realized that the email address connected to that service was also used by other services – and that represented a potential single point of failure or security access point to a host of private accounts.

One challenge of the dummy email accounts is keeping track of each one; however, because there is no access point available for any of these addresses, I feel more comfortable using a text file to track which email address goes to which service. On the other hand, I am careful to never place the actual email address that I use to access those dummy addresses in the same place.

Do you have some favorite techniques that you have adopted over the years to protect your data and information? Are they techniques that require relying on an intermediary – such as with the secure tokens, or are they personal and standalone like the dummy email address idea? Are any of your techniques usable in an embedded device, and if so, does the design need to include any special hardware or software resources to include it in the design?

Are GNU tools good enough for embedded designs?

Wednesday, June 8th, 2011 by Robert Cravotta

The negative responses to the question about Eclipse-based tools surprised me. It had been at least four years since I tried an Eclipse-based development tool, and I assumed that, with so many embedded companies adopting the Eclipse IDE, the environment would have cleaned up nicely.

This got me wondering whether GNU-based tools, especially compilers targeting embedded processors, fare any better within the engineering community. As with the Eclipse IDE, it has been far too many years since I used a GCC compiler for me to know how it has or has not evolved. Unlike an IDE, a compiler does not need to support a peppy graphical user interface – it just needs to generate strong code for the desired target. The competition to GCC comes from proprietary tools that claim to perform significantly better at generating target code.

Are the GNU-based development tools good enough for embedded designs – especially those designs that do not provide a heavy user interface? The software for most embedded designs must operate within constrained memory sizes and must run efficiently, or it risks driving the cost of the embedded system higher than it needs to be.

Are you using GNU-based development tools – even when there is a proprietary compiler available for your target? What types of projects are GNU-based tools sufficient for and where is the line when the proprietary tools become a necessity (or not)?