Entries Tagged ‘Software Development’

Do you ever think about endianness?

Wednesday, February 8th, 2012 by Robert Cravotta

I remember when I first learned about this thing called endianness as it pertains to ordering the higher- and lower-order bytes of data that spans more than a single byte. The two most common ordering schemes were big and little endian. Big endian stores the most significant bytes ahead of the least significant bytes; little endian stores data in the opposite order, with the least significant bytes first. The times when I was most aware of endianness were when we were defining data communication streams (telemetry data in my case) that transferred data from one system to another that did not use the same type of processor. The other context where knowing endianness mattered was when the program needed to perform bitwise operations on data structures (usually for execution efficiency).
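
As a minimal illustration (not tied to any particular system), the following C sketch checks which ordering the host uses and byte-swaps a 32-bit value, the kind of operation that comes up when packing telemetry words for a processor with the opposite byte order:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Report whether this host stores the least significant byte first. */
    static int is_little_endian(void)
    {
        uint32_t probe = 1u;
        uint8_t first_byte;
        memcpy(&first_byte, &probe, 1);   /* inspect the lowest-addressed byte */
        return first_byte == 1u;
    }

    /* Swap the byte order of a 32-bit value, e.g. when exchanging data
     * between big-endian and little-endian processors. */
    static uint32_t swap32(uint32_t v)
    {
        return ((v & 0x000000FFu) << 24) |
               ((v & 0x0000FF00u) <<  8) |
               ((v & 0x00FF0000u) >>  8) |
               ((v & 0xFF000000u) >> 24);
    }

    int main(void)
    {
        uint32_t value = 0x11223344u;
        printf("%s-endian host\n", is_little_endian() ? "little" : "big");
        printf("0x%08X byte-swapped is 0x%08X\n", value, swap32(value));
        return 0;
    }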

If what I hear from semiconductor and software development tool providers is correct, only a very small minority of developers deal with assembly language anymore. Additionally, I suspect that most designers are no longer involved in driver development either. With the abstractions that compiled languages and standard drivers offer, does endianness affect how software developers write their code? In other words, are you working with data types that abstract how the data is stored and used, or are you implementing functions in a way that requires you to know how your data is laid out internally? Have software development tools successfully abstracted this concept away from most developers?
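
Part of the reason the abstraction usually holds is that portable code can define its external data format with shifts and masks instead of relying on how the host happens to store bytes. A small sketch of that idiom, assuming a big-endian wire format purely as a convention:

    #include <stdint.h>

    /* Serialize a 32-bit value into a big-endian byte stream. Because the
     * shifts operate on values rather than on memory layout, this produces
     * the same bytes on little- and big-endian hosts alike. */
    void put_be32(uint8_t *out, uint32_t v)
    {
        out[0] = (uint8_t)(v >> 24);
        out[1] = (uint8_t)(v >> 16);
        out[2] = (uint8_t)(v >> 8);
        out[3] = (uint8_t)(v);
    }

    uint32_t get_be32(const uint8_t *in)
    {
        return ((uint32_t)in[0] << 24) |
               ((uint32_t)in[1] << 16) |
               ((uint32_t)in[2] << 8)  |
               (uint32_t)in[3];
    }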

Are software development tools affecting your choice of 8-bit vs. 32-bit processors?

Wednesday, February 1st, 2012 by Robert Cravotta

I have always proposed that the market for 8-bit processors would not fade away – in fact, there are still a number of market niches that rely on 4-bit processors (such as clock movements and razors that sport a vibrating handle). The smaller processor architectures can support the lowest price points and the lowest energy consumption years before the larger 32-bit architectures can offer anything close to parity with the smaller processors. In other words, I believe there are very small application niches for which even 8-bit processors are currently too expensive or too energy hungry.

Many marketing reports have identified that the available software development tool chains play a significant role in whether a given processor architecture is chosen for a design. It seems that the vast majority of resources spent evolving software development tools are focused on the 32-bit architectures. Is this difference in how software development tools for 8- and 32-bit processors are evolving affecting your choice of processor architectures?

I believe the answer is not as straightforward as some processor and development tool providers would like to make it out to be. First, 32-bit processors are generally much more complex to configure than 8-bit processors, so development environments, which often include drivers and configuration wizards, are nearly a necessity for 32-bit processors and almost a non-issue for 8-bit processors. Second, the types of software that 8-bit processors are used for are generally smaller and contend with less system-level complexity. Additionally, as embedded processors continue to find their way into smaller tasks, the software may need to be even simpler than today’s 8-bit software to meet the energy requirements of the smallest subsystems.
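
To make the configuration gap concrete, here is a rough sketch contrasting UART bring-up on a typical 8-bit part with the same task on a typical 32-bit part. Every register and bit name below is a hypothetical stand-in (declared here as plain variables so the sketch compiles), not any specific vendor’s definition:

    #include <stdint.h>

    /* Stand-in "registers": in real firmware these would be volatile
     * pointers to memory-mapped addresses. All names are hypothetical. */
    static uint32_t UBRR, UCSRB;   /* 8-bit MCU UART registers */
    static uint32_t RCC_APB_ENR, GPIO_AFR_TX, UART1_BRR, UART1_CR1, NVIC_ISER;
    enum { UART_TX_EN = 1u << 3, UART1_CLK_EN = 1u << 14, AF_UART1 = 7u,
           UART_TE = 1u << 3, UART_UE = 1u << 0, UART1_IRQ = 37u };

    /* Typical 8-bit part: a UART needs little more than a baud divisor
     * and an enable bit. */
    void uart_init_8bit(void)
    {
        UBRR  = 51;           /* divisor for 9600 baud at an assumed 8 MHz */
        UCSRB = UART_TX_EN;   /* enable the transmitter */
    }

    /* Typical 32-bit part: the same UART usually also needs clock gating,
     * pin multiplexing, and interrupt routing, which is why vendors wrap
     * the job in drivers and configuration wizards. */
    void uart_init_32bit(uint32_t system_clock_hz)
    {
        RCC_APB_ENR |= UART1_CLK_EN;             /* gate the peripheral clock */
        GPIO_AFR_TX |= AF_UART1;                 /* route the TX pin          */
        UART1_BRR    = system_clock_hz / 9600u;  /* baud from the clock tree  */
        UART1_CR1   |= UART_TE | UART_UE;        /* transmitter + UART enable */
        NVIC_ISER   |= 1u << (UART1_IRQ % 32u);  /* route the interrupt       */
    }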

Do you feel there is a significant maturity difference between software development tools targeting 8- and 32-bit architectures? Do you think there is/will be a widening gap in the capabilities of software development tools targeting different size processors? Are software development tools affecting your choice of using an 8-bit versus a 32-bit processor or are other considerations, such as the need for additional performance headroom for future proofing, driving your decisions?

What tools do you use to program multiple processor cores?

Wednesday, July 27th, 2011 by Robert Cravotta

Developers have been designing and building multi-processor systems for decades. New multicore processors are entering the market on a regular basis. However, it seems that the market for new development tools that help designers analyze, specify, code, test, and maintain software targeting multi-processor systems is lagging further and further behind the hardware offerings.

A key function of development tools is to help abstract the complexity that developers must deal with to build the systems they are working on. The humble assembler abstracted the zeros and ones of machine code into more easily remembered mnemonics that enabled developers to build larger and more complex programs. Likewise, compilers have been evolving to provide yet another important level of abstraction for programmers and have all but replaced the use of assemblers for the vast majority of software projects. A key value of operating systems is that they abstract the configuration, access, and scheduling of the increasing number of hardware resources available in a system away from the developer.

If multicore and multi-processor designs are to experience an explosion in use in the embedded and computing markets, it seems that development tools should provide more abstractions to manage the complexity of building with these significantly more complex processor configurations.

In general, programming languages do not understand the concept of concurrency, and the extensions that do exist usually require the developer to explicitly identify where and when concurrency exists. Developing software as a set of threads is one approach for abstracting concurrency; however, it is not clear how a threading design method will scale as systems approach ever larger numbers of cores within a single system. How do you design a system with enough threads to occupy more than a thousand cores – or is that even the right question?
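
As one example of what explicitly identifying concurrency looks like in practice, an OpenMP directive in C leaves the discovery of parallelism entirely to the developer; the compiler only parallelizes what it is told to. This is a minimal sketch, not an endorsement of any particular framework:

    #include <stddef.h>

    /* The loop runs in parallel only because the developer says so with the
     * pragma; the compiler does not discover the concurrency on its own.
     * (Build with an OpenMP-enabled compiler; otherwise the pragma is ignored
     * and the loop simply runs serially.) */
    void scale_samples(float *samples, size_t count, float gain)
    {
        #pragma omp parallel for
        for (long i = 0; i < (long)count; ++i) {
            samples[i] *= gain;
        }
    }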

What tools do you use when programming a multicore or multi-processor system? Does your choice of programming language and compiler reduce your complexity in such designs or does it require you to actively engage more complexity by explicitly identifying areas for parallelism? Do your debugging tools provide you with adequate visibility and control of a multicore/multi-processor system to be able to understand what is going on within the system without requiring you to spend ever more time at the debugging bench with each new design? Does using a hypervisor help you, and if so, what are the most important functions you look for in a hypervisor?

Do you use any custom or in-house development tools?

Wednesday, May 11th, 2011 by Robert Cravotta

Developing embedded software differs from developing application software in many ways. The most obvious difference is that there is usually no display available in an embedded system, whereas most application software would be useless without a display to communicate with the user. Another difference is that it can be challenging to know whether the software for an embedded system is performing the correct functions for the right reasons or whether it is only coincidentally performing what appear to be the proper functions. This is especially relevant to closed-loop control systems that include multiple types of sensors in the control loop, such as fully autonomous systems.

Back when I was building fully autonomous vehicles, we had to build a lot of custom development tools because standard software development tools just did not perform the tasks we needed. Some of the system-level simulations that we used were built from the ground up. These simulations modeled the control software, rigid-body mechanics, and inertial forces from actuating small rocket engines. We built a hardware-in-the-loop rig so that we could swap real hardware in and out with simulated modules, which let us verify the operation of each part of the system as well as inject faults to see how the system would fare. Instead of a display or monitor to provide feedback to the operator, the system used a telemetry link, which allowed us to effectively instrument the code and capture the state of the system at regular points in time.
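
As a rough sketch of what instrumenting code over a telemetry link can look like, the snippet below packs a snapshot of controller state into a frame and hands it to a link driver. The frame fields and the callback are illustrative assumptions, not the original system’s format:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical telemetry frame: a snapshot of controller state captured
     * at a fixed rate and pushed out over the link instead of a display. */
    typedef struct {
        uint32_t timestamp_ms;             /* time of the snapshot            */
        int16_t  rate_x, rate_y, rate_z;   /* body rates, raw sensor counts   */
        int16_t  thruster_cmd;             /* commanded actuator value        */
        uint16_t status_flags;             /* mode bits, fault bits           */
    } telemetry_frame_t;

    /* The link driver lives elsewhere; here it is just a callback. */
    typedef void (*telemetry_send_fn)(const uint8_t *bytes, uint16_t len);

    void telemetry_capture(const telemetry_frame_t *state, telemetry_send_fn send)
    {
        uint8_t buffer[sizeof(*state)];
        /* Byte order and struct padding matter here when the ground station
         * runs on a different type of processor. */
        memcpy(buffer, state, sizeof(*state));
        send(buffer, (uint16_t)sizeof(buffer));
    }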

Examining the telemetry data was cumbersome due to the massive volume of data – not unlike trying to perform debugging analysis with today’s complex SoC devices. We used a custom parser to extract the various data channels that we wanted to examine together and then used a spreadsheet application to scale and manipulate the raw data and to plot the data we were searching for correlations in. If I were working on a similar project today, I suspect we would still be using many of the same types of custom tools. I suspect that the market for embedded software development tools is so wide and fragmented that it is difficult for a tools company to justify creating the many tools that meet the unique needs of embedded systems. Instead, there is much more money available on the application side of the software development tool market, and it seems that embedded developers must choose between adapting tools that address the needs of application software to their projects or creating and maintaining their own custom tools.
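
The parser itself does not have to be elaborate. Below is a minimal sketch of that kind of channel extractor, assuming a hypothetical capture format of fixed-size frames of 16-bit words and emitting CSV that a spreadsheet can ingest directly:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Read fixed-size frames from a capture file and emit one channel as CSV.
     * The frame layout is a placeholder; a real parser would follow the
     * project's telemetry dictionary. */
    #define FRAME_WORDS 32u

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s capture.bin channel_index\n", argv[0]);
            return 1;
        }
        FILE *in = fopen(argv[1], "rb");
        if (!in) { perror("fopen"); return 1; }

        unsigned channel = (unsigned)strtoul(argv[2], NULL, 10) % FRAME_WORDS;
        uint16_t frame[FRAME_WORDS];
        unsigned long sample = 0;

        printf("sample,value\n");
        while (fread(frame, sizeof(frame[0]), FRAME_WORDS, in) == FRAME_WORDS) {
            printf("%lu,%u\n", sample++, (unsigned)frame[channel]);
        }
        fclose(in);
        return 0;
    }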

In your own projects, are standard tools meeting your needs or are you using custom or in-house development tools? What kind of custom tools are you using and what problems do they help you solve?

Is peer code inspection worthwhile?

Wednesday, April 6th, 2011 by Robert Cravotta

I am a strong believer in applying multiple sets of eyes to tasks and projects. Design reviews provide a multi-stage mechanism for bringing independent eyes into a design to improve the probability of uncovering poor assumptions or missed details. Peer-performed code inspection is another mechanism for bringing multiple sets of eyes to the task of implementing software. However, given the evolution of automated code checking tools, is manually inspecting a peer’s code still worthwhile?

Even when tools were not readily available to check a developer’s code, my personal experience included some worthwhile and some useless code inspection efforts. In particular, the time I engaged in a useless code inspection was not so much about the code, but rather about how the team leader approached the inspection and micromanaged the process. That specific effort left a bad taste in my mouth for overly formal and generic procedures applied to a task that requires specific and deep knowledge to perform well.

A staggering challenge facing code inspectors is the sheer volume of software available to inspect. Inspecting software is labor intensive and requires a high level of specialized skill and knowledge to perform well. Tools that perform automated code inspections have proliferated, and they continue to improve over time, but are they a good enough alternative to peer code inspections? I like Jack Ganssle’s “Guide to Code Inspections”, but even his final statement in the article (“Oddly, many of us believe this open-source mantra with religious fervor but reject it in our own work.”) suggests that software developers’ actions imply they do not necessarily consider code inspections a worthwhile use of the development team’s time.
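
The division of labor is easier to see with a small, deliberately flawed example. An automated tool will reliably flag the mechanical defect below; whether the function meets its actual requirement is the kind of judgment that still takes a human reviewer:

    #include <string.h>

    #define NAME_LEN 8

    /* Deliberately buggy: a static analysis tool will typically flag both the
     * strncpy bound and the out-of-bounds terminator write. It cannot tell you
     * whether silently truncating the name is acceptable to the requirement. */
    void store_name(char dest[NAME_LEN], const char *src)
    {
        strncpy(dest, src, NAME_LEN);   /* bound should be NAME_LEN - 1          */
        dest[NAME_LEN] = '\0';          /* off-by-one: writes one byte past dest */
    }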

Is peer-based code inspection worthwhile? Are the automated code inspection tools good enough to replace peer inspection? When is peer inspection necessary, or in what ways is automated inspection insufficient?

Are you, or someone you know, using voting within your embedded system designs?

Wednesday, November 3rd, 2010 by Robert Cravotta

With the midterm elections in the United States winding down, I thought I’d try to tie embedded design concepts to the process of elections. On some of the aerospace projects I worked on, we used voting schemes as fault-tolerance techniques. In some cases, because we could not trust the sensors, we used multiple sensors and performed voting among the sensor controllers (along with separate and independent filtering) to improve the quality of the data that we used for our control algorithms. We might use multiple sensors of the same type, and in some cases we would use sensors that differed from each other significantly so that they would not be susceptible to the same types of bad readings.
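
A minimal sketch of the kind of voter involved, assuming three redundant readings and a tolerance check; this is illustrative only, not the original flight code:

    #include <stdint.h>
    #include <stdlib.h>

    /* Median-of-three voter: with three redundant readings, the median is
     * immune to a single wildly wrong channel, whichever one produced it. */
    int32_t vote3(int32_t a, int32_t b, int32_t c)
    {
        if ((a >= b && a <= c) || (a <= b && a >= c)) return a;
        if ((b >= a && b <= c) || (b <= a && b >= c)) return b;
        return c;
    }

    /* Flag a channel as suspect when it strays too far from the voted value,
     * so the fault can be reported while the control loop keeps running. */
    int channel_suspect(int32_t reading, int32_t voted, int32_t tolerance)
    {
        return labs((long)reading - (long)voted) > (long)tolerance;
    }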

I did a variety of searches on fault tolerance and voting to see if there was any recent material on the topic. There was not a lot of material available, what was available was scholarly, and I was generally not able to download the files. It is possible I did a poor job choosing my search terms. However, this lack of material made me wonder whether people are using the technique at all or whether it has evolved into a different form, namely sensor fusion.

Sensor fusion is the practice of combining data derived from disparate sensor sources to deliver “better” data than would be possible if those sources were used individually. “Better” in this case can mean more accurate, more complete, or more reliable data. From this perspective, the fusion of the data is not strictly a voting scheme, but there are some similarities with the original concept.
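
A complementary filter is about the simplest illustration of the idea: it blends a gyro rate (accurate over short intervals but prone to drift) with an accelerometer-derived angle (noisy but drift free). This is a generic sketch with an arbitrarily chosen blend factor, not a reference implementation:

    /* Complementary filter: fuse two imperfect sensors into one tilt estimate.
     * The gyro dominates fast changes; the accelerometer corrects slow drift. */
    float fuse_tilt_angle(float prev_angle_deg,
                          float gyro_rate_dps,     /* degrees per second       */
                          float accel_angle_deg,   /* angle from accelerometer */
                          float dt_s)              /* sample period in seconds */
    {
        const float alpha = 0.98f;   /* blend factor: trust the gyro short term */
        float gyro_angle = prev_angle_deg + gyro_rate_dps * dt_s;
        return alpha * gyro_angle + (1.0f - alpha) * accel_angle_deg;
    }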

This leads me to this week’s question. Are you, or someone you know, using voting or sensor fusion within embedded system designs? As systems continue to increase in complexity, the need for robust design principles that enable systems to operate correctly with less-than-perfect components becomes more relevant. Are the voting schemes of yesterday still relevant, or have they evolved into something else?

Does and should IT exercise complete control over an embedded developer’s workstation?

Wednesday, October 27th, 2010 by Robert Cravotta

It seems to me everyone has their own personal IT horror stories. I am one of those few people who have lived on both sides of the fence when it comes to running an IT department and doing embedded development. My stint with IT occurred during the transition of combining many independent islands of department networks into a single robust company-wide network.

I enjoyed both jobs. They both had tough challenges, unpredictable and uncontrollable environments, limited budgets, and the end goal of keeping the system operating no matter the failures. I found that the IT team was frustrated at how the users seemed bent on purposely destroying the operation of the network, while the users were frustrated at how the IT team always tried to prevent them from doing their jobs. The truth is there were real problems that each side had to solve that the other side didn’t always understand. Worse, each side’s approach often sub-optimized the efforts of the other side.

One strategy that we used to preserve the robustness of the network while allowing the embedded developers the ability to do what they needed with their workstations was to allow them to work in isolated labs. This reduced the variability of hardware and software on the production network without restricting the types of tools the developers could use. However, there were always some on the IT team that did not like this approach because it represented exceptions to the “grand architecture” of the network.

Embedded development is the practice of trade-offs – but then, so is developing a good network design and an IT team to keep it running in a productive fashion. Equipment fails all of the time – not because it is of poor quality, but because it runs non-stop every day and mechanical parts wear out. When you consider the thousands of network devices, something is breaking all of the time. To me, it was a system design issue similar to the embedded systems I worked on before transferring to the IT group.

Given the horror stories I hear from other embedded developers, maybe our site was not the norm in how we worked with the embedded development teams. Does the IT team find ways to work with your needs, or do they force you to work around them as the horror stories seem to indicate? Or are the horror stories merely the result of people embellishing a single bad experience from long ago?

What are good examples of how innovative platforms are supporting incremental migration?

Wednesday, September 15th, 2010 by Robert Cravotta

Companies are regularly offering new and innovative processing platforms and functions in their software development tools. One of the biggest challenges I see for bringing new and innovative capabilities to market is supporting incremental adoption and migration. Sometimes the innovative approach for solving a set of problems requires a completely different way of attacking the problem and, as a result, it requires developers to rebuild their entire system from scratch. Requiring developers to discard their legacy development and tools in order to leverage the innovative offering is a red flag that the offering may have trouble gaining acceptance by the engineering community.

I have shared this concern with several companies that brought out multicore or many-core systems over the past few years because their value proposition did not support incremental migration. They required the developer to completely reorganize their system software from scratch in order to fully take advantage of their innovative architecture. In many cases, this level of reorganizing a design represents an extremely high level of risk compared to the potential benefit of the new approach. If a development team could choose to migrate a smaller portion of their legacy design to the new approach and successfully release it in the next version of the product, they could build their experience with the new approach and gradually adopt the “better” approach without taking a risk that was larger than their comfort level.

Based on stories from several software development companies, software development tools are another area where new capabilities would benefit from supporting incremental adoption. In this case, the new capabilities do not require a redesign, but they do require the developer to accept an unknown learning curve just to evaluate the new feature. As the common storyline goes, the software tool vendor has found that describing the new capability, explaining how to use it, and getting the developer to understand how it benefits them is not as straightforward as they would like. As a result, some of the newest features added to their products go largely unnoticed and unused by their user base. Their frustration is obvious when they share these stories – especially when they talk about the circumstances under which developers do adopt the new features. In many cases, a developer calls with a problem, and the software tool vendor explains that the tool already includes a capability that will help solve it. Only then does the developer try the feature and adopt it in future projects, but they had to experience the type of problem the feature was designed to address before they even recognized that it had already been added to the tool suite.

Rather than harp on the failures to accommodate incremental migration or to ease the adoption learning curve, I would like to uncover examples and ideas of how new innovations can support incremental adoption. For example, innovative multicore structures would probably better support incremental migration if they provided interprocessor communication mechanisms to cores that exist outside the multicore fabric rather than leaving it to the developer to build such a mechanism from scratch.
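
Such a platform-supplied channel would not need to be elaborate. The sketch below is a hypothetical single-producer, single-consumer mailbox in shared memory, roughly the sort of minimal plumbing that, if provided, would spare the developer from building it from scratch (real hardware would also need the appropriate memory barriers or cache management, which are omitted here):

    #include <stdint.h>

    /* Hypothetical shared-memory mailbox between a legacy core and a core
     * inside the new fabric. Single producer, single consumer, 16 slots. */
    typedef struct {
        volatile uint32_t head;      /* written only by the producer */
        volatile uint32_t tail;      /* written only by the consumer */
        uint32_t          slots[16];
    } mailbox_t;

    int mailbox_post(mailbox_t *m, uint32_t msg)
    {
        uint32_t next = (m->head + 1u) % 16u;
        if (next == m->tail) return 0;   /* full: caller retries later */
        m->slots[m->head] = msg;
        m->head = next;                  /* publish after the payload  */
        return 1;
    }

    int mailbox_fetch(mailbox_t *m, uint32_t *msg)
    {
        if (m->tail == m->head) return 0;   /* empty */
        *msg = m->slots[m->tail];
        m->tail = (m->tail + 1u) % 16u;
        return 1;
    }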

Texas Instruments’ recent 0.9V MSP430L092 microcontroller announcement provides two examples. The microcontroller itself is capable of operating its entire digital and analog logic chain from a 0.9V power supply without the need for an on-board boost converter. To support legacy tool sets, the available flash emulation tools transparently translate the power and signals so that the 0.9V device can be debugged with legacy tools.

The other example from the L092 is a new type of analog peripheral block that TI calls the A-POOL (Analog Functions Pool). This block combines five analog functions into a common block that shares transistors between the different functions. The supported functions are an ADC (analog-to-digital converter), a DAC (digital-to-analog converter), a comparator, a temperature sensor, and an SVS (system voltage supervisor). The block includes a microcode engine that supports up to a 16-statement program to autonomously activate and switch between the various peripheral functions without involving the main processor core. The company tells me that in addition to directly programming the microcode stack, the IAR and Code Composer development tools understand the A-POOL and can compile C code into the appropriate microcode for it.

If we can develop an industry awareness of ways to support incremental adoption and migration, we might help some really good ideas get off the ground faster than they otherwise would. Do you have any ideas for how to enable new features to support incremental adoption?