Entries Tagged ‘Software Development Tools’

What embedded development tool do you want?

Wednesday, December 14th, 2011 by Robert Cravotta

Over the years, embedded development tools have delivered ever more capability at ever lower price points. The $40,000+ development workstations and $10,000+ in-circuit emulators of the past have given way to tools that are both less expensive and more capable. Compilers can produce production-quality code in record compilation times by distributing the compilation across a networked farm of workstations with available processing bandwidth. IDEs (integrated development environments) greatly improve the productivity of developers by seamlessly automating the process of switching between editing and debugging software on a workstation and target system. Even stepping backwards through software execution to track down the source of difficult bugs has been possible for several years.

The tools available today make it a good time to be an embedded developer. But can the tools be even better? We are fast approaching the end of one year and the beginning of the next, which often marks an opportunity to propose your wish items for the department or project budget. When I meet with development tool providers, I always ask in what directions they are pushing their future tool development. In this case, I am looking for something that goes beyond faster and more, toward assisting the developer with analyzing their system.

One tool capability that I would like to see more of is a compiler tool, driven by simulator and profiler feedback, that enables you to quickly explore many different ways to partition your system across multiple processors. Embedded systems have been using multiple processors in the same design for decades, but I think the trend is accelerating to include even more processors (even in the same package) than before to squeeze out costs and improve energy efficiency as well as to handle increasingly complex operating scenarios.

This partition exploration tool goes beyond those current tools that perform multiple compilations with various compiler switches and present you with a code size versus performance trade-off. This tool should help developers understand how a particular partitioning approach will or will not affect the performance and robustness of a design. Better yet, the tool would assist in automating how to explore different partitioning approaches so that developers could explore dozens (or more) partitioning implementations instead of the small handful that can be done on today’s tight schedules with an expert effectively doing the analysis by hand. I suspect this type of capability would provide a much-needed productivity boost for developers to handle the growing complexity of tomorrow’s applications.
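
No off-the-shelf tool does this today, but as a rough illustration of the search such a tool might automate, here is a toy C sketch that brute-forces every assignment of a small task graph to a few cores and scores each partitioning with a crude cost model. The task costs, dependency list, and communication penalty are all invented placeholders.

```c
/* Toy sketch of the brute-force search a partition-exploration tool
 * might automate. The task costs, dependency list, and communication
 * penalty are hypothetical placeholder values. */
#include <stdio.h>

#define NUM_TASKS 6
#define NUM_CORES 3
#define COMM_COST 4  /* penalty when dependent tasks land on different cores */

static const int task_cost[NUM_TASKS] = { 10, 7, 12, 5, 9, 6 };

/* (producer, consumer) pairs in a hypothetical task graph */
static const int deps[][2] = { {0,1}, {0,2}, {1,3}, {2,4}, {3,5}, {4,5} };
#define NUM_DEPS (int)(sizeof(deps) / sizeof(deps[0]))

/* Crude completion-time estimate: busiest core plus cross-core traffic. */
static int evaluate(const int assign[NUM_TASKS])
{
    int load[NUM_CORES] = { 0 };
    int worst = 0;

    for (int t = 0; t < NUM_TASKS; t++)
        load[assign[t]] += task_cost[t];
    for (int d = 0; d < NUM_DEPS; d++)
        if (assign[deps[d][0]] != assign[deps[d][1]])
            load[assign[deps[d][1]]] += COMM_COST;
    for (int c = 0; c < NUM_CORES; c++)
        if (load[c] > worst)
            worst = load[c];
    return worst;
}

int main(void)
{
    int assign[NUM_TASKS], best[NUM_TASKS];
    int best_score = -1;
    long combos = 1;

    for (int t = 0; t < NUM_TASKS; t++)
        combos *= NUM_CORES;

    /* Treat each combination index as a base-NUM_CORES number. */
    for (long i = 0; i < combos; i++) {
        long n = i;
        for (int t = 0; t < NUM_TASKS; t++) {
            assign[t] = (int)(n % NUM_CORES);
            n /= NUM_CORES;
        }
        int score = evaluate(assign);
        if (best_score < 0 || score < best_score) {
            best_score = score;
            for (int t = 0; t < NUM_TASKS; t++)
                best[t] = assign[t];
        }
    }

    printf("best estimated makespan: %d\n", best_score);
    for (int t = 0; t < NUM_TASKS; t++)
        printf("task %d -> core %d\n", t, best[t]);
    return 0;
}
```

A real tool would replace the invented cost model with simulator and profiler feedback, and would need a smarter search strategy than brute force once the task count grows beyond a handful.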

Is there an embedded development tool that lifts your productivity to new heights such that you would recommend to your management that every member of your team have it? Is there a capability you wish your development tools had that isn’t quite available yet? What are the top one to three development tools you would recommend as must-haves for embedded developers?

What tools do you use to program multiple processor cores?

Wednesday, July 27th, 2011 by Robert Cravotta

Developers have been designing and building multi-processor systems for decades. New multicore processors are entering the market on a regular basis. However, it seems that the market for new development tools that help designers analyze, specify, code, test, and maintain software targeting multi-processor systems is lagging further and further behind the hardware offerings.

A key function of development tools is to help abstract the complexity that developers must deal with to build the systems they are working on. The humble assembler abstracted the zeros and ones of machine code into more easily remembered mnemonics that enabled developers to build larger and more complex programs. Likewise, compilers have been evolving to provide yet another important level of abstraction for programmers and have all but replaced the use of assemblers for the vast majority of software projects. A key value of operating systems is that they abstract the configuration, access, and scheduling of the increasing number of hardware resources available in a system away from the developer.

If multicore and multi-processor designs are to experience an explosion in use in the embedded and computing markets, it seems that development tools should provide more abstractions to simplify the complexity of building with these significantly more complex processor configurations.

In general, programming languages do not understand the concept of concurrency, and the extensions that do exist usually require the developer to explicitly identify where and when such concurrency exists. Developing software as a set of threads is an approach for abstracting concurrency; however, it is not clear how a threading design method will be able to scale as systems approach ever larger numbers of cores. How do you design a system with enough threads to occupy more than a thousand cores – or is that the right question?
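
To make concrete what explicitly identifying concurrency looks like in practice, here is a minimal C sketch using OpenMP. The pragma is the developer asserting where the parallelism is and how partial results combine; remove it, and the compiler sees only a sequential loop.

```c
/* Minimal illustration of explicitly annotated concurrency: the
 * #pragma is the developer telling the compiler this loop is safe
 * to parallelize; the compiler does not discover that on its own.
 * Build with an OpenMP-capable compiler, e.g. gcc -fopenmp. */
#include <stdio.h>
#include <omp.h>

#define N 100000

int main(void)
{
    static double a[N], b[N];
    double dot = 0.0;

    for (int i = 0; i < N; i++) {
        a[i] = 0.5;
        b[i] = 2.0;
    }

    /* Work is split across cores; the reduction clause tells the
     * compiler how to combine each thread's partial sum safely. */
    #pragma omp parallel for reduction(+:dot)
    for (int i = 0; i < N; i++)
        dot += a[i] * b[i];

    printf("dot = %f (threads available: %d)\n", dot, omp_get_max_threads());
    return 0;
}
```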

What tools do you use when programming a multicore or multi-processor system? Does your choice of programming language and compiler reduce your complexity in such designs or does it require you to actively engage more complexity by explicitly identifying areas for parallelism? Do your debugging tools provide you with adequate visibility and control of a multicore/multi-processor system to be able to understand what is going on within the system without requiring you to spend ever more time at the debugging bench with each new design? Does using a hypervisor help you, and if so, what are the most important functions you look for in a hypervisor?

Do you care if your development tools are Eclipse based?

Wednesday, May 25th, 2011 by Robert Cravotta

I first explored the opportunity of using the Eclipse and NetBeans open source projects as a foundation for embedded software development tools in an article a few years back. At the time, these Java-based IDEs (Integrated Development Environments) were squarely targeting application developers, but the embedded community was beginning to experiment with using these platforms for its own development tools. Since then, many companies have built and released Eclipse-based development tools – and a few have retained their own IDEs.

This week’s question is an attempt to start evaluating how these open source development platforms are working out for embedded suppliers and developers. In a recent discussion with IAR Systems, I got the sense that the company’s recent announcement about an Eclipse plug-in for the Renesas RL78 was driven by developer request. IAR also supports its own proprietary IDE – the IAR Embedded Workbench. Does a software development tools company supporting two different IDEs signal something about the open source platform?

In contrast, Microchip’s MPLAB X IDE is based on the NetBeans platform – effectively an open source platform competing with Eclipse. One capability that using the open source platform provides is that the IDE supports development on a variety of hosts running the Linux, Mac OS, and Windows operating systems.

I personally have not tried using either an Eclipse or NetBeans tool in many years, so I do not know how well they have matured over the past few years. I do recall that managing installations was somewhat cumbersome, and I expect that is much better now. I also recall that the tools were a little slow to react to what I wanted to do, and again, today’s newer computers may have made that a non-issue. Lastly, the open source projects were not really built with the needs of embedded developers in mind, so the embedded tools that migrated to these platforms had to conform as best they could to architectural assumptions that were driven by the needs of application developers.

Do you care if an IDE is Eclipse or NetBeans based? Does the open source platform enable you to manage a wider variety of processor architectures from different suppliers in a meaningfully better way? Does it matter to your design-in decision whether a processor is supported by one of these platforms? Are tools based on these open source platforms able to deliver the functionality and responsiveness you need for embedded development?

Do you use any custom or in-house development tools?

Wednesday, May 11th, 2011 by Robert Cravotta

Developing embedded software differs from developing application software in many ways. The most obvious difference is that there is usually no display available in an embedded system, whereas most application software would be useless without a display to communicate with the user. Another difference is that it can be challenging to know whether the software in an embedded system is performing the correct functions for the right reasons or whether it is performing what appear to be proper functions only coincidentally. This is especially relevant to closed-loop control systems that include multiple types of sensors in the control loop, such as fully autonomous systems.

Back when I was building fully autonomous vehicles, we had to build a lot of custom development tools because standard software development tools just did not perform the tasks we needed. Some of the system-level simulations that we used were built from the ground up. These simulations modeled the control software, rigid body mechanics, and inertial forces from actuating small rocket engines. We built a hardware-in-the-loop rig that let us swap real hardware in and out for simulated modules, so that we could verify the operation of each part of the system as well as inject faults into the system to see how it would fare. Instead of a display or monitor to provide feedback to the operator, the system used a telemetry link, which allowed us to effectively instrument the code and capture the state of the system at regular points in time.
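
To illustrate the kind of indirection that makes such swapping possible, here is a hypothetical C sketch (all names and values are invented for illustration): the control loop is written against a function-pointer interface, so a real driver, a simulated module, or a fault-injecting stub can be substituted without touching the control code.

```c
/* Hypothetical sketch of the indirection that lets a hardware-in-the-
 * loop rig swap a real sensor driver for a simulated one and inject
 * faults. All names and values here are invented for illustration. */
#include <stdio.h>

typedef struct {
    int (*read_rate_dps)(void);  /* angular rate, degrees/sec */
} gyro_if;

static int real_gyro_read(void)   { /* would touch hardware */ return 12; }
static int sim_gyro_read(void)    { return 10; }
static int faulty_gyro_read(void) { return -32768; }  /* stuck-at fault */

static const gyro_if real_gyro   = { real_gyro_read };
static const gyro_if sim_gyro    = { sim_gyro_read };
static const gyro_if faulty_gyro = { faulty_gyro_read };

/* The control loop is written against the interface, not the hardware. */
static void control_step(const gyro_if *gyro)
{
    int rate = gyro->read_rate_dps();
    printf("commanded correction: %d\n", -rate);
}

int main(void)
{
    control_step(&real_gyro);    /* flight configuration */
    control_step(&sim_gyro);     /* pure simulation */
    control_step(&faulty_gyro);  /* fault-injection run */
    return 0;
}
```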

Examining the telemetry data was cumbersome due to the massive volume of data – not unlike trying to perform debugging analysis with today’s complex SoC devices. We used a custom parser to extract the various data channels that we wanted to examine together, and then used a spreadsheet application to scale and manipulate the raw data and to create plots of the data in which we were looking for correlations. If I were working on a similar project today, I suspect we would still be using many of the same types of custom tools as back then. I suspect that the market for embedded software development tools is so wide and fragmented that it is difficult for a tools company to justify creating many tools that meet the unique needs of embedded systems. Instead, there is much more money available on the application side of the software development tool market, and it seems that embedded developers must choose between figuring out how to use tools that address the needs of application software in their projects and creating and maintaining their own custom tools.
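
Our actual parser and telemetry format are long gone, but a toy version of the idea might look like the following C sketch, which pulls selected columns out of comma-separated telemetry records so they can be pasted into a spreadsheet. The record format here is assumed for illustration, not the one from the original project.

```c
/* Toy version of a telemetry-channel extractor: pull selected columns
 * out of comma-separated telemetry records so they can be plotted
 * together. The record format is assumed for illustration. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Stand-in for a telemetry capture: time, ch0, ch1, ch2, ch3 */
    const char *frames[] = {
        "0.00,1.02,9.81,0.00,3.3",
        "0.01,1.05,9.79,0.01,3.3",
        "0.02,1.10,9.80,0.02,3.2",
    };
    const int want[2] = { 0, 2 };  /* columns to extract together */

    for (size_t i = 0; i < sizeof(frames) / sizeof(frames[0]); i++) {
        char line[128];
        strncpy(line, frames[i], sizeof(line) - 1);
        line[sizeof(line) - 1] = '\0';

        int col = 0, out = 0;
        for (char *tok = strtok(line, ","); tok != NULL;
             tok = strtok(NULL, ","), col++) {
            for (int w = 0; w < 2; w++)
                if (col == want[w])
                    printf("%s%s", out++ ? "\t" : "", tok);
        }
        putchar('\n');  /* one tab-separated row per frame */
    }
    return 0;
}
```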

In your own projects, are standard tools meeting your needs or are you using custom or in-house development tools? What kind of custom tools are you using and what problems do they help you solve?

Debugging Stories: Development Tools

Monday, January 31st, 2011 by Robert Cravotta

Anyone who has developed a system has debugging stories. A number of those stories are captured in the responses to a Question-of-the-Week posed a while ago about your favorite debugging anecdote. While collecting the different stories together reveals some worthwhile lessons learned, reading through all of the stories can be time consuming, and the type of content varies from story to story. This article, and future ones like it, will attempt to consolidate a class of debugging stories together to ease access for you. The rest of this article focuses on stories and lessons based around issues with development tools.

In reading through the stories, I am reminded that I worked with a C cross compiler that did not generate proper code for declaring and initializing float variables. The workaround was to avoid initializing the float variable as part of the declaration; the initialization had to be performed as a distinct and separate assignment within the body of the code. Eventually, within a year of us finding the problem, the company that made the compiler fixed it, but I continued to maintain the code so as to keep the declaration and initialization separate. It felt safer to the whole development team to note the initialization value in a comment on the declaration line and to place all of the initialization code at the beginning of the code block.
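
The original code is not available, but the shape of the workaround looked like this representative C fragment:

```c
/* Representative illustration of the workaround described above
 * (the original compiler and code are not shown). */
void control_update(void)
{
    /* Broken on the buggy cross compiler:
     *     float gain = 0.75f;    -- initializer in the declaration
     */
    float gain;    /* initialized to 0.75f below */

    gain = 0.75f;  /* separate assignment generated correct code */

    /* ... rest of the routine uses gain ... */
    (void)gain;
}
```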

Two stories identified how the debugger can misrepresent how the actual runtime code executes with and without the debugger in the system. Andrew Coombes shared a story about how the debugger inappropriately assumed that when a block of code had the same CRC value as the previously loaded code it was identical, and skipped the process of loading the new code onto the target. The problem was exacerbated by the fact that the debugger did not calculate the CRC correctly. S.B. @ LI shared a story where the debugger was intercepting and correcting the data types in a call structure passed to an operating system call. This masked the real behavior of the system, in which the data types were not correct whenever the debugger was not active.

There were stories about compilers that would allocate data to inappropriate or unavailable memory resources. RSK @ LI shared how he had to use an inline-like function built from preprocessor macros to reduce the call depth and avoid overflowing the hardware stack. E.P. @ LI’s story does not specify whether the compiler set the cache size, but the debugged code used a cache block that was one database block large, and this inappropriate sizing caused the database application to run excessively slowly. R.D. @ LI recounts how a compiler was automatically selecting a 14-bit register to store a 16-bit address value, and how adding a NOP in front of the assignment caused the compiler to choose the correct register type to store the value.
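
RSK’s actual macros are not shown, but the general shape of the trick is illustrated by this hypothetical C fragment, where a leaf function becomes a function-like macro so that it consumes no hardware-stack slot:

```c
/* Representative shape of the macro trick described above: on a part
 * with a shallow hardware return stack, replacing a leaf function
 * with a function-like macro removes one level of call depth. */

/* Ordinary function: costs one hardware-stack entry per call. */
static int clamp_fn(int v, int lo, int hi)
{
    return (v < lo) ? lo : (v > hi) ? hi : v;
}

/* Macro version: expands inline, consuming no return-stack slot.
 * Arguments must be side-effect free because they are evaluated
 * more than once. */
#define CLAMP(v, lo, hi) (((v) < (lo)) ? (lo) : ((v) > (hi)) ? (hi) : (v))

int scale_sample(int raw)
{
    int via_call = clamp_fn(raw, 0, 1023);  /* one extra call level */
    (void)via_call;
    return CLAMP(raw, 0, 1023);             /* no call at all */
}
```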

I recall hearing many admonishments when I was a junior member of the staff not to turn on the compiler optimizations. I would hear stories about compiler optimizations that did not mix well with processor pipelines that lacked interlocks, and the horrible behaviors that would ensue. J.N. @ LI recounts an experience with a compiler optimization that scheduled some register writes just before a compare so that the system behaved incorrectly.
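
J.N.’s exact code is not shown, but a classic member of this class of problem involves memory-mapped registers: without the volatile qualifier, the optimizer is free to hoist or reorder register accesses around a compare. The register address below is a made-up example.

```c
/* Classic shape of the optimizer-versus-hardware problem described
 * above. The register address is a made-up example; without volatile,
 * the compiler may read the register once and spin forever, or
 * reorder the access relative to the compare. */
#include <stdint.h>

#define STATUS_REG (*(volatile uint32_t *)0x40001000u)
#define READY_BIT  (1u << 0)

void wait_until_ready(void)
{
    /* volatile forces a fresh read of the register each iteration */
    while ((STATUS_REG & READY_BIT) == 0u) {
        /* spin */
    }
}
```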

M.B. @ LI reminds us that even library code that has been used for long periods of time over many projects can include latent problems – especially for functions embedded within libraries, such as newlib in this case. L.W. @ LI’s story tells of finding a NULL pointer access hidden within a seldom-activated conditional in a library call.

I like J.N. @ LI‘s summary – “Different tools have different strengths, which is why you learn to use several and switch off when one isn’t finding the problem. And sometimes one tool gives you a hint that gets you closer, but it takes a different tool (or tools) to get you the rest of the way.”

Please let me know if you find this type of article useful. If so, I will try to do more on the topics that receive large numbers of responses that can be grouped into a smaller set of categories.