What should compilers do better?

Wednesday, August 11th, 2010 by Robert Cravotta

Is assembly language programming a dead-end skill? Much of the marketing material for embedded development tools suggests that developers no longer need to work at the level of assembly language in their designs. How true is this message? My observation of the embedded market suggests that the need to program in assembly language is alive and well.

I think I have an explanation for why compilers have enabled the majority of software development to occur in a higher-level language and yet fall short of eliminating that “final mile” of assembly programming. One target goal for compilers is to generate code that is as good as hand-coded assembly while requiring less time, energy, and resources to produce than hand-coded assembly.

While faster compilation is a good thing, I think we have reached a point of diminishing returns for many types of embedded software projects. Automated project and configuration management lets developers spend more of their minutes on the software itself, but it does not make those extra minutes more productive. Increasing the productivity of those minutes is essential for today’s software development tools because embedded systems are growing in complexity, and a significant portion of that additional complexity (more than 50% by my estimate) manifests in the software portion of the design.

The problem I see is that while compilers are strong at scheduling and sequencing the loading and execution of opcodes and operands, they are horrible at partitioning and allocating global resources, such as the memory spaces beyond the register set and any tightly coupled memories. This limits the compiler’s ability to increase a developer’s productivity in precisely the area that contributes a significant portion of the additional complexity in new designs.
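
To make the gap concrete, here is a minimal sketch of the kind of manual partitioning this leaves to the developer. The GCC-style section attribute is real, but the section names (.tcm_data, .ext_ram) are assumptions about one particular target’s linker script, not portable C:

    #include <stdint.h>

    /* Hot coefficients the inner loop reads every iteration: ask the
       toolchain to place them in tightly coupled memory. The section
       name must match a memory region in the target's linker script. */
    static int16_t fir_coeffs[64] __attribute__((section(".tcm_data"))) = { 0 };

    /* Large, rarely touched buffer: push it out to slower external RAM. */
    uint8_t log_buffer[16384] __attribute__((section(".ext_ram")));

    /* The compiler schedules this loop well, but it had no say in the
       placement decisions made above. */
    int32_t fir_step(const int16_t *samples)
    {
        int32_t acc = 0;
        for (int i = 0; i < 64; i++) {
            acc += (int32_t)fir_coeffs[i] * samples[i];
        }
        return acc;
    }

Note the division of labor: the developer, not the compiler, decides which objects earn a spot in the fast memory, and gets no help from the tools when the working set changes.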

Processor architects perform at herculean levels to deliver memory architectures that minimize silicon cost, minimize energy dissipation, and hide the latency of data reads and writes so that the instruction execution engines do not stall. The problem is that programming languages capture nothing, beyond the register file, about the varying access times of the different limited memory resources in the memory subsystem. Currently, it is only through manual assembly language coding that software can match each type of data with the access time of the memory resource the processor architect provided.
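
As an illustration of the mismatch, standard C offers only the register hint; any finer-grained placement is a hand-rolled convention. Below is a sketch of one common workaround, copying hot data into fast memory at startup. The scratchpad address SCRATCH_BASE and the table sine_table_flash are hypothetical names for this example:

    #include <stdint.h>
    #include <string.h>

    #define SCRATCH_BASE 0x10000000u  /* hypothetical on-chip scratchpad address */

    /* The table lives in slow flash; nothing in its type tells the
       compiler that reading it costs many more cycles than on-chip RAM. */
    extern const int16_t sine_table_flash[1024];

    /* Hand-copy the hot table into the scratchpad at startup so the
       inner loops see short access times. */
    static const int16_t *sine_table_fast;

    void init_fast_tables(void)
    {
        int16_t *dst = (int16_t *)SCRATCH_BASE;
        memcpy(dst, sine_table_flash, sizeof sine_table_flash);
        sine_table_fast = dst;
    }

Nothing in the type of sine_table_fast records that it is fast; that knowledge lives only in the developer’s head, which is exactly the information a compiler would need to perform this partitioning automatically.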

Matching data with the different memory resources is an involved skill set, and it may be a long while before compilers can perform that partitioning with any level of proficiency. In the meantime, what else should compilers do better to improve the productivity of software developers and offset the increasing complexity they must account for in their embedded designs?

2 Responses to “What should compilers do better?”

  1. E.G. @ LI says:

    “Partitioning and allocating global resources”: is this a job for the linker anyhow?

  2. D.A. @ LI says:

    Possible perspective: the demand for software has always far outstripped our ability to produce it, and I don’t see this trend ending for decades. So we have higher-order languages, infrastructures, clouds, modeling tools, agile development processes (etc.) – and we keep pushing the edge on what we can deliver just to survive the market. There are specialized niches where something other than “the need for tons of code” is the main driving force; in these niches some (not all) of the advances are hard to use or even work against the niche constraints. For example, virtualization can require a higher tolerance to temporal jitter, which may be unacceptable to a hard real-time application (depending, of course, on the nature of the deadlines).

    The pattern I’ve seen is: 1) we can’t fulfill the market demands for software, 2) there’s an innovation in how we do the job, 3) some niche applications can’t use the innovation, 4) a group of specialty vendors go off and modify the innovation, 5) more niche applications can use the innovation, but 6) there is a set of apps that it never works for. You can apply this to languages, operating systems, infrastructure, tools, processes, … which would be a fun exercise if anyone is interested…

    There is no fully universal solution because the driving forces are different. Somebody is always going to have to understand the real hardware.

    IMHO
