Entries Tagged ‘Abstraction’

What tools do you use to program multiple processor cores?

Wednesday, July 27th, 2011 by Robert Cravotta

Developers have been designing and building multi-processor systems for decades. New multicore processors are entering the market on a regular basis. However, it seems that the market for new development tools that help designers analyze, specify, code, test, and maintain software targeting multi-processor systems is lagging further and further behind the hardware offerings.

A key function of development tools is to help abstract the complexity that developers must deal with to build the systems they are working on. The humble assembler abstracted the zeros and ones of machine code into more easily remembered mnemonics that enabled developers to build larger and more complex programs. Likewise, compilers have been evolving to provide yet another important level of abstraction for programmers and have all but replaced assemblers for the vast majority of software projects. A key value of an operating system is that it abstracts the configuration, access, and scheduling of the growing number of hardware resources in a system away from the developer.

If multicore and multi-processor designs are to experience an explosion in use in the embedded and computing markets, it seems that development tools should provide more abstractions to tame the complexity of building with these significantly more complex processor configurations.

In general, programming languages do not understand the concept of concurrency, and the extensions that do exist usually require the developer to explicitly identify where and when concurrency exists. Developing software as a set of threads is one approach to abstracting concurrency; however, it is not clear how well a threading design method will scale as systems approach ever larger numbers of cores. How do you design a system with enough threads to occupy more than a thousand cores – or is that the right question?
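As a concrete illustration of how explicit that identification gets, here is a minimal sketch in C using POSIX threads; the thread count, array size, and the sum_slice helper are arbitrary choices made for this example rather than anything prescribed by a particular toolchain, and it is the developer who decides exactly where the work is partitioned.

```c
/* Minimal sketch: the developer, not the language, explicitly partitions
 * the work across a fixed number of POSIX threads. Build with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define N 1000

static int data[N];

struct slice {
    int start;    /* first index this thread processes */
    int end;      /* one past the last index it processes */
    long partial; /* partial sum computed by this thread */
};

static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    s->partial = 0;
    for (int i = s->start; i < s->end; i++)
        s->partial += data[i];
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    struct slice slices[NUM_THREADS];
    long total = 0;

    for (int i = 0; i < N; i++)
        data[i] = i;

    /* Explicitly carve the data set into one slice per thread. */
    for (int t = 0; t < NUM_THREADS; t++) {
        slices[t].start = t * (N / NUM_THREADS);
        slices[t].end = (t + 1) * (N / NUM_THREADS);
        pthread_create(&threads[t], NULL, sum_slice, &slices[t]);
    }

    /* Wait for every thread and combine the partial results. */
    for (int t = 0; t < NUM_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += slices[t].partial;
    }

    printf("total = %ld\n", total);
    return 0;
}
```

Nothing in this sketch scales automatically: moving from four threads to a thousand cores leaves the partitioning, scheduling, and combining logic as the developer's problem, which is exactly the scaling question raised above.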

What tools do you use when programming a multicore or multi-processor system? Does your choice of programming language and compiler reduce your complexity in such designs or does it require you to actively engage more complexity by explicitly identifying areas for parallelism? Do your debugging tools provide you with adequate visibility and control of a multicore/multi-processor system to be able to understand what is going on within the system without requiring you to spend ever more time at the debugging bench with each new design? Does using a hypervisor help you, and if so, what are the most important functions you look for in a hypervisor?

When do you use your compiler’s inline assembler feature and for what reasons?

Wednesday, September 1st, 2010 by Robert Cravotta

I am working on a mapping for software that is analogous to the mapping I developed to describe the different types of processing options. The value of this type of mapping is that it improves visibility into the assumptions and optimization trade-offs that drive the design and implementation details of a given tool or architecture. A candidate mapping criterion is the coupling between the different layers of abstraction that sit between the software and the hardware target. I will be asking questions that try to tease out the assumptions and trade-offs behind the tools you use to move between different layers of abstraction in your designs.

For example, a compiler allows a software developer to write instructions in a high-level language, which generally lets the developer focus on what the software needs to accomplish without having to worry about partitioning and scheduling execution resources, such as register reads and writes. In the mapping model, a compiler would have a strong coupling with the high-level language. Additionally, if the developer is using an operating system, the compiler may also support targeting the software to the operating system API (application programming interface) rather than to a privileged mode on the target processor. This constitutes another layer of coupling that the compiler must account for.
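To make those coupling layers concrete, the sketch below contrasts writing a character through an operating system API with writing it directly to a memory-mapped UART transmit register; the register address and both function names are invented for illustration, not taken from any real part or library.

```c
/* Minimal sketch of two coupling layers for the same task. The UART
 * address below is hypothetical; real hardware defines its own map. */
#include <stdint.h>
#include <unistd.h>

#define UART_TX_REG ((volatile uint32_t *)0x4000C000u) /* hypothetical */

/* Coupled to the OS API: the kernel owns and schedules the hardware. */
void putc_via_os(char c)
{
    write(STDOUT_FILENO, &c, 1);
}

/* Coupled to the bare hardware: the developer owns the peripheral. */
void putc_via_register(char c)
{
    *UART_TX_REG = (uint32_t)c;
}

int main(void)
{
    putc_via_os('A');
    /* putc_via_register('B'); -- only meaningful on the target hardware */
    return 0;
}
```

The first function stays portable across anything that provides the API; the second trades that portability for direct control, which is the same trade-off the compiler's abstraction layers manage on the developer's behalf.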

However, most compilers also include an inline assembler that allows the developer to break these abstraction layers and work at the level of the target processor’s assembly instructions and resources. Using the inline assembler usually means more complexity for the software developer to manage because the compiler is no longer directly controlling some of the target processor’s resources. Using assembly language can also reduce the portability of the software, so developers usually have a good reason to break the abstraction layer and work at the level of the target processor. Reasons for using an inline assembler include improving the execution speed of the software, optimizing the memory usage in the system, and directly controlling special hardware resources in the processor such as co-processors, accelerators, and peripherals.
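As one example of what breaking the abstraction looks like in practice, the sketch below uses GCC's extended inline assembler to read the x86 time-stamp counter for a quick cycle measurement; the asm syntax is GCC-specific and the RDTSC instruction is x86-specific, so this is illustrative rather than a recipe for any particular embedded target.

```c
/* Minimal sketch: GCC extended inline assembly reads the x86 time-stamp
 * counter directly, stepping below the compiler's abstraction of the CPU. */
#include <stdint.h>
#include <stdio.h>

static inline uint64_t read_cycle_counter(void)
{
    uint32_t lo, hi;
    /* RDTSC returns a 64-bit count split across EDX:EAX. The output
     * constraints tell the compiler which registers the instruction
     * writes so it can schedule correctly around the inline block. */
    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t start = read_cycle_counter();
    /* ... code being measured would go here ... */
    uint64_t end = read_cycle_counter();
    printf("elapsed cycles: %llu\n", (unsigned long long)(end - start));
    return 0;
}
```

Even this small block shows the cost of dropping a layer: the constraints have to describe the processor's registers to the compiler by hand, and the code no longer ports to a different architecture or, without changes, to a different compiler.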

Under what conditions do you use the inline assembler (or a separate assembler) for your software? What are the project management and technical trade-offs you consider when choosing to work at the assembly level? What features would a compiler need to support to allow you to avoid using assembly language? Your answers will help refine the software sweet spot mapping that I am currently developing.