Which Build Method?

There are three options when it comes to building your unit tests. Two of them are good options. The third is the answer many think is the obvious solution. Most of the time the third option is wrong. Let's take a look at all three.

Option 1: Simulators

Many embedded targets have simulators. Their usefulness for testing is often overlooked: they usually do a great job of simulating the processor's instruction set, but a poor job of treating registers, peripherals, and memory the way your target does. Instead, simulators often treat all memory locations as if they were RAM.

For Unit Testing, this is AWESOME! Seriously.

Let's say we have a register USART1_STATUS with a bit USART_STATUS_OVERFLOW that tells our application we've overflowed the peripheral's input buffer. If the simulator treats USART1_STATUS as RAM, our test can write to USART1_STATUS whenever we want (even if it's READ-ONLY on our target). Our test can fake a USART_STATUS_OVERFLOW and then verify that our function handles it correctly. No muss, no fuss.

To make the simulator option work, a simulator must exist for your target (no, really?). Also, if you want to automate the process (and you WANT to automate as much as possible!), the simulator must let you load and run applications from a scriptable interface (like the command line), and it must have some method of collecting results. Most often this takes the form of special character write routines that log characters to a file or dump them over standard out.


Pros:

  • Tests are built using the same compiler as your release code
  • You have full test control over all registers and peripherals


Cons:

  • Simulators can be slow to execute tests
  • Configuration can sometimes be complicated (depending on your simulator)

Option 2: Native Executable

The second good option is to build your tests as native executables. This means that if you are running Microsoft Windows, you build your tests to run directly on Windows (using Microsoft Visual Studio or MinGW or something like that). If you're running macOS or Linux, you're likely going to build using gcc or clang.

There's a trick to this that requires some up-front work: preparing to test registers. Remember how we talked about how awesome it is that we can read or write any register we want on a simulator? We want that same ability for our native tests! But how can we get it? Just because address 0x5000C000 is the start of a GPIO configuration register set on our target doesn't mean we can simply write to that location on our native host, right? There could be ANYTHING there!

The answer is that we must define our registers in a particular way. We'll get into the details of this on another page, but basically the trick is to take normal register definitions like these:

#define PORTG_CONFIG (*(volatile unsigned int*)(0x5000C000))
#define PORTA_CONFIG (*(struct PORT_CONFIG_TYPE*)(0x50000000))

During a test, you want them to act more like this:

volatile unsigned int PORTG_CONFIG;
struct PORT_CONFIG_TYPE PORTA_CONFIG;

Now, if you look closely, the first two definitions are common ways of mapping an integer or a struct onto a hardware address as a nice C-friendly register. The latter two provide the very same interface, but to an integer or struct sitting in RAM. So, if we can make sure we use the first set for our release build and the second for our tests (which is easy, we assure you), then you are ready for testing! (Well, once you update all your registers to be this style.)


Pros:

  • You have full test control over all registers and peripherals
  • (Usually) faster builds
  • (Usually) (MUCH) faster test execution
  • Promotes portable C


Cons:

  • Requires two toolchains (one for release, one for test)
  • Might require some small modifications to source if your compiler has non-standard features
  • Upfront time investment for creating a flexible register set


Option 3: On Target

(probably the wrong choice)

If you've read the other two options, you might be able to guess why executing your tests on the actual embedded target is the worst option. If you guessed "Because you don't have control over the registers," you WIN! Unit Testing is about testing that your code handles all the situations it might encounter: good and bad. That means you need to be able to inject any error condition your code might encounter. It means you will want to write to read-only registers and read from write-only registers. You need access. Executing on the target doesn't give you that.

That isn't to say there isn't a place for executing code on your target. Quite the contrary: we at ThrowTheSwitch.org are big believers in the power of System Testing. But that's a different animal. For a System Test, you want to treat your final system as a black box. You apply external stimuli to your system and measure its responses. It happens against your RELEASE build (please don't use your debug or test builds for this).

Having said all that, there may be the rare case where executing on the target is the best (likely only?) choice. If you think this case applies to you, here's our recommendation:

  1. Start by rethinking it. Are you REALLY sure this is the best choice? It's likely you'll never be able to effectively unit test your code this way.
  2. If the answer is still yes, then Unity can still work for you. Focus on our helpful hints about using a simulator. Where we talk about loading and executing tests on the simulator, instead load and execute tests on your target. Where we talk about collecting results from your simulator, instead collect results from your target over a communication channel such as your debugger, a UART, or similar.

It CAN be made to work... but it's not likely the best option.


Have you made a decision? Excellent! If you already know which tools you need, you can jump straight to the configuration pages below. Otherwise, you might want to start by determining which tools are right for you.