L'Hexapod: Repeatable Unit Testing with AVR Assembler and AVR Studio

Previously published

This article was previously published on lhexapod.com as part of my journey of discovery into robotics and embedded assembly programming. A full index of these articles can be found here.

As I mentioned yesterday, the servo controller project has reached the point where being able to unit test the code would be useful to me. In my day job as a C++ server developer I’ve been using unit tests for several years, and most of the code that I write is written in a Test Driven Development style. This works well for me, and unit testing was one of the first things that I missed when I started to develop in AVR assembler in AVR Studio. At the time I didn’t know enough about how I would structure my code, or even how I’d write the code at all, and testing, though obviously missing, was something I managed to do without.

Now that my code has grown pretty large I’ve begun to split it into separate asm files, broken down by purpose; in effect I’m creating different modules of functionality. Due to the way that AVR Studio works with assembler projects these files are then included into a master file which is then assembled. Since my code is already broken down into modules, I’ve found that if I’m careful with how I split the code up I can build separate test projects. Each test project includes one block of real functionality from the servo controller project, plus test harness code that provides the functionality that the real code needs to build. For example, I have a function called SerialEchoCommand which echoes a command back to the serial port once the command’s arguments have been validated. By placing this function in one file and the code that uses it in another, I can build a test that replaces SerialEchoCommand with a function that the test harness can use to determine whether the real code would have done the right thing had it been linked with the real implementation of SerialEchoCommand. In testing terms this is called mocking: I’ve provided a mock implementation of an interface that the code under test uses, and the test can interrogate the mock to determine if the code under test is behaving as expected. With most testing the challenge is to separate the code that you want to test from the code that you don’t want to test, or from code that is too hard to manipulate within the test.
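To make the file layout concrete, here’s a sketch of how the two master files might be arranged. The file names here are hypothetical; only SerialEchoCommand comes from my actual code:

    ; servocontroller.asm - the real project's master file
    ;
    ;   .include "serialcommands.asm"   ; the code that calls SerialEchoCommand
    ;   .include "serialecho.asm"       ; real SerialEchoCommand, talks to the UART
    ;
    ; testserialcommands.asm - a test project's master file
    ;
    ;   .include "serialcommands.asm"   ; exactly the same code under test
    ;   .include "mockserialecho.asm"   ; mock SerialEchoCommand, records the call

The code under test is assembled unchanged in both projects; only the files that surround it differ.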

Once I realised that I could use relatively standard mocking techniques to separate the code under test from the underlying hardware it became obvious that I could build a test harness that would run in the AVR Studio Simulator and that could test various parts of my servo controller. Unit testing was within reach.

I’ve spent a few hours adding some tests to some of the simpler parts of the serial protocol handling code and it’s going well. I seem to have a structure that works and so far it’s been easy to provide data for the function under test and then to test that it produces the correct outcome and uses the correct services in the expected way. The next release of the servo controller source code will include my test harnesses.

I expect an example will help. Especially if you’re not used to testing!

Suppose we have a function called **SerialProcessCommandSetServoMinPosn** which is called from the code that accumulates and executes serial commands, and whose job it is to take a servo index and a position and to update the servo configuration data so that the supplied position is the minimum position that the servo can be moved to. This function might look something like this:

SerialProcessCommandSetServoMinPosn :

    ld servoIndex, X+

    cpi servoIndex, NUM_SERVOS                  ; check the servo index is valid
    brlt PC+2
    rjmp SerialServoOutOfRange

    ld temp1, X                                 ; load new min posn

    rcall SerialSelectServoData

    adiw XL, MAX_POS_OFFSET

    ld temp2, X                                 ; read existing max position

    cp temp2, temp1                             ; new min must be less than
    brsh PC+2                                   ; or equal to existing max
    rjmp SerialPosnOutOfRange

    sbiw XL, MAX_POS_OFFSET
    adiw XL, MIN_POS_OFFSET

    st X, temp1
       
    rcall SerialEchoCommand
   
    rjmp SerialStart

This is generally how all of the serial command processing code is structured. The call into us is an rjmp from the serial command dispatch code. We validate our parameters, report errors or echo our command back to the guy on the other end of the serial port, and then either jump back to the serial data accumulation code or execute the command and then jump back to the serial data accumulation code.

It’s probably clear from the code above that to be able to test it we need to set X to point to some valid data; outside of the test this would point into the serial data accumulation buffer, just after the command code that tells the dispatcher that this is the ‘set min’ command. In our test X can point anywhere that has two bytes of data available; our test harness will set this up and fill the buffer that X points to with a servo index and a position. Our test also needs to provide implementations of SerialServoOutOfRange, SerialPosnOutOfRange, SerialEchoCommand and SerialStart. As long as these labels exist in a different file we can replace the real code with mocks for our test simply by including the mock code rather than the real code. We’ll use the real implementation of SerialSelectServoData, as that function is responsible for taking a servo index and pointing X to the right place in the servo position data. It looks like this:

SerialSelectServoData :

    push temp2

    ldi XL, LOW(POSITION_DATA_START)      
    ldi XH, HIGH(POSITION_DATA_START)

    ldi temp2, BYTES_PER_SERVO
   
    mul servoIndex, temp2

    add XL, resl
    adc XH, resh

    pop temp2

    ret

Our test just needs to provide a valid position data buffer (i.e. an equate that sets POSITION_DATA_START to something sensible) with some known values in it.
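The harness data might be set up with something like the following. The values of the equates and the buffer sizes are assumptions for illustration; only POSITION_DATA_START, BYTES_PER_SERVO and the two offset names appear in the real code:

    .equ NUM_SERVOS      = 12               ; assumed value
    .equ BYTES_PER_SERVO = 4                ; assumed value
    .equ MIN_POS_OFFSET  = 0                ; assumed layout
    .equ MAX_POS_OFFSET  = 1                ; assumed layout

    .dseg
    TEST_SERIAL_INPUT_BUFFER:   .byte 2     ; servo index and position for the test
    TEST_SERIAL_OUTPUT_BUFFER:  .byte 16    ; count byte followed by recorded tokens
    POSITION_DATA_START:        .byte NUM_SERVOS * BYTES_PER_SERVO
    .cseg

With POSITION_DATA_START defined this way the real SerialSelectServoData assembles against the test harness’s buffer rather than the real servo data.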

The test might look something like this:

TestSerialProcessCommandSetServoMinPosn :

    ldi temp1, 15
    mov testIndex, temp1
   
    rcall InitialiseSerialOutputBuffer

    rcall InitialisePositionDataToKnownValues

    ldi XL, LOW(TEST_SERIAL_INPUT_BUFFER)    
    ldi XH, HIGH(TEST_SERIAL_INPUT_BUFFER)

    ldi temp1, 0                           
    st X+, temp1                            ; servo index to change

    ldi temp1, 20                           
    st X, temp1                             ; new min value

    ldi XL, LOW(TEST_SERIAL_INPUT_BUFFER)   ; reset X  
    ldi XH, HIGH(TEST_SERIAL_INPUT_BUFFER)

    rjmp SerialProcessCommandSetServoMinPosn

    rjmp TestsFailed

For now we can ignore the testIndex register. Here we set up an output buffer for our implementation of SerialEchoCommand and initialise POSITION_DATA_START and the data in the buffer to sensible values. We then set up the serial input buffer to contain a servo index and a position value and set X to point to the servo index in the buffer. This is how the serial dispatch code would leave the X pointer after examining the previous byte in the real serial accumulation buffer and switching on it depending on which command code it represents. We then jump to the code under test and, hopefully, never return. In case we DO return we then end the test by jumping to the TestsFailed label.

The TestsFailed label is one place where the test code will end up after tests have been run. The other is the TestsSucceeded label. Both simply consist of a jump to themselves. By setting break points on each of these jumps we can run the tests and discover if there are any failures.

This serial code is slightly harder to test than it could be because it isn’t structured as functions which return to their caller. Instead we jump into the functions and they jump back to the serial command accumulation loop when they’re done. This makes error handling easier; failures result in the code at the level of the failure reporting the error back to the serial port and then jumping straight back to accumulate a new command. As such all of the code under test will eventually jump to SerialStart.

To be able to determine if the test passed we need to be able to examine what the code under test did whilst it was running. This is where the testIndex register comes in. The mocked out code for SerialStart contains a jump table that jumps based on the contents of the testIndex register. Each test has a corresponding ‘check results’ function and the SerialStart code jumps to the check results code associated with the test that’s currently running.
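A minimal mocked SerialStart might dispatch like this. The testIndex value 15 and the label names for the code under test come from the listings above; the rest is a sketch, and it assumes testIndex is mapped to one of the upper registers so that cpi can be used on it:

    SerialStart :

        cpi testIndex, 15                   ; is TestSerialProcessCommandSetServoMinPosn running?
        brne PC+2
        rjmp SerialStartCheckTest15

        rjmp TestsFailed                    ; unknown test index

    SerialStartCheckTest15 :

        rcall TestSerialProcessCommandSetServoMinPosnCheckResults
        rjmp TestsSucceeded

Each new test adds another comparison and another ‘check results’ stub to this dispatch code.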

TestSerialProcessCommandSetServoMinPosnCheckResults might look something like this:

TestSerialProcessCommandSetServoMinPosnCheckResults :

    ; The call should leave X pointing at the config value that we have changed...

    cpi XL, LOW(POSITION_DATA_START + MIN_POS_OFFSET)
    breq PC+2
    rjmp TestsFailed

    cpi XH, HIGH(POSITION_DATA_START + MIN_POS_OFFSET)
    breq PC+2
    rjmp TestsFailed

    ld temp1, X                     ; validate that we changed the value we wanted to change
    cpi temp1, 20                   ; to the correct value
    breq PC+2
    rjmp TestsFailed       

    clr temp1                       ; reset the value to its starting value
    st X, temp1

    rcall ValidatePositionDataIsUnchanged   ; and then make sure nothing else was changed

    ; now validate that the correct mock functions were called...

    ldi XL, LOW(TEST_SERIAL_OUTPUT_BUFFER)    
    ldi XH, HIGH(TEST_SERIAL_OUTPUT_BUFFER)

    ld temp1, X+
    cpi temp1, 1            ; there should have been 1 call
    breq PC+2
    rjmp TestsFailed

    ld temp1, X+
    cpi temp1, 0xFF         ; echo command
    breq PC+2
    rjmp TestsFailed

    inc testResult
    ret

Note that we can check that the X pointer has been left where we expect it to end up and that the data that should have been manipulated has been changed as expected. We can then check that the correct mock functions were called. In this case we check that one function, SerialEchoCommand, was called. The implementation of this could be something like this:

SerialEchoCommand :

    ldi serialChar, 0xFF

    rcall SendSerial
   
    ret

Where SendSerial might be implemented like this:

SendSerial :
    push XL                                     ; save the registers that we use
    push XH
    push temp1
    push temp2

    ldi XL, LOW(TEST_SERIAL_OUTPUT_BUFFER)    
    ldi XH, HIGH(TEST_SERIAL_OUTPUT_BUFFER)

    ld temp1, X                                 ; load the number of bytes currently stored in the serial
    clr temp2                                   ; output buffer.
                                   
    inc temp1                                   ; increment the number of bytes as the offset from the
                                                ; start of the buffer is one greater than the number of bytes
                                                ; as the buffer also holds the count itself at offset 0
    add XL, temp1
    adc XH, temp2

    st X, serialChar                            ; store the data that would be written to the serial port in
                                                ; our buffer for later analysis in the test

    ldi XL, LOW(TEST_SERIAL_OUTPUT_BUFFER)
    ldi XH, HIGH(TEST_SERIAL_OUTPUT_BUFFER)

    st X, temp1                                 ; save the number of bytes stored, note that we incremented
                                                ; this value above

    pop temp2                                   ; clean up the stack
    pop temp1
    pop XH
    pop XL

    ret

This makes it easy to check for data that the functions under test might write directly to the serial port, using SendSerial, or via other mock functions that they may call, such as SerialEchoCommand. There’s no need to test the functionality of SerialEchoCommand here; we can test that separately, so it’s adequate that it simply writes a single well known token into the test output buffer.

Of course, things get more complex when we’re testing for correct handling of invalid values (i.e. if we pass in a servo index that’s too big, or if we try and set a min posn that’s out of range) but most of the tests required can be built on the same framework.
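For example, a test for an out-of-range servo index might look something like the following, with the mocked SerialServoOutOfRange recording a distinct token in the same way that the mock SerialEchoCommand does. The test label, its testIndex value of 16 and the 0xFE token are all hypothetical:

    TestSetServoMinPosnBadIndex :

        ldi temp1, 16                       ; hypothetical index for this test
        mov testIndex, temp1

        rcall InitialiseSerialOutputBuffer
        rcall InitialisePositionDataToKnownValues

        ldi XL, LOW(TEST_SERIAL_INPUT_BUFFER)
        ldi XH, HIGH(TEST_SERIAL_INPUT_BUFFER)

        ldi temp1, NUM_SERVOS               ; first invalid servo index
        st X+, temp1

        ldi temp1, 20                       ; position value, should never be read
        st X, temp1

        ldi XL, LOW(TEST_SERIAL_INPUT_BUFFER)
        ldi XH, HIGH(TEST_SERIAL_INPUT_BUFFER)

        rjmp SerialProcessCommandSetServoMinPosn

    SerialServoOutOfRange :                 ; mock error handler

        ldi serialChar, 0xFE                ; well known 'servo out of range' token
        rcall SendSerial
        rjmp SerialStart

The corresponding ‘check results’ code would then expect exactly one byte, 0xFE, in the output buffer and would verify that the position data is completely unchanged.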

Most functions, even the PWM generating timer interrupt code, can be tested in a similar way. The complex part is always getting the granularity of the code packaging right so that you can mock the appropriate layers. This often means wrapping direct hardware access in several simple functions or macros, but it’s usually a good design decision to abstract these hardware access points away anyway. I’ve found in the past that allowing the tests to lead your design where appropriate (as is the way with TDD) usually results in a better design!

Now, off to fix those bugs in the multi-move command…