I’m noticing that DW packet exchanges that require a fast turnaround are twitchy at best, and often fail outright. I’m wondering if the task switch between the SPI driver and the DW app is simply taking too long. That would be a disappointing development, having sunk so much effort into RSX.


DC: Excuse my ignorance. I’m going to ask a bunch of simpleton questions so you can see what’s going on in my mind. You don’t need to answer them.

ML: They’re not stupid questions.  They are exactly the questions I should be answering to figure out the problem.

What does RSX stand for?

RSX – dunno (probably Real Time eXecutive), the RTOS that I’m using for task scheduling etc.


What’s the difference between a fast-turnaround packet and a non-fast-turnaround packet? Why is there a need for a fast turnaround?

I didn’t explain that well. It’s my nomenclature. Think of a simple P2P example, e.g. Peer-A: “Please turn on the light”. Peer-B: “OK, done it”. Normally the uC listens to the DW for a packet, reads it in from the DW buffer, parses “pls turn on the light”, executes it, and then may or may not send an “OK, done it” reply. I call that slow-turnaround. For ranging packets the turnaround time has to be precise, so it can be subtracted from the total round-trip time. Since the inbound and outbound packet formats are standard, the DW can be told to handle that transaction autonomously, without involving the uC until after it is done – i.e. fast-turnaround. At least that used to be my theory, until I saw the code.


If this is a problem, is it because of a DW hardware requirement or the uC’s inability to complete tasks fast enough? Or both? What is the packet turnaround time requirement?

At the moment I don’t know what the problem is, or what the turnaround requirement is. The turnaround time (TT) is a parameter. The maximum TT is perhaps discussed in the manual, or maybe I have to use a scope to figure it out. But hopefully I can just fix it. The longer the TT, the more likely the whole ranging sequence gets trashed by some third-party wireless traffic, so there is a heavy incentive to keep it short. To sort this out I need an extended period of uninterrupted time to see exactly what’s going on (like B going out of town for a week!). The only thing I know is that the response packets are being sent, because I see the “sent” confirmation, but the peer-receiver is issuing a timeout (TO) – which indicates that the sender took too long to respond. I think the TO is a parameter too. One option is simply to extend the TT and TO parameters and see if that makes the problem go away.


How long does it now take for a packet turnaround to occur?

Dunno yet.  


What is the time requirement for the packet turnaround? What is happening while the packet is being turned around?

In theory it is a short fixed amount, within the limits of the DW timer resolution (~15 ps) and width (40 bits, so it wraps after roughly 16 s). This is not as straightforward as it sounds because the timer wraps, and when it does the DW generates an interrupt and abandons the sequence. (Why?) The thing I don’t get is why the uC needs to be involved at all, but in the demo code it clearly is. That’s a thing I have to understand, along with a few other things. It looks like the uC extracts the arrival time from the DW, computes the required response time, and stuffs both numbers into the outgoing packet. Clearly it would make a lot more sense for that to be done automatically by the DW hardware, but that appears not to be the case.


How much time does it take now to switch tasks between the SPI driver and the DW app? Why do the tasks need to be switched? Is it because the uC can do only one thing at a time?

Task switch time?  That’s another thing I don’t know, and need to find out.  The standard architectural approach is to make each driver a separate task, so the application can conceivably wait for several streams of IO to complete.  A rather bogus example: waiting for some long-winded SPI IO to finish unless some other IO activity intervenes – say, a key press signalling that the SPI activity should be aborted.  I think I can redesign the architecture to get rid of the SPI task, so there isn’t a task switch, but I’d like to know that’s the issue before I head down that rabbit hole.


Would it help to have a uC dedicated to DW tasks? Is this a problem because we are relying on the Nordic to do too many things?

A processor dedicated to SPI/DW tasks would be good.  Indeed, I think that’s what BeSpoon do, which theoretically makes their uC interface a lot simpler.  But I can probably achieve the same thing by disabling task switches until the DW activity finishes.  Our environment has fixed-function tasks, so we don’t actually need the general solution I have implemented, which caters for tasks with unknown IO requirements.

Good questions, David.  You have made me reflect on my debug strategy.
