I have the basic comms for a mesh working (no actual packet forwarding yet, but the basic IO is working, i.e. a TS can be in listen and broadcast mode).
I have rigged up push-buttons to simulate an IMU event, so when a node senses it has moved it fires off a ranging sequence to recompute the topology of the mesh.
So now I can press a button on either node and the other node gets told about it – that's the big breakthrough. It provides the basic transport mechanism for propagating topology changes throughout the mesh.
So far so good, but of course…
Obviously a node will not advertise changes unless it has some to share, so that side of things should not consume much power.
On the other hand, listening out for advertisements is something you have to do all the time. After some experimentation I'm beginning to see that the listen overhead is essentially fixed.
If an advert occurs once every 10 ms, then you have to listen out for 10 ms to be sure of hearing a packet go by. Of course you don't have to listen every 10 ms (i.e. continuously), because you could decide not to try and catch every advert and just listen for one in ten. So you listen for 10 ms every 100 ms. The catch is that this requires the advertiser to send the same packet 10 times at regular intervals.
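To make that relationship concrete, here's a minimal sketch in C. The names and values are illustrative only (they're not from my actual firmware): the repeat count and the listen duty cycle both fall straight out of the advert interval and the listen period.

```c
#include <stdio.h>

/* Illustrative timing only: a 10 ms advertising interval, a listen
 * window just long enough to span one interval, and a listen period
 * of 100 ms (i.e. listening one tenth of the time). */
#define ADVERT_INTERVAL_MS   10
#define LISTEN_WINDOW_MS     10
#define LISTEN_PERIOD_MS    100

int main(void)
{
    /* To be certain of catching one copy, the advertiser must keep
     * repeating the packet for at least one full listen period. */
    int repeats = LISTEN_PERIOD_MS / ADVERT_INTERVAL_MS;
    double duty = (double)LISTEN_WINDOW_MS / LISTEN_PERIOD_MS;

    printf("advertiser repeats: %d copies per burst\n", repeats);
    printf("listener duty cycle: %.0f%%\n", duty * 100.0);
    return 0;
}
```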
The consequence of this is that you get to choose the trade-off between time spent listening and time spent advertising. You can advertise once an hour if the receiver listens continuously for the whole hour, or listen for just ten minutes in every hour if the advertiser sends the same packet 6 times an hour. What that split should be is determined by the anticipated frequency of transmissions and the power budgets. I'd like to measure those values. Maybe we are in the toilet already.
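Just to get a feel for the shape of the numbers before I've measured anything, here's a back-of-envelope sketch comparing the two extremes above. The current and on-air-time figures (RX_MA, TX_MA, TX_MS) are placeholders, not measurements.

```c
#include <stdio.h>

#define RX_MA  5.0   /* hypothetical receive current, mA        */
#define TX_MA  8.0   /* hypothetical transmit current, mA       */
#define TX_MS  2.0   /* hypothetical on-air time per packet, ms */

/* Rough charge spent per hour by the listener and the advertiser
 * for a given listen time and number of repeated adverts. */
static void budget(const char *label, double listen_ms_per_hour, int adverts_per_hour)
{
    double rx_mams = RX_MA * listen_ms_per_hour;        /* listener charge   */
    double tx_mams = TX_MA * TX_MS * adverts_per_hour;  /* advertiser charge */
    printf("%-28s rx %10.0f mA.ms/h   tx %6.0f mA.ms/h\n",
           label, rx_mams, tx_mams);
}

int main(void)
{
    budget("listen 60 min, 1 advert/h", 60.0 * 60.0 * 1000.0, 1);
    budget("listen 10 min, 6 adverts/h", 10.0 * 60.0 * 1000.0, 6);
    return 0;
}
```

Even with made-up currents, the listening side dominates by several orders of magnitude, which is why the duty cycle is the thing worth fighting over.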
The other consequence of the choice is latency. If you only design to hear one out of every 10 advertising packets, then on average you will miss around five before you hear one.
So the important thing to decide for our application is how long it can tolerate waiting before it hears about updates.
I have parameterised my code so that I can choose any average latency from a few milliseconds out to 5 seconds (i.e. a worst-case wait of 10 s, naively assuming no packet losses). The longer the latency, the less time spent listening, and listening is what's going to be the battery killer.
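The derivation behind that parameter is just arithmetic; here's a sketch of it (the function name and the 10 ms advert spacing are illustrative, not my real code): the average wait is half the listen period, and the worst case is the full period.

```c
#include <stdio.h>

#define ADVERT_INTERVAL_MS 10  /* spacing between repeated copies of one advert */

/* Derive the radio timing from a target average one-hop latency,
 * assuming no packet losses. */
static void plan_for_latency(unsigned avg_latency_ms)
{
    unsigned listen_period_ms = 2 * avg_latency_ms;           /* worst-case wait  */
    unsigned repeats = listen_period_ms / ADVERT_INTERVAL_MS; /* copies per burst */
    double listen_duty = (double)ADVERT_INTERVAL_MS / listen_period_ms;

    printf("avg %u ms -> period %u ms, %u repeats, %.2f%% listen duty\n",
           avg_latency_ms, listen_period_ms, repeats, listen_duty * 100.0);
}

int main(void)
{
    plan_for_latency(5);      /* the few-milliseconds end of the range */
    plan_for_latency(5000);   /* the 5 s average / 10 s worst case     */
    return 0;
}
```

At the few-millisecond end the duty cycle comes out at 100%, i.e. continuous listening, which is exactly the continuous-listen case described earlier.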
It should also be noted that the latency is for one hop. If it takes 3 hops to reach the edge of the network, and thus the cloud, then there could be a 30 s wait in the worst case.

//Mik
