On Fri, Oct 31, 2014 at 8:20 AM, Mik Lamming <mik@lamming.com> wrote:

Nordic announce support of some near-field charging standard.

I don’t understand what Nordic have done.

——————–
On Fri, Oct 31, 2014 at 8:56 AM, David Carkeek <dcarkeek@gmail.com> wrote:

There are a bunch of new things in this press release so I don’t understand it either. They say “magnetic resonance” and “spatial freedom” but then talk about pads and surfaces as though the device has to be resting on the surface. It says “Out of Band”, which I don’t understand, and “up to eight devices”. Why couldn’t it be nine devices? Why do you need an SDK to make a charging system?

What I kind of think is that there is a pad, a kind of transmitting antenna, that emits a signal to which a device resonantly couples over a distance that is not zero, for the purpose of transferring energy for battery charging. Each device needs a channel (up to 8 are possible). That would be super cool if true. Need to read about Rezence. I might be completely wrong.

“The nRF51 Wireless Charging SDK is available today on a limited basis to lead customers.”
but “Engineering build of S120 and an updated nRF51 SDK is available today as a download for existing customers of nRF51822.”

Whom do I ask for the link? Not that I would know what to do with it after downloading.

————————-
On Fri, Oct 31, 2014 at 10:46 AM, Mik Lamming <mik@lamming.com> wrote:

Following up: I’m assuming that the range is at best centimeters because it is near-field, which drops off dramatically after a few wavelengths… doesn’t it?  The frequency is 6.8 MHz, so the wavelength is ~5 cm?  (Wrong: 44.1 meters.)

Aha…  I found this.
http://en.wikipedia.org/wiki/Rezence_(wireless_charging_standard)  which contains this useful sentence:

The interface standard supports power transfer up to 50 Watts,[1] at distances up to 5 centimeters.[2] 

I also found this excellent video, the first 2-3 minutes of which gave me a strong intuitive sense of the characteristics.  https://www.youtube.com/watch?v=r1UT4NuygmQ

————————–
On Fri, Oct 31, 2014 at 11:29 AM, David Carkeek <dcarkeek@gmail.com> wrote:

Too bad it doesn’t work across a room, but still quite interesting. I watched about 5 minutes but I’ll have to finish it later.

I’m somewhat ashamed to say I don’t know much about RF fields, near-field, and the relationship of wavelength to power transfer. The wavelength at 6.8 MHz is 44 meters according to the calculator. I wonder why that frequency was chosen. It must mess pretty badly with nearby radios.
A Tesla coil can work at tens of feet, maybe hundreds of feet.

—————————–
On Fri, Oct 31, 2014 at 1:59 PM, Mik Lamming <mik@lamming.com> wrote:

It’s only the first 5 mins that are worth the time. The rest was out of my depth anyway. It just gave me a good feel for the strengths and weaknesses of near-field.

Ah MHz, not GHz! Yes – I agree. Doh.  Good job you check all my math.

c = 300,000 km/s = 3 * 10^5 km/s = 3 * 10^8 m/s
f = 6.8 MHz = 6.8 * 10^6 Hz
Wavelength = (3 * 10^8) / (6.8 * 10^6) = 44.1 m
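
A two-line sanity check of that arithmetic, in plain C:

#include <stdio.h>

int main(void)
{
    const double c = 3.0e8;   /* speed of light, m/s */
    const double f = 6.8e6;   /* Rezence carrier frequency, Hz */
    printf("wavelength = %.1f m\n", c / f);   /* prints: wavelength = 44.1 m */
    return 0;
}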

After almost two weeks of doing other stuff I have finally got back to looking at TS.  Today I took a cursory look at the DecaWave code.  It certainly looks daunting.

The good news is that they have a carefully worked, and documented example of the ranging process.  The bad news is that it looks very complicated, and I fear for the ARM M0.

The thing I am trying to figure out right now is how easy it is to change role from tag to anchor.  It’s almost a requirement for our architecture to be able to do that dynamically, but I fear the power overhead of doing that might be large.

I started reading the Source Code Guide.  I found a couple of very interesting paragraphs.
  • In a microprocessor system the TRSP time is best reduced to complete the ranging as quickly as possible. This improves the accuracy of the result as it minimises the difference in TRSP due to the different local clock rates at each end. A time should be chosen that is a little greater than the worst case response latency possible, so that the command to do a delayed TX is not issued after that time has gone past, which would essentially delay TX until the sys clock wraps around to the specified TXtime again. A status bit is provided to warn of this “delayed TX more than half a clock period away” event indicating the TX start was late. If this late error occurs frequently then it would probably be a good idea to use a longer delay, but if this was a very rare event it may be better to keep the specified period and just recover from the error when it occurs. This is a system design choice.
  • Where two peer mobile devices are ranging between each other (e.g. in a separation alarm say) then for battery conservation it is not practical for either device to have to listen for long periods. Therefore the devices have to operate in a more synchronised fashion, turning on the receiver only when the message from the peer device is expected, (i.e. long enough before the expected message and for a period long enough to detect it, given the maximum possible timing drifts between the two devices clocks). In such a scheme initial pairing will probably be initiated by some manual means like coordinated button pressing on both devices, or perhaps by employing low-powered listening, a technique that samples for preamble occasionally (e.g. once per second) looking for a wakeup sequence sent for the whole second.
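
The first bullet suggests a response-timing pattern like the sketch below. The helper names and the TRSP constant are made up for illustration – this is not the DecaWave driver API, just the shape of the logic:

#include <stdint.h>
#include <stdbool.h>

/* Reply delay in device time units. Placeholder value; it must be a
 * little greater than the worst-case response latency, per the guide. */
#define TRSP_TICKS  ((uint64_t)1 << 24)

/* Hypothetical wrappers, for illustration only. */
uint64_t uwb_rx_timestamp(void);               /* time the poll frame arrived */
bool     uwb_start_delayed_tx(uint64_t when);  /* false if 'when' has already passed */

static void send_ranging_response(void)
{
    uint64_t tx_time = uwb_rx_timestamp() + TRSP_TICKS;

    if (!uwb_start_delayed_tx(tx_time)) {
        /* The "late TX" case: the requested time had already gone by,
         * so the chip would otherwise wait for the sys clock to wrap
         * around to tx_time again. Abort this exchange and let the
         * initiator retry; or, if this happens often, use a longer
         * TRSP – a system design choice, as the guide says. */
    }
}
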
Not much to show for a whole day of slaving over a hot IDE.  Just some non-functional code.

Grrrr….

I’ve sweated over an example bit of code, and just can’t get it to work, but I suppose the good news is that I have learned some useful things about debugging.

Finally, deciding that I had not done anything obviously stupid, I turned to the web and found someone who had exactly the same problem.

I have an issue with the ble_app_lbs. I can compile it OK, but when I run it it gets as far as calling sd_ble_gap_device_name_set(), which returns error 0x3001.
Can someone tell me why this might be?

https://devzone.nordicsemi.com/question/14188/ble_app_lbs-error/   explains that with each upgrade of the SoftDevice there is a corresponding upgrade of the SDK.  That was to be expected.

0x3001 is error code for ” SVC handler is missing “
You get this error when you use the wrong softdevice version with wrong header file (i.e. an old SDK version ).
-c
Update : lbs not works with SD S110 v.7.0. There was a previous question with answer from Hung Buy, a nordic guy.

0x3001 is actually “BLE_ERROR_NOT_ENABLED”, which is raised due to an extra step needed in S110 7.0.
Ulrich Myhre (Jul 27 ’14)

However, I didn’t expect them not to update the examples shipped with the SDK.

When you download the individual releases, there is always a document called “Migration Document” that follows the release bundle. Everything you need to know for migrating between any version should be there. Unfortunately, not every SDK example is able to keep up, but the migration doc should outline the necessary changes if you need an older example to work.
Ulrich Myhre (Jul 27 ’14)

Following the answer chain I found this code snippet:

// Enable BLE stack
ble_enable_params_t ble_enable_params;
memset(&ble_enable_params, 0, sizeof(ble_enable_params));
ble_enable_params.gatts_enable_params.service_changed = IS_SRVC_CHANGED_CHARACT_PRESENT;
err_code = sd_ble_enable(&ble_enable_params);
APP_ERROR_CHECK(err_code);

 So what else needs to be changed?

Migration Document

So where is this mythical migration document?  It was allegedly inside the zip file that the SoftDevice came in.  But NO!!   There are only migration documents for the major releases.  Getting a major release that is now out of date is quite a trick.  One might think that it would come from the same place as all the other releases:

But 7.0.0 is not in the stack.  So where do I find it?  After fiddling around for ages I eventually found the “beware of the leopard” sign and went through the trapdoor to the secret basement.

The trick is to click on the release number on the right.

This gets you here where you can download 7.0.0.

And there in the filing cabinet, obscured from public gaze, are a bunch of files all with the same name but with different icons.  Hovering over the names one by one reveals:

Which is eleven pages of fun reading.

Anyway, back to the plot.  I figured out all the updates, and made sure I had the right combo of SDK and SoftDevice, and sure enough… it worked.

I continued working on the BLE stuff today, and in thinking about one of the protocol issues I had an idea for making our platform level abstraction more powerful.  I’m thinking that perhaps the key to making this more generally useful is to provide a genuine latitude and longitude layer on top of the basic mesh?

I’d summarize the task as figuring out the mesh location and orientation in world coordinates, i.e. <Lat, Long, Heading> (LLH).

Of course it will also accumulate range information, and all the other useful TS05b stuff, but this extra information would be available to applications that already have databases expressed in terms of LLH.

There are two challenging problems to be solved to be able to do that, but I can imagine that any platform that can do this satisfactorily will have an advantage in the market.  Fortunately I made a lot of progress on this during the TS05 work.

Mesh Orientation Problem

The underlying ranging technology is capable of estimating the shape and size of a mesh, and tracking the position of the mobile subject relative to the mesh – and quite accurately.  But the orientation of the mesh on the surface of the planet is unknown.  So notice that in the example below: on the left, the subject is facing anchor #2 (e.g. the TV, or an exhibit in a museum); and on the right the subject is clearly not facing anchor #2.

Two valid estimates of the same wireless mesh.

For a company like Nielsen, or a museum, this could lead to a lot of misleading assumptions.

TS always knows which way is North.  It can also estimate the magnitude and direction of any acceleration forces in the TS frame of reference.  A little matrix algebra produces a vector in the world frame of reference – i.e. relative to North.  Adding these vectors together can produce a trajectory.  This technique is called deduced reckoning, or DR.

A trajectory created by TS05’s IMU

So in the above example, the subject starts at point A and arrives at point B via a windy path.  Notice that the heading of the north vector changes relative to the TS05’s frame of reference, and so does the acceleration vector, and the difference between the headings represents the orientation of the device, not the subject, relative to North.
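
As a sketch of that rotate-and-integrate step (my own notation, not the TS05 code; it ignores gravity removal, tilt compensation, and drift correction):

#include <math.h>

typedef struct { double x, y; } vec2;

/* Rotate an acceleration vector from the device frame into the world
 * frame, given the device's heading relative to North (radians, CCW). */
static vec2 body_to_world(vec2 a_body, double heading)
{
    vec2 a_world = {
        a_body.x * cos(heading) - a_body.y * sin(heading),
        a_body.x * sin(heading) + a_body.y * cos(heading)
    };
    return a_world;
}

/* One dead-reckoning step: integrate acceleration into velocity, and
 * velocity into position, over one sample interval dt (seconds). */
static void dr_step(vec2 *pos, vec2 *vel, vec2 a_world, double dt)
{
    vel->x += a_world.x * dt;  vel->y += a_world.y * dt;
    pos->x += vel->x * dt;     pos->y += vel->y * dt;
}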

Although this is not germane to this discussion, notice that if we assume that a subject is always upright when walking, and always walks forwards, we can figure out the relative orientation of the TS05 on their body – the so-called “body-mount transformation”.

So back to the example.  The subject leaves ‘A’, somewhere in the vicinity of anchor #1 and arrives at ‘B’ somewhere in the vicinity of anchor #2.  TS05 calculates the ranges to each of the anchors, and uses trilateration to estimate A and B.

Position estimates using ranging and trilateration

For simplicity I have just shown the start and end points of the subject’s trajectory.
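
For the curious, trilateration in 2-D is only a few lines. A minimal sketch (again my own illustration, not the TS code): subtracting the first range equation from the other two leaves a linear 2×2 system:

typedef struct { double x, y; } vec2;

/* Estimate a 2-D position from ranges r[i] to three anchors p[i].
 * Returns -1 if the anchors are collinear (no unique fix). */
static int trilaterate(const vec2 p[3], const double r[3], vec2 *out)
{
    double ax = 2.0 * (p[1].x - p[0].x), ay = 2.0 * (p[1].y - p[0].y);
    double bx = 2.0 * (p[2].x - p[0].x), by = 2.0 * (p[2].y - p[0].y);
    double c1 = r[0]*r[0] - r[1]*r[1] + p[1].x*p[1].x - p[0].x*p[0].x
                                      + p[1].y*p[1].y - p[0].y*p[0].y;
    double c2 = r[0]*r[0] - r[2]*r[2] + p[2].x*p[2].x - p[0].x*p[0].x
                                      + p[2].y*p[2].y - p[0].y*p[0].y;
    double det = ax * by - ay * bx;

    if (det == 0.0) return -1;
    out->x = (c1 * by - c2 * ay) / det;   /* Cramer's rule */
    out->y = (ax * c2 - bx * c1) / det;
    return 0;
}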

The start and end points are now known in both the ranging model and the DR model.  The ranger has no orientation estimate, just an un-oriented but accurate mesh.  The DR model has an estimate of the overall heading, path length, and probably the number of paces.  By applying a scale and rotation to the mesh, the points A and B can be superimposed.

Transforming the mesh

So now we have a good mesh orientation, and a good path scale value.
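
The single-trajectory version of that fit is tiny. A sketch in my notation (a real implementation would feed many trajectories into a least-squares fit, as noted below):

#include <math.h>

typedef struct { double x, y; } vec2;

/* Given points A and B known in both the (un-oriented) mesh frame and
 * the DR/world frame, return the rotation (radians, CCW) and scale
 * that map mesh coordinates onto world coordinates. */
static void align_frames(vec2 a_mesh, vec2 b_mesh, vec2 a_dr, vec2 b_dr,
                         double *rotation, double *scale)
{
    double mx = b_mesh.x - a_mesh.x, my = b_mesh.y - a_mesh.y;
    double dx = b_dr.x - a_dr.x,     dy = b_dr.y - a_dr.y;

    /* Angle that turns the mesh segment A->B onto the DR segment. */
    *rotation = atan2(dy, dx) - atan2(my, mx);

    /* Ratio of segment lengths: the path scale (useful for pace length). */
    *scale = hypot(dx, dy) / hypot(mx, my);
}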

In this example we only used the start and end points of a single trajectory.  In reality, the more trajectories used to estimate the mesh orientation and the trajectory scale, the better.  A minimization algorithm would be used to find the best-fit estimates for both.

Armed with these estimates we can always figure out the subject’s heading in the context of the mesh.  We can also use the scale factor to estimate pace length.  Or with a good estimate of pace length we can spot ranging inaccuracies.

The World Position Problem

One bit of information that is still missing from our model is the position of the mesh in the world coordinate system.  Is it in Green Street, San Francisco, or Trafalgar Road, Cambridge, UK?  For most of the applications we have considered so far this is not an issue, but if these systems are spread over a campus, where each mesh is out of range of every other mesh, then it will be helpful if they automatically identify their location.
I have two solutions to this problem.  
Solution 1 is to ensure that at least one node contains a GPS, and has LOS to the satellites.  Perhaps a solution is to build a TS06 version that can stick to a window.  It could perhaps be powered by sunlight?
Solution 2 is to use the subject’s phone and some DR software to build a trajectory between the last good GPS coordinate and the first mesh node.  This seems cheaper, but technically harder and less reliable than putting a GPS in each anchor.  Some anchors won’t get any signal, but maybe one will and that might be enough if fixes can be averaged before the battery runs down.
————–
DC:
That was a lot of work, and good thinking.

I have a basic understanding. But I wouldn’t be able to explain it back to you.
GPS chips seem to be cheap and ubiquitous. However, I remember the experience we had on Sylvian Way with the GPS board you got from Sparkfun. It seems to me it never did lock on to enough satellites to get a fix. Nevertheless, GPS seems to be the way to go for something like you described. Using a phone would be a nightmare to get consistent results. A window-mounted, solar powered anchor would be perfect. The GPS radio would power up only when there is bright light.
I have a big pile of wires on my workbench cut to length and stripped to start soldering onto modules. I need to order my Segger.
——————-
MGL:
Good progress. How tedious to solder all those wires.  I’m hoping that the Segger arrives shortly after you are done, and that the flashing instructions I sent you are correct.
Interesting strap.  And the secret sauce: "powerful algorithms".  Wish I had some of them.

https://www.indiegogo.com/projects/olive-conquer-stress-be-stellar

That's why Olive is putting science and technology to work for you. It analyzes your patterns and biological indicators, helps you be aware of when your body experiences a stress response, and empowers you with a range of exercises that can be used to bring your body back into balance. Above all, Olive is designed to handle the complexity of your stress behind the scenes so it is super simple to use.

Olive sits comfortably and discreetly on your wrist and monitors stress-related data in the background. Olive:

  • Tracks physical indicators of stress based on changes in heart rate, reactions in your skin, and trends in skin temperature
  • Analyzes habits that contribute to stress like your physical activity, sleep, and exposure to light
  • Talks with your smartphone to understand your lifestyle through your calendar, your location, and other available data

All of this information is meticulously stitched together with powerful algorithms to paint a more complete picture of your stress than has been possible, until now.

I have started to make a custom hardware profile for the TS06_1 hardware.  I have recompiled the programs to use this profile so that the right LED, UART, and flow-control definitions are used for our hardware.

Test programs can be found at: ts06_1 on our shared Google drive.

Tools for flashing a TS06

To download the test program hex to the TS06, you first have to install some stuff (sigh).

To do that you have to install three things from copies in Nordic Files on our shared Google drive:

nRFgo Studio           nrfgostudio_win-64_1.17.0_installer
Segger’s JLink drivers Setup_JLink_V492
The JLink patch        SEGGER_PATCH_MDK_JL2CM3_DLL_2_71

To download anything I might have forgotten from Nordic you need a product key.  Here is the only one I know:

B381591KQOL  That’s an ‘O’ for Oscar, not a zero

Loading a hex

  • Plug in the Segger JLink EDU.  Connect it to the TS06. Power up the TS06.
  • Start nRFgo Studio

  • Select nRF51 Programming
  • Make sure that the serial number (written on the back of the Segger) shows up in the “SEGGER to use” menu at the top, and select it.
  • Click Erase all
  • Select the “Program Application” tab on the right
  • Navigate to the pre-compiled hex of the test program which will be in the test program’s arm/_build directory.  Example:  the hex for the blinky program is here …/ts06_1/blinky/_build
  • Click on the “Program” button.

If all goes well then the program will be downloaded, flashed and will start running.

I’m working on getting serial IO working on our TS06.1.

I have read up about Serial Wire Debug.  I have set up the interface so it ought to work, and managed to implement code to drive that interface, but it just doesn’t work.

I think it has to be something to do with the missing SWO pin.

This blog post seems to indicate it isn’t possible.   However it does hint at an alternative method.

Does anyone know how to get debug messages out to host console via SWD? I know SWD supports Serial Wire Viewer, but not sure how to get it working over GDB with JLink.
I am using: nRF51822 ARM GCC toolchain OSX
Thanks,
Chris

Doesn’t that require SWO to be implemented? (Which is not on the Nordic part). I would tend to think that Nordic wouldn’t have piped the serial port through the Segger on the eval kit if there was a proper SWV/SWO implementation :) -m
Marc Nicholas (Sep 26 ’13)

Oh jeez, I can’t believe this.   Communication with the UART has been there all along.  I don’t know where it is written down – in one of those myriad “beware of the panther” documents, I guess.  On the devkit board you simply connect to the same COM port as the SEGGER uses itself.  The clue is that when you plug in the DevKit, two devices appear in the Device Manager: the JLink USB driver of course, but also a special CDC(?) UART driver on a COM port.

So for the devkit it’s this easy: you simply connect to that COM port, get the baud rate right, and everything works just fine.

So that’s how the devkit does it.  Now, how do I do it via the SEGGER?

When I plug in the SEGGER, I notice that I get a JLink driver and a new COM8 port, but of course the COM port doesn’t work.  I guess that’s because our hardware doesn’t route that pin through the SEGGER.  So how should it be routed?

The schematic for the devkit board PCA10001 shows some connections going from the nRF51822 P0.08–P0.11 (UART: RTS, TXD, CTS, RXD) to the SEGGER pins 38–41 (CTS/RxD/RTS/TxD).  Now the question is: where do these present themselves on the JLink-EDU?

Is there a schematic for the JLink EDU?
P279 of the J-Link / J-Trace User Guide (Software Version V4.86, Manual Rev. 2, Document UM08001, June 6, 2014) shows this diagram.

I’m not sure this is the right document but it seems pretty relevant.  It seems to suggest that pins 17 and 5 should be connected as follows:
JLINK TX  5 -> P0.11 RxD
JLINK RX 17 <- P0.09 TxD

Nada…   Ah, it has hardware flow-control turned on.  I’ll turn it off…

Yay.
It works.  There are some screwy characters at the start that I don’t understand – I expect that’s some configuration issue – but it does echo characters.
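
For reference, the working setup boils down to one call. This sketch assumes the nRF51 SDK’s simple_uart helper and the PCA10001-style pin assignment discussed above – check the pin numbers against your own schematic:

#include <stdbool.h>
#include <stdint.h>
#include "simple_uart.h"   /* nRF51 SDK helper */

static void uart_init_no_hwfc(void)
{
    /* P0.08 RTS, P0.09 TXD, P0.10 CTS, P0.11 RXD, per the PCA10001
     * schematic.  The final 'false' disables hardware flow control,
     * which is what finally made the JLink-wired UART respond. */
    simple_uart_config(8, 9, 10, 11, false);
    simple_uart_putstring((const uint8_t *)"UART up\r\n");
}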