I'm working on some sensor drivers for Mynewt 1.4, and have run into an
issue that I'm not sure has a single perfect answer, but should perhaps
be addressed or further discussed.
Most sensors have specific timing constraints, and in certain instances
these can change dynamically. The TSL2561 light sensor, for example, can
have its integration time set from 13ms to 402ms, depending on the
level of light sensitivity required. The TSL2591 driver I'm writing
(since the TSL2561 is EOL) has a variable integration time from 100..600ms.
The 'problem' is that there is no concept of minimum time between sample
reads in the sensor API at present (to my knowledge, feel free to
correct me!), and I'm not sure of the best way to insert this delay between
valid reads so that the data we get back can be considered reliable or
fresh.
If a sensor has a 300ms delay between valid samples, for example, we can
still request data every 10ms but the response is undetermined in the
sense that each sensor will handle this differently. In the case of the
TSL2561 and TSL2591 the first sample requested will likely be invalid
since a single valid integration cycle hasn't finished, and then it will
buffer and continue to return values until the NEXT valid sample is
available. This is visible in the following sequence where the first IR
reading is completely out of range, and some subsequent values are
actually cached entries that might not reflect current light levels
since they happen before the next integration time elapses:
011023 compat> tsl2591 r 10
011799 Full: 30
011799 IR: 61309
011799 Full: 30
011800 IR: 13
011801 Full: 30
011801 IR: 13
011801 Full: 30
011801 IR: 13
011802 Full: 30
011802 IR: 13
011802 Full: 30
011803 IR: 13
011803 Full: 30
011803 IR: 13
011804 Full: 30
011804 IR: 13
011804 Full: 30
011804 IR: 13
011805 Full: 30
011805 IR: 13
I'm not sure what the best way to handle this is, though.
Some options are:
* Add a blocking delay in the read task to take into account the
current minimum delay between valid samples (at the risk of causing
problems on the I2C bus if other devices perform transactions in
between)
* Add a concept of 'minimum time' between sample reads at the sensor
API level and enforce this at a higher level, with one of the
following consequences for read requests that occur before this
delay: (*Keep in mind that this min value can change dynamically
based on sensor config or auto-ranging!)
o Return an appropriate error value
o Return the previous cached value with the sample still marked as
valid
o Return the previous cached value with the sample marked as invalid
o Other?
There are other solutions, but I was hoping to get some feedback on the
current disparity between the sensor API and the real-world timing
constraints of the sensors themselves.
An argument could be made that the end user should know and
work with the constraints of their HW, but it seems like this could also
be handled with some small API additions?