Replies: 7 comments 46 replies
-
One additional thing: in the MovingRegressor you are checking `while (i < aMatrix.length && quadraticTheilSenRegressor.reliable()) {`, where `reliable` only checks the length of the array. But this also means that when the global GOF is zero but the array is large enough, you will still calculate zero weights and include those in the weighted averages. I don't think this makes sense, although I acknowledge that these cases are limited :) Anyway, I would check for global GOF > 0 instead, but I am not sure whether the algorithm deliberately needs the current behaviour. Furthermore, when the local GOF is zero, the weight is again zero, so I don't see much point in pushing these zero-weighted points into the weighted-average series, nor in calculating their Gaussian weight (saving some execution time in some cases).
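The two guards suggested above could look roughly like this. This is a hypothetical sketch, not the actual OpenRowingMonitor code: the function and property names (`weightedAverage`, `localGOF`, `position`) are illustrative, and `gaussianWeight` is a stand-in for the real (more expensive) weight calculation.

```javascript
// Stand-in for the (more expensive) Gaussian weight calculation
function gaussianWeight (position) {
  return Math.exp(-(position * position) / 2)
}

// Sketch of the suggested guards: bail out entirely when the global
// goodness-of-fit is zero, and skip zero-weighted points so their
// Gaussian weight is never calculated.
function weightedAverage (points, globalGOF) {
  if (globalGOF <= 0) { return 0 } // every weight would be zero anyway
  let weightedSum = 0
  let weightSum = 0
  for (const point of points) {
    if (point.localGOF <= 0) { continue } // zero weight: skip the Gaussian calculation too
    const weight = point.localGOF * gaussianWeight(point.position)
    weightedSum += weight * point.value
    weightSum += weight
  }
  return weightSum > 0 ? weightedSum / weightSum : 0
}
```

Whether the `globalGOF` early-out is safe depends on whether the algorithm deliberately relies on pushing those zero-weight points, which is exactly the open question here.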
-
In my implementation, yes, as the X-coordinates of the boundaries change for each regression run. In essence, the Gaussian filter rewards a point for being close to the center of the regression analysis, and punishes it when it is near the edges (thus putting the focus on the more reliable data for that datapoint). This absolute approach is theoretically more sound, but I'm far from 100% certain that holds in practice here as well. There is a static variant I've played with that can be cached: don't use the absolute X-values, but the position (index) in the X-array. As the array has a constant flankLength, the weights won't change from run to run. Theoretically it loses some precision, but it works. That would allow the creation of a lookup table: your constructor could take the window size and build the table for later retrieval. In practice it gives quite similar results, but I must do long-term tests to actually see the difference between the two approaches. Especially as I'm tweaking the entire filter chain, results can be extremely difficult to predict. Given that you are much more CPU constrained, I'd go for the static version.
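The static, index-based variant described above could be sketched as follows. Function and parameter names are illustrative (not the actual OpenRowingMonitor API): since the window length is a constant `2 * flankLength + 1`, the weights depend only on the position in the array and can be computed once at construction.

```javascript
// Precompute index-based Gaussian weights for a fixed-size window.
// Because the window length (2 * flankLength + 1) never changes, these
// weights can be built once and reused for every regression run.
function createIndexWeights (flankLength, sigma) {
  const length = 2 * flankLength + 1
  const center = flankLength
  const weights = new Array(length)
  for (let i = 0; i < length; i++) {
    const distance = i - center
    weights[i] = Math.exp(-(distance * distance) / (2 * sigma * sigma))
  }
  return weights
}
```

The resulting table is symmetric around the center, which carries weight 1, so a lookup replaces every per-push `Math.exp` call.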
Thanks for observing this. My average function leans on two stored variables (a sum and a count). Admittedly, they are stored in another object and a division has to be made, but there isn't much CPU power to gain here. I'm experimenting with an additional filter which aims to reduce systematic noise. Technically it is relatively simple:
My initial tests, where I only measure, show that I always converge to a stable filter. I also find the same pattern (correction values) across multiple sessions (albeit shifted, as the first magnet is not always the same for each session), suggesting that it is machine dependent (as one would expect) and not session dependent. Even with raw recordings years apart, I find the same sinusoid-like pattern, similar to our manual visualisations of the Concept2 systematic error. This suggests we could remove this type of error, potentially even allowing us to reduce the flankLength, which would reduce CPU load and improve the responsiveness of the results to true changes. The current key challenge is that when I actually apply the filter (while simultaneously training it), it seems to run away after 7K, ending in total chaos. As a person who typically rows 10K's, that doesn't really achieve my design goal :). It might be caused by stupid bugs, or by complex feedback mechanisms in the overall algorithm. I'll drop the code as soon as I find a solution to this issue.
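One way such a per-magnet systematic-noise filter could be structured is sketched below. This is a hedged guess at the approach, not the pending code: the class and method names are invented, and the learning rule (an exponential update of one correction slot per magnet, indexed by impulse number modulo the magnet count) is an assumption.

```javascript
// Hypothetical cyclic error filter: one correction slot per magnet.
// Each slot learns the average deviation of its impulses from the local
// mean; applying the filter subtracts that learned correction again.
class CyclicErrorFilter {
  constructor (numberOfMagnets, learningRate = 0.01) {
    this.numberOfMagnets = numberOfMagnets
    this.learningRate = learningRate
    this.corrections = new Array(numberOfMagnets).fill(0)
  }

  // Train with the deviation of this impulse's dt from the local average dt
  learn (impulseNumber, deviation) {
    const slot = impulseNumber % this.numberOfMagnets
    this.corrections[slot] += this.learningRate * (deviation - this.corrections[slot])
  }

  // Remove the learned systematic error from a measured dt
  apply (impulseNumber, currentDt) {
    return currentDt - this.corrections[impulseNumber % this.numberOfMagnets]
  }
}
```

Training and applying simultaneously creates exactly the feedback loop mentioned above: the corrected dt's feed back into the local average the deviations are measured against, which may explain the runaway behaviour.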
-
Thanks for the above. I agree that micro-optimisation is not really a thing for JS. For me, even a division is expensive: the MCU's FPU is single precision, so double precision has to be emulated, and even a stupidly simple division adds time. Anyway, I will do the benchmark this weekend and see what the costs are. I have recently modernised the code base for C++26 (as ESP-IDF now has support for this) and it now uses GCC 14.2, so the compiler got rather smart compared to the 8.x I used previously. This is very useful. Once I see the cost I will try to look at some force curves. I agree that if this can shave off one flankLength, the benefit is bigger than the cost, as that is a very big saving.
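On the cost of divisions: one classic trick in this spirit (a generic illustration, not something taken from either code base) is to replace many divisions by the same constant with one division and many multiplications.

```javascript
// Divide every sample by the same constant using a single division:
// precompute the reciprocal once, then multiply, which is cheaper than
// repeated division on FPUs where division is slow or emulated.
function scaleAll (samples, divisor) {
  const reciprocal = 1 / divisor // one division instead of samples.length divisions
  return samples.map((sample) => sample * reciprocal)
}
```

Note that this can lose a last-bit of precision for divisors that are not powers of two, so it is only worth it where the division really dominates.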
-
I have another question regarding how the totalSpinningTime is used: openrowingmonitor/app/engine/Flywheel.js, line 102 in 2b13b59. Why are you feeding the error filter with the series' begin delta time? What I am struggling with is the fact that when we record the raw data we use the totalNumberOfImpulses, but that does not get incremented while we are in the stop state, even though the flywheel could still be rotating. So after a restart I think the slots get misplaced. Shouldn't we actually use the number of impulses that have physically been made? EDIT: But I suppose we could use absolute raw values, which could be maintained across stops, hence no need for a restart (I am considering the option of stopping the data generation for the filter after, say, 10000 points for the entire session, depending on the calculation cost, which I also expect to be minimal).
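The alternative proposed in the question could be sketched like this. The names are illustrative and not the actual Flywheel.js API: the point is only that counting every physically measured impulse, regardless of rowing state, keeps the slot index aligned across pauses.

```javascript
// Hypothetical counter that increments on every measured impulse, also
// while in the stopped state, so slot = count % numberOfMagnets stays
// aligned with the physical magnet positions across pauses and restarts.
class RawImpulseCounter {
  constructor (numberOfMagnets) {
    this.numberOfMagnets = numberOfMagnets
    this.rawImpulseCount = 0
  }

  // Called for every measured impulse, independent of the rowing state
  onImpulse () {
    this.rawImpulseCount += 1
    return this.rawImpulseCount % this.numberOfMagnets // slot for the error filter
  }
}
```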
-
I wonder whether you have considered using an exponential moving average for the cyclic filter instead of the ring-buffer type averager? The reason for asking is that I did some tests and it handled the errors in my data slightly better or equivalently (though in the ESP port), while it is extremely lightweight. It had a smoother and faster response to these errors. However, theoretically, erroneous values never completely "disappear": their influence only converges to zero. Also, since newer values weigh more heavily, the initial effect of an incoming error would potentially be larger, I suppose.
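For reference, a minimal exponential moving average looks like this (a generic sketch, not the ESP port's code): O(1) memory and one multiply-add per update, versus a ring buffer's per-window storage, but with exactly the trade-off described above, since old samples decay geometrically instead of dropping out of the window entirely.

```javascript
// Minimal exponential moving average: value = alpha * sample + (1 - alpha) * value.
// Higher alpha reacts faster to changes but lets a single erroneous
// sample move the average further.
class ExponentialMovingAverage {
  constructor (alpha) {
    this.alpha = alpha // smoothing factor, 0 < alpha <= 1
    this.value = undefined
  }

  push (sample) {
    this.value = this.value === undefined
      ? sample // seed with the first sample instead of decaying from zero
      : this.alpha * sample + (1 - this.alpha) * this.value
    return this.value
  }
}
```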
-
I have been reviewing the new moving regressor algorithm and created the initial port. I will need to measure the performance, but before I do I have some questions:
This is my most important question to you, because it requires an understanding of the Gaussian filter which I currently lack :)
Currently, you recalculate the Gaussian weight for each position on every push, for every point, and that is a lot of calculation. But once the flankLength is full, this becomes static, no? The relative positions don't change once the window is full. At least that is what I deciphered from the filter descriptions on the internet (but admittedly I do not fully understand how this works in the ORM algorithm exactly, apart from the fact that right now there is a lot of calculation).
What I am looking for is whether I could calculate a lookup table at class construction or even at compile time. While I have yet to run benchmarks, I unfortunately fear that the exponent calculation with a double will be expensive on the microcontroller, because it only has a single precision FPU...
Potential micro optimisations:
These micro-optimisations exist all around your code, but as you said the gain is small, so it might not be worth the hassle.