I think I've found & fixed this, maybe also the events starvation issue. See
https://github.com/openvehicles/Open-Vehicle-Monitoring-System-3/issues/418#issuecomment-683286134
and
https://github.com/openvehicles/Open-Vehicle-Monitoring-System-3/commit/6b70...

Please test.

Regards,
Michael

On 29.08.20 at 10:34, Michael Balzer wrote:
I've just found another clue to the events task starvation:
https://github.com/openvehicles/Open-Vehicle-Monitoring-System-3/issues/418
TL;DR: the events task can be blocked by lower-priority tasks as well, not just by higher-priority ones.
So raising the events task priority cannot be the solution.
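One classic way a lower-priority task can block a higher-priority one is priority inversion. Here's a minimal FreeRTOS-style sketch of that mechanism (hypothetical code, not the actual OVMS implementation; single core assumed): a lock without priority inheritance lets a medium-priority task starve the high-priority waiter through a preempted low-priority lock holder, so raising the waiter's priority changes nothing.

// Hypothetical demo of priority inversion on FreeRTOS.
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/semphr.h"

static SemaphoreHandle_t s_lock;

static void LowPrioHolder(void*) {
  for (;;) {
    xSemaphoreTake(s_lock, portMAX_DELAY);
    vTaskDelay(pdMS_TO_TICKS(10));          // long critical section
    xSemaphoreGive(s_lock);
  }
}

static void MediumPrioBusy(void*) {
  for (;;) { }  // preempts LowPrioHolder, so the lock is never released
}

static void HighPrioEvents(void*) {
  for (;;) {
    xSemaphoreTake(s_lock, portMAX_DELAY);  // blocked by the low prio holder
    // ... deliver events ...
    xSemaphoreGive(s_lock);
  }
}

void StartDemo() {
  s_lock = xSemaphoreCreateBinary();
  xSemaphoreGive(s_lock);  // binary semaphores are created empty
  xTaskCreate(LowPrioHolder,  "low",    4096, nullptr,  5, nullptr);
  xTaskCreate(MediumPrioBusy, "medium", 4096, nullptr,  7, nullptr);
  xTaskCreate(HighPrioEvents, "events", 4096, nullptr, 10, nullptr);
  // A mutex (xSemaphoreCreateMutex) would at least give the holder
  // priority inheritance while the events task waits.
}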
Regards,
Michael
On 26.08.20 at 20:28, Michael Balzer wrote:
On 26.08.20 at 06:17, Mark Webb-Johnson wrote:
It seems a web socket (web UI?) connection went away, a logging message was produced, and that overflowed the queue. Perhaps the user is just logging too much? Can you log at debug level to the web UI (or does that cause the problem I mentioned earlier about debug logging the debug logs)? Any ideas?

I haven't been able to find a correlation to any specific configuration or system situation yet on these event queue overflows.
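As a purely hypothetical illustration of that feedback loop (invented names, not the actual OVMS logging code): a log sink that forwards messages to the web UI may itself emit debug logs, e.g. when the socket has gone away, so each forwarded message can generate another one. A per-task re-entrancy guard is one way to break the loop.

// Hypothetical sketch, not OVMS code.
#include <cstdio>

static thread_local bool s_in_log_sink = false;

void WebSocketLogSink(const char* msg) {
  if (s_in_log_sink)
    return;              // drop logs generated while forwarding a log
  s_in_log_sink = true;
  // ... send msg over the web socket; a failing send may itself log a
  //     debug message, which is now dropped instead of recursing ...
  std::printf("ws> %s\n", msg);
  s_in_log_sink = false;
}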
The event queue has room for 40 entries by default, so by the time the abort is triggered, the system has normally been in an overload situation (starved events task) for at least 30-40 seconds.
Event: ticker.1@EventScript 0 secs

…tells us the events task wasn't completely stuck before the abort, it just didn't get much CPU share. That's why I thought raising the task priority could have an effect.
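To make that arithmetic concrete, here's a sketch (invented names, not the actual implementation) of a 40-entry event queue with a non-blocking producer. Assuming events arrive at roughly one per second (the ticker.1 events), a full queue means the consumer has been starved for on the order of 40 seconds.

// Sketch only; queue depth 40 matches the default mentioned above.
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

struct Event { const char* name; };

static QueueHandle_t s_event_queue;

void InitEventQueue() {
  s_event_queue = xQueueCreate(40, sizeof(Event));
}

bool SignalEvent(const Event& ev) {
  // Non-blocking send: if the events task hasn't drained the queue,
  // this fails -- in the real system that's the point where the
  // overflow abort is triggered.
  return xQueueSend(s_event_queue, &ev, 0) == pdTRUE;
}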
I really need user data on this, as the crash doesn't happen at all on my module. So it's also possible the issue is caused by a vehicle module bug, or by the poller receive bug we discovered a few days ago:
https://github.com/openvehicles/Open-Vehicle-Monitoring-System-3/pull/405#is...
This is now solved by the merge of the fast polling branch, which includes the fixes for this. So we may already see an improvement with the next build.
But there are at least three more bugs/shortcomings in the current poller which can potentially also cause this issue (and others) through high vehicle task load, wrong memory allocations, or out-of-bounds accesses. That's why I'm currently working on a rewrite of the receiver.
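To illustrate the class of problem (purely hypothetical code, not the current poller): the receive path has to clamp every copy to the remaining capacity of the reassembly buffer, otherwise a malformed or unexpected frame corrupts memory.

// Hypothetical sketch of a bounds-checked poller receive buffer.
#include <cstring>
#include <cstdint>
#include <cstddef>

struct PollBuffer {
  uint8_t data[4096];
  size_t  fill = 0;
};

// Append a received chunk, rejecting it instead of writing out of
// bounds when the announced length exceeds the remaining capacity.
bool AppendResponse(PollBuffer& buf, const uint8_t* chunk, size_t len) {
  if (len > sizeof(buf.data) - buf.fill)
    return false;  // protocol error: drop this poll response
  std::memcpy(buf.data + buf.fill, chunk, len);
  buf.fill += len;
  return true;
}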
Regards,
Michael
--
Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal
Phone 02333 / 833 5735 * Mobile 0176 / 206 989 26