[Ovmsdev] lost important event => aborting

Mark Webb-Johnson mark at webb-johnson.net
Mon Aug 31 16:52:19 HKT 2020


My work is in the public for-v3.3 branch.

> On 31 Aug 2020, at 4:24 PM, Michael Balzer <dexter at expeedo.de> wrote:
> 
> I've made a stupid mistake that, at the least, stopped events from
> being forwarded correctly to Duktape; the fix is pushed.
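>
> For context, the forwarding itself is conceptually just looking up the
> registered script callback in the Duktape context, pushing the event
> name, and making a protected call. A minimal sketch of that pattern
> (not the actual OVMS code; "onEvent" and the helper name are purely
> illustrative):
>
>   #include <cstdio>
>   #include "duktape.h"
>
>   // Deliver an event name plus a payload string to a global script
>   // function "onEvent", if the loaded script has defined one.
>   static void DeliverEventToDuktape(duk_context* ctx,
>                                     const char* event, const char* payload)
>     {
>     if (!duk_get_global_string(ctx, "onEvent"))
>       {
>       duk_pop(ctx);                              // no handler registered
>       return;
>       }
>     duk_push_string(ctx, event);                 // argument 1: event name
>     duk_push_string(ctx, payload);               // argument 2: payload
>     if (duk_pcall(ctx, 2) != DUK_EXEC_SUCCESS)   // protected call: a JS error must not abort us
>       printf("script error in onEvent: %s\n", duk_safe_to_string(ctx, -1));
>     duk_pop(ctx);                                // pop result or error
>     }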
> 
> Regards,
> Michael
> 
> PS: Mark, your split of the scripting module may not allow easy merging of my two commits. They are simple, but I can do the merge if you push them into a branch.
> 
> 
>> On 29.08.20 at 15:54, Michael Balzer wrote:
>> I think I've found & fixed this, maybe also the events starvation issue.
>> 
>> See https://github.com/openvehicles/Open-Vehicle-Monitoring-System-3/issues/418#issuecomment-683286134
>> 
>> and https://github.com/openvehicles/Open-Vehicle-Monitoring-System-3/commit/6b709a0cb399d1a7c3866f68d7f9dbf92f9246c8
>> 
>> Please test.
>> 
>> Regards,
>> Michael
>> 
>> 
>>> On 29.08.20 at 10:34, Michael Balzer wrote:
>>> I've just found another clue to the events task starvation:
>>> 
>>> https://github.com/openvehicles/Open-Vehicle-Monitoring-System-3/issues/418
>>> 
>>> TL;DR: the events task can also be blocked by lower priority tasks
>>> that are running, not just by higher priority tasks.
>>> 
>>> So raising the events task priority cannot be the solution.
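>>>
>>> To illustrate how that can happen in general (a generic FreeRTOS
>>> priority inversion sketch, single-core view, not necessarily the
>>> exact mechanism behind #418): if the events task waits on a plain
>>> binary semaphore held by a low priority task, there is no priority
>>> inheritance, so any always-ready medium priority task starves the
>>> holder - and with it the events task, no matter how high we raise
>>> its priority.
>>>
>>>   #include <cstdint>
>>>   #include "freertos/FreeRTOS.h"
>>>   #include "freertos/task.h"
>>>   #include "freertos/semphr.h"
>>>
>>>   static SemaphoreHandle_t s_lock;      // binary semaphore: no priority inheritance
>>>
>>>   static void LowPrioTask(void* arg)    // priority 1: holds the lock while working
>>>     {
>>>     xSemaphoreTake(s_lock, portMAX_DELAY);
>>>     for (volatile uint32_t i = 0; i < 100000000UL; i++) {}  // simulate long work
>>>     xSemaphoreGive(s_lock);
>>>     vTaskDelete(NULL);
>>>     }
>>>
>>>   static void MediumSpinTask(void* arg) // priority 5: always ready (e.g. heavy logging)
>>>     {
>>>     for (;;) {}
>>>     }
>>>
>>>   static void EventsTask(void* arg)     // priority 10: blocks until LowPrioTask
>>>     {                                   // releases the lock - which it cannot do
>>>     xSemaphoreTake(s_lock, portMAX_DELAY);  // while MediumSpinTask hogs the CPU
>>>     // ... deliver queued events ...
>>>     xSemaphoreGive(s_lock);
>>>     vTaskDelete(NULL);
>>>     }
>>>
>>>   void Demo(void)                       // call from a task context
>>>     {
>>>     s_lock = xSemaphoreCreateBinary();
>>>     xSemaphoreGive(s_lock);             // binary semaphores start empty
>>>     xTaskCreate(LowPrioTask, "low", 4096, NULL, 1, NULL);
>>>     vTaskDelay(pdMS_TO_TICKS(10));      // let LowPrioTask grab the lock first
>>>     xTaskCreate(MediumSpinTask, "mid", 4096, NULL, 5, NULL);
>>>     xTaskCreate(EventsTask, "events", 4096, NULL, 10, NULL);
>>>     }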
>>> 
>>> Regards,
>>> Michael
>>> 
>>> 
>>>> On 26.08.20 at 20:28, Michael Balzer wrote:
>>>>> On 26.08.20 at 06:17, Mark Webb-Johnson wrote:
>>>>> It seems a web socket (web UI?) connection went away, a logging
>>>>> message was produced, and that overflowed the queue. Perhaps the
>>>>> user is just logging too much? Can you log at debug level to the
>>>>> web UI (or does that cause the problem I mentioned earlier about
>>>>> debug-logging the debug logs)? Any ideas?
>>>> I haven't yet been able to correlate these event queue overflows
>>>> with any specific configuration or system situation.
>>>> 
>>>> The event queue has room for 40 entries by default, so by the time
>>>> the abort is triggered, the system has normally been in an overload
>>>> situation (starved events task) for at least 30-40 seconds.
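>>>>
>>>> For reference, the pattern is essentially this (a simplified sketch,
>>>> not the actual ovms_events code; names and the 100 ms timeout are
>>>> illustrative): ticker.1 alone feeds the queue about once per second,
>>>> so a 40-entry queue only overflows after the events task has fallen
>>>> behind for roughly that long, and the abort fires when an event can
>>>> no longer be queued.
>>>>
>>>>   #include <cstdio>
>>>>   #include <cstdlib>
>>>>   #include "freertos/FreeRTOS.h"
>>>>   #include "freertos/queue.h"
>>>>
>>>>   #define EVENT_QUEUE_SIZE 40             // default depth discussed above
>>>>
>>>>   typedef struct { char event[64]; } event_msg_t;
>>>>
>>>>   static QueueHandle_t s_event_queue;
>>>>
>>>>   void EventsInit(void)
>>>>     {
>>>>     s_event_queue = xQueueCreate(EVENT_QUEUE_SIZE, sizeof(event_msg_t));
>>>>     }
>>>>
>>>>   void SignalEvent(const char* event)     // called by ticker, vehicle task, ...
>>>>     {
>>>>     event_msg_t msg;
>>>>     snprintf(msg.event, sizeof(msg.event), "%s", event);
>>>>     // If the events task has been starved long enough for all 40 slots
>>>>     // to fill up, the event cannot be dropped silently, so give up:
>>>>     if (xQueueSend(s_event_queue, &msg, pdMS_TO_TICKS(100)) != pdTRUE)
>>>>       {
>>>>       printf("lost important event => aborting\n");
>>>>       abort();
>>>>       }
>>>>     }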
>>>> 
>>>>> Event: ticker.1 at EventScript 0 secs
>>>> …tells us the events task wasn't completely stuck before the abort,
>>>> it just didn't get much CPU share. That's why I thought raising the
>>>> task priority could have an effect.
>>>> 
>>>> I really need user data on this, as the crash doesn't happen at all
>>>> on my module. So it's also possible the issue is caused by a vehicle
>>>> module bug, or by the poller receive bug we discovered a few days
>>>> ago:
>>>> 
>>>> https://github.com/openvehicles/Open-Vehicle-Monitoring-System-3/pull/405#issuecomment-674499551
>>>> 
>>>> This is now solved by the merge of the fast polling branch, which
>>>> includes the fixes for it, so we may already see an improvement with
>>>> the next build.
>>>> 
>>>> But there are at least three more bugs/shortcomings in the current
>>>> poller which can potentially also cause this issue (and others)
>>>> through high vehicle task load, wrong memory allocations, or
>>>> out-of-bounds accesses. That's why I'm currently working on a
>>>> rewrite of the receiver.
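>>>>
>>>> To make the "out of bounds" point concrete (a generic sketch only,
>>>> not the current or the rewritten poller code): any multi-frame
>>>> receive buffer has to clamp every copy against both the announced
>>>> response length and the physical buffer size, otherwise a malformed
>>>> or unexpected frame silently corrupts memory.
>>>>
>>>>   #include <cstddef>
>>>>   #include <cstdint>
>>>>   #include <cstring>
>>>>
>>>>   #define RX_BUF_SIZE 256                  // illustrative capacity
>>>>
>>>>   struct RxAssembly
>>>>     {
>>>>     uint8_t buf[RX_BUF_SIZE];
>>>>     size_t  expected;                      // length announced in the first frame
>>>>     size_t  filled;                        // bytes collected so far
>>>>     };
>>>>
>>>>   // Append one frame payload; clamps instead of writing past either
>>>>   // the announced length or the buffer, and reports a dropped frame.
>>>>   bool AppendFrame(RxAssembly* rx, const uint8_t* data, size_t len)
>>>>     {
>>>>     size_t limit = (rx->expected < RX_BUF_SIZE) ? rx->expected : RX_BUF_SIZE;
>>>>     if (rx->filled >= limit)
>>>>       return false;                        // nothing more expected: drop frame
>>>>     if (len > limit - rx->filled)
>>>>       len = limit - rx->filled;            // clamp to the remaining space
>>>>     memcpy(rx->buf + rx->filled, data, len);
>>>>     rx->filled += len;
>>>>     return true;
>>>>     }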
>>>> 
>>>> Regards,
>>>> Michael
>>>> 
>> 
> 
> -- 
> Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal
> Fon 02333 / 833 5735 * Handy 0176 / 206 989 26
> _______________________________________________
> OvmsDev mailing list
> OvmsDev at lists.openvehicles.com
> http://lists.openvehicles.com/mailman/listinfo/ovmsdev

