<div dir="ltr"><div>I have a question for Michael B (or whoever) - I have a commit lined up that would add a bit of a time check to the poller loop. What do we expect the maximum time to execute a poller loop command should be? </div><div>This is a rough idea (in ms) I have.. based on nothing much really, so any ideas would be appreciated:<br></div><div> int TardyMaxTime_ms(OvmsPoller::OvmsPollEntryType entry_type)<br> {<br> switch (entry_type)<br> {<br> case OvmsPoller::OvmsPollEntryType::Poll: return 80;<br> case OvmsPoller::OvmsPollEntryType::FrameRx: return 30;<br> case OvmsPoller::OvmsPollEntryType::FrameTx: return 20;<br> case OvmsPoller::OvmsPollEntryType::Command: return 10;<br> case OvmsPoller::OvmsPollEntryType::PollState: return 15;<br> default: return 80;<br> }<br> }<br></div><div><br></div><div>//.ichael<br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, 6 May 2024 at 07:45, Michael Geddes <<a href="mailto:frog@bunyip.wheelycreek.net" target="_blank">frog@bunyip.wheelycreek.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="auto">I realise that I was only using the standard cable to test - which probably is not sufficient - I haven't looked closely at how the Leaf OBD to Db9 cable is different from standard.<br></div><div dir="auto"><br></div><div dir="auto">Ah, my bad out the queue length. We are definitely queueing more messages though. From my log of when the overflow happened, the poller was in state 0 which means OFF - ie nothing was being sent!!</div><div dir="auto"><br></div><div dir="auto">I'll look at the TX message thing - opt in sounds good - though it shouldn't be playing that much of a part here as the TXs are infrequent in this case (or zero when the leaf is off or driving) - On the ioniq 5 when I'm using the HUD - I'm polling quite frequently - multiple times per second and that seems to be fine!.</div><div dir="auto"><br></div><div dir="auto">I did find an issue with the throttling .. but it would still mostly apply the throttling where it matters, so again, it shouldn't be the problem (also, we aren't transmitting in the leaf case).</div><div dir="auto"><div dir="auto"><div dir="auto"><br></div><div dir="auto">The change I made to the logging of RX messages showed how many in a row were dropped... and it was mostly 1 only in a run - which means even if it is a short time between - that means that the drops are being interleaved by at least one success!</div><div><br></div><div>Sooo.. I'm still wondering what is going on. Some things I'm going to try:</div><div><br></div><div>* If the number of messages on the Can bus (coming in through RX) means that the queue is slowly getting longer and not quite catching up, then making the queue longer will help it last longer... but only pushes the problem down the road.</div><div> - Add 'current queue length' to the poller status information to see if this is indeed the case? 
</div><div> - Add some kind of alert when the queue reaches a certain percentage full?</div><div>* Once you start overflowing and getting overflow log messages, I wonder whether this then contributes to the problem.</div><div> - Push the overflow logging into the poller task, which can look at how many drops occurred since the last received item.</div><div>* Split up the flags for the poller messages into two:<br></div><div> - Messages that are/could be happening in the TX/RX tasks</div><div> - Other noisy messages that always happen in the poller task.</div><div><br></div><div>Thoughts on what else we might measure to figure out what is going on?<br></div><div><br></div><div>//.ichael</div><div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, 5 May 2024, 19:29 Michael Balzer via OvmsDev, <<a href="mailto:ovmsdev@lists.openvehicles.com" target="_blank">ovmsdev@lists.openvehicles.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><u></u>
<div>
Michael,<br>
<br>
the queue size isn't in bytes, it's in messages:<br>
<br>
<blockquote type="cite"><font face="monospace"> * @param
uxQueueLength The maximum number of items that the queue can
contain.<br>
*<br>
* @param uxItemSize The number of bytes each item in the queue
will require.</font></blockquote>
<br>
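For illustration, a minimal sketch of creating such a queue with FreeRTOS -- the entry type name here is a placeholder, not necessarily the exact OVMS struct:<br>
<br>
<font face="monospace">#include "freertos/FreeRTOS.h"<br>
#include "freertos/queue.h"<br>
<br>
// uxQueueLength counts items; uxItemSize is the size of one item in bytes.<br>
QueueHandle_t queue = xQueueCreate(<br>
&nbsp;&nbsp;CONFIG_OVMS_VEHICLE_CAN_RX_QUEUE_SIZE,&nbsp;&nbsp;// 60 queued messages, not 60 bytes<br>
&nbsp;&nbsp;sizeof(PollerEntry));&nbsp;&nbsp;// bytes per queued message (PollerEntry is a placeholder name)<br>
</font><br>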
Also, from the time stamps in Derek's log excerpt, there were quite a few dropped frames in that time window -- at least 23 frames in 40 ms, which is bad.<br>
<br>
Queue sizes are currently:<br>
<br>
<font face="monospace">CONFIG_OVMS_HW_CAN_RX_QUEUE_SIZE=60<br>
CONFIG_OVMS_VEHICLE_CAN_RX_QUEUE_SIZE=60</font><br>
<br>
The new poller now channels all TX callbacks through the task queue in addition to RX and commands, so setting the queue size to be larger than the CAN RX queue size seems appropriate.<br>
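<br>
As a minimal sketch of pinning that relationship down at build time (the poller queue length macro below is hypothetical, not an existing Kconfig option):<br>
<br>
<font face="monospace">#include "sdkconfig.h"<br>
<br>
// Hypothetical compile-time guard: keep the poller task queue larger than the<br>
// vehicle CAN RX queue feeding it, since TX callbacks and commands now share it.<br>
#define POLLER_TASK_QUEUE_SIZE 80&nbsp;&nbsp;// assumed value, for illustration only<br>
static_assert(POLLER_TASK_QUEUE_SIZE > CONFIG_OVMS_VEHICLE_CAN_RX_QUEUE_SIZE,<br>
&nbsp;&nbsp;"poller task queue should be larger than the vehicle CAN RX queue");<br>
</font>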
<br>
Nevertheless, an overflow with more than 60 waiting messages still indicates that processing in the vehicle task is taking too long.<br>
<br>
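To make "too long" measurable, the per-entry processing time in the poller task could be checked against thresholds like the TardyMaxTime_ms() values proposed at the top of this thread. A rough sketch, assuming hypothetical names for the entry struct, the dispatch call and the log tag:<br>
<br>
<font face="monospace">// Inside the poller task loop, after an entry is received from the queue:<br>
int64_t start_us = esp_timer_get_time();<br>
ProcessPollerEntry(entry);&nbsp;&nbsp;// placeholder for the actual dispatch<br>
int64_t elapsed_ms = (esp_timer_get_time() - start_us) / 1000;<br>
if (elapsed_ms > TardyMaxTime_ms(entry.entry_type))<br>
&nbsp;&nbsp;ESP_LOGW(TAG, "Poller entry type %d took %lld ms", (int)entry.entry_type, elapsed_ms);<br>
</font><br>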
TX callbacks were previously done directly in the CAN context, and no current vehicle overrides the empty default handler, so this imposed almost no additional overhead. By requiring a queue entry for each TX callback, this feature now has a potentially high impact on all vehicles. If passing these to the task is actually necessary, it needs to become an opt-in feature, so that only vehicles subscribing to the callback have to cope with the additional load and potential processing delays involved.<br>
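<br>
One possible shape for that opt-in, purely as a sketch with hypothetical names for the flag and the queueing call (only the FrameTx entry type is taken from the code above):<br>
<br>
<font face="monospace">// Vehicles that want TX confirmations set this flag, e.g. in their constructor;<br>
// everyone else keeps the old zero-overhead behaviour.<br>
bool m_poll_tx_callback_wanted = false;<br>
<br>
// In the CAN TX-done path:<br>
if (m_poll_tx_callback_wanted)<br>
&nbsp;&nbsp;QueuePollerEntry(OvmsPoller::OvmsPollEntryType::FrameTx, frame);&nbsp;&nbsp;// only subscribers pay for the queue entry<br>
</font>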
<br>
Regards,<br>
Michael<br>
</div>
</blockquote></div></div>