<div dir="ltr"><div dir="ltr"><div>Thanks Michael,</div><div dir="ltr"><br></div>My calculations give me ((2^32)-1) / (1000*1000*3600) = only 1.2 hours of processing time in 32bit.  The initial subtraction is 64bit anyway and I can't see a further 64bit addition being a problem. I have the calculations being performed in doubles at print-out where performance is not really an issue anyway. (Though apparently doing 64 bit division is worse than floating point).<div><br><div>In addition</div><div>* I currently have this being able to be turned on and off and reset manually (only do it when required).</div><div><br></div><div>*  For the lower volume commands, the smoothed average is not going to be useful - the count is more interesting for different reasons.</div><div><br></div><div>* The total time is quite useful. Ie a high average time doesn't matter if the count is low.  The things that are affecting performance are stuff with high total time. Stuff which is happening 100 times a second needs to be a much lower average than once a second.</div><div><div><div><br></div><div>* A measure like 'time per minute/second' and possibly count per minute/seconds as a smoothed average would potentially be more useful. (or in addition?)</div><div>I think we could do _that_ in a reasonably efficient manner using a 64 bit 'last measured time', a 32 bit accumulated value and the stored 32 bit rolling average. </div><div>It would boils down to some iterative (integer) sums and multiplications plus a divide by n ^ (time periods passed)  - which is a shift - and which can be optimised to '0' if 'time-periods-passed' is more than 32/(bits-per-n) - effectively limiting the number of iterations.</div><div>The one issue I can see is that we need to calculate 'number of time-periods passed' which is a 64 bit subtraction followed by a 32 bit division (not optimisable to a simple shift).</div><div><br></div></div></div><div>* I'm also happy to keep a rolling (32bit) average time.<br></div><div>   Even if you assume averages in the 100ms, 32bit is going to happily support an N of 64 or even 128.</div><div><div><div>   Am I right in thinking that the choice of N is highly dependent on frequency. For things happening 100 times per second, you might want an N like 128.. where things happening once per</div><div>   second, you might want an N of 4 or 8.  The other things we keep track of in this manner we have a better idea of the frequency of the thing.<br></div></div><div><br></div></div><div>How about we have (per record type):</div><div>  * total count (since last reset?) (32 bit)</div><div>  * smoothed average of time per instance (32 bit)</div><div><br></div><div>  * ?xx? total accumulated time since last reset (64bit) ?? <-- with the below stats this is much less useful</div><div><br></div><div>  * last-measured-time (64 bit) </div><div>  * accumulated count since last time-period (16bit - but maybe 32bit anyway for byte alignment?)</div><div>  * smoothed average of count per time-period (32bit)<br></div><div>  * accumulated time since last time-period (32bit)</div><div>  * smoothed average of time per time-period (32bit)</div><div><br></div><div>It's possible to keep the </div><div><br></div><div>Is this going to be too much per record type?  
<div>It's possible to keep the </div><div><br></div><div>Is this going to be too much per record type? The number of 'records' we are keeping is quite low (say 10 to 20) - so it's not a huge memory burden.<br></div></div><div><br></div><div>Thoughts?</div><div><br></div></div>//.ichael<br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, 16 May 2024 at 03:09, Michael Balzer via OvmsDev <<a href="mailto:ovmsdev@lists.openvehicles.com">ovmsdev@lists.openvehicles.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><u></u>

  
    
  
  <div>
    esp_timer_get_time() is the right choice for precision timing.<br>
    <br>
    I'd say uint32 is enough though, even if counting microseconds that
    can hold a total of more than 71 hours of actual processing time.
    uint64 has a significant performance penalty, although I don't
    recall the overhead for simple additions.<br>
    <br>
    Also & more important, the average wouldn't be my main focus,
    but the maximum processing time seen per ID, which seems to be
    missing in your draft.<br>
    <br>
    Second thought on the average… the exact overall average is of minor
    interest; I'd rather see the current average, adapting to the
    current mode of operation (drive/charge/…). I suggest feeding the
    measurements into a low-pass filter to get the smoothed average of
    the last n measurements. Pattern:<br>
    <br>
    runavg = ((N-1) * runavg + newval) / N<br>
    <br>
    By using a low power of 2 for N (e.g. 8 or 16), you can replace the
    division by a simple bit shift, and have enough headroom to use 32
    bit integers.<br>
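    For example, a minimal sketch with N = 16 (so the division becomes a shift by 4):<br>
    <pre>static uint32_t smooth(uint32_t runavg, uint32_t newval)
  {
  return ((15 * runavg) + newval) >> 4;   // ((N-1) * runavg + newval) / N
  }</pre>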
    <br>
    Regards,<br>
    Michael<br>
    <br>
    <br>
    <div>Am 15.05.24 um 06:51 schrieb Michael
      Geddes via OvmsDev:<br>
    </div>
    <blockquote type="cite">
      
      <div dir="ltr">Formatting aside, I have implemented what I think
        Michael B was suggesting.  This is a sample run on the Ioniq 5
        (which doesn't have unsolicited RX events).
        <div><br>
        </div>
        <div>This uses the call esp_timer_get_time() to get a 64-bit <b>microseconds</b>
          since start value - and works out the execution time that
          way. It's looking at absolute time and not time in the Task -
          so other things going on at the same time in other tasks will
          have an effect.  (The normal tick count doesn't have nearly
          enough resolution to be useful - any other ideas on
          measurement?) I've got total accumulated time displaying in
          seconds and the average in milliseconds currently - but I can
          change that easy enough.</div>
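        <div>In code terms it's essentially this (just a sketch - ProcessEntry
          is a placeholder for the actual frame/poll handling being timed):<br>
          <pre>int64_t start = esp_timer_get_time();   // 64-bit microseconds since boot
ProcessEntry(entry);                     // placeholder: the work being measured
uint32_t elapsed_us = (uint32_t)(esp_timer_get_time() - start);</pre>
        </div>
        <div><br>
        </div>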
        <div>The cumulative time is stored as uint64_t which will be
          plenty, as 32bit wouldn't be nearly enough.<br>
          <div><br>
          </div>
          <div><font face="monospace">OVMS# <b>poller time on</b><br>
              Poller timing is now on<br>
            </font>
            <div><br>
            </div>
            <div><font face="monospace">OVMS# <b>poller time status</b><br>
                Poller timing is: on<br>
                Poll [PRI]               : n=390 tot=0.2s ave=0.586ms<br>
                Poll [SRX]               : n=316 tot=0.1s ave=0.196ms<br>
                CAN1 RX[0778]            : n=382 tot=0.2s ave=0.615ms<br>
                CAN1 RX[07a8]            : n=48 tot=0.0s ave=0.510ms<br>
                CAN1 RX[07bb]            : n=162 tot=0.1s ave=0.519ms<br>
                CAN1 RX[07ce]            : n=33 tot=0.0s ave=0.469ms<br>
                CAN1 RX[07ea]            : n=408 tot=0.2s ave=0.467ms<br>
                CAN1 RX[07ec]            : n=486 tot=0.2s ave=0.477ms<br>
                CAN3 RX[07df]            : n=769 tot=0.2s ave=0.261ms<br>
                CAN1 TX[0770]            : n=191 tot=0.0s ave=0.054ms<br>
                CAN1 TX[07a0]            : n=16 tot=0.0s ave=0.047ms<br>
                CAN1 TX[07b3]            : n=31 tot=0.0s ave=0.069ms<br>
                CAN1 TX[07c6]            : n=11 tot=0.0s ave=0.044ms<br>
                CAN1 TX[07e2]            : n=82 tot=0.0s ave=0.067ms<br>
                CAN1 TX[07e4]            : n=54 tot=0.0s ave=0.044ms<br>
                Set State                : n=7 tot=0.0s ave=0.104ms<br>
              </font><br>
            </div>
          </div>
          <div>This is probably going to be quite useful in general! 
            The TX call-backs don't seem to be significant here. (oh, I
            should probably implement a reset of the values too).</div>
        </div>
        <div><br>
        </div>
        <div>//.ichael</div>
      </div>
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">On Sun, 12 May 2024 at 22:58,
          Michael Geddes <<a href="mailto:frog@bunyip.wheelycreek.net" target="_blank">frog@bunyip.wheelycreek.net</a>>
          wrote:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
          <div dir="ltr">
            <div dir="ltr">Yeah - I certainly wasn't going to put a hard
              limit. Just a log above a certain time, that being said,
              the idea of just collecting stats (being able to turn it
              on via a  "poller timer" set of commands) would be much
              more useful.  I'll look into that.</div>
            <div dir="ltr">
              <div><br>
              </div>
              <div>Average time is probably a good stat - and certainly
                what we care about.</div>
              <div><br>
              </div>
              <div>I actually am hopeful that those couple of things I
                did might help reduce that average time quite a bit
                (especially the short-cutting of the ISO-TP protocol
                handling). </div>
              <div><br>
              </div>
              <div>That p/r with logging changes might help reduce the
                unproductive log time further, but also makes it
                possible to turn on the poller logging without the RX
                task logs kicking in.</div>
              <div><br>
              </div>
              <div>//.ichael</div>
              <div><br>
              </div>
            </div>
            <br>
            <div class="gmail_quote">
              <div dir="ltr" class="gmail_attr">On Sun, 12 May 2024 at
                22:29, Michael Balzer via OvmsDev <<a href="mailto:ovmsdev@lists.openvehicles.com" target="_blank">ovmsdev@lists.openvehicles.com</a>>
                wrote:<br>
              </div>
              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                <div> Warning / gathering debug statistics about slow
                  processing can be helpful, but there must not be a
                  hard limit. Frame/poll response processing may need
                  disk or network I/O, and the vehicle task may be
                  starved by momentary high loads on higher-priority
                  tasks (e.g. networking) or by needing to wait for some
                  semaphore -- that's outside the application's control,
                  and must not lead to termination/recreation of the
                  task (in case you're heading in that direction).<br>
                  <br>
                  I have no idea how much processing time the current
                  vehicles actually need in their respective worst
                  cases. Your draft is probably too lax, poll responses
                  and frames normally need to be processed much faster.
                  I'd say 10 ms is already too slow, but any wait for a
                  queue/semaphore will already mean at least 10 ms
                  (FreeRTOS tick). Probably best to begin with just
                  collecting stats.<br>
                  <br>
                  Btw, to help in narrowing down the actual problem
                  case, the profiler could collect max times per RX
                  message ID.<br>
                  <br>
                  Regards,<br>
                  Michael<br>
                  <br>
                  <br>
                  <div>Am 12.05.24 um 10:41 schrieb Michael Geddes:<br>
                  </div>
                  <blockquote type="cite">
                    <div dir="ltr">
                      <div>I have a question for Michael B  (or whoever)
                        - I have a commit lined up that would add a bit
                        of a time check to the poller loop. What do we
                        expect the maximum time to execute a poller loop
                        command should be? </div>
                      <div>This is a rough idea (in ms) I have.. based
                        on nothing much really, so any ideas would be
                        appreciated:<br>
                      </div>
                      <div><pre>int TardyMaxTime_ms(OvmsPoller::OvmsPollEntryType entry_type)
  {
  switch (entry_type)
    {
    case OvmsPoller::OvmsPollEntryType::Poll: return 80;
    case OvmsPoller::OvmsPollEntryType::FrameRx: return 30;
    case OvmsPoller::OvmsPollEntryType::FrameTx: return 20;
    case OvmsPoller::OvmsPollEntryType::Command: return 10;
    case OvmsPoller::OvmsPollEntryType::PollState: return 15;
    default: return 80;
    }
  }</pre>
                      </div>
                      <div><br>
                      </div>
                      <div>//.ichael<br>
                      </div>
                      <br>
                      <div class="gmail_quote">
                        <div dir="ltr" class="gmail_attr">On Mon, 6 May
                          2024 at 07:45, Michael Geddes <<a href="mailto:frog@bunyip.wheelycreek.net" target="_blank">frog@bunyip.wheelycreek.net</a>>
                          wrote:<br>
                        </div>
                        <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                          <div dir="ltr">
                            <div dir="auto">I realise that I was only
                              using the standard cable to test - which
                              probably is not sufficient - I haven't
                              looked closely at how the Leaf OBD to Db9
                              cable is different from standard.<br>
                            </div>
                            <div dir="auto"><br>
                            </div>
                            <div dir="auto">Ah, my bad out the queue
                              length.   We are definitely queueing more
                              messages though.   From my log of when the
                              overflow happened, the poller was in state
                              0 which means OFF - ie nothing was being
                              sent!!</div>
                            <div dir="auto"><br>
                            </div>
                            <div dir="auto">I'll look at the TX message
                              thing - opt in sounds good - though it
                              shouldn't be playing that much of a part
                              here as the TXs are infrequent in this
                              case (or zero when the leaf is off
                              or driving) - On the ioniq 5 when I'm
                              using the HUD - I'm polling quite
                              frequently - multiple times per second and
                              that seems to be fine!.</div>
                            <div dir="auto"><br>
                            </div>
                            <div dir="auto">I did find an issue with the
                              throttling .. but  it would still mostly
                              apply the throttling where it matters, so
                              again, it shouldn't be the problem (also,
                              we aren't transmitting in the leaf case).</div>
                            <div dir="auto">
                              <div dir="auto">
                                <div dir="auto"><br>
                                </div>
                                <div dir="auto">The change I made to the
                                  logging of RX messages showed how many
                                  in a row were dropped... and it was
                                  mostly 1 only in a run - which means
                                  even if it is a short time between -
                                  that means that the drops are being
                                  interleaved by at least one success!</div>
                                <div><br>
                                </div>
                                <div>Sooo.. I'm still wondering what is
                                  going on.  Some things I'm going to
                                  try:</div>
                                <div><br>
                                </div>
                                <div>* If the number of messages on the
                                  CAN bus (coming in through RX) means
                                  that the queue is slowly getting
                                  longer and not quite catching up, then
                                  making the queue longer will help it
                                  last longer... but only pushes the
                                  problem down the road.</div>
                                <div>  - Add 'current queue length' to
                                  the poller status information to see
                                  if this is indeed the case? (see the
                                  sketch below)  </div>
                                <div>  - Add some kind of alert when the
                                  queue reaches a % full?</div>
                                <div>* Once you start overflowing and
                                  getting overflow log messages, I
                                  wonder if this is then contributing to
                                  the problem.</div>
                                <div>  - Push the overflow logging into
                                  Poller Task which can look at how many
                                  drops occurred since last received
                                  item.</div>
                                <div>* Split up the flags for the poller
                                  messages into 2:<br>
                                </div>
                                <div>   - Messages that are/could be
                                  happening in the TX/RX tasks</div>
                                <div>   - Other noisy messages that
                                  always happen in the poller task.</div>
                                <div><br>
                                </div>
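                                <div>For the queue length, something like
                                  this (just a sketch using FreeRTOS's
                                  uxQueue* functions - m_queue and writer
                                  stand in for whatever queue handle and
                                  output the poller status command has at
                                  hand):<br>
                                  <pre>UBaseType_t used  = uxQueueMessagesWaiting(m_queue);
UBaseType_t total = used + uxQueueSpacesAvailable(m_queue);
writer->printf("Queue: %u of %u entries used\n", (unsigned)used, (unsigned)total);</pre>
                                </div>
                                <div><br>
                                </div>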
                                <div>Thoughts on what else we might
                                  measure to figure out what is going
                                  on?<br>
                                </div>
                                <div><br>
                                </div>
                                <div>//.ichael</div>
                                <div><br>
                                </div>
                              </div>
                            </div>
                            <br>
                            <div class="gmail_quote">
                              <div dir="ltr" class="gmail_attr">On Sun,
                                5 May 2024, 19:29 Michael Balzer via
                                OvmsDev, <<a href="mailto:ovmsdev@lists.openvehicles.com" target="_blank">ovmsdev@lists.openvehicles.com</a>>
                                wrote:<br>
                              </div>
                              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                                <div> Michael,<br>
                                  <br>
                                  the queue size isn't in bytes, it's in
                                  messages:<br>
                                  <br>
                                  <blockquote type="cite"><font face="monospace"> * @param
                                      uxQueueLength The maximum number
                                      of items that the queue can
                                      contain.<br>
                                       *<br>
                                       * @param uxItemSize The number of
                                      bytes each item in the queue will
                                      require.</font></blockquote>
                                  <br>
                                   Also, from the time stamps in Derek's
                                   log excerpt, there were quite a few
                                   dropped frames in that time window --
                                   at least 23 frames in 40 ms, that's
                                   bad.<br>
                                  <br>
                                  Queue sizes are currently:<br>
                                  <br>
                                  <font face="monospace">CONFIG_OVMS_HW_CAN_RX_QUEUE_SIZE=60<br>
CONFIG_OVMS_VEHICLE_CAN_RX_QUEUE_SIZE=60</font><br>
                                  <br>
                                   The new poller now channels all TX
                                   callbacks through the task queue in
                                   addition to RX and commands. So
                                   setting the queue size to be larger
                                   than the CAN RX queue size seems
                                   appropriate.<br>
                                  <br>
                                  Nevertheless, an overflow with more
                                  than 60 waiting messages still
                                  indicates some too long processing
                                  time in the vehicle task.<br>
                                  <br>
                                  TX callbacks previously were done
                                  directly in the CAN context, and no
                                  current vehicle overrides the empty
                                  default handler, so this imposed
                                  almost no additional overhead. By
                                  requiring a queue entry for each TX
                                  callback, this feature now has a
                                  potentially high impact for all
                                  vehicles. If passing these to the task
                                  is actually necessary, it needs to
                                  become an opt-in feature, so only
                                  vehicles subscribing to the callback
                                  actually need to cope with that
                                  additional load & potential
                                  processing delays involved.<br>
                                  <br>
                                  Regards,<br>
                                  Michael<br>
                                  <br>
                                  <blockquote type="cite">
                                    <div class="gmail_quote">
                                      <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                                        <div class="gmail_quote">
                                          <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                                            <div>
                                              <div lang="EN-AU">
                                                <div>
                                                  <div>
                                                    <blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm">
                                                      <div>
                                                        <blockquote style="margin-top:5pt;margin-bottom:5pt">
                                                          <div>
                                                          <blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm">
                                                          <div>
                                                          <blockquote style="margin-top:5pt;margin-bottom:5pt">
                                                          <div>
                                                          <div>
                                                          <div>
                                                          <blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm">
                                                          <div>
                                                          <blockquote style="margin-top:5pt;margin-bottom:5pt">
                                                          <div>
                                                          <div>
                                                          <div>
                                                          <div>
                                                          <blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm">
                                                          <div>
                                                          <blockquote style="margin-top:5pt;margin-bottom:5pt">
                                                          <div>
                                                          <blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm">
                                                          <div>
                                                          <blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm">
                                                          <div>
                                                          <blockquote style="margin-top:5pt;margin-bottom:5pt">
                                                          <div>
                                                          <blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm">
                                                          <div>
                                                          <blockquote style="margin-top:5pt;margin-bottom:5pt">
                                                          <blockquote style="margin-top:5pt;margin-bottom:5pt">
                                                          <div>
                                                          <div>
                                                          <blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm">
                                                          <div>
                                                          <blockquote style="margin-top:5pt;margin-bottom:5pt">
                                                          <div>
                                                          <div>
                                                          <div>
                                                          <blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm">
                                                          <div>
                                                          <div>
                                                          <div style="border:1pt solid rgb(204,204,204);padding:7pt"> </div>
                                                          </div>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </div>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </div>
                                                          </blockquote>
                                                          </blockquote>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </div>
                                                          </div>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </div>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                        </blockquote>
                                                      </div>
                                                    </blockquote>
                                                  </div>
                                                </div>
                                              </div>
                                            </div>
                                          </blockquote>
                                        </div>
                                      </blockquote>
                                    </div>
                                  </blockquote>
                                </div>
                              </blockquote>
                            </div>
                          </div>
                        </blockquote>
                      </div>
                    </div>
                  </blockquote>
                  <br>
                  <pre cols="72">-- 
Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal
Fon 02333 / 833 5735 * Handy 0176 / 206 989 26</pre>
                </div>
                _______________________________________________<br>
                OvmsDev mailing list<br>
                <a href="mailto:OvmsDev@lists.openvehicles.com" target="_blank">OvmsDev@lists.openvehicles.com</a><br>
                <a href="http://lists.openvehicles.com/mailman/listinfo/ovmsdev" rel="noreferrer" target="_blank">http://lists.openvehicles.com/mailman/listinfo/ovmsdev</a><br>
              </blockquote>
            </div>
          </div>
        </blockquote>
      </div>
      <br>
      <fieldset></fieldset>
      <pre>_______________________________________________
OvmsDev mailing list
<a href="mailto:OvmsDev@lists.openvehicles.com" target="_blank">OvmsDev@lists.openvehicles.com</a>
<a href="http://lists.openvehicles.com/mailman/listinfo/ovmsdev" target="_blank">http://lists.openvehicles.com/mailman/listinfo/ovmsdev</a>
</pre>
    </blockquote>
    <br>
    <pre cols="72">-- 
Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal
Fon 02333 / 833 5735 * Handy 0176 / 206 989 26</pre>
  </div>

_______________________________________________<br>
OvmsDev mailing list<br>
<a href="mailto:OvmsDev@lists.openvehicles.com" target="_blank">OvmsDev@lists.openvehicles.com</a><br>
<a href="http://lists.openvehicles.com/mailman/listinfo/ovmsdev" rel="noreferrer" target="_blank">http://lists.openvehicles.com/mailman/listinfo/ovmsdev</a><br>
</blockquote></div></div>