<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
OK, I see now why it wouldn't send the notification: the V2 & V3
servers register for notifications only up to COMMAND_RESULT_NORMAL = 1024
characters.<br>
<br>
The report quickly becomes larger than 1024 characters, so the
notifications no longer get sent via the server connectors.<br>
<br>
You need to either reduce the size, split the report, or use data
notifications instead.<br>
<br>
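For illustration, splitting could look roughly like this (a sketch only; the helper
name and the simple fixed-size chunking are made up, the NotifyString() call is the
one used further down this thread):<br>
<pre>#include <string>

void NotifyPollerReportChunked(const std::string &report)
  {
  const size_t limit = 1024;   // COMMAND_RESULT_NORMAL; a real split should break at line ends
  for (size_t pos = 0; pos < report.size(); pos += limit)
    {
    std::string chunk = report.substr(pos, limit);
    MyNotify.NotifyString("info", "poller.report", chunk.c_str());
    }
  }</pre>
<br>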
On the reset value init: for my float targeting smoothing helper
class for the UpMiiGo, I implemented a gradual ramp up from 1 to the
requested sample size. You can do something similar also with powers
of 2. IOW, yes, initialization from the first values received is
perfectly OK.<br>
<br>
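A minimal sketch of that ramp-up idea (illustrative only, not the actual UpMiiGo
helper class):<br>
<pre>// Start the effective window at 1, so the first sample becomes the initial
// average, then grow it in powers of 2 until the requested size N is reached.
template <typename T, unsigned N>
class ramped_average_t
  {
  private:
    T m_ave = 0;
    unsigned m_n = 1;     // effective window: 1, 2, 4, ... up to N
  public:
    void add(T val)
      {
      m_ave = ((m_n - 1) * m_ave + val) / m_n;   // with m_n == 1 this is just val
      if (m_n < N)
        m_n <<= 1;
      }
    T get() const { return m_ave; }
  };</pre>
<br>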
Regards,<br>
Michael<br>
<br>
<br>
<div class="moz-cite-prefix">Am 26.05.24 um 14:35 schrieb Michael
Geddes:<br>
</div>
<blockquote type="cite"
cite="mid:004a01daaf69$287787c0$79669740$@bunyip.wheelycreek.net">
<div class="WordSection1">
<div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt">It _<i>should</i>_
already be sending a report on charge stop.<o:p></o:p></p>
<p class="MsoNormal" style="margin-bottom:12.0pt">MyEvents.RegisterEvent(TAG,
"vehicle.charge.stop",
std::bind(&OvmsPollers::VehicleChargeStop, this, _1,
_2));<o:p></o:p></p>
<p class="MsoNormal" style="margin-bottom:12.0pt">Reset on
charge start/vehicle on is a good idea.<o:p></o:p></p>
<p class="MsoNormal" style="margin-bottom:12.0pt">A question
– would it be silly if the first value after a reset,
rather than using 0 average to start with, if the average
got set to the initial value? I’m in 2 minds about it. It
would make the average more useful quicker.<o:p></o:p></p>
<p class="MsoNormal" style="margin-bottom:12.0pt">//.ichael<o:p></o:p></p>
<div>
<div>
<p class="MsoNormal">On Sun, 26 May 2024, 19:39 Michael
Balzer via OvmsDev, <<a
href="mailto:ovmsdev@lists.openvehicles.com"
moz-do-not-send="true" class="moz-txt-link-freetext">ovmsdev@lists.openvehicles.com</a>>
wrote:<o:p></o:p></p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt">
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt">As
the averages quickly decline when idle, an automatic
report should probably also be sent on charge stop.<br>
<br>
Also, I think you should automatically reset the
timer statistics on drive & charge start.<br>
<br>
First stats from charging my UpMiiGo:<br>
<br>
<span style="font-family:"Courier New"">Type
| count | Utlztn | Time <br>
| per s | [‰] | [ms]<br>
---------------+--------+--------+---------<br>
Poll:PRI Avg| 1.00| 0.723| 0.716<br>
Peak| | 1.282| 3.822<br>
---------------+--------+--------+---------<br>
Poll:SRX Avg| 7.72| 1.246| 0.184<br>
Peak| | 3.128| 1.058<br>
---------------+--------+--------+---------<br>
RxCan1[7ae] Avg| 2.48| 0.915| 0.362<br>
Peak| | 1.217| 1.661<br>
---------------+--------+--------+---------<br>
RxCan1[7cf] Avg| 4.76| 1.928| 0.397<br>
Peak| | 2.317| 2.687<br>
---------------+--------+--------+---------<br>
RxCan1[7ed] Avg| 3.38| 1.251| 0.327<br>
Peak| | 8.154| 12.273<br>
---------------+--------+--------+---------<br>
RxCan1[7ee] Avg| 0.21| 0.066| 0.297<br>
Peak| | 0.225| 1.690<br>
---------------+--------+--------+---------<br>
TxCan1[744] Avg| 1.49| 0.022| 0.011<br>
Peak| | 0.032| 0.095<br>
---------------+--------+--------+---------<br>
TxCan1[765] Avg| 3.89| 0.134| 0.027<br>
Peak| | 0.155| 0.113<br>
---------------+--------+--------+---------<br>
TxCan1[7e5] Avg| 2.32| 0.038| 0.013<br>
Peak| | 0.295| 0.084<br>
---------------+--------+--------+---------<br>
TxCan1[7e6] Avg| 1 0.21| 0.002| 0.008<br>
Peak| | 0.010| 0.041<br>
---------------+--------+--------+---------<br>
Cmd:State Avg| 0.00| 0.000| 0.007<br>
Peak| | 0.005| 0.072<br>
===============+========+========+=========<br>
Total Avg| 27.46| 6.324| 2.349<br>
</span><br>
<br>
Overall healthy I'd say, but let's see how it
compares.<br>
<br>
7ed is the BMS, the peak time is probably related to
the extended cell data logging -- I've enabled log
intervals for both cell voltages & temperatures.<br>
<br>
Regards,<br>
Michael<br>
<br>
<o:p></o:p></p>
<div>
<p class="MsoNormal">Am 26.05.24 um 08:42 schrieb
Michael Balzer via OvmsDev:<o:p></o:p></p>
</div>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">The notification works on my
devices, it only has a garbled per mille character
-- see attached screenshot. The same applies to
the mail version:<br>
<br>
<br>
<o:p></o:p></p>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt">
<pre>Poller timing is: on
Type           | count  | Utlztn | Time
               | per s  | [‰]    | [ms]
---------------+--------+--------+---------
Poll:PRI    Avg|   0.25 |  0.119 |  0.382
           Peak|        |  0.513 |  0.678
---------------+--------+--------+---------
RxCan1[597] Avg|   0.01 |  0.004 |  0.021
           Peak|        |  0.000 |  0.338
---------------+--------+--------+---------
RxCan1[59b] Avg|   0.01 |  0.011 |  0.053
           Peak|        |  0.000 |  0.848
---------------+--------+--------+---------
Cmd:State   Avg|   0.01 |  0.002 |  0.012
           Peak|        |  0.000 |  0.120
===============+========+========+=========
Total       Avg|   0.28 |  0.135 |  0.468</pre>
</blockquote>
<p class="MsoNormal" style="margin-bottom:12.0pt"><br>
The encoding is a general issue. The character
encoding for text messages via V2/MP is quite old
& clumsy, it's got an issue with the degree
celcius character as well. We previously tried to
keep all text messages within the SMS safe
character set (which e.g. lead to writing just "C"
instead of "°C"). I'd say we should head towards
UTF-8 now. If we ever refit SMS support, we can
recode on the fly.<br>
<br>
Regarding not seeing the notification on your
phone:<br>
<br>
a) Check your notification subtype/channel filters
on the module. See <a
href="https://docs.openvehicles.com/en/latest/userguide/notifications.html#suppress-notifications"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">https://docs.openvehicles.com/en/latest/userguide/notifications.html#suppress-notifications</a><br>
<br>
b) Check your notification vehicle filters on the
phone (menu on notification tab): if you enabled
the vehicle filter, it will add the messages of
not currently selected vehicles to the list only,
but not raise a system notification. (Applies to
the Android App, no idea about the iOS version)<br>
<br>
Regards,<br>
Michael<br>
<br>
<o:p></o:p></p>
<div>
<p class="MsoNormal">Am 26.05.24 um 06:32 schrieb
Michael Geddes:<o:p></o:p></p>
</div>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt">
<p>Hi,</p>
<p>I'm trying to finalise this now .. and one last thing is that I don't get the report coming to my mobile. I'm using the command:</p>
<pre>  MyNotify.NotifyString("info", "poller.report", buf.c_str());</pre>
<p>Where the buffer string is just the same as the report output. Should I be using some other format or command?</p>
<p>I get "alert" types (like the ioniq5 door-open alert) fine to my mobile.</p>
<p>Michael.</p>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal">On Sun, 19 May 2024,
12:51 Michael Balzer via OvmsDev, <<a
href="mailto:ovmsdev@lists.openvehicles.com" target="_blank"
moz-do-not-send="true"
class="moz-txt-link-freetext">ovmsdev@lists.openvehicles.com</a>>
wrote:<o:p></o:p></p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt">
<div>
<p class="MsoNormal"
style="margin-bottom:12.0pt">A builtin web
UI for this seems a bit over the top.
Builtin web config pages should focus on
user features, this is clearly a feature
only needed during/for the
development/extension of a vehicle
adapter. Development features in the web
UI are confusing for end users.<br>
<br>
If persistent enabling/disabling is done
by a simple config command (e.g. "config
set can poller.trace on"), that's also
doable by users.<br>
<br>
Regards,<br>
Michael<br>
<br>
<o:p></o:p></p>
<div>
<p class="MsoNormal">Am 19.05.24 um 02:06
schrieb Michael Geddes:<o:p></o:p></p>
</div>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<div>
<p>I was so focused on how I calculated the value that I totally missed that ‰ would be a better description. I could also use the system 'Ratio' unit... so % or ‰.</p>
<p>I'll make space to put 'Avg' on the row. I was trying to limit the width for output on a mobile. I agree it would make it easier to understand.</p>
<p>Totals also make sense.</p>
<p>Should I make this a configuration that can be set on the web-page? I'd probably use a configuration change notification so that the bit setting is sync'd with the 'configuration' value.</p>
<p>//.ichael</p>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal">On Sat, 18
May 2024, 14:05 Michael
Balzer, <<a
href="mailto:dexter@expeedo.de" target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">dexter@expeedo.de</a>> wrote:<o:p></o:p></p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt">
<div>
<blockquote>
<p>I'm not sure whether the 'max' should be the maximum of the smoothed value.. or the maximum of the raw value.</p>
</blockquote>
<p class="MsoNormal"
style="margin-bottom:12.0pt"><br>
It should normally be the
maximum of the raw value I
think, the maximum of the
smoothed value cannot tell
about how bad the processing
of an ID can become.<br>
<br>
The naming in the table is a
bit confusing I think.
(besides: I've never seen
"ave" as the abbreviation
for average)<br>
<br>
If I understand you
correctly, "time ms per s"
is the time share in per
mille, so something in that
direction would be more
clear, and "length ms" would
then be "time [ms]".<br>
<br>
The totals for all averages
in the table foot would also
be nice.<br>
<br>
Maybe "Ave" (or avg?) also
should be placed on the
left, as the "peak" label
now suggests being the peak
of the average.<br>
<br>
Btw, keep in mind, not all
"edge" users / testers are
developers (e.g. the Twizy
driver I'm in contact with),
collecting stats feedback
for vehicles from testers
should be straight forward.
Maybe add a data/history
record, sent automatically
on every drive/charge stop
when the poller tracing is
on?<br>
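<p>A minimal sketch of how such an automatic report could be hooked up. The charge-stop registration is quoted from elsewhere in this thread; the "vehicle.off" counterpart and its handler name are my assumption, not existing code:</p>
<pre>// quoted from this thread: react to charge stop
MyEvents.RegisterEvent(TAG, "vehicle.charge.stop",
    std::bind(&OvmsPollers::VehicleChargeStop, this, _1, _2));
// hypothetical counterpart for drive stop:
MyEvents.RegisterEvent(TAG, "vehicle.off",
    std::bind(&OvmsPollers::VehicleOff, this, _1, _2));</pre>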
<p>Regards,<br>
Michael</p>
<div>
<p class="MsoNormal">Am
18.05.24 um 02:28 schrieb
Michael Geddes:<o:p></o:p></p>
</div>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt">
<p>You did say max/peak value. I also halved the N for both.</p>
<p>I'm not sure whether the 'max' should be the maximum of the smoothed value.. or the maximum of the raw value.</p>
<p>This is currently the raw-value maximum.</p>
<p>The problem is that the middle column is the maximum of {sum over 10s} / (10 * 1,000,000).</p>
<p>I could easily change the 'period' to 1s and see how that goes.. was just trying to reduce the larger calculations.</p>
<pre>Usage: poller [pause|resume|status|times|trace]

OVMS# poller time status
Poller timing is: on
Type         | Count    | Ave time  | Ave length
             | per s    | ms per s  | ms
-------------+----------+-----------+-----------
Poll:PRI     |     1.00 |     0.559 |     0.543
        peak |          |     0.663 |     1.528
-------------+----------+-----------+-----------
Poll:SRX     |     0.08 |     0.009 |     0.038
        peak |          |     0.068 |     0.146
-------------+----------+-----------+-----------
CAN1 RX[778] |     0.11 |     0.061 |     0.280
        peak |          |     0.458 |     1.046
-------------+----------+-----------+-----------
CAN1 RX[7a8] |     0.04 |     0.024 |     0.124
        peak |          |     0.160 |     0.615
-------------+----------+-----------+-----------
CAN1 TX[770] |     0.05 |     0.004 |     0.016
        peak |          |     0.022 |     0.102
-------------+----------+-----------+-----------
CAN1 TX[7a0] |     0.02 |     0.002 |     0.011
        peak |          |     0.010 |     0.098
-------------+----------+-----------+-----------
CAN1 TX[7b3] |     0.01 |     0.001 |     0.006
        peak |          |     0.000 |     0.099
-------------+----------+-----------+-----------
CAN1 TX[7e2] |     0.02 |     0.002 |     0.011
        peak |          |     0.010 |     0.099
-------------+----------+-----------+-----------
CAN1 TX[7e4] |     0.08 |     0.008 |     0.048
        peak |          |     0.049 |     0.107
-------------+----------+-----------+-----------
Cmd:State    |     0.00 |     0.000 |     0.005
        peak |          |     0.000 |     0.094</pre>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal">On
Fri, 17 May 2024 at
15:26, Michael Geddes
<<a
href="mailto:frog@bunyip.wheelycreek.net" target="_blank"
moz-do-not-send="true" class="moz-txt-link-freetext">frog@bunyip.wheelycreek.net</a>>
wrote:<o:p></o:p></p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt">
<div>
<p>This is what I have now.</p>
<p>The one on the end is the one Michael B was after, using an N of 32. (up for discussion).</p>
<p>The middle is the time spent in that event per second. It accumulates times (in microseconds), and then every 10s it stores it as a smoothed (N=16) value.</p>
<p>The Count is similar (except that we store a value of '100' as 1 event so it can still be integers and has 2 decimal places).</p>
<p>Every received poll does a 64bit difference to 32bit (for the elapsed time) and a 64bit comparison (for end-of-period).</p>
<p>It also does 1x 32bit smoothing and 2x 32bit adds.</p>
<p>Then at the end of a 10s period, it will do a 64bit add to get the next end-of-period value, as well as the 2x 32bit smoothing calcs.</p>
<p>This is from the Ioniq 5 so not any big values yet. You can certainly see how insignificant the TX callbacks are.</p>
<p>I'll leave it on for when the car is moving and gets some faster polling.</p>
<pre>OVMS# poll time status
Poller timing is: on
Type         | Count    | Ave time  | Ave Length
             | per s    | ms per s  | ms
-------------+----------+-----------+-----------
Poll:PRI     |     1.00 |     0.540 |     0.539
Poll:SRX     |     0.03 |     0.004 |     0.017
CAN1 RX[778] |     0.06 |     0.042 |     0.175
CAN1 TX[770] |     0.04 |     0.002 |     0.008
Cmd:State    |     0.01 |     0.001 |     0.005</pre>
<p>----------------------8<--------------------------------</p>
<p>Nice smoothing class (forces N as a power of 2):</p>
<pre>#include <cstdint>   // uint8_t

constexpr unsigned floorlog2(unsigned x)
  {
  return x == 1 ? 0 : 1 + floorlog2(x >> 1);
  }

/* Maintain a smoothed average using shifts for division.
 * T should be an integer type
 * N needs to be a power of 2
 */
template <typename T, unsigned N>
class average_util_t
  {
  private:
    T m_ave;
  public:
    average_util_t() : m_ave(0) {}
    static const uint8_t _BITS = floorlog2(N);
    void add(T val)
      {
      static_assert(N == (1 << _BITS), "N must be a power of 2");
      m_ave = (((N-1) * m_ave) + val) >> _BITS;
      }
    T get() { return m_ave; }
    operator T() { return m_ave; }
  };</pre>
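<p>For reference, a quick usage example of the class above (my example, not part of the patch):</p>
<pre>// 16-sample smoothed average of per-message processing time (µs)
average_util_t<uint32_t, 16> rx_time_avg;
rx_time_avg.add(420);                    // feed one elapsed-time measurement
uint32_t smoothed = rx_time_avg.get();   // current smoothed value</pre>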
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal">On
Thu, 16 May 2024
at 10:29, Michael
Geddes <<a
href="mailto:frog@bunyip.wheelycreek.net" target="_blank"
moz-do-not-send="true" class="moz-txt-link-freetext">frog@bunyip.wheelycreek.net</a>>
wrote:<o:p></o:p></p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt">
<div>
<div>
<p>Thanks Michael,</p>
<p>My calculations give me ((2^32)-1) / (1000*1000*3600) = only 1.2 hours of processing time in 32bit. The initial subtraction is 64bit anyway and I can't see a further 64bit addition being a problem. I have the calculations being performed in doubles at print-out, where performance is not really an issue anyway. (Though apparently doing 64 bit division is worse than floating point.)</p>
<p>In addition:</p>
<p>* I currently have this being able to be turned on and off and reset manually (only do it when required).</p>
<p>* For the lower volume commands, the smoothed average is not going to be useful - the count is more interesting for different reasons.</p>
<p>* The total time is quite useful. Ie a high average time doesn't matter if the count is low. The things that are affecting performance are stuff with high total time. Stuff which is happening 100 times a second needs to have a much lower average than once a second.</p>
<p>* A measure like 'time per minute/second' and possibly count per minute/second as a smoothed average would potentially be more useful. (or in addition?)</p>
<p>I think we could do _that_ in a reasonably efficient manner using a 64 bit 'last measured time', a 32 bit accumulated value and the stored 32 bit rolling average.</p>
<p>It boils down to some iterative (integer) sums and multiplications plus a divide by n ^ (time periods passed) - which is a shift - and which can be optimised to '0' if 'time-periods-passed' is more than 32/(bits-per-n) - effectively limiting the number of iterations.</p>
<p>The one issue I can see is that we need to calculate 'number of time-periods passed', which is a 64 bit subtraction followed by a 32 bit division (not optimisable to a simple shift).</p>
<p>* I'm also happy to keep a rolling (32bit) average time. Even if you assume averages in the 100ms range, 32bit is going to happily support an N of 64 or even 128.</p>
<p>Am I right in thinking that the choice of N is highly dependent on frequency? For things happening 100 times per second, you might want an N like 128.. whereas for things happening once per second, you might want an N of 4 or 8. For the other things we keep track of in this manner, we have a better idea of the frequency of the thing.</p>
<p>How about we have (per record type):</p>
<p>* total count (since last reset?) (32 bit)<br>
* smoothed average of time per instance (32 bit)</p>
<p>* ?xx? total accumulated time since last reset (64bit) ?? <-- with the below stats this is much less useful</p>
<p>* last-measured-time (64 bit)<br>
* accumulated count since last time-period (16bit - but maybe 32bit anyway for byte alignment?)<br>
* smoothed average of count per time-period (32bit)<br>
* accumulated time since last time-period (32bit)<br>
* smoothed average of time per time-period (32bit)<br>
(a rough struct sketch follows below)</p>
<p>It's possible to keep the</p>
<p>Is this going to be too much per record type? The number of 'records' we are keeping is quite low (so 10 to 20 maybe) - so it's not a huge memory burden.</p>
<p>Thoughts?</p>
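<p>A rough sketch of those per-record fields as a struct (illustrative only; field names and exact widths are just my reading of the list above):</p>
<pre>#include <cstdint>

struct PollerEntryStats
  {
  uint32_t total_count = 0;            // total count since last reset
  uint32_t avg_time_us = 0;            // smoothed average time per instance (µs)
  uint64_t last_period_time = 0;       // last-measured-time (esp_timer_get_time())
  uint32_t period_count = 0;           // accumulated count since last time-period
  uint32_t avg_count_per_period = 0;   // smoothed average count per time-period
  uint32_t period_time_us = 0;         // accumulated time since last time-period (µs)
  uint32_t avg_time_per_period = 0;    // smoothed average time per time-period (µs)
  };</pre>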
</div>
<p
class="MsoNormal">//.ichael<o:p></o:p></p>
<div>
<div>
<p>On Thu, 16 May 2024 at 03:09, Michael Balzer via OvmsDev <<a href="mailto:ovmsdev@lists.openvehicles.com">ovmsdev@lists.openvehicles.com</a>> wrote:</p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt">
<div>
<p>esp_timer_get_time() is the right choice for precision timing.</p>
<p>I'd say uint32 is enough though, even if counting microseconds that can hold a total of more than 71 hours of actual processing time. uint64 has a significant performance penalty, although I don't recall the overhead for simple additions.</p>
<p>Also & more important, the average wouldn't be my main focus, but the maximum processing time seen per ID, which seems to be missing in your draft.</p>
<p>Second thought on the average… the exact overall average really has a minor meaning, I'd rather see the current average, adapting to the current mode of operation (drive/charge/…). I suggest feeding the measurements to a low pass filter to get the smoothed average of the last n measurements. Pattern:</p>
<pre>runavg = ((N-1) * runavg + newval) / N</pre>
<p>By using a low power of 2 for N (e.g. 8 or 16), you can replace the division by a simple bit shift, and have enough headroom to use 32 bit integers.</p>
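<p>For example, with N = 16 the update reduces to a shift (a minimal sketch of the pattern above, variable names mine):</p>
<pre>#include <cstdint>

uint32_t runavg = 0;

void add_measurement(uint32_t newval)
  {
  runavg = ((16 - 1) * runavg + newval) >> 4;   // >> 4 is the division by N = 16
  }</pre>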
<p>Regards,<br>
Michael</p>
<div>
<p>On 15.05.24 at 06:51, Michael Geddes via OvmsDev wrote:</p>
</div>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<p>Formatting aside, I have implemented what I think Michael B was suggesting. This is a sample run on the Ioniq 5 (which doesn't have unsolicited RX events).</p>
<p>This uses the call esp_timer_get_time() to get a 64bit <b>microseconds</b> since started value - and works out the time to execute that way. It's looking at absolute time and not time in the Task - so other things going on at the same time in other tasks will have an effect. (The normal tick count doesn't have nearly enough resolution to be useful - any other ideas on measurement?) I've got total accumulated time displaying in seconds and the average in milliseconds currently - but I can change that easily enough.</p>
<p>The cumulative time is stored as uint64_t, which will be plenty, as 32bit wouldn't be nearly enough.</p>
<pre>OVMS# <b>poller time on</b>
Poller timing is now on</pre>
<pre>OVMS# <b>poller time status</b>
Poller timing is: on
Poll [PRI]    : n=390 tot=0.2s ave=0.586ms
Poll [SRX]    : n=316 tot=0.1s ave=0.196ms
CAN1 RX[0778] : n=382 tot=0.2s ave=0.615ms
CAN1 RX[07a8] : n=48  tot=0.0s ave=0.510ms
CAN1 RX[07bb] : n=162 tot=0.1s ave=0.519ms
CAN1 RX[07ce] : n=33  tot=0.0s ave=0.469ms
CAN1 RX[07ea] : n=408 tot=0.2s ave=0.467ms
CAN1 RX[07ec] : n=486 tot=0.2s ave=0.477ms
CAN3 RX[07df] : n=769 tot=0.2s ave=0.261ms
CAN1 TX[0770] : n=191 tot=0.0s ave=0.054ms
CAN1 TX[07a0] : n=16  tot=0.0s ave=0.047ms
CAN1 TX[07b3] : n=31  tot=0.0s ave=0.069ms
CAN1 TX[07c6] : n=11  tot=0.0s ave=0.044ms
CAN1 TX[07e2] : n=82  tot=0.0s ave=0.067ms
CAN1 TX[07e4] : n=54  tot=0.0s ave=0.044ms
Set State     : n=7   tot=0.0s ave=0.104ms</pre>
<p>This is probably going to be quite useful in general! The TX call-backs don't seem to be significant here. (Oh, I should probably implement a reset of the values too.)</p>
<p>//.ichael</p>
<p
class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p>On Sun, 12 May 2024 at 22:58, Michael Geddes <<a href="mailto:frog@bunyip.wheelycreek.net">frog@bunyip.wheelycreek.net</a>> wrote:</p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt">
<div>
<p>Yeah - I certainly wasn't going to put a hard limit, just a log above a certain time. That being said, the idea of just collecting stats (being able to turn it on via a "poller timer" set of commands) would be much more useful. I'll look into that.</p>
<p>Average time is probably a good stat - and certainly what we care about.</p>
<p>I actually am hopeful that those couple of things I did might help reduce that average time quite a bit (the short-cutting of the isotp protocol handling especially).</p>
<p>That p/r with logging changes might help reduce the unproductive log time further, but also makes it possible to turn on the poller logging without the RX task logs kicking in.</p>
<p>//.ichael</p>
<p
class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p>On Sun, 12 May 2024 at 22:29, Michael Balzer via OvmsDev <<a href="mailto:ovmsdev@lists.openvehicles.com">ovmsdev@lists.openvehicles.com</a>> wrote:</p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt">
<div>
<p>Warning / gathering debug statistics about slow processing can be helpful, but there must not be a hard limit. Frame/poll response processing may need disk or network I/O, and the vehicle task may be starving from momentary high loads on higher priority tasks (e.g. networking) or by needing to wait for some semaphore -- that's outside the application's control, and must not lead to termination/recreation of the task (in case you're heading towards that direction).</p>
<p>I have no idea how much processing time the current vehicles actually need in their respective worst cases. Your draft is probably too lax, poll responses and frames normally need to be processed much faster. I'd say 10 ms is already too slow, but any wait for a queue/semaphore will already mean at least 10 ms (FreeRTOS tick). Probably best to begin with just collecting stats.</p>
<p>Btw, to help in narrowing down the actual problem case, the profiler could collect max times per RX message ID.</p>
<p>Regards,<br>
Michael</p>
<div>
<p>On 12.05.24 at 10:41, Michael Geddes wrote:</p>
</div>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<p>I have a question for Michael B (or whoever) - I have a commit lined up that would add a bit of a time check to the poller loop. What do we expect the maximum time to execute a poller loop command should be?</p>
<p>This is a rough idea (in ms) I have.. based on nothing much really, so any ideas would be appreciated:</p>
<pre>int TardyMaxTime_ms(OvmsPoller::OvmsPollEntryType entry_type)
  {
  switch (entry_type)
    {
    case OvmsPoller::OvmsPollEntryType::Poll:      return 80;
    case OvmsPoller::OvmsPollEntryType::FrameRx:   return 30;
    case OvmsPoller::OvmsPollEntryType::FrameTx:   return 20;
    case OvmsPoller::OvmsPollEntryType::Command:   return 10;
    case OvmsPoller::OvmsPollEntryType::PollState: return 15;
    default: return 80;
    }
  }</pre>
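<p>For illustration, roughly how such a check could be wired into the loop (a sketch only - the wrapper function and log tag are made up, esp_timer_get_time() as discussed elsewhere in this thread):</p>
<pre>#include "esp_log.h"
#include "esp_timer.h"

void ProcessQueueEntryTimed(OvmsPoller::OvmsPollEntryType entry_type)
  {
  int64_t start = esp_timer_get_time();              // µs since boot
  // ... dispatch the queue entry as before ...
  int64_t elapsed_ms = (esp_timer_get_time() - start) / 1000;
  if (elapsed_ms > TardyMaxTime_ms(entry_type))
    ESP_LOGW("poller", "Tardy poller entry (type %d) took %lldms",
      (int)entry_type, (long long)elapsed_ms);
  }</pre>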
<p>//.ichael</p>
<p
class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p>On Mon, 6 May 2024 at 07:45, Michael Geddes <<a href="mailto:frog@bunyip.wheelycreek.net">frog@bunyip.wheelycreek.net</a>> wrote:</p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt">
<div>
<p>I realise that I was only using the standard cable to test - which probably is not sufficient - I haven't looked closely at how the Leaf OBD to Db9 cable is different from standard.</p>
<p>Ah, my bad about the queue length. We are definitely queueing more messages though. From my log of when the overflow happened, the poller was in state 0 which means OFF - ie nothing was being sent!!</p>
<p>I'll look at the TX message thing - opt in sounds good - though it shouldn't be playing that much of a part here as the TXs are infrequent in this case (or zero when the leaf is off or driving). On the ioniq 5 when I'm using the HUD I'm polling quite frequently - multiple times per second - and that seems to be fine!</p>
<p>I did find an issue with the throttling .. but it would still mostly apply the throttling where it matters, so again, it shouldn't be the problem (also, we aren't transmitting in the leaf case).</p>
<p>The change I made to the logging of RX messages showed how many in a row were dropped... and it was mostly only 1 in a run - which means even if there is a short time between them, the drops are being interleaved by at least one success!</p>
<p>Sooo.. I'm still wondering what is going on. Some things I'm going to try:</p>
<p>* If the number of messages on the Can bus (coming in through RX) means that the queue is slowly getting longer and not quite catching up, then making the queue longer will help it last longer... but only pushes the problem down the road.<br>
 - Add 'current queue length' to the poller status information to see if this is indeed the case? (a rough sketch below)<br>
 - Add some kind of alert when the queue reaches a % full?</p>
<p>* Once you start overflowing and getting overflow log messages, I wonder if this is then contributing to the problem.<br>
 - Push the overflow logging into the Poller Task, which can look at how many drops occurred since the last received item.</p>
<p>* Split up the flags for the poller messages into 2:<br>
 - Messages that are/could be happening in the TX/RX tasks<br>
 - Other noisy messages that always happen in the poller task.</p>
<p>Thoughts on what else we might measure to figure out what is going on?</p>
<p>//.ichael</p>
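<p>A rough sketch of reading the queue fill level for that status output (standard FreeRTOS calls; the queue handle and log tag names are made up):</p>
<pre>#include "esp_log.h"
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

void ReportPollQueueFill(QueueHandle_t poll_queue)
  {
  UBaseType_t used = uxQueueMessagesWaiting(poll_queue);    // entries currently queued
  UBaseType_t room = uxQueueSpacesAvailable(poll_queue);    // remaining capacity
  ESP_LOGI("poller", "poll queue: %u used, %u free", (unsigned)used, (unsigned)room);
  }</pre>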
<p
class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p>On Sun, 5 May 2024, 19:29 Michael Balzer via OvmsDev, <<a href="mailto:ovmsdev@lists.openvehicles.com">ovmsdev@lists.openvehicles.com</a>> wrote:</p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0cm 0cm 0cm 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0cm;margin-bottom:5.0pt">
<div>
<p>Michael,</p>
<p>the queue size isn't in bytes, it's in messages:</p>
<blockquote>
<pre> * @param uxQueueLength The maximum number of items that the queue can contain.
 *
 * @param uxItemSize The number of bytes each item in the queue will require.</pre>
</blockquote>
<p>Also, from the time stamps in Derek's log excerpt, there were quite some dropped frames in that time window -- at least 23 frames in 40 ms, that's bad.</p>
<p>Queue sizes are currently:</p>
<pre>CONFIG_OVMS_HW_CAN_RX_QUEUE_SIZE=60
CONFIG_OVMS_VEHICLE_CAN_RX_QUEUE_SIZE=60</pre>
<p>The new poller now channels all TX callbacks through the task queue additionally to RX and commands. So setting the queue size to be larger than the CAN RX queue size seems appropriate.</p>
<p>Nevertheless, an overflow with more than 60 waiting messages still indicates some too long processing time in the vehicle task.</p>
<p>TX callbacks previously were done directly in the CAN context, and no current vehicle overrides the empty default handler, so this imposed almost no additional overhead. By requiring a queue entry for each TX callback, this feature now has a potentially high impact for all vehicles. If passing these to the task is actually necessary, it needs to become an opt-in feature, so only vehicles subscribing to the callback actually need to cope with the additional load & potential processing delays involved.</p>
<p>Regards,<br>
Michael</p>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</blockquote>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal
Fon 02333 / 833 5735 * Handy 0176 / 206 989 26</pre>
</body>
</html>