<div dir="auto">Thanks for the fix, sending that much data would drain my Hologram account quite rapidly. I'm still using the included <span class="money">$5</span>. So far OVMS has drained about 15c in 8 months and that already seems like too much.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu., Sep. 19, 2019, 05:04 Mark Webb-Johnson, <<a href="mailto:mark@webb-johnson.net">mark@webb-johnson.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word;line-break:after-white-space"><div><br></div>It has been a very very very long day 😵<div><br></div><div>Fixed to 7MB.</div><div><br></div><div>Regards, Mark.<br><div><br><blockquote type="cite"><div>On 19 Sep 2019, at 4:52 PM, Michael Balzer <<a href="mailto:dexter@expeedo.de" target="_blank" rel="noreferrer">dexter@expeedo.de</a>> wrote:</div><br><div>
<div text="#000000" bgcolor="#FFFFFF">
<div>OK, but 700 MB is a bit exaggerated now
;)<br>
<br>
Regards,<br>
Michael<br>
<br>
<br>
On 19.09.19 at 10:22, Mark Webb-Johnson wrote:<br>
</div>
<blockquote type="cite">
OK, I’ve built:
<div><br>
</div>
<blockquote style="margin:0 0 0 40px;border:none;padding:0px">
<div>
<div><font face="Andale Mono"><span style="font-style:normal;font-size:14px">2019-09-19
MWJ 3.2.005 OTA release</span></font></div>
<div><font face="Andale Mono"><span style="font-style:normal;font-size:14px">-
Default module/debug.tasks to FALSE</span></font></div>
<div><font face="Andale Mono"><span style="font-style:normal;font-size:14px">
Users who volunteer to submit task debug history
data to the Open Vehicles</span></font></div>
<div><font face="Andale Mono"><span style="font-style:normal;font-size:14px">
project should (with appreciation) set:</span></font></div>
<div><font face="Andale Mono"><span style="font-style:normal;font-size:14px">
config set module debug.tasks yes</span></font></div>
<div><font face="Andale Mono"><span style="font-style:normal;font-size:14px">
This will transmit approximately 700MB of data a
month (over cellular/wifi).</span></font></div>
<div><font face="Andale Mono"><span style="font-style:normal;font-size:14px"><br>
</span></font></div>
<div><font face="Andale Mono"><span style="font-style:normal;font-size:14px">2019-09-19
MWJ 3.2.004 OTA release</span></font></div>
<div><font face="Andale Mono"><span style="font-style:normal;font-size:14px">-
Skipped for Chinese superstitious reasons</span></font></div>
</div>
</blockquote>
<div>
<div><br>
</div>
<div>In EAP now, and I will announce.</div>
<div><br>
</div>
<div>Regards, Mark.</div>
<div><br>
<blockquote type="cite">
<div>On 19 Sep 2019, at 3:34 PM, Michael Balzer
<<a href="mailto:dexter@expeedo.de" target="_blank" rel="noreferrer">dexter@expeedo.de</a>> wrote:</div>
<br>
<div>
<div text="#000000" bgcolor="#FFFFFF">
<div>Correct.<br>
<br>
Regards,<br>
Michael<br>
<br>
<br>
On 19.09.19 at 09:29, Mark Webb-Johnson wrote:<br>
</div>
<blockquote type="cite">
I’m just worried about the users who don’t know about
this new feature. When they deploy this version, they
suddenly start sending 6MB of data a month up to us.
<div><br>
</div>
<div>I think the ‘fix’ is just to change
ovms_module.c:</div>
<div><br>
</div>
<blockquote style="margin:0 0 0 40px;border:none;padding:0px">
<div>MyConfig.GetParamValueBool("module",
"debug.tasks", true)</div>
<div><br>
</div>
<div>to</div>
<div><br>
</div>
<div>MyConfig.GetParamValueBool("module",
"debug.tasks", false)</div>
</blockquote>
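For illustration, a minimal sketch of why that one-argument change makes the feature opt-in (Python stand-in for the C++ `GetParamValueBool` API; the store layout and helper name here are invented, not the real OVMS code):

```python
# Hypothetical model of MyConfig.GetParamValueBool(): a missing config entry
# falls back to the supplied default, so the default decides fresh installs.
config_store = {}  # simulated persisted config (invented layout)

def get_param_value_bool(param, instance, default):
    value = config_store.get((param, instance))
    if value is None:
        return default  # no stored value: the compiled-in default wins
    return value in ("yes", "true", "1")

# With the default flipped from True to False, fresh installs send nothing:
print(get_param_value_bool("module", "debug.tasks", False))  # False

# Volunteers opt in via `config set module debug.tasks yes`:
config_store[("module", "debug.tasks")] = "yes"
print(get_param_value_bool("module", "debug.tasks", False))  # True
```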
<div>
<div><br>
</div>
<div>That would then only submit these logs
for those that explicitly turn it on?</div>
<div><br>
</div>
<div>Regards, Mark.</div>
<div><br>
</div>
<div>
<blockquote type="cite">
<div>On 19 Sep 2019, at 3:23 PM,
Michael Balzer <<a href="mailto:dexter@expeedo.de" target="_blank" rel="noreferrer">dexter@expeedo.de</a>>
wrote:</div>
<br>
<div>
<div text="#000000" bgcolor="#FFFFFF">
<div>Sorry, I didn't think about this being an
issue elsewhere -- German data plans typically
start at a minimum of 100 MB/month flat (that's
my current plan at 3 €/month).<br>
<br>
No need for a new release, it can be
turned off OTA by issuing<br>
<br>
<tt>config set module debug.tasks
no</tt><br>
<br>
Regards,<br>
Michael<br>
<br>
<br>
On 19.09.19 at 09:08, Mark
Webb-Johnson wrote:<br>
</div>
<blockquote type="cite">
Yep:
<div><br>
</div>
<blockquote style="margin:0 0 0 40px;border:none;padding:0px">
<div>758 bytes * (86400 / 300)
* 30 = 6.5MB/month</div>
</blockquote>
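The arithmetic above checks out; a quick sketch (plain Python, numbers taken straight from the thread) reproduces both the 218KB/day and the 6.55MB/month figures:

```python
# A 758-byte history record every 5 minutes (300 s), over a 30-day month.
record_bytes = 758
records_per_day = 86400 // 300          # 288 records/day
bytes_per_day = record_bytes * records_per_day
bytes_per_month = bytes_per_day * 30

print(f"{bytes_per_day / 1e3:.0f} KB/day")      # 218 KB/day
print(f"{bytes_per_month / 1e6:.2f} MB/month")  # 6.55 MB/month
```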
<div>
<div><br>
</div>
<div>That is going over data
(not SD). Presumably cellular data for
a large portion of the time.</div>
<div><br>
</div>
<div>I think we need to default
this to OFF, and make a 3.2.004 to
avoid this becoming an issue.</div>
<div><br>
</div>
<div>Regards, Mark.</div>
<div><br>
<blockquote type="cite">
<div>On 19 Sep 2019, at
2:04 PM, Stephen Casner <<a href="mailto:casner@acm.org" target="_blank" rel="noreferrer">casner@acm.org</a>>
wrote:</div>
<br>
<div>
<div>That's 6.55MB/month,
unless you have unusually short
months! :-)<br>
<br>
In what space is that data
stored? A log written to SD?
That's not<br>
likely to fill up the SD card
too fast, but what happens if no
SD card<br>
is installed?<br>
<br>
-- Steve<br>
<br>
On Thu, 19 Sep 2019, Mark
Webb-Johnson wrote:<br>
<br>
<blockquote type="cite">
<blockquote type="cite"> To enable CPU
usage statistics, apply the
changes to sdkconfig<br>
included.<br>
New history record:<br>
- "*-OVM-DebugTasks" v1:
<taskcnt,totaltime> +
per task:<br>
<tasknum,name,state,stack_now,stack_max,stack_total,<br>
heap_total,heap_32bit,heap_spi,runtime><br>
Note: CPU core use
percentage = runtime /
totaltime<br>
</blockquote>
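As a side note, the CPU figure described in the quoted record spec is a plain ratio; a sketch of the calculation (Python, invented sample values, not the actual OVMS parser):

```python
# Illustrates "CPU core use percentage = runtime / totaltime" from the
# *-OVM-DebugTasks v1 record. Task names and runtimes below are invented.
totaltime = 300_000_000                     # total time units in the interval
task_runtimes = {"events": 4_500_000,       # per-task runtime fields
                 "wifi": 21_000_000}

for name, runtime in task_runtimes.items():
    print(f"{name:8s} {100 * runtime / totaltime:4.1f}%")
```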
<br>
I’ve just noticed that this is
enabled by default now (my
production build has the
sdkconfig updated, as per
defaults).<br>
<br>
I am seeing 758 bytes of
history record, every 5
minutes. About 218KB/day, or
654KB/month.<br>
<br>
Should this be opt-in?<br>
<br>
Regards, Mark.<br>
<br>
<blockquote type="cite">On 8 Sep 2019, at
5:43 PM, Michael Balzer <<a href="mailto:dexter@expeedo.de" target="_blank" rel="noreferrer">dexter@expeedo.de</a>>
wrote:<br>
<br>
I've pushed some
modifications and
improvements to (hopefully)
fix the timer issue or at
least be able to debug it.<br>
<br>
Some sdkconfig changes are
necessary.<br>
<br>
The build including these
updates is on my edge
release as
3.2.002-258-g20ae554b.<br>
<br>
Btw: the network restart strategy seems to mitigate issue #241; I've seen a major drop in record repetitions on my server since the rollout.<br>
<br>
<br>
commit
99e4e48bdd40b7004c0976f51aba9e3da4ecab53<br>
<br>
Module: add per task CPU
usage statistics, add task
stats history records<br>
<br>
To enable CPU usage
statistics, apply the
changes to sdkconfig<br>
included. The CPU usage
shown by the commands is
calculated against<br>
the last task status
retrieved (or system boot).<br>
<br>
Command changes:<br>
- "module tasks" -- added
CPU (core) usage in percent
per task<br>
<br>
New command:<br>
- "module tasks data" --
output task stats in history
record form<br>
<br>
New config:<br>
- [module] debug.tasks --
yes (default) = send task
stats every 5 minutes<br>
<br>
New history record:<br>
- "*-OVM-DebugTasks" v1:
<taskcnt,totaltime> +
per task:<br>
<tasknum,name,state,stack_now,stack_max,stack_total,<br>
heap_total,heap_32bit,heap_spi,runtime><br>
Note: CPU core use
percentage = runtime /
totaltime<br>
<br>
commit
950172c216a72beb4da0bc7a40a46995a6105955<br>
<br>
Build config: default
timer service task priority
raised to 20<br>
<br>
Background: the FreeRTOS
timer service shall only be
used for very<br>
short and non-blocking
jobs. We delegate event
processing to our<br>
events task, anything
else in timers needs to run
with high<br>
priority.<br>
<br>
commit
31ac19d187480046c16356b80668de45cacbb83d<br>
<br>
DukTape: add build config
for task priority, default
lowered to 3<br>
<br>
Background: the DukTape
garbage collector shall run
on lower<br>
priority than tasks
like SIMCOM & events<br>
<br>
commit
e0a44791fbcfb5a4e4cad24c9d1163b76e637b4f<br>
<br>
Server V2: use
esp_log_timestamp for
timeout detection,<br>
add timeout config,
limit data records &
size per second<br>
<br>
New config:<br>
- [server.v2] timeout.rx
-- timeout in seconds,
default 960<br>
<br>
commit
684a4ce9525175a910040f0d1ca82ac212fbf5de<br>
<br>
Notify: use
esp_log_timestamp for
creation time instead of
monotonictime<br>
to harden against timer
service starvation / ticker
event drops<br>
<br>
<br>
Regards,<br>
Michael<br>
<br>
<br>
On 07.09.19 at 10:55, Michael Balzer wrote:<br>
<blockquote type="cite">I think the RTOS
timer service task
starves. It's running on
core 0 with priority 1.<br>
<br>
Tasks on core 0 sorted by priority:<br>
<br>
Number of Tasks = 20 Stack: Now Max Total Heap 32-bit SPIRAM C# PRI<br>
3FFC84A8 6 Blk ipc0 388 500 1024 7788 0 0 0 24<br>
3FFC77F0 5 Blk OVMS CanRx 428 428 2048 3052 0 31844 0 23<br>
3FFAFBF4 1 Blk esp_timer 400 656 4096 35928 644 25804 0 22<br>
3FFD3240 19 Blk wifi 460 2716 3584 43720 0 20 0 22<br>
3FFC03C4 2 Blk eventTask 448 1984 4608 104 0 0 0 20<br>
3FFC8F14 17 Blk tiT 500 2308 3072 6552 0 0 * 18<br>
3FFE14F0 26 Blk OVMS COrx 456 456 4096 0 0 0 0 7<br>
3FFE19D4 27 Blk OVMS COwrk 476 476 3072 0 0 0 0 7<br>
3FFCBC34 12 Blk Tmr Svc 352 928 3072 88 0 0 0 1<br>
3FFE7708 23 Blk mdns 468 1396 4096 108 0 0 0 1<br>
<br>
I don't think it's our
CanRx, as that only
fetches and queues CAN
frames, the actual work is
done by the listeners. The
CO tasks only run for
CANopen jobs, which are
few for normal operation.<br>
<br>
That leaves the system
tasks, with main suspect
-once again- the wifi
blob.<br>
<br>
We need to know how much
CPU time the tasks
actually use now. I think
I saw some option for this
in the FreeRTOS config.<br>
<br>
Regards,<br>
Michael<br>
<br>
<br>
On 06.09.19 at 23:15, Michael Balzer wrote:<br>
<blockquote type="cite">The workaround is based on monotonictime being updated once per second, which the history record offsets also rely on.<br>
<br>
Apparently, that
mechanism doesn't work
reliably. That may be an
indicator for some
bigger underlying issue.<br>
<br>
Example log excerpt:<br>
<br>
2019-09-06
22:07:48.126919 +0200
info main: #173 C
MITPROHB rx msg h
964,0,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:09:03.089031 +0200
info main: #173 C
MITPROHB rx msg h
964,-10,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:09:05.041574 +0200
info main: #173 C
MITPROHB rx msg h
964,-20,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:09:05.052644 +0200
info main: #173 C
MITPROHB rx msg h
964,-30,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:09:05.063617 +0200
info main: #173 C
MITPROHB rx msg h
964,-49,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:09:05.077527 +0200
info main: #173 C
MITPROHB rx msg h
964,-59,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:09:05.193775 +0200
info main: #173 C
MITPROHB rx msg h
964,-70,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:09:13.190645 +0200
info main: #173 C
MITPROHB rx msg h
964,-80,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:09:22.077994 +0200
info main: #173 C
MITPROHB rx msg h
964,-90,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:09:54.590300 +0200
info main: #173 C
MITPROHB rx msg h
964,-109,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:11:10.127054 +0200
info main: #173 C
MITPROHB rx msg h
964,-119,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:11:16.794200 +0200
info main: #173 C
MITPROHB rx msg h
964,-130,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:11:22.455652 +0200
info main: #173 C
MITPROHB rx msg h
964,-140,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:12:49.423412 +0200
info main: #173 C
MITPROHB rx msg h
964,-150,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:12:49.442096 +0200
info main: #173 C
MITPROHB rx msg h
964,-169,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:12:49.461941 +0200
info main: #173 C
MITPROHB rx msg h
964,-179,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:14:39.828133 +0200
info main: #173 C
MITPROHB rx msg h
964,-190,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:14:39.858144 +0200
info main: #173 C
MITPROHB rx msg h
964,-200,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:14:52.020319 +0200
info main: #173 C
MITPROHB rx msg h
964,-210,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:14:54.452637 +0200
info main: #173 C
MITPROHB rx msg h
964,-229,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:15:12.613935 +0200
info main: #173 C
MITPROHB rx msg h
964,-239,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:15:35.223845 +0200
info main: #173 C
MITPROHB rx msg h
964,-250,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:16:09.255059 +0200
info main: #173 C
MITPROHB rx msg h
964,-260,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:17:31.919754 +0200
info main: #173 C
MITPROHB rx msg h
964,-270,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:19:23.366267 +0200
info main: #173 C
MITPROHB rx msg h
964,-289,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:21:57.344609 +0200
info main: #173 C
MITPROHB rx msg h
964,-299,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:23:40.082406 +0200
info main: #31 C
MITPROHB rx msg h
964,-1027,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
2019-09-06
22:25:58.061883 +0200
info main: #31 C
MITPROHB rx msg h
964,-1040,RT-BAT-C,5,86400,2,1,3830,3795,3830,-10,25,25,25,0<br>
<br>
<br>
This shows the ticker
was only run 299 times
from 22:07:48 to
22:21:57.<br>
<br>
After 22:21:57 the
workaround was triggered
and did a reconnect.
Apparently during that
network reinitialization
of 103 seconds, the per
second ticker was run
628 times.<br>
<br>
That can't be catching
up on the event queue,
as that queue has only
20 slots. So something
strange is going on
here.<br>
<br>
Any ideas?<br>
<br>
Regards,<br>
Michael<br>
<br>
<br>
On 06.09.19 at 08:04, Michael Balzer wrote:<br>
<blockquote type="cite">Mark &
anyone else running a
V2 server,<br>
<br>
as most cars don't
send history records,
this also needs the
change to the server I
just pushed, i.e.
server version 2.4.2.<br>
<br>
<a href="https://github.com/openvehicles/Open-Vehicle-Monitoring-System/commits/master" target="_blank" rel="noreferrer">https://github.com/openvehicles/Open-Vehicle-Monitoring-System/commits/master</a>
<<a href="https://github.com/openvehicles/Open-Vehicle-Monitoring-System/commits/master" target="_blank" rel="noreferrer">https://github.com/openvehicles/Open-Vehicle-Monitoring-System/commits/master</a>><br>
<br>
Regards,<br>
Michael<br>
<br>
<br>
On 05.09.19 at 19:55, Michael Balzer wrote:<br>
<blockquote type="cite">I've
pushed the nasty
workaround: the v2
server checks for no
RX over 15 minutes,
then restarts the
network (wifi &
modem) as configured
for autostart.<br>
<br>
Rolled out on my
server in edge as
3.2.002-237-ge075f655.<br>
<br>
Please test.<br>
<br>
Regards,<br>
Michael<br>
<br>
<br>
On 05.09.19 at 01:58, Mark Webb-Johnson wrote:<br>
<blockquote type="cite">
<blockquote type="cite">Mark,
you can check
your server logs
for history
messages with
ridiculous time
offsets:<br>
[sddexter@ns27
server]$ cat
log-20190903 |
egrep "rx msg h
[0-9]+,-[0-9]{4}" | wc -l<br>
455283<br>
</blockquote>
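The quoted check can also be sketched offline (Python; the regex mirrors the egrep above, and the sample lines are abbreviated from the log excerpt earlier in the thread):

```python
import re

# History messages whose time offset has 4+ digits, i.e. records claiming
# to be >= 1000 seconds old -- same pattern as the egrep in the thread.
pattern = re.compile(r"rx msg h [0-9]+,-[0-9]{4}")

sample_log = [
    "info main: #173 C MITPROHB rx msg h 964,-299,RT-BAT-C,5,86400,...",
    "info main: #31 C MITPROHB rx msg h 964,-1027,RT-BAT-C,5,86400,...",
]
suspicious = [line for line in sample_log if pattern.search(line)]
print(len(suspicious))  # 1 (only the -1027 offset matches)
```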
<br>
I checked my logs
and see 12
vehicles showing
this. But, 2 only
show this for a
debugcrash log
(which is
expected, I guess,
if the time is not
synced at report
time). I’ve got 4
cars with the
offset >
10,000.<br>
<br>
Regards, Mark.<br>
<br>
<blockquote type="cite">On 4
Sep 2019, at
4:45 AM, Michael
Balzer <<a href="mailto:dexter@expeedo.de" target="_blank" rel="noreferrer">dexter@expeedo.de</a>
<<a href="mailto:dexter@expeedo.de" target="_blank" rel="noreferrer">mailto:dexter@expeedo.de</a>>>
wrote:<br>
<br>
Everyone,<br>
<br>
I've pushed a
change that
needs some
testing.<br>
<br>
I had the issue
myself now
parking at a
certain distance
from my garage
wifi AP, i.e. on
the edge of
"in", after wifi
had been
disconnected for
some hours, and
with the module
still connected
via modem. The
wifi blob had
been trying to
connect to the
AP for about two
hours.<br>
<br>
As seen before,
the module saw
no error, just
the server
responses and
commands stopped
coming in. I
noticed the
default
interface was
still "st1"
despite wifi
having been
disconnected and
modem connected.
The DNS was also
still configured
for my wifi
network, and the
interface seemed
to have an IP
address -- but
wasn't pingable
from the wifi
network.<br>
<br>
A power cycle of
the modem solved
the issue
without reboot.
So the cause may
be in the
modem/ppp
subsystem, or it
may be related
(in some weird
way) to the
default
interface / DNS
setup.<br>
<br>
More tests
showed the
default
interface
again/still got
set by the wifi
blob itself at
some point,
overriding our
modem
prioritization.
The events we
didn't handle up
to now were
"sta.connected"
and
"sta.lostip", so
I added these,
and the bug
didn't show up
again since
then. That
doesn't mean
anything, so we
need to test
this.<br>
<br>
The default
interface really
shouldn't affect
inbound packet
routing of an
established
connection, but
there always may
be strange bugs
lurking in those
libs.<br>
<br>
The change also
reimplements the
wifi signal
strength
reading, as the
tests also
showed that
still wasn't
working well
using the CSI
callback. It now
seems to be much
more reliable.<br>
<br>
Please test & report. The single module will be hard to test, as the bug isn't easily reproducible, but you can still check whether wifi / modem transitions work well.<br>
<br>
Mark, you can
check your
server logs for
history messages
with ridiculous
time offsets:<br>
[sddexter@ns27
server]$ cat
log-20190903 |
egrep "rx msg h
[0-9]+,-[0-9]{4}" | wc -l<br>
455283<br>
The bug now
severely affects
the V2 server
performance, as
the server is
single threaded
and doesn't
scale very well
to this kind of
bulk data
bursts,
especially when
coming from
multiple modules
in parallel. So
we really need
to solve this
now. Slow
reactions or
connection drops
from my server
lately have been
due to this bug.
If this change
doesn't solve
it, we'll need
to add some
reboot trigger
on "too many
server v2
notification
retransmissions"
-- or maybe a
modem power
cycle will do,
that wouldn't
discard the
data.<br>
<br>
Thanks,<br>
Michael<br>
<br>
<br>
On 03.09.19 at 07:46, Mark Webb-Johnson wrote:<br>
<blockquote type="cite">No problem. We can hold. I won’t commit anything for the next few days (and agree to hold off on Markos’s pull). Let me know when you are ready.<br>
<br>
Regards, Mark.<br>
<br>
<blockquote type="cite">On 3
Sep 2019, at
1:58 AM,
Michael Balzer
<<a href="mailto:dexter@expeedo.de" target="_blank" rel="noreferrer">dexter@expeedo.de</a>>
<<a href="mailto:dexter@expeedo.de" target="_blank" rel="noreferrer">mailto:dexter@expeedo.de</a>>
wrote:<br>
<br>
Mark, please
wait.<br>
<br>
I may just
have found the
cause for
issue #241, or
at least
something I
need to
investigate
before
releasing.<br>
<br>
I need to dig
into my logs
first, and try
something.<br>
<br>
Regards,<br>
Michael<br>
<br>
<br>
On 02.09.19 at 12:23, Michael Balzer wrote:<br>
<blockquote type="cite">Nothing open from my side at the moment.<br>
<br>
I haven't had the time to look into Markos's pull request, but from a first check I also think it goes too deep to be included in this release.<br>
<br>
Regards,<br>
Michael<br>
<br>
<br>
On 02.09.19 at 04:15, Mark Webb-Johnson wrote:<br>
<blockquote type="cite">I think it is well past time for a 3.2.003 release. Things seem stable in edge (although some things are only partially implemented).<br>
<br>
Anything
people want to
include at the
last minute,
or can we go
ahead and
build?<br>
<br>
Regards, Mark.<br>
<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
<br>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
<br>
<pre cols="144">--
Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal
Fon 02333 / 833 5735 * Handy 0176 / 206 989 26
</pre>
</div>
_______________________________________________<br>OvmsDev mailing list<br><a href="mailto:OvmsDev@lists.openvehicles.com" target="_blank" rel="noreferrer">OvmsDev@lists.openvehicles.com</a><br><a href="http://lists.openvehicles.com/mailman/listinfo/ovmsdev" target="_blank" rel="noreferrer">http://lists.openvehicles.com/mailman/listinfo/ovmsdev</a><br></div></blockquote></div><br></div></div>_______________________________________________<br>
OvmsDev mailing list<br>
<a href="mailto:OvmsDev@lists.openvehicles.com" target="_blank" rel="noreferrer">OvmsDev@lists.openvehicles.com</a><br>
<a href="http://lists.openvehicles.com/mailman/listinfo/ovmsdev" rel="noreferrer noreferrer" target="_blank">http://lists.openvehicles.com/mailman/listinfo/ovmsdev</a><br>
</blockquote></div>