We need to get v3.3 ready for release now, as the new v3.3 hardware is about to enter production and the factory needs the firmware. Accordingly, I suggest:

1. I have already updated the for-v3.3 branch with all the recent changes from v3.2.
2. We now (today/tomorrow) release the current v3.2 branch as tag 3.2.018, OTA release. This would be the (hopefully) last v3.2 release.
3. After that release, we branch off master to a v3.2 branch, for historical purposes (and if we need it for anything).
4. We then merge for-v3.3 back into the master branch and start working through the usual daily builds to get it ready for production. I think there is still some work to do on porting the plugins, but the core code (and in particular the new modem support) should be OK now.

I think we will need to investigate an approach for dual builds for v3.3-or-later and pre-v3.3 hardware, to take advantage of the new ESP32 rev3 chips. I think those will require a different sdkconfig. We can do that automatically in the OTA system, by building the URL based on the ESP32 platform revision. Should we do that with the v3.3 release (even if the builds are for the moment the same), or at a later date?

Regards, Mark.
On 11/17/21 23:06, Mark Webb-Johnson wrote:
We now (today/tomorrow) release the current v3.2 branch as tag 3.2.018, OTA release. This would be the (hopefully) last v3.2 release.
Hum... I was running 3.2.017-5-g5a19e477 on my modules and just tried to update. I removed the build directory (as I sometimes do) but I got many errors from the linker (~3000 lines). I double-checked and my esp-idf appears to be up to date. I get similar errors building the for-v3.3 branch.

Craig

Toolchain path: /home/ice/u0/leres/bin/xtensa-esp32-elf-gcc
Toolchain version: crosstool-ng-1.22.0-97-gc752ad5
Compiler version: 5.2.0
Python requirements from /home/ice/u0/leres/esp/openvehicles-xtensa-esp32-elf/requirements.txt are satisfied.

App "ovms3" version: 3.2.017-11-gbd192f5c
LD /home/ice/u0/leres/src/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/build/ovms3.elf
/usr/local/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/bin/ld: /home/ice/u0/leres/src/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/build/ovms3.elf section `.rtc.bss' will not fit in region `rtc_slow_seg'
[...]
/home/ice/u0/leres/esp/openvehicles-xtensa-esp32-elf/components/soc/esp32/cpu_util.c:29:(.iram1.0+0x45): dangerous relocation: l32r: literal placed after use: (.iram1.0.literal+0x10)
[...]
On 11/18/21 09:46, Craig Leres wrote:
Hum... I was running 3.2.017-5-g5a19e477 on my modules and just tried to update. I removed the build directory (as I sometimes do) but I got many errors from the linker (~3000 lines). I double checked and my esp-idf appears to be up to date.
I verified this was not due to having CONFIG_SPIRAM_CACHE_WORKAROUND turned off. Craig
Craig,

there was no change to the RTC or IRAM sections lately. Maybe somehow your linker definition got messed up?

Regards, Michael

Am 18.11.21 um 18:46 schrieb Craig Leres:
/usr/local/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/bin/ld: /home/ice/u0/leres/src/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/build/ovms3.elf section `.rtc.bss' will not fit in region `rtc_slow_seg'
[...]
/home/ice/u0/leres/esp/openvehicles-xtensa-esp32-elf/components/soc/esp32/cpu_util.c:29:(.iram1.0+0x45): dangerous relocation: l32r: literal placed after use: (.iram1.0.literal+0x10)
-- Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal Fon 02333 / 833 5735 * Handy 0176 / 206 989 26
On 11/18/21 13:32, Michael Balzer wrote:
there was no change to the RTC or IRAM sections lately. Maybe somehow your linker definition got messed up?
"git status" doesn't show any modified files in my copy of the tree. And I don't see any changes to my toolchain; it looks like the last time I updated it was in July. The only shared libraries the toolchain ld uses are libz and libc from the base OS (not updated in more than a year).

The amount of overflow seems unreasonably large:

/usr/local/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/bin/ld: RTC_SLOW segment data does not fit.
/usr/local/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/bin/ld: region `iram0_0_seg' overflowed by 2172504 bytes
/usr/local/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/bin/ld: region `rtc_slow_seg' overflowed by 33681 bytes

2172504 bytes is over 2MB?!?!

I tried reverting to ~Oct 29th but I get the same errors, so clearly something is recently wrong with my build environment. I found this:

https://github.com/espressif/esp-idf/issues/6914

And tried running idf_size.py:

ice 34 % python $IDF_PATH/tools/idf_size.py build/ovms3.map
load: 0.09 cmd: python3.8 85307 [running] 3.59r 3.57u 0.00s 29% 13616k
load: 0.45 cmd: python3.8 85307 [running] 12.20r 12.18u 0.00s 73% 19824k
Total sizes:
DRAM .data size: 0 bytes
DRAM .bss size: 0 bytes
Used static DRAM: 0 bytes ( 180736 available, 0.0% used)
Used static IRAM: 16775 bytes ( 114297 available, 12.8% used)
Flash code: 0 bytes
Flash rodata: 64388 bytes
Total image size:~ 81163 bytes (.bin may be padded larger)

The end of that github issue points to changes in idf_size; could this be a Python issue? Maybe Python packages have been updated on this box since my last build at the end of October. Another seemingly related thread:

https://githubmemory.com/repo/aws/amazon-freertos/issues/3356

"I check the components size. And you can see in the log, program uses the 0.8% of IRAM then how should the IRAM memory region overflow?"

This feels like broken math somewhere.

Craig
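[Editor's note: when comparing a good build log against a bad one, the overflow figures can be pulled out mechanically. This is a small sketch matching the ld message format quoted above; the shortened log text in the example is a stand-in, not real output.]

```python
import re

# Pull "region `X' overflowed by N bytes" figures out of a saved build log,
# matching the ld messages quoted above. Useful for comparing a good and a
# bad build side by side.
OVERFLOW_RE = re.compile(r"region `(?P<region>[^']+)' overflowed by (?P<bytes>\d+) bytes")

def overflows(log_text):
    """Return {region: overflow_in_bytes} for every overflow message found."""
    return {m.group("region"): int(m.group("bytes"))
            for m in OVERFLOW_RE.finditer(log_text)}

if __name__ == "__main__":
    log = """\
ld: RTC_SLOW segment data does not fit.
ld: region `iram0_0_seg' overflowed by 2172504 bytes
ld: region `rtc_slow_seg' overflowed by 33681 bytes
"""
    for region, n in overflows(log).items():
        print(f"{region}: {n} bytes over ({n / 2**20:.1f} MiB)")
```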
On 11/18/21 09:46, Craig Leres wrote:
Hum... I was running 3.2.017-5-g5a19e477 on my modules and just tried to update. I removed the build directory (as I sometimes do) but I got many errors from the linker (~3000 lines).
This turns out to be due to our old friend pyparsing. FreeBSD recently upgraded from 2.4.7 to 3.0.4, which does not work. The module is used by esp-idf/tools/ldgen. I tried the latest (3.0.6), which also breaks the build. Looking at the espressif esp-idf master branch, I find that they *still* are stuck at < 2.4.0.

The workaround that I used was to put a copy of the 2.4.7 version of pyparsing.py into the ldgen directory. Looking at the releases page:

https://github.com/pyparsing/pyparsing/releases?page=3

the newest version that is "acceptable" is 2.3.1, which was released in January of 2019...

I opened an issue with espressif; maybe they can be coaxed into upgrading ldgen to work with newer versions.

Craig
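[Editor's note: a pre-build guard could catch this class of breakage before it produces 3000 lines of linker errors. A minimal sketch; the ">= 3.0 breaks" cutoff is an assumption inferred from the report above that 2.4.7 works while 3.0.4 and 3.0.6 do not.]

```python
# Sketch: fail fast before a build if the installed pyparsing is one of the
# versions reported in this thread to break esp-idf's ldgen. The (3, 0, 0)
# cutoff is an assumption based on the observations above.

def parse_version(v):
    """Turn '3.0.4' into a comparable tuple (3, 0, 4)."""
    return tuple(int(part) for part in v.split(".")[:3])

def pyparsing_ok(version):
    """True if this pyparsing version is expected to work with ldgen."""
    return parse_version(version) < (3, 0, 0)

if __name__ == "__main__":
    for v in ("2.3.1", "2.4.7", "3.0.4", "3.0.6"):
        print(v, "ok" if pyparsing_ok(v) else "breaks ldgen")
```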
A few days after filling up I noticed that the ovms app was reporting 53% SOC when I knew it should read much higher. I just reset persistent metrics ("metrics persist -r") and ran the engine for a bit, and now it shows 92%. My other car was showing 91%, but after cycling power to the ovms module (the other way to reset persistent metrics) it dropped to 89%.

Is it possible persistent metrics get corrupted such that they can no longer be updated? I'm thinking if I see this again I'll write something to dump out the related data structures.

Craig
Craig,

did any of the metrics dumps show anything unusual? Did the staleness reflect the stopped updates? Did you see any inf/NaN in the dumps?

I've had a similar effect lately, where a bug in the e-Up code could set a float metric to inf. That did not affect the V2 App though, only the web UI, as the JSON encoder does not handle NaN or inf. I wondered whether we should test for special float values in the JSON encoder, but then decided it's rather a bug to set a metric value to NaN or inf in the first place.

The metric affected by my bug was not a V2 transport metric, so if the Apps have an issue with special floats as well, that could have been your issue. You would have seen "inf" or "NaN" in the metrics dump then.

Regards, Michael

Am 20.11.21 um 02:44 schrieb Craig Leres:
A few days after filling up I noticed that the ovms app was reporting 53% SOC when I knew it should read much higher. I just reset persistent metrics ("metrics persist -r") and ran the engine for a bit and now it shows 92%.
My other car was showing 91%, but after cycling power to the ovms module (the other way to reset persistent metrics) it dropped to 89%.
Is it possible persistent metrics get corrupted such that they can no longer be updated? I'm thinking if I see this again I'll write something to dump out the related data structures.
Craig

_______________________________________________
OvmsDev mailing list
OvmsDev@lists.openvehicles.com
http://lists.openvehicles.com/mailman/listinfo/ovmsdev
On 11/20/21 00:02, Michael Balzer wrote:
did any of the metrics dumps show anything unusual? Did the staleness reflect the stopped updates? Did you see any inf/NaN in the dumps?
Clearly I should have captured some info before resetting... I do remember that in an ssh session, "metrics list -s v.b.soc" showed "[99-]".
I've had a similar effect lately where I had a bug in the e-Up code that could set a float metric to inf. That did not affect the V2 App though, only the web UI, as the JSON encoder does not handle NaN or inf.
This looked like stuck values to me, from the V2 app, the web ui, and the command line.
I wondered whether we should test for special float values in the JSON encoder, but then decided it's rather a bug to set a metric value to NaN or inf in the first place.
The metric affected by my bug was not a V2 transport metric, so if the Apps have an issue with special floats as well, that could have been your issue. You would have seen "inf" or "NaN" in the metrics dump then.
I don't remember seeing any "inf" or "NaN". I'll try to do a better job of collecting hints if this reoccurs. Craig
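[Editor's note: the JSON encoder behaviour Michael describes is easy to demonstrate. The sketch below uses Python's standard json module as a stand-in for the OVMS encoder; the clamp-to-null strategy is an illustration, not what OVMS does.]

```python
import json
import math

# By default the json module emits Infinity/NaN tokens, which are not valid
# JSON and can trip strict parsers (e.g. a web UI's JSON.parse).
print(json.dumps({"soc": float("inf")}))  # emits the non-standard token Infinity

# With allow_nan=False the encoder refuses special floats outright:
try:
    json.dumps({"soc": float("nan")}, allow_nan=False)
except ValueError as e:
    print("rejected:", e)

# One defensive option: clamp special values to null before encoding.
def safe(value):
    """Replace NaN/inf with None so the output is always strict JSON."""
    if isinstance(value, float) and not math.isfinite(value):
        return None
    return value

print(json.dumps({"soc": safe(float("inf"))}))  # {"soc": null}
```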
Sounds OK for me.

Regarding using the platform revision instead of the CONFIG_OVMS_HW_BASE_3_x config: so in case someone accidentally installs the wrong firmware, the system would heal itself by a simple OTA update? Sounds good, and should be a simple change.

Regards, Michael

Am 18.11.21 um 08:06 schrieb Mark Webb-Johnson:
We need to get the v3.3 release ready for release now, as the new v3.3 hardware is about to enter production and the factory needs the firmware.
Accordingly, I suggest:
1. I have already updated the for-v3.3 branch with all the recent changes from v3.2.
2. We now (today/tomorrow) release the current v3.2 branch as tag 3.2.018, OTA release. This would be the (hopefully) last v3.2 release.
3. After release of that, we branch off master to a v3.2 branch, for historical purposes (and if we need it for anything).
4. We then merge for-v3.3 back into the master branch and start working through the usual daily builds to get it ready for production. I think there is still some work to do on porting the plugins, but the core code (and in particular the new modem support) should be OK now.
I think we will need to investigate an approach for dual builds for v3.3-or-later and pre-v3.3 hardware, to take advantage of the new ESP32 rev3 chips. I think those will require a different sdkconfig. We can do that automatically in the OTA system, by building the URL based on the ESP32 platform revision. Should we do that with the v3.3 release (even if the builds are for the moment the same), or at a later date?
Regards, Mark.
Regarding the 3.2.018, I will handle it this afternoon (my time).

Regarding OTA and support for rev3 ESP32: at the moment, the OTA URL is composed of:

* <server> (default api.openvehicles.com/firmware/ota)
* <base> (either /v3.0/ or /v3.1/ depending on CONFIG_OVMS_HW_BASE_3_0 or CONFIG_OVMS_HW_BASE_3_1)
* <tag> (default ‘main’)
* /ovms3.ver

My suggestion is to add the hardware platform (as detected by the code that provides the metric m.hardware) in there. Perhaps replace <base> with something that detects /v3.0/, /v3.1/, or /v3.3/ based on a combination of CONFIG_OVMS_HW_BASE and the m.hardware ESP32 revision? We should probably just implement that in the v3.3 code, so v3.2 and before keep using CONFIG_OVMS_HW_BASE.

Regards, Mark.
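[Editor's note: the URL selection Mark outlines can be sketched as follows. The function names and the rule for picking <base> from the ESP32 chip revision are illustrative assumptions, not the actual OVMS OTA code.]

```python
# Sketch of OTA URL selection from the components listed above. The rule that
# a rev3-or-later ESP32 gets the /v3.3/ build is a hypothetical example of the
# "heal itself by a simple OTA update" behaviour discussed in the thread.

def ota_base(hw_base, esp32_rev):
    """Pick the firmware directory from the hardware base and chip revision."""
    if hw_base == "3.0":
        return "/v3.0/"
    if esp32_rev >= 3:
        # Assumed rule: rev3+ silicon is served the v3.3 build, so a wrongly
        # flashed module converges to the right firmware on the next check.
        return "/v3.3/"
    return "/v3.1/"

def ota_url(hw_base, esp32_rev, server="api.openvehicles.com/firmware/ota",
            tag="main"):
    """Compose <server><base><tag>/ovms3.ver as outlined above."""
    return "http://" + server + ota_base(hw_base, esp32_rev) + tag + "/ovms3.ver"

if __name__ == "__main__":
    print(ota_url("3.1", 1))  # pre-rev3 hardware
    print(ota_url("3.1", 3))  # rev3 chip -> v3.3 build
```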
On 19 Nov 2021, at 5:24 AM, Michael Balzer <dexter@expeedo.de> wrote:
Sounds OK for me.
Regarding using the platform revision instead of CONFIG_OVMS_HW_BASE_3_x config: so in case someone accidentally installs the wrong firmware, the system would heal itself by a simple OTA update? Sounds good and should be a simple change.
Regards, Michael
Am 18.11.21 um 08:06 schrieb Mark Webb-Johnson:
We need to get the v3.3 release ready for release now, as the new v3.3 hardware is about to enter production and the factory needs the firmware.
Accordingly, I suggest:
I have already updated the for-v3.3 branch with all the recent changes from v3.2.
We now (today/tomorrow) release the current v3.2 branch as tag 3.2.018, OTA release. This would be the (hopefully) last v3.2 release.
After release of that, we branch off master to a v3.2 branch, for historical purposes (and if we need it for anything).
We then merge for-v3.3 back into the master branch and start working through the usual daily builds to get it ready for production. I think there is still some work to do on porting the plugins, but the core code (and in particular the new modem support) should be OK now.
I think we will need to investigate an approach for dual builds for v3.3-or-later and pre-v3.3 hardware, to take advantage of the new ESP32 rev3 chips. I think those will require a different sdkconfig. We can do that automatically in the OTA system, by building the URL based on the ESP32 platform revision. Should we do that with the v3.3 release (even if the builds are for the moment the same), or at a later date?
Regards, Mark.
Mark,

why not introduce this for v3.2 as well? If a v3.3 user accidentally installs a v3.2 build, that would not heal automatically but stick until manually fixed. I think it makes sense to have all builds automatically try to update to the best build available for the hardware in use.

Am I missing something?

Regards, Michael

Am 19.11.21 um 02:44 schrieb Mark Webb-Johnson:
Regarding the 3.2.018, I will handle it this afternoon (my time).
Regarding OTA and support for rev3 ESP32, at the moment, the OTA URL is composed of:
* <server> (default api.openvehicles.com/firmware/ota)
* <base> (either /v3.0/ or /v3.1/ depending on CONFIG_OVMS_HW_BASE_3_0 or CONFIG_OVMS_HW_BASE_3_1)
* <tag> (default ‘main’)
* /ovms3.ver
My suggestion is to add the hardware platform (as detected by the code that provides the metric m.hardware) in there. Perhaps replace <base> with something that detects /v3.0/, /v3.1/, or /v3.3/ based on a combination of CONFIG_OVMS_HW_BASE and the m.hardware ESP32 revision? We should probably just implement that in the v3.3 code, so v3.2 and before keep using CONFIG_OVMS_HW_BASE.
Regards, Mark.
On 19 Nov 2021, at 5:24 AM, Michael Balzer <dexter@expeedo.de> wrote:
Sounds OK for me.
Regarding using the platform revision instead of CONFIG_OVMS_HW_BASE_3_x config: so in case someone accidentally installs the wrong firmware, the system would heal itself by a simple OTA update? Sounds good and should be a simple change.
Regards, Michael
Am 18.11.21 um 08:06 schrieb Mark Webb-Johnson:
We need to get the v3.3 release ready for release now, as the new v3.3 hardware is about to enter production and the factory needs the firmware.
Accordingly, I suggest:
1. I have already updated the for-v3.3 branch with all the recent changes from v3.2.
2. We now (today/tomorrow) release the current v3.2 branch as tag 3.2.018, OTA release. This would be the (hopefully) last v3.2 release.
3. After release of that, we branch off master to a v3.2 branch, for historical purposes (and if we need it for anything).
4. We then merge for-v3.3 back into the master branch and start working through the usual daily builds to get it ready for production. I think there is still some work to do on porting the plugins, but the core code (and in particular the new modem support) should be OK now.
I think we will need to investigate an approach for dual builds for v3.3-or-later and pre-v3.3 hardware, to take advantage of the new ESP32 rev3 chips. I think those will require a different sdkconfig. We can do that automatically in the OTA system, by building the URL based on the ESP32 platform revision. Should we do that with the v3.3 release (even if the builds are for the moment the same), or at a later date?
Regards, Mark.
I updated my for-v3.3 build yesterday and booted it on my esp v3 board with the SIM7600 board. After running for a day, as usual, the simcom is not happy. Is this a side effect of not having hardware flow control for the modem uart?

I remember we had the "User Interrupt" bug a long time ago but I don't remember the details or how that got fixed.

Craig

Welcome to the Open Vehicle Monitoring System (OVMS) - SSH Console
Firmware: 3.2.017-112-g4af06660/ota_0/main
Hardware: OVMS WIFI BLE BT cores=2 rev=ESP32/3

OVMS# cel
MODEM Status
Model: SIM7600
Network Registration: RegisteredRoaming
Provider: AT&T Hologram
Signal: -93 dBm
Mode: LTE,Online
State: NetLoss
Mux: Status up
PPP: Not connected
Last Error: User Interrupt
GPS: Connected on channel: #1
The ‘user interrupt’ is just the error code that comes back from the PPP driver when the cellular link goes down. We’d really need a log file to see what the issue is. Regards, Mark.
On 21 Nov 2021, at 1:57 AM, Craig Leres <leres@xse.com> wrote:
I updated my for-v3.3 build yesterday and booted it on my esp v3 board with the SIM7600 board. After running for a day, as usual, the simcom is not happy. Is this a side effect of not having hardware flow control for the modem uart?
I remember we had the "User Interrupt" bug a long time ago but I don't remember the details or how that got fixed.
Craig
Welcome to the Open Vehicle Monitoring System (OVMS) - SSH Console
Firmware: 3.2.017-112-g4af06660/ota_0/main
Hardware: OVMS WIFI BLE BT cores=2 rev=ESP32/3

OVMS# cel
MODEM Status
Model: SIM7600
Network Registration: RegisteredRoaming
Provider: AT&T Hologram
Signal: -93 dBm
Mode: LTE,Online
State: NetLoss
Mux: Status up
PPP: Not connected
Last Error: User Interrupt
GPS: Connected on channel: #1
_______________________________________________ OvmsDev mailing list OvmsDev@lists.openvehicles.com http://lists.openvehicles.com/mailman/listinfo/ovmsdev
On 11/21/21 16:25, Mark Webb-Johnson wrote:
The ‘user interrupt’ is just the error code that comes back from the PPP driver when the cellular link goes down.
We’d really need a log file to see what the issue is.
I figured. I used:

log monitor
log level verbose cellular

Is the attached sufficient? Looks like it wraps around the axle suspiciously close to 12 hours after boot.

Craig

D (197024) cellular: mux-tx #3: AT+CREG?;+CCLK?;+CSQ;+CPSI?;+COPS?
D (197034) cellular: mux-rx-line #3: +CPSI: LTE,Online,310-410,0x8B44,169982479,257,EUTRAN-BAND12,5110,3,3,-103,-931,-655,14
D (197044) cellular: mux-rx-line #3: +CREG: 1,5
D (197044) cellular: mux-rx-line #3: +CCLK: "21/11/21,10:22:48-32"
D (197044) cellular: mux-rx-line #3: +CSQ: 23,99
D (197044) cellular: mux-rx-line #3: +COPS: 0,0,"AT&T Hologram",7
[...]
D (43127044) cellular: mux-rx-line #3: +CPSI: LTE,Online,310-410,0x8B44,169982472,7,EUTRAN-BAND2,800,5,5,-125,-1207,-883,12
D (43127044) cellular: mux-rx-line #3: +CREG: 1,5
D (43127044) cellular: mux-rx-line #3: +CCLK: "21/11/21,22:18:18-32"
D (43127044) cellular: mux-rx-line #3: +CSQ: 13,99
D (43127044) cellular: mux-rx-line #3: +COPS: 0,0,"AT&T Hologram",7
OVMS# D (43157024) cellular: mux-tx #3: AT+CREG?;+CCLK?;+CSQ;+CPSI?;+COPS?
D (43157044) cellular: mux-rx-line #3: +CPSI: LTE,Online,310-410,0x8B44,169982472,7,EUTRAN-BAND2,800,5,5,-138,-1210,-876,12
D (43157044) cellular: mux-rx-line #3: +CREG: 1,5
D (43157044) cellular: mux-rx-line #3: +CCLK: "21/11/21,22:18:48-32"
D (43157044) cellular: mux-rx-line #3: +CSQ: 13,99
D (43157044) cellular: mux-rx-line #3: +COPS: 0,0,"AT&T Hologram",7
OVMS# D (43187024) cellular: mux-tx #3: AT+CREG?;+CCLK?;+CSQ;+CPSI?;+COPS?
D (43187034) cellular: mux-rx-line #3: +CPSI: LTE,Online,310-410,0x8B44,169982472,7,EUTRAN-BAND2,800,5,5,-129,-1206,-877,12
D (43187044) cellular: mux-rx-line #3: +CREG: 1,5
D (43187044) cellular: mux-rx-line #3: +CCLK: "21/11/21,22:19:18-32"
D (43187044) cellular: mux-rx-line #3: +CSQ: 13,99
D (43187044) cellular: mux-rx-line #3: +COPS: 0,0,"AT&T Hologram",7
OVMS# I (43201024) housekeeping: 2021-11-21 22:19:33 PST (RAM: 8b=86580-87608 32b=30544)
OVMS# D (43217024) cellular: mux-tx #3: AT+CREG?;+CCLK?;+CSQ;+CPSI?;+COPS?
D (43217044) cellular: mux-rx-line #3: +CPSI: LTE,Online,310-410,0x8B44,169982472,7,EUTRAN-BAND2,800,5,5,-136,-1212,-879,11
D (43217044) cellular: mux-rx-line #3: +CREG: 1,5
D (43217044) cellular: mux-rx-line #3: +CCLK: "21/11/21,22:19:48-32"
D (43217044) cellular: mux-rx-line #3: +CSQ: 13,99
D (43217044) cellular: mux-rx-line #3: +COPS: 0,0,"AT&T Hologram",7
OVMS# D (43218664) cellular: mux-rx-line #3: NO CARRIER
D (43218664) cellular: mux-rx-line #4: NO CARRIER
D (43218674) cellular: mux-rx-line #3: +PPPD: DISCONNECTED
I (43218674) cellular: PPP Connection disconnected
D (43218674) cellular: mux-rx-line #4: +PPPD: DISCONNECTED
I (43218674) cellular: PPP Connection disconnected
OVMS# W (43219024) cellular: Lost network connection (+PPP disconnect in NetMode)
D (43219024) cellular: State transition NetMode => NetLoss
I (43219024) cellular: State: Enter NetLoss state
D (43219024) cellular: mux-tx #3: AT+CGATT=0
V (43219024) cellular: Stopping PPP
I (43219024) gsm-ppp: Shutting down (hard)...
I (43219024) gsm-ppp: PPP is shutdown
I (43219024) netmanager: Set DNS#1 0.0.0.0
I (43219024) netmanager: Set DNS#2 0.0.0.0
I (43219024) netmanager: MODEM down (with WIFI client up): staying with WIFI client priority
OVMS# I (43224644) gsm-ppp: StatusCallBack: User Interrupt
OVMS# D (43229024) cellular: State timeout NetLoss => NetWait
I (43229024) cellular: State: Enter NetWait state
OVMS# D (43233024) cellular: State transition NetWait => NetStart
I (43233024) cellular: State: Enter NetStart state
OVMS# D (43233834) cellular: mux-rx-line #2: +CSQ: 10,99
I (43233834) cellular: Signal Quality is: 10 (-93 dBm)
D (43233834) cellular: mux-rx-line #3: +CSQ: 10,99
D (43233834) cellular: mux-rx-line #4: +CSQ: 10,99
D (43233834) cellular: mux-rx-line #2: +CREG: 5
D (43233834) cellular: mux-rx-line #3: +CREG: 5
D (43233834) cellular: mux-rx-line #4: +CREG: 5
D (43234024) cellular: Netstart AT+CGDCONT=1,"IP","hologram";+CGDATA="PPP",1
D (43234064) cellular: mux-rx-line #2: CONNECT 115200
I (43234064) cellular: PPP Connection is ready to start
OVMS# D (43235024) cellular: State transition NetStart => NetMode
I (43235024) cellular: State: Enter NetMode state
V (43235024) cellular: Starting PPP
I (43235024) gsm-ppp: Initialising...
D (43235114) cellular: mux-rx-line #3: NO CARRIER
D (43235114) cellular: mux-rx-line #4: NO CARRIER
D (43235124) cellular: mux-rx-line #3: +PPPD: DISCONNECTED
I (43235124) cellular: PPP Connection disconnected
D (43235124) cellular: mux-rx-line #4: +PPPD: DISCONNECTED
I (43235124) cellular: PPP Connection disconnected
OVMS# W (43236024) cellular: Lost network connection (+PPP disconnect in NetMode)
D (43236024) cellular: State transition NetMode => NetLoss
I (43236024) cellular: State: Enter NetLoss state
D (43236024) cellular: mux-tx #3: AT+CGATT=0
V (43236024) cellular: Stopping PPP
I (43236024) gsm-ppp: Shutting down (hard)...
I (43236024) gsm-ppp: PPP is shutdown
I (43236024) netmanager: Set DNS#1 0.0.0.0
I (43236024) netmanager: Set DNS#2 0.0.0.0
OVMS# D (43246024) cellular: State timeout NetLoss => NetWait
I (43246024) cellular: State: Enter NetWait state
OVMS# I (43248024) gsm-ppp: StatusCallBack: User Interrupt
OVMS# D (43250024) cellular: State transition NetWait => NetStart
I (43250024) cellular: State: Enter NetStart state
OVMS# D (43251024) cellular: Netstart AT+CGDCONT=1,"IP","hologram";+CGDATA="PPP",1
D (43251064) cellular: mux-rx-line #2: CONNECT 115200
I (43251064) cellular: PPP Connection is ready to start
OVMS# D (43252024) cellular: State transition NetStart => NetMode
I (43252024) cellular: State: Enter NetMode state
V (43252024) cellular: Starting PPP
I (43252024) gsm-ppp: Initialising...
D (43252104) cellular: mux-rx-line #3: NO CARRIER
D (43252104) cellular: mux-rx-line #4: NO CARRIER
D (43252114) cellular: mux-rx-line #3: +PPPD: DISCONNECTED
From the logs, it seems the GSM link is established, but PPP could not be connected.

Could this be a marginal coverage area (although the CSQ of 13 seems fine), or some other problem with the data connection?

The error messages 'NO CARRIER' and '+PPPD: DISCONNECTED' are from the modem itself.

Regards, Mark.
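[Editor's note: the reconnect cycle visible in the log (NetMode => NetLoss => NetWait => NetStart => NetMode) can be sketched as a small state machine. The state names come from the log output above; the event names and transition table are illustrative, not the actual OVMS implementation.]

```python
# Minimal sketch of the cellular reconnect cycle seen in the log.
TRANSITIONS = {
    ("NetMode", "ppp_disconnect"): "NetLoss",   # "Lost network connection"
    ("NetLoss", "timeout"): "NetWait",          # "State timeout NetLoss => NetWait"
    ("NetWait", "network_ready"): "NetStart",
    ("NetStart", "connect"): "NetMode",         # after "CONNECT 115200"
}

def next_state(state, event):
    """Advance the modem state machine; unknown events keep the state."""
    return TRANSITIONS.get((state, event), state)

if __name__ == "__main__":
    state = "NetMode"
    for event in ("ppp_disconnect", "timeout", "network_ready", "connect"):
        state = next_state(state, event)
        print(event, "->", state)
```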
On 22 Nov 2021, at 2:53 PM, Craig Leres <leres@xse.com> wrote:
On 11/21/21 16:25, Mark Webb-Johnson wrote:
The ‘user interrupt’ is just the error code that comes back from the PPP driver when the cellular link goes down. We’d really need a log file to see what the issue is.
I figured. I used:
log monitor
log level verbose cellular
Is the attached sufficient? Looks like it wraps around the axle suspiciously close to 12 hours after boot.
Craig
On 11/22/21 18:43, Mark Webb-Johnson wrote:
From the logs, it seems the GSM link is established, but PPP could not be connected.
Could this be a marginal coverage area (although the CSQ of 13 seems fine), or some other problem with the data connection?
The error messages 'NO CARRIER’ and '+PPPD: DISCONNECTED’ are from the modem itself.
There aren't any ppp error messages until 12 hours after bootup. I ran the dev and a normal module in side-by-side windows and the verbose cellular/simcom messages look similar for the first 12 hours. I can also see the dev module (sim7600) is active on https://dashboard.hologram.io

Any ideas on how to troubleshoot this? Is it a problem that the dev box is not configured to upload anything to the V2 server?

Craig

# dev right after bootup
D (287003) cellular: mux-tx #3: AT+CREG?;+CCLK?;+CSQ;+CPSI?;+COPS?
D (287013) cellular: mux-rx-line #3: +CPSI: LTE,Online,310-410,0x8B44,169982472,7,EUTRAN-BAND2,800,5,5,-125,-1205,-881,12
D (287023) cellular: mux-rx-line #3: +CREG: 1,5
D (287023) cellular: mux-rx-line #3: +CCLK: "21/11/22,23:48:12-32"
D (287023) cellular: mux-rx-line #3: +CSQ: 13,99
D (287023) cellular: mux-rx-line #3: +COPS: 0,0,"AT&T Hologram",7

# production right after bootup
D (191983) cellular: mux-tx #3: AT+CREG?;+CCLK?;+CSQ;+CPSI?;+COPS?
D (192003) cellular: mux-rx-line #3: +CPSI: WCDMA,Online,310-410,0xDE74,5704608,WCDMA 850,29,4385,0,6.0,95,36,19,500
D (192013) cellular: mux-rx-line #3: +CREG: 1,5
D (192013) cellular: mux-rx-line #3: +CCLK: "21/11/22,23:54:15-32"
D (192013) cellular: mux-rx-line #3: +CSQ: 9,99
D (192013) cellular: mux-rx-line #3: +COPS: 0,0,"AT&T Hologram",2
Craig,

From what you sent before, it just seemed that the PPP link could not be established. The CSQ figures you show below indicate the dev board has a better signal (and production is marginal):

Value  RSSI dBm  Condition
  2    -109      Marginal
  3    -107      Marginal
  4    -105      Marginal
  5    -103      Marginal
  6     -101     Marginal
  7     -99      Marginal
  8     -97      Marginal
  9     -95      Marginal
 10     -93      OK
 11     -91      OK
 12     -89      OK
 13     -87      OK
 14     -85      OK
 15     -83      Good
 16     -81      Good
 17     -79      Good
 18     -77      Good
 19     -75      Good
 20     -73      Excellent
 21     -71      Excellent
 22     -69      Excellent
 23     -67      Excellent
 24     -65      Excellent
 25     -63      Excellent
 26     -61      Excellent
 27     -59      Excellent
 28     -57      Excellent
 29     -55      Excellent
 30     -53      Excellent

I am not aware of any limitation / restriction on inactive PPP cellular data connections - I guess that would depend on the configuration of individual operators. Can you configure the dev board to send some data?

I am not really sure how to help here, seeing the little extracts of logs you send and not knowing your environment. From a troubleshooting point of view, the logs should show the cellular activity and behaviour of the modem.

Regards, Mark.
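[For reference: the table above follows the usual 3GPP TS 27.007 AT+CSQ mapping, dBm = -113 + 2*CSQ (99 meaning "not known"). A minimal sketch of that conversion; the function name csq_to_dbm is ours, not from the OVMS code:]

```c
#include <assert.h>

/* Convert an AT+CSQ value (0..31) to RSSI in dBm, per the common
 * 3GPP TS 27.007 mapping: dBm = -113 + 2 * csq.
 * 99 (or anything out of range) means signal strength unknown. */
int csq_to_dbm(int csq)
{
    if (csq < 0 || csq > 31)
        return 0; /* unknown */
    return -113 + 2 * csq;
}
```

So the dev board's CSQ 13 is -87 dBm (OK) and the production unit's CSQ 9 is -95 dBm (marginal), matching the table.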
On 24 Nov 2021, at 4:38 AM, Craig Leres <leres@xse.com> wrote:
On 11/22/21 18:43, Mark Webb-Johnson wrote:
From the logs, it seems the GSM link is established, but PPP could not be connected. Could this be a marginal coverage area (although the CSQ of 13 seems fine), or some other problem with the data connection? The error messages 'NO CARRIER' and '+PPPD: DISCONNECTED' are from the modem itself.
There aren't any ppp error messages until 12 hours after bootup. I ran the dev and a normal module in side-by-side windows and the verbose cellular/simcom messages look similar for the first 12 hours.
I can also see the dev module (sim7600) is active on https://dashboard.hologram.io
Any ideas on how to troubleshoot this?
Is it a problem that the dev box is not configured to upload anything to the V2 server?
Craig
# dev right after bootup
D (287003) cellular: mux-tx #3: AT+CREG?;+CCLK?;+CSQ;+CPSI?;+COPS?
D (287013) cellular: mux-rx-line #3: +CPSI: LTE,Online,310-410,0x8B44,169982472,7,EUTRAN-BAND2,800,5,5,-125,-1205,-881,12
D (287023) cellular: mux-rx-line #3: +CREG: 1,5
D (287023) cellular: mux-rx-line #3: +CCLK: "21/11/22,23:48:12-32"
D (287023) cellular: mux-rx-line #3: +CSQ: 13,99
D (287023) cellular: mux-rx-line #3: +COPS: 0,0,"AT&T Hologram",7
# production right after bootup
D (191983) cellular: mux-tx #3: AT+CREG?;+CCLK?;+CSQ;+CPSI?;+COPS?
D (192003) cellular: mux-rx-line #3: +CPSI: WCDMA,Online,310-410,0xDE74,5704608,WCDMA 850,29,4385,0,6.0,95,36,19,500
D (192013) cellular: mux-rx-line #3: +CREG: 1,5
D (192013) cellular: mux-rx-line #3: +CCLK: "21/11/22,23:54:15-32"
D (192013) cellular: mux-rx-line #3: +CSQ: 9,99
D (192013) cellular: mux-rx-line #3: +COPS: 0,0,"AT&T Hologram",2
On 11/23/21 18:37, Mark Webb-Johnson wrote:
From what you sent before, it just seemed that the PPP link could not be established. The CSQ figures you show below indicate the dev board has a better signal (and production is marginal):
Indeed, the dev board is in an upstairs bedroom and it's easy to imagine it has a better view of the local cell towers.
I am not aware of any limitation / restriction on inactive PPP cellular data connections - I guess that would depend on the configuration of individual operators. Can you configure the dev board to send some data?
Would that just be a matter of configuring it to use a V2 (or V3) server? Does it matter that the module is not connected to a vehicle? Or that it's constantly on wifi (literally 2m away from the AP)? I'm not really clear on how the simcom is used when the module is connected to a vehicle and sits in my garage, connected to wifi, for a week.

I've been wondering if the disconnect is just some kind of carrier idle timeout? And after looking at the esp-idf simcom sample code, is the problem that our code is not handling the PPP disconnect event well? Since rebooting the module "fixes" the issue for another 12 hours, it seems it should be possible to get back online without rebooting.

When PPPERR_USER occurs, all we do is set m_connected to false and SignalEvent that the modem is down. It sort of looks like the example code does more?

esp-idf/examples/protocols/pppos_client/components/modem/src/esp_modem.c

    case PPPERR_USER: /* User interrupt */
        esp_event_post_to(esp_dte->event_loop_hdl, ESP_MODEM_EVENT,
                          MODEM_EVENT_PPP_STOP, NULL, 0, 0);
        /* Free the PPP control block */
        pppapi_free(esp_dte->ppp);
        break;

Where pppapi_free() eventually does a tcpip_api_call() which does something to the "TCPIP thread".
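[To make the idea concrete: the difference Craig is pointing at is between merely flagging the loss and actively tearing down / restarting the PPP session. A self-contained sketch of the latter, with mocked types and names (modem_t, ppp_status_cb, the boolean flags) that stand in for the real OVMS and lwIP calls, not the actual API:]

```c
#include <stdbool.h>
#include <assert.h>

/* Mocked subset of lwIP's PPP error codes */
enum { PPPERR_NONE = 0, PPPERR_USER = 10 };

typedef struct {
    bool connected;         /* like OVMS's m_connected */
    bool ppp_freed;         /* stands in for pppapi_free(esp_dte->ppp) */
    bool restart_requested; /* stands in for re-entering NetStart to redial */
} modem_t;

/* Status callback: on a user interrupt, free the PPP control block and
 * ask the state machine to reconnect, instead of only clearing a flag. */
void ppp_status_cb(modem_t *m, int err)
{
    if (err == PPPERR_USER) {
        m->connected = false;
        m->ppp_freed = true;         /* pppapi_free() in the real code */
        m->restart_requested = true; /* e.g. NetLoss => NetWait => NetStart */
    }
}
```

Whether freeing the control block (and thus the tcpip_api_call() into the TCPIP thread) is what actually lets the esp-idf sample recover without a reboot is exactly the open question here.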
I am not really sure how to help here, seeing the little extracts of logs you send and not knowing your environment. From a troubleshooting point of view, the logs should show the cellular activity and behaviour of the modem.
The log is pretty repetitive until the failure. I put a copy here if you'd like to look at the whole thing:

https://xse.com/leres/scratch/dev1.log.gz

Note that it's full of escape sequences because I generated it with xterm and my usual voodoo (ul -Tdumb) doesn't work. Also, I can't easily make an sdcard log because the sdcard interface on this unit hasn't worked since I swapped the cp2102 chip; I've been too lazy to debug this, plus the CP2102N-A02-GQFN28 chip has been pretty much unobtainium for the last N months.

Craig
On 11/24/21 11:06, Craig Leres wrote:
Would that just be a matter of configuring it to use a V2 (or V3) server?
I just went to api.openvehicles.com to add a slot for my dev box, and when I clicked on "register an open vehicle" the form was preceded by errors.

Anyway, I have my dev box configured to use api.openvehicles.com now (it's called xse-dev). After rebooting it currently says "OVMS V2 login successful, and crypto channel established".

Craig
Craig, Thanks for letting me know. I have fixed the register form. I think Drupal changed their validation of this in a recent update. Regards, Mark.
On 25 Nov 2021, at 6:02 AM, Craig Leres <leres@xse.com> wrote:
On 11/24/21 11:06, Craig Leres wrote:
Would that just be a matter of configuring it to use a V2 (or V3) server?
I just went to api.openvehicles.com to add a slot for my dev box and when I clicked on the "register an open vehicle" the form was preceded by errors.
Anyway, I have my dev box configured to use api.openvehicles.com now (it's called xse-dev). After rebooting it currently says "OVMS V2 login successful, and crypto channel established"
Craig
The change is not too extensive, so I have made it (as hopefully the last in the v3.2 branch). On api.openvehicles.com, I have simply symlinked v3.3 to v3.1 for the moment (we can revisit this once we have reviewed the changes necessary to sdkconfig for ESP32 rev3 support). Remember that the v3.x part of the OTA paths reflects the hardware version, not necessarily the firmware version.

I have copied this to 'eap' on api.openvehicles.com.

I have made the branch 'v3.2', as a historical backup.

I have merged back for-v3.3 into master, and removed the for-v3.3 branch (so 'edge' should now build as this version).

I have tagged this new master as 3.3-001, and built edge on api.openvehicles.com.

I appreciate that this is a big change, so have freed up my evenings this week to try to get the remaining issues with v3.3 resolved, ready for release of this firmware to the factory around early December. I hope that this is not too impacting for those on the 'edge' release, and any bugs / changes can be quickly addressed.

Regards, Mark.
On 20 Nov 2021, at 11:24 PM, Michael Balzer <dexter@expeedo.de> wrote:
Signed PGP part Mark,
why not introduce this for v3.2 as well?
If a v3.3 user accidentally installs a v3.2 build, that would not heal automatically but stick until manually fixed.
I think it makes sense to have all builds automatically try to update to the best build available for the hardware in use.
Am I missing something?
Regards, Michael
Am 19.11.21 um 02:44 schrieb Mark Webb-Johnson:
Regarding the 3.2.018, I will handle it this afternoon (my time).
Regarding OTA and support for rev3 ESP32, at the moment, the OTA URL is composed of:
<server> (default api.openvehicles.com/firmware/ota)
<base> (either /v3.0/ or /v3.1/, depending on CONFIG_OVMS_HW_BASE_3_0 or CONFIG_OVMS_HW_BASE_3_1)
<tag> (default 'main')
/ovms3.ver
My suggestion is to add the hardware platform (as detected by the code that provides metric m.hardware) in there. Perhaps replace <base> with something that detects /v3.0/, /v3.1/, or /v3.3/ based on a combination of CONFIG_OVMS_HW_BASE and m.hardware ESP32 revision? We should probably just implement that in the v3.3 code, so v3.2 and before keep using CONFIG_OVMS_HW_BASE.
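[A sketch of what that composition might look like. Only the defaults quoted above (server, /v3.0/ and /v3.1/ bases, 'main' tag, /ovms3.ver) come from this thread; the function names and the rev3 selection logic are illustrative assumptions, not the actual OVMS OTA code:]

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Pick the OTA <base> path from the build-time hardware base and the
 * run-time ESP32 chip revision (hypothetical selection logic). */
static const char *ota_base(int hw_base /* 30 or 31 */, int chip_rev)
{
    if (hw_base == 30) return "/v3.0/";
    if (chip_rev >= 3) return "/v3.3/"; /* rev3 silicon gets the new build */
    return "/v3.1/";
}

/* Compose <server><base><tag>/ovms3.ver */
static void ota_url(char *buf, size_t len, const char *server,
                    int hw_base, int chip_rev, const char *tag)
{
    snprintf(buf, len, "%s%s%s/ovms3.ver",
             server, ota_base(hw_base, chip_rev), tag);
}
```

With this shape, a v3.3 module that accidentally installs a pre-v3.3 build would still compute the /v3.3/ URL on its next OTA check and heal itself, which is the behaviour Michael asks about below.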
Regards, Mark.
On 19 Nov 2021, at 5:24 AM, Michael Balzer <dexter@expeedo.de> wrote:
Signed PGP part Sounds OK for me.
Regarding using the platform revision instead of CONFIG_OVMS_HW_BASE_3_x config: so in case someone accidentally installs the wrong firmware, the system would heal itself by a simple OTA update? Sounds good and should be a simple change.
Regards, Michael
Am 18.11.21 um 08:06 schrieb Mark Webb-Johnson:
We need to get the v3.3 release ready for release now, as the new v3.3 hardware is about to enter production and the factory needs the firmware.
Accordingly, I suggest:
I have already updated the for-v3.3 branch with all the recent changes from v3.2.
We now (today/tomorrow) release the current v3.2 branch as tag 3.2.018, OTA release. This would be the (hopefully) last v3.2 release.
After release of that, we branch off master to a v3.2 branch, for historical purposes (and if we need it for anything).
We then merge back for-v3.3 into the master branch and start working through the usual daily builds for that to get it ready for production. I think there is still some work to do on porting the plugins, but the core code (and in particular new modem support) should be ok now.
I think we will need to investigate an approach for dual builds for v3.3-or-later and pre-v3.3 hardware, to take advantage of the new ESP32 rev3 chips. I think those will require a different sdkconfig. We can do that automatically in the OTA system, by building the URL based on the ESP32 platform revision. Should we do that with the v3.3 release (even if the builds are for the moment the same), or at a later date?
Regards, Mark.
-- Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal Fon 02333 / 833 5735 * Handy 0176 / 206 989 26
_______________________________________________ OvmsDev mailing list OvmsDev@lists.openvehicles.com <mailto:OvmsDev@lists.openvehicles.com> http://lists.openvehicles.com/mailman/listinfo/ovmsdev <http://lists.openvehicles.com/mailman/listinfo/ovmsdev>
I have released 3.2.018 to main OTA now, and announced it. @Michael please follow with your EU server. Thanks to everyone that contributed to this.

The certification for our upcoming 3.3 hardware has been fully completed, and results published. You can see the FCC stuff by:

https://www.fcc.gov/oet/ea/fccid

Grantee Code: 2ATDM
Product Code: OVMS-33-7

Hardware is in production and should be generally available later this month. We have been affected by some component chip shortages, but I am told those have been resolved. We need to release the 3.3 firmware to the factory later this week (or first thing next).

Regards, Mark

On 18 Nov 2021, at 3:06 PM, Mark Webb-Johnson <mark@webb-johnson.net> wrote:
We need to get the v3.3 release ready for release now, as the new v3.3 hardware is about to enter production and the factory needs the firmware.
Accordingly, I suggest:
I have already updated the for-v3.3 branch with all the recent changes from v3.2.
We now (today/tomorrow) release the current v3.2 branch as tag 3.2.018, OTA release. This would be the (hopefully) last v3.2 release.
After release of that, we branch off master to a v3.2 branch, for historical purposes (and if we need it for anything).
We then merge back for-v3.3 into the master branch and start working through the usual daily builds for that to get it ready for production. I think there is still some work to do on porting the plugins, but the core code (and in particular new modem support) should be ok now.
I think we will need to investigate an approach for dual builds for v3.3-or-later and pre-v3.3 hardware, to take advantage of the new ESP32 rev3 chips. I think those will require a different sdkconfig. We can do that automatically in the OTA system, by building the URL based on the ESP32 platform revision. Should we do that with the v3.3 release (even if the builds are for the moment the same), or at a later date?
Regards, Mark.
EU server now also has 3.2.018 released in "main".

https://dexters-web.de/firmware-release-3.2.018-text_101.en.htm

Regards, Michael

Am 07.12.21 um 01:36 schrieb Mark Webb-Johnson:
I have released 3.2.018 to main OTA now, and announced it. @Michael please follow with your EU server. Thanks to everyone that contributed to this.
The certification for our upcoming 3.3 hardware has been fully completed, and results published. You can see the FCC stuff by:
https://www.fcc.gov/oet/ea/fccid
Grantee Code: 2ATDM Product Code: OVMS-33-7
Hardware is in production and should be generally available later this month. We have been affected by some component chip shortages, but I am told those have been resolved. We need to release the 3.3 firmware to the factory later this week (or first thing next).
Regards, Mark
On 18 Nov 2021, at 3:06 PM, Mark Webb-Johnson <mark@webb-johnson.net> wrote:
We need to get the v3.3 release ready for release now, as the new v3.3 hardware is about to enter production and the factory needs the firmware.
Accordingly, I suggest:
1. I have already updated the for-v3.3 branch with all the recent changes from v3.2.
2. We now (today/tomorrow) release the current v3.2 branch as tag 3.2.018, OTA release. This would be the (hopefully) last v3.2 release.
3. After release of that, we branch off master to a v3.2 branch, for historical purposes (and if we need it for anything).
4. We then merge back for-v3.3 into the master branch and start working through the usual daily builds for that to get it ready for production. I think there is still some work to do on porting the plugins, but the core code (and in particular new modem support) should be ok now.
I think we will need to investigate an approach for dual builds for v3.3-or-later and pre-v3.3 hardware, to take advantage of the new ESP32 rev3 chips. I think those will require a different sdkconfig. We can do that automatically in the OTA system, by building the URL based on the ESP32 platform revision. Should we do that with the v3.3 release (even if the builds are for the moment the same), or at a later date?
Regards, Mark.
On 12/6/21 16:36, Mark Webb-Johnson wrote:
Hardware is in production and should be generally available later this month.
I happened to notice that fasttech.com now lists the new versions:

OVMS v3.3 Kit w/SIM7600G 4G Modem Module (Global Edition/No SIM Card Included)
OVMS v3.3 Optional SIM7600G 4G Modem Module (Global Edition/No SIM Card Included)

(Obviously both are out of stock)

Craig
Yeah. Specs, details, and preliminary pictures are there. They had it up for a few hours before I noticed and asked them to put 'out of stock' on it until we have a 100% confirmed availability date. In that time they got a dozen or so orders; some people must be hitting refresh a lot :-)

So for the moment it is 'out of stock'. I still expect initial stocks to arrive just after xmas.

Regards, Mark

P.S. I would much rather people buy locally from Medlock (USA) or Open Energy Monitor (EU), and get local support. Both have already put in their orders, and we will be delivering to them as quickly as we can.

On 16 Dec 2021, at 9:25 AM, Craig Leres <leres@xse.com> wrote:
On 12/6/21 16:36, Mark Webb-Johnson wrote:
Hardware is in production and should be generally available later this month.
I happened to notice that fasttech.com now lists the new versions:
OVMS v3.3 Kit w/SIM7600G 4G Modem Module (Global Edition/No SIM Card Included) OVMS v3.3 Optional SIM7600G 4G Modem Module (Global Edition/No SIM Card Included)
(Obviously both are out of stock)
Craig