[Ovmsdev] LWIP lock-up
Mark Webb-Johnson
mark at webb-johnson.net
Tue Apr 3 11:56:28 HKT 2018
@Tom,
Mine seems rock-solid now. It has been running over the long 4-day weekend, with no lock-ups or reboots. It seems to switch between wifi and modem very well.
I will now enable the webserver on my module, to see how that behaves.
Regards, Mark.
> On 30 Mar 2018, at 11:38 AM, Mark Webb-Johnson <mark at webb-johnson.net> wrote:
>
> Tom,
>
> For logging, I do this:
>
> OVMS# vfs cat /store/events/sd.mounted/logging
> log file
> log file /sd/ovms.log
> log level verbose
> log level info ssh
>
> Seems to work well. It won’t catch system level output (like crashes), but at least shows what our apps are up to.
>
> We don’t know the order in which things happened (no logs), but Steven’s ‘module tasks stacks’ command is working well and gives lots of clues as to the cause. Many thanks.
>
> Looking at Blk tiT (reverse order):
>
> 0x401cbdaa: tcpip_thread esp-idf/components/lwip/api/tcpip.c:474
> 0x401d71a8: sys_timeouts_mbox_fetch esp-idf/components/lwip/core/timers.c:575
> 0x401e3ea2: fsm_timeout esp-idf/components/lwip/netif/ppp/fsm.c:279
> 0x401e2920: lcp_finished esp-idf/components/lwip/netif/ppp/lcp.c:2365
> 0x401e7654: link_terminated esp-idf/components/lwip/netif/ppp/auth.c:643
> 0x402011b6: ppp_link_terminated esp-idf/components/lwip/netif/ppp/ppp.c:704
> 0x4008b981: xQueueGenericReceive esp-idf/components/freertos/./queue.c:2037
> 0x401e53e1: sys_arch_mbox_fetch esp-idf/components/lwip/port/freertos/sys_arch.c:548
> 0x401e0672: pppos_disconnect esp-idf/components/lwip/netif/ppp/pppos.c:410
> 0x4020118e: ppp_link_end esp-idf/components/lwip/netif/ppp/ppp.c:704
> 0x4012b7ec: GsmPPPOS_StatusCallback OVMS.V3/components/simcom/src/gsmpppos.cpp:149
> 0x401cb450: pppapi_connect esp-idf/components/lwip/api/pppapi.c:266
> 0x401cbf90: tcpip_api_call esp-idf/components/lwip/api/tcpip.c:474
> 0x400eec76: OvmsCommandApp::Log OVMS.V3/main/./ovms_command.cpp:94
> 0x4008b981: xQueueGenericReceive esp-idf/components/freertos/./queue.c:2037
> 0x401e52d0: sys_arch_sem_wait esp-idf/components/lwip/port/freertos/sys_arch.c:548
>
> It seems that the ppp link went down, and that triggered GsmPPPOS_StatusCallback. That in turn called pppapi_connect to reconnect, and that went into tcpip_api_call.
>
> I am concerned that we are doing a tcpip_api_call from within the tiT thread itself. The LWIP PPP library has two versions of each call: one (the plain ppp_* form) seems to be intended to be called from within the tiT thread, while the other (pppapi_*) dispatches the call (via a FreeRTOS queue) to be executed from within the tiT thread. But that logic comes straight from the LWIP PPP examples, so I’m really not sure what is going on.
>
> Looking at GsmPPPOS_StatusCallback, there are a bunch of error codes that can come into it, and they all result in a call to pppapi_connect except for PPPERR_NONE and PPPERR_USER. The actual call we make is pppapi_connect(pcb, 30), which should delay the reconnection by 30 seconds.
>
> Looking at Tmr Svc:
>
> 0x4008d90b: prvTimerTask esp-idf/components/freertos/./timers.c:484
> 0x4008d7fc: prvProcessTimerOrBlockTask esp-idf/components/freertos/./timers.c:484
> 0x40081f7e: esp_crosscore_int_send_yield esp-idf/components/esp32/./crosscore_int.c:112
> 0x4008d818: prvProcessTimerOrBlockTask esp-idf/components/freertos/./timers.c:484
> 0x40081f10: esp_crosscore_int_send esp-idf/components/esp32/./crosscore_int.c:103
> 0x4008d7c9: prvProcessExpiredTimer esp-idf/components/freertos/./timers.c:484
> 0x400eb07b: HousekeepingTicker1(void*) OVMS.V3/main/./ovms_housekeeping.cpp:70
> 0x400eb07b: HousekeepingTicker1(void*) OVMS.V3/main/./ovms_housekeeping.cpp:70
> 0x400eaeb6: std::__cxx11::basic_string xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:135
> 0x400e8fe8: std::__cxx11::basic_string xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:135
> 0x400e8d4c: std::function xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:2271
> 0x400d4d68: void std::__cxx11::basic_string xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.tcc:236
> 0x400e8fdc: OvmsEvents::SignalEvent OVMS.V3/main/./ovms_events.cpp:155
> 0x401183a2: void std::_Mem_fn_base xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:600
> 0x400e208a: std::function xtensa-esp32-elf/include/c++/5.2.0/functional:2271
> 0x4014a76c: opendir esp-idf/components/vfs/./vfs.c:540
> 0x4012a037: std::__cxx11::basic_string xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:135
> 0x40129f8c: simcom::Ticker OVMS.V3/components/simcom/src/simcom.cpp:257
> 0x40129900: simcom::Task() OVMS.V3/components/simcom/src/simcom.cpp:88
> 0x401299f3: simcom::State1Ticker1() OVMS.V3/components/simcom/src/simcom.cpp:531
> 0x4012b2d4: GsmPPPOS::Shutdown(bool) OVMS.V3/components/simcom/src/gsmpppos.cpp:219
> 0x401cb468: pppapi_close esp-idf/components/lwip/api/pppapi.c:319
> 0x40082d58: _free_r esp-idf/components/newlib/./syscalls.c:42
> 0x401cbf90: tcpip_api_call esp-idf/components/lwip/api/tcpip.c:474
> 0x4008b981: xQueueGenericReceive esp-idf/components/freertos/./queue.c:2037
> 0x401e52d0: sys_arch_sem_wait esp-idf/components/lwip/port/freertos/sys_arch.c:548
>
> So here we have a ticker.* signal being issued by housekeeping (in Tmr Svc context, because this code pre-dates yesterday’s switch to running these in housekeeping task context). Looking at simcom, that must be ticker.1, because that is the only ticker.* event simcom hooks into. That must be a normal state ticker (State1Ticker1), and it ends up in pppapi_close. No idea what state we were in at the time.
>
> The pppapi_close is only called in one place from the simcom ppp code (GsmPPPOS::Shutdown). That can in turn only be called from the main simcom code as a result of the last-good-frame timeout.
>
> The last few frames of this stack show the tcpip_api_call being raised, and that calling sys_arch_sem_wait to wait for the response from the api call (in the tiT task context). I think the 0x4008b981 xQueueGenericReceive is garbage and shouldn’t be in the backtrace.
>
> The way lwip works is that there is a tiT thread that runs the main IP stack. Functions there are not intended to be called directly (from other threads), as they may not be thread-safe. So instead, tcpip_api_call is used to create a structure packing the call details, and push that structure onto a queue. The tiT thread then reads that queue, executes the function in its own context, fills in the result, and then signals completion. The original caller waits (sys_arch_sem_wait) for the signal from tiT that the call is complete, and then regains control (in its own stack) to handle the result.
>
> I don’t see any real problems in Tmr Svc. There is a hell of a lot going on in a timer call, but we’ve already addressed the core issue there (that Michael identified yesterday), and with current code this part will now run in housekeeping context. Anyway, I don’t think this is the issue.
>
> More concerning is the behaviour in the tiT task. I’m guessing the sys_arch_mbox_fetch followed by pppos_disconnect is our call from Tmr Svc for pppapi_close. So we’ve got an incoming queue message there, and we are handling it (with the sender task, Tmr Svc, waiting on sys_arch_sem_wait for the signal that the function has completed executing in the tiT task). LWIP then calls the status callback (to tell us the link is down), and our code (running in tiT task context) calls pppapi_connect to reconnect. That schedules a pppapi_do_ppp_connect to be run in the tiT task and waits for the semaphore to be set to indicate it is done. But we are already in the tiT task, so we never look at the incoming function call queue, and we block forever.
>
> Bottom line: I think the fix is to directly call the internal ppp_connect function (as the callback is in tiT task context), rather than scheduling an inter-task call (pppapi_connect):
>
> --- a/vehicle/OVMS.V3/components/simcom/src/gsmpppos.cpp
> +++ b/vehicle/OVMS.V3/components/simcom/src/gsmpppos.cpp
> @@ -146,7 +146,8 @@ static void GsmPPPOS_StatusCallback(ppp_pcb *pcb, int err_code, void *ctx)
> // Try to reconnect in 30 seconds. This is assuming the SIMCOM modem level
> // data channel is still open.
> ESP_LOGI(TAG, "Attempting PPP reconnecting in 30 seconds...");
> - pppapi_connect(pcb, 30);
> + ppp_connect(pcb, 30);
> }
>
> That was a lot of work, and a lot of hassle, for three lousy characters. The LWIP example for PPPoS uses ppp_connect (not pppapi_connect).
>
> This is committed, and builds as 3.1.001-17-g38b3e7f for me. You can get it as:
>
> OVMS# ota flash http api.openvehicles.com/firmware/ota/tmp/ovms3.bin
>
> @Tom: can you try it?
>
> Regards, Mark.
>
>> On 29 Mar 2018, at 9:27 AM, Mark Webb-Johnson <mark at webb-johnson.net> wrote:
>>
>> Tom,
>>
>> This seems clear, and about what I expected. Looks like the timer service is in ‘pppapi_close’, and the tcp/ip task is in ‘pppapi_connect’. One is trying to connect, the other to disconnect, and they are deadlocked.
>>
>> I think the ‘0x400eec76: OvmsCommandApp::Log’ in Blk tiT is a false report - can’t see how that is called from there.
>>
>> I’ve got to put some time into ‘day job’ now, but will have a look at it in detail later.
>>
>> Regards, Mark
>>
>>> On 29 Mar 2018, at 9:04 AM, Tom Parker <tom at carrott.org> wrote:
>>>
>>> On 29/03/18 00:02, Mark Webb-Johnson wrote:
>>>
>>>> I unplugged my antenna, then put it in a steel box. It still got signal :(
>>>
>>> lol
>>>
>>> I've got it hung up at the moment. Unfortunately it was during the one drive where I didn't use the datalogger so I don't have the logs leading up to the hang.
>>>
>>> Time is still advancing, but monotonic is not. I don't have it connected to the idf monitor so addr2line is a bit tricky.
>>>
>>> OVMS> module tasks
>>> Number of Tasks = 13 Stack: Now Max Total Heap 32-bit SPIRAM
>>> 3FFAFB10 1 Blk esp_timer 396 444 4096 55100 644 0
>>> 3FFBD584 2 Blk eventTask 448 448 4608 0 0 0
>>> 3FFC5C8C 3 Blk CanRxTask 424 824 4096 0 0 0
>>> 3FFCCE0C 4 Blk ipc0 392 504 1024 10848 0 0
>>> 3FFCD40C 5 Blk ipc1 392 504 1024 12 0 0
>>> 3FFCF234 8 Rdy IDLE 368 496 1024 0 0 0
>>> 3FFCF7C8 9 Rdy IDLE 356 692 1024 0 0 0
>>> 3FFD115C 10 Blk Tmr Svc 1512 3928 6144 744 0 0
>>> 3FFD3D6C 14 Blk Housekeeping 356 3444 6144 55792 0 0
>>> 3FFCECA0 16 Blk tiT 908 3852 4608 23968 0 0
>>> 3FFDD374 17 Blk SIMCOMTask 464 2528 4096 4404 0 0
>>> 3FFDF9B4 18 Rdy AsyncConsole 764 3068 5120 516 27488 0
>>> 3FFE3E28 19 Blk Vrx Task 456 3016 4096 0 0 0
>>>
>>> OVMS> module tasks stacks
>>> Number of Tasks = 13 Stack: Now Max Total Heap 32-bit SPIRAM
>>> 3FFAFB10 1 Blk esp_timer 396 444 4096 55100 644 0
>>> 0x400dc0e7 0x4008b981
>>> 3FFBD584 2 Blk eventTask 448 448 4608 0 0 0
>>> 0x4019bfa8 0x4008b981
>>> 3FFC5C8C 3 Blk CanRxTask 424 824 4096 0 0 0
>>> 0x400d3f41 0x4008b981
>>> 3FFCCE0C 4 Blk ipc0 392 504 1024 10848 0 0
>>> 0x4008152b 0x4008b981 0x40081363
>>> 3FFCD40C 5 Blk ipc1 392 504 1024 12 0 0
>>> 0x4008152b 0x4008b981 0x400813dc
>>> 3FFCF234 8 Rdy IDLE 368 496 1024 0 0 0
>>> 0x4008ca40 0x4008bcf3 0x4008bcf3
>>> 3FFCF7C8 9 Rdy IDLE 356 692 1024 0 0 0
>>> 0x4008ca40 0x40082197 0x4008c5bc 0x40082197
>>> 3FFD115C 10 Blk Tmr Svc 1512 3928 6144 744 0 0
>>> 0x401e52d0 0x4008b981 0x401cbf90 0x40082d58 0x401cb468 0x4012b2d4 0x401299f3 0x40129900 0x40129f8c 0x4012a037 0x4014a76c 0x400e208a 0x401183a2 0x400e8fdc 0x400d4d68 0x400e8d4c 0x400e8fe8 0x400eaeb6 0x400eb07b 0x400eb07b 0x4008d7c9 0x40081f10 0x4008d818 0x40081f7e 0x4008d7fc 0x4008d90b
>>> 3FFD3D6C 14 Blk Housekeeping 356 3444 6144 55792 0 0
>>> 0x400eae77 0x4008c972 0x400e2170
>>> 3FFCECA0 16 Blk tiT 908 3852 4608 23968 0 0
>>> 0x401e52d0 0x4008b981 0x400eec76 0x401cbf90 0x401cb450 0x4012b7ec 0x4020118e 0x401e0672 0x401e53e1 0x4008b981 0x402011b6 0x401e7654 0x401e2920 0x401e3ea2 0x401d71a8 0x401cbdaa
>>> 3FFDD374 17 Blk SIMCOMTask 464 2528 4096 4404 0 0
>>> 0x40129909 0x4008b981 0x401299ac
>>> 3FFDF9B4 18 Rdy AsyncConsole 1148 3068 5120 516 27488 0
>>> 0x400e3f09 0x400e7438 0x400ee4b1 0x400ee5c8 0x400ee5ba 0x400ee5ba 0x400ee5f0 0x400e3a3b 0x400f130c 0x400f1373 0x400e3a6a 0x400e8240 0x4017cfb1 0x401fb752 0x400e4010 0x400e8274 0x400e8460 0x400e3ebc 0x400e3ecb 0x400e6c3c
>>> 3FFE3E28 19 Blk Vrx Task 456 3016 4096 0 0 0
>>> 0x4012e589 0x4008b981 0x401fc148
>>>
>>> Blk Tmr Svc and Blk tiT look suspect.
>>>
>>> Blk Tmr Svc
>>> ~/esp/xtensa-esp32-elf/bin/xtensa-esp32-elf-addr2line -pfiaC -e build/ovms3.elf 0x401e52d0 0x4008b981 0x401cbf90 0x40082d58 0x401cb468 0x4012b2d4 0x401299f3 0x40129900 0x40129f8c 0x4012a037 0x4014a76c 0x400e208a 0x401183a2 0x400e8fdc 0x400d4d68 0x400e8d4c 0x400e8fe8 0x400eaeb6 0x400eb07b 0x400eb07b 0x4008d7c9 0x40081f10 0x4008d818 0x40081f7e 0x4008d7fc 0x4008d90b
>>> 0x401e52d0: sys_arch_sem_wait at /home/ubuntu/esp/esp-idf/components/lwip/port/freertos/sys_arch.c:548
>>> 0x4008b981: xQueueGenericReceive at /home/ubuntu/esp/esp-idf/components/freertos/./queue.c:2037
>>> 0x401cbf90: tcpip_api_call at /home/ubuntu/esp/esp-idf/components/lwip/api/tcpip.c:474
>>> 0x40082d58: _free_r at /home/ubuntu/esp/esp-idf/components/newlib/./syscalls.c:42
>>> 0x401cb468: pppapi_close at /home/ubuntu/esp/esp-idf/components/lwip/api/pppapi.c:319
>>> 0x4012b2d4: GsmPPPOS::Shutdown(bool) at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/components/simcom/src/gsmpppos.cpp:219
>>> 0x401299f3: simcom::State1Ticker1() at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/components/simcom/src/simcom.cpp:531
>>> 0x40129900: simcom::Task() at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/components/simcom/src/simcom.cpp:88
>>> 0x40129f8c: simcom::Ticker(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*) at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/components/simcom/src/simcom.cpp:257
>>> 0x4012a037: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_data() const at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:135
>>> (inlined by) std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_is_local() const at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:170
>>> (inlined by) std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_dispose() at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:179
>>> (inlined by) std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string() at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:544
>>> (inlined by) void std::_Mem_fn_base<void (simcom::*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*), true>::operator()<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*, void>(simcom*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, void*&&) const at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:600
>>> (inlined by) void std::_Bind<std::_Mem_fn<void (simcom::*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*)> (simcom*, std::_Placeholder<1>, std::_Placeholder<2>)>::__call<void, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, void*&&, 0u, 1u, 2u>(std::tuple<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, void*&&>&&, std::_Index_tuple<0u, 1u, 2u>) at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:1074
>>> (inlined by) void std::_Bind<std::_Mem_fn<void (simcom::*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*)> (simcom*, std::_Placeholder<1>, std::_Placeholder<2>)>::operator()<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*, void>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, void*&&) at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:1133
>>> (inlined by) std::_Function_handler<void (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*), std::_Bind<std::_Mem_fn<void (simcom::*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*)> (simcom*, std::_Placeholder<1>, std::_Placeholder<2>)> >::_M_invoke(std::_Any_data const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, void*&&) at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:1871
>>> 0x4014a76c: opendir at /home/ubuntu/esp/esp-idf/components/vfs/./vfs.c:540
>>> 0x400e208a: std::function<void (OvmsMetric*)>::operator()(OvmsMetric*) const at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:2271
>>> 0x401183a2: void std::_Mem_fn_base<void (OvmsServerV2::*)(OvmsMetric*), true>::operator()<OvmsMetric*, void>(OvmsServerV2*, OvmsMetric*&&) const at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:600
>>> (inlined by) void std::_Bind<std::_Mem_fn<void (OvmsServerV2::*)(OvmsMetric*)> (OvmsServerV2*, std::_Placeholder<1>)>::__call<void, OvmsMetric*&&, 0u, 1u>(std::tuple<OvmsMetric*&&>&&, std::_Index_tuple<0u, 1u>) at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:1074
>>> (inlined by) void std::_Bind<std::_Mem_fn<void (OvmsServerV2::*)(OvmsMetric*)> (OvmsServerV2*, std::_Placeholder<1>)>::operator()<OvmsMetric*, void>(OvmsMetric*&&) at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:1133
>>> (inlined by) std::_Function_handler<void (OvmsMetric*), std::_Bind<std::_Mem_fn<void (OvmsServerV2::*)(OvmsMetric*)> (OvmsServerV2*, std::_Placeholder<1>)> >::_M_invoke(std::_Any_data const&, OvmsMetric*&&) at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:1871
>>> 0x400e8fdc: OvmsEvents::SignalEvent(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*) at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/main/./ovms_events.cpp:155
>>> 0x400d4d68: void std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_construct<char*>(char*, char*, std::forward_iterator_tag) at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.tcc:236
>>> 0x400e8d4c: std::function<void (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*)>::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*) const at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/functional:2271
>>> 0x400e8fe8: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_data() const at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:135
>>> (inlined by) std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_is_local() const at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:170
>>> (inlined by) std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_dispose() at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:179
>>> (inlined by) std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string() at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:544
>>> (inlined by) OvmsEvents::SignalEvent(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, void*) at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/main/./ovms_events.cpp:155
>>> 0x400eaeb6: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_data() const at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:135
>>> (inlined by) std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_is_local() const at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:170
>>> (inlined by) std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_dispose() at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:179
>>> (inlined by) std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string() at /home/ubuntu/esp/xtensa-esp32-elf/xtensa-esp32-elf/include/c++/5.2.0/bits/basic_string.h:544
>>> (inlined by) Housekeeping::Ticker1() at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/main/./ovms_housekeeping.cpp:223
>>> 0x400eb07b: HousekeepingTicker1(void*) at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/main/./ovms_housekeeping.cpp:70
>>> 0x400eb07b: HousekeepingTicker1(void*) at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/main/./ovms_housekeeping.cpp:70
>>> 0x4008d7c9: prvProcessExpiredTimer at /home/ubuntu/esp/esp-idf/components/freertos/./timers.c:484
>>> 0x40081f10: esp_crosscore_int_send at /home/ubuntu/esp/esp-idf/components/esp32/./crosscore_int.c:103
>>> 0x4008d818: prvProcessTimerOrBlockTask at /home/ubuntu/esp/esp-idf/components/freertos/./timers.c:484
>>> 0x40081f7e: esp_crosscore_int_send_yield at /home/ubuntu/esp/esp-idf/components/esp32/./crosscore_int.c:112
>>> 0x4008d7fc: prvProcessTimerOrBlockTask at /home/ubuntu/esp/esp-idf/components/freertos/./timers.c:484
>>> 0x4008d90b: prvTimerTask at /home/ubuntu/esp/esp-idf/components/freertos/./timers.c:484
>>>
>>> Blk tiT
>>> $ ~/esp/xtensa-esp32-elf/bin/xtensa-esp32-elf-addr2line -pfiaC -e build/ovms3.elf 0x401e52d0 0x4008b981 0x400eec76 0x401cbf90 0x401cb450 0x4012b7ec 0x4020118e 0x401e0672 0x401e53e1 0x4008b981 0x402011b6 0x401e7654 0x401e2920 0x401e3ea2 0x401d71a8 0x401cbdaa
>>> 0x401e52d0: sys_arch_sem_wait at /home/ubuntu/esp/esp-idf/components/lwip/port/freertos/sys_arch.c:548
>>> 0x4008b981: xQueueGenericReceive at /home/ubuntu/esp/esp-idf/components/freertos/./queue.c:2037
>>> 0x400eec76: OvmsCommandApp::Log(char const*, __va_list_tag) at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/main/./ovms_command.cpp:94
>>> 0x401cbf90: tcpip_api_call at /home/ubuntu/esp/esp-idf/components/lwip/api/tcpip.c:474
>>> 0x401cb450: pppapi_connect at /home/ubuntu/esp/esp-idf/components/lwip/api/pppapi.c:266
>>> 0x4012b7ec: GsmPPPOS_StatusCallback(ppp_pcb_s*, int, void*) at /vagrant/Open-Vehicle-Monitoring-System-3/vehicle/OVMS.V3/components/simcom/src/gsmpppos.cpp:149
>>> 0x4020118e: ppp_link_end at /home/ubuntu/esp/esp-idf/components/lwip/netif/ppp/ppp.c:704
>>> 0x401e0672: pppos_disconnect at /home/ubuntu/esp/esp-idf/components/lwip/netif/ppp/pppos.c:410
>>> 0x401e53e1: sys_arch_mbox_fetch at /home/ubuntu/esp/esp-idf/components/lwip/port/freertos/sys_arch.c:548
>>> 0x4008b981: xQueueGenericReceive at /home/ubuntu/esp/esp-idf/components/freertos/./queue.c:2037
>>> 0x402011b6: ppp_link_terminated at /home/ubuntu/esp/esp-idf/components/lwip/netif/ppp/ppp.c:704
>>> 0x401e7654: link_terminated at /home/ubuntu/esp/esp-idf/components/lwip/netif/ppp/auth.c:643
>>> 0x401e2920: lcp_finished at /home/ubuntu/esp/esp-idf/components/lwip/netif/ppp/lcp.c:2365
>>> 0x401e3ea2: fsm_timeout at /home/ubuntu/esp/esp-idf/components/lwip/netif/ppp/fsm.c:279
>>> 0x401d71a8: sys_timeouts_mbox_fetch at /home/ubuntu/esp/esp-idf/components/lwip/core/timers.c:575
>>> 0x401cbdaa: tcpip_thread at /home/ubuntu/esp/esp-idf/components/lwip/api/tcpip.c:474
>>>
>>> _______________________________________________
>>> OvmsDev mailing list
>>> OvmsDev at lists.teslaclub.hk
>>> http://lists.teslaclub.hk/mailman/listinfo/ovmsdev
>>
>