[Ovmsdev] v3 hardware disconnecting from v2 server

Mark Webb-Johnson mark at webb-johnson.net
Mon Jan 29 10:43:07 HKT 2018

I am just suggesting we clean up the clean-shutdown case. If the network is being shut down (either on command, or via script), we can at least do it cleanly. We do get such an indication (a shutting-down event, followed by the wifi shutdown, followed by a shut-down event). In general we should shut down our connection on the indication-to-shutdown event, rather than on the already-shut-down event, as the former at least gives us some chance to inform the remote end of the issue. Perhaps the ‘wifi mode off’ command should sleep for a second or two after issuing the indication-to-shutdown event, to increase the chance of a clean shutdown?

For the other case of connections being externally dropped, I’m trying to see how the ESP IDF v3 framework handles it. Espressif have changed things slightly: there is now a ‘lost ip’ timer that signals the IP address being lost after a timeout, and I think we can pick up on that event to shut things down. The event is SYSTEM_EVENT_STA_LOST_IP; it wasn’t available in ESP IDF v2.1, and is available in ESP IDF v3, but we don’t use it yet. I am not sure how it differs from the SYSTEM_EVENT_STA_DISCONNECTED system event (which is the one we handle at the moment).

In general, I don’t think our handling of these events is optimal at the moment. We have a variety of low-level events from things like the wifi driver, and then high-level network manager events. But things like console_ssh should probably only bring up the server in the case of a wifi connection (not simcom modem ppp). I see mdns uses the low-level wifi events, but console_ssh uses the high-level network manager events (because it integrates with mongoose, so can’t bring up the server until mongoose is ready to be initialised).

In the normal socket comms case, a write() on a disconnected socket will fail and indicate the error appropriately. But with the comms going through mongoose, I am not sure exactly how things are handled. When we mg_send() data, it is merely added to an output buffer; no socket write occurs at that time, so we won’t get back an error indication. In fact, the prototype for mg_send is a void return:
void mg_send(struct mg_connection *, const void *buf, int len);
I assume that if mongoose calls write() on the socket, and an error indication comes back, then it closes the connection. In mg_write_to_socket(), I see that kind of logic (but haven’t traced it myself):
  if (n > 0) {
    mg_if_sent_cb(nc, n);
  } else if (n < 0 && mg_is_error(n)) {
    /* Something went wrong, drop the connection. */
    nc->flags |= MG_F_CLOSE_IMMEDIATELY;
  }
That should deliver an MG_EV_CLOSE to each of the active connections.

I’ll have to look at it in more detail. Just trying to get the final testing of v3.1 hardware done first.

Regards, Mark.

> On 29 Jan 2018, at 9:07 AM, Tom Parker <tom at carrott.org> wrote:
> On 29/01/18 13:29, Mark Webb-Johnson wrote:
>> Perhaps we should issue network.mgr.stop earlier? However, even if we close the connections from our end, I’m not sure if the packet would make it through to the other end before the wifi disconnects.
> I don't think we need to address the situation where the server does not get informed of the disconnection -- the disconnection could happen before we know it has happened preventing us from signaling the server. This could happen simply by driving away from the wifi network.
> What we need to address is informing the client inside the vehicle module that it is no longer connected. In the case where we know the network has gone down, we can have the OVMS process listen for the event and take appropriate actions. In the case where the network is up but the socket is broken somewhere in the network and we get a RST response or we never get an ACK, we should signal the client. It's been a long time since I dealt with raw sockets and tried to handle all the edge cases, so I'll have to defer to others with more knowledge of the APIs to suggest how it should work. The v2 client is good in that it transmits periodically (which should eliminate the need for TCP-level keepalive packets on all but the most aggressive networks), so the socket library has an opportunity to detect the broken socket and tell the client code that the socket isn't working.
> I wrote an OVMS v2 python client which suffers from the same problem: every now and again it needs to be restarted because it thinks it is connected but it isn't. Unlike the vehicle module, this client never writes to the socket after login, but I'm a little surprised the OS doesn't eventually say "this socket is broken" or something. I really should work out how to fix it (maybe by restarting if there has been no data for some number of minutes).
> _______________________________________________
> OvmsDev mailing list
> OvmsDev at lists.teslaclub.hk
> http://lists.teslaclub.hk/mailman/listinfo/ovmsdev

