[Ovmsdev] A promiscuous wifi client

Mark Webb-Johnson mark at webb-johnson.net
Mon Nov 20 13:11:01 HKT 2017


The general approach of async net libraries like mongoose seems to be that they don’t allow blocking, which is why the send doesn’t go out immediately. That said, I’ve seen implementations that at least _try_ to send immediately (and only queue the data if that fails); however, that complicates the transmitter, which has to deal with both possible cases.

The more general approach seems to be that if we want to send something big, we send a chunk, then wait for the MG_EV_SEND event to indicate that chunk has been sent, then queue the next chunk to go. That seems to work well in Mongoose. My issue is that the higher-level protocols in Mongoose (for example, the HTTP client and server) don’t support that well. Kind of disappointing. But the basic low-level networking API seems ok.

I can see how something like ‘vfs cat my-big-file’ would cause us concern.

I tried the telnet server component with wifi up, and an ESP_LOGI added to log the events. Dropping the event #0 (poll) notifications, this is what I saw:

I (22689) telnet: Launching Telnet Server

(Launch a normal telnet connection)
I (47049) telnet: Event 1 conn 0x3fff1004, data 0x3fff101c (MG_EV_ACCEPT)
I (47049) telnet: Event 3 conn 0x3fff1004, data 0x3ffeffb0 (MG_EV_RECV)
I (47099) telnet: Event 4 conn 0x3fff1004, data 0x3ffeffc0 (MG_EV_SEND)
I (47109) telnet: Event 3 conn 0x3fff1004, data 0x3ffeffb0 (MG_EV_RECV)
I (51749) telnet: Event 3 conn 0x3fff1004, data 0x3ffeffb0 (MG_EV_RECV)
I (51759) telnet: Event 4 conn 0x3fff1004, data 0x3ffeffc0 (MG_EV_SEND)
I (51889) telnet: Event 3 conn 0x3fff1004, data 0x3ffeffb0 (MG_EV_RECV)
I (51899) telnet: Event 4 conn 0x3fff1004, data 0x3ffeffc0 (MG_EV_SEND)
I (51939) telnet: Event 3 conn 0x3fff1004, data 0x3ffeffb0 (MG_EV_RECV)
I (51949) telnet: Event 4 conn 0x3fff1004, data 0x3ffeffc0 (MG_EV_SEND)
I (52049) telnet: Event 3 conn 0x3fff1004, data 0x3ffeffb0 (MG_EV_RECV)
I (52059) telnet: Event 4 conn 0x3fff1004, data 0x3ffeffc0 (MG_EV_SEND)
I (52209) telnet: Event 3 conn 0x3fff1004, data 0x3ffeffb0 (MG_EV_RECV)
I (52209) telnet: Event 4 conn 0x3fff1004, data 0x3ffeffc0 (MG_EV_SEND)
(Disconnect the client side)
I (54309) telnet: Event 5 conn 0x3fff1004, data 0x0        (MG_EV_CLOSE)

(Launch a normal telnet connection)
I (58319) telnet: Event 1 conn 0x3fff0f08, data 0x3fff0f20 (MG_EV_ACCEPT)
I (58319) telnet: Event 3 conn 0x3fff0f08, data 0x3ffeffb0 (MG_EV_RECV)
I (58369) telnet: Event 4 conn 0x3fff0f08, data 0x3ffeffc0 (MG_EV_SEND)
I (58379) telnet: Event 3 conn 0x3fff0f08, data 0x3ffeffb0 (MG_EV_RECV)

OVMS > wifi mode off
Stopping wifi station...
I (62569) events: Signal(system.wifi.down)
I (62569) events: Signal(network.wifi.down)
I (62569) ssh: Stopping SSH Server
I (62579) events: Signal(network.down)
I (62579) wifi: state: run -> init (0)
I (62579) wifi: pm stop, total sleep time: 0/33441266

I (62579) wifi: n:11 0, o:11 0, ap:255 255, sta:11 0, prof:1
I (62589) events: Signal(system.wifi.sta.disconnected)

I (62589) telnet: Event 5 conn 0x3fff0f08, data 0x0 (MG_EV_CLOSE)

I (62589) events: Signal(network.mgr.stop)
I (62589) telnet: Stopping Telnet Server
I (62589) events: Signal(system.wifi.sta.stop)
I (62589) telnet: Event 5 conn 0x3fff0ccc, data 0x0 (MG_EV_CLOSE)

As you say, not elegant, but at least our side gets closed.

Regards, Mark.

> On 20 Nov 2017, at 8:58 AM, Stephen Casner <casner at acm.org> wrote:
> On Sun, 19 Nov 2017, Mark Webb-Johnson wrote:
>> Strange. When I tested mongoose, I got a close event (MG_EV_CLOSE) for
>> each open connection, when the interface went down.
> It does send a close event, but not until after the wifi is already
> shut down so closing the socket at that point does not send any packet
> to the client.  I've decided to punt on that issue, though, because
> manually shutting down wifi is not an important use case.  The more
> likely case is that wifi connectivity is lost due to motion or other
> causes and in that case no close packet can be delivered anyway.
> My other points were:
>>> - It looks like there may be a memory blocked leaked at each client
>>>   disconnection.  I need to find that.
> This is not a leak.  It is the policy of lwip to allocate a semaphore
> to each socket and to not reuse socket structures in its pool until
> all have been used.  So that means up to 10 semaphore blocks of 92
> bytes each (104 with debug overhead) will be allocated as new client
> connections are made.  I actually figured this out once before in
> August when I first implemented telnet.
>>> - I think I can improve how I handle the receive buffer and save
>>>   some more RAM.
>>> I've also thought more about how to convert the SSH Console and will
>>> take a stab at that as well.
> In my local repository I have done both of these improvements but not
> yet committed them.
> Furthermore, I have determined that the "surgery" in OvmsConsole
> required to accommodate ConsoleAsync as a task while ConsoleTelnet and
> ConsoleSSH run within the NetManTask was not as bad as I thought.  So,
> as you requested, in my local copy I have eliminated all the dedicated
> tasks for the Telnet and SSH implementations and have that working.
> However, there are a couple of downsides to using mongoose:
> 1) Output from other tasks through the OvmsCommandApp::Log() facility
> will be delayed on average by half the mongoose polling interval,
> which is currently one second.  In my original implementation I
> avoided polling to avoid delays, but that had other costs.
> 2) More serious: Mongoose has a policy that callback functions will not
> block, which is reasonable since it acts as a hub for multiple
> services.  However, one consequence is that the mg_send() function
> always just buffers the data you give it into heap RAM.  So, in the
> SSH console, for example, when you enter a command an MG_EV_RECV event
> will pass control into code in the console and command classes.  That
> code will generate its output as multiple puts or printf calls each of
> which results in an mg_send() call.  For a command like "?" that does
> many lines of output, mongoose will be doing a bunch of realloc()
> operations to keep appending all of the lines into one big buffer in
> RAM.  It is possible for a command to use up all the available RAM.
> Fortunately mg_send() fails gracefully when that happens, but it means
> command output is lost and in the meantime other functions that depend
> upon RAM may fail less gracefully.  It was issues like this that led
> me to be concerned about eliminating the console tasks.
> There is not any mechanism in Mongoose for the application to flush
> the send buffer.  The data will only be sent when the callback for the
> MG_EV_RECV event returns to Mongoose.  We don't have a way to return
> to mongoose part way through the command output and then get control
> back again to finish the command.  The state of being in the middle of
> the command is lost without it being a separate task to hold that
> state.  It might work to recurse into mongoose by calling
> mg_mgr_poll() again, but the code in mongoose may not be reentrant
> and this would incur significant risk of stack overflow.
> Should we consider modifying Mongoose to change its policy so that
> mg_send() actually sends its data?  This could be a flag in the
> mg_connection object, or there could just be an mg_flush() call
> added.  Digging through the code I found mg_mgr_handle_conn() that
> does the actual sending, but that function is not exposed in the
> header file and I don't know whether it would be valid to call it from
> user code while in the middle of the callback.
> You also said:
>>>>> Perhaps a generic extension to ovms_netmanager where you give it a
>>>>> tcp port and an object type. Then, it listens on that port and
>>>>> launches objects of that type (pre-configured to receive mongoose
>>>>> events as appropriate) for each new connection? Have a base virtual
>>>>> connection class, that gets inherited by each implementation (ssh,
>>>>> telnet, etc).
> Right now I just have some similar code in OvmsTelnet and OvmsSSH
> classes.  When this settles down we could consider factoring that into
> a base virtual connection class.
>                                                        -- Steve
> _______________________________________________
> OvmsDev mailing list
> OvmsDev at lists.teslaclub.hk
> http://lists.teslaclub.hk/mailman/listinfo/ovmsdev
