[Ovmsdev] A promiscuous wifi client
Stephen Casner
casner at acm.org
Mon Nov 20 08:58:08 HKT 2017
On Sun, 19 Nov 2017, Mark Webb-Johnson wrote:
> Strange. When I tested mongoose, I got a close event (MG_EV_CLOSE) for
> each open connection, when the interface went down.
It does send a close event, but not until after the wifi is already
shut down, so closing the socket at that point does not send any
packet to the client. I've decided to punt on that issue, though,
because manually shutting down wifi is not an important use case. The
more likely case is that wifi connectivity is lost due to motion or
other causes, and in that case no close packet can be delivered anyway.
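To make the timing concrete, here is a minimal sketch of the kind of
event handler involved (assuming the mongoose 6.x three-argument
handler signature; EventHandler is just an illustrative name, not the
actual OVMS code):

    #include "mongoose.h"

    static void EventHandler(struct mg_connection* nc, int ev, void* ev_data)
      {
      switch (ev)
        {
        case MG_EV_RECV:
          // ev_data points to an int with the number of bytes just received;
          // the data itself sits in nc->recv_mbuf
          mbuf_remove(&nc->recv_mbuf, nc->recv_mbuf.len);
          break;
        case MG_EV_CLOSE:
          // In the manual-shutdown case this fires only after the interface
          // is gone, so nothing done here can reach the peer anyway.
          break;
        }
      }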
My other points were:
> > - It looks like there may be a memory block leaked at each client
> > disconnection. I need to find that.
This is not a leak. It is lwIP's policy to allocate a semaphore for
each socket and to cycle through all the socket structures in its pool
before reusing any. That means up to 10 semaphore blocks of 92 bytes
each (104 with debug overhead) will be allocated as new client
connections are made. I actually figured this out once before, in
August, when I first implemented telnet.
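For scale, that worst case is modest: 10 * 92 = 920 bytes (or 10 * 104
= 1040 bytes with the debug overhead), and the total stops growing once
every entry in the pool has been used.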
> > - I think I can improve how I handle the receive buffer and save
> > some more RAM.
> >
> > I've also thought more about how to convert the SSH Console and will
> > take a stab at that as well.
In my local repository I have done both of these improvements but not
yet committed them.
Furthermore, I have determined that the "surgery" in OvmsConsole
required to accommodate ConsoleAsync as a task while ConsoleTelnet and
ConsoleSSH run within the NetManTask was not as bad as I thought. So,
as you requested, in my local copy I have eliminated all the dedicated
tasks for the Telnet and SSH implementations and have that working.
However, there are a couple of downsides to using mongoose:
1) Output from other tasks through the OvmsCommandApp::Log() facility
will be delayed on average by half the mongoose polling interval,
which is currently one second (see the sketch after point 2). In my
original implementation I avoided polling precisely to avoid such
delays, but that had other costs.
2) More serious: Mongoose has a policy that callback functions must not
block, which is reasonable since it acts as a hub for multiple
services. However, one consequence is that the mg_send() function
always just buffers the data you give it into heap RAM. So, in the
SSH console, for example, when you enter a command an MG_EV_RECV event
will pass control into code in the console and command classes. That
code will generate its output as multiple puts or printf calls, each of
which results in an mg_send() call. For a command like "?" that produces
many lines of output, mongoose will be doing a bunch of realloc()
operations to keep appending all of the lines into one big buffer in
RAM. It is possible for a command to use up all the available RAM.
Fortunately mg_send() fails gracefully when that happens, but it means
command output is lost and in the meantime other functions that depend
upon RAM may fail less gracefully. It was issues like this that led
me to be concerned about eliminating the console tasks.
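On point 1, the delay follows directly from the shape of the polling
loop. A sketch of what I mean, assuming the mongoose 6.x API (the
FreeRTOS task wrapper and the mg_bind() setup are omitted, and
MongoosePollLoop is just an illustrative name):

    #include "mongoose.h"

    void MongoosePollLoop()
      {
      struct mg_mgr mgr;
      mg_mgr_init(&mgr, NULL);
      // ... mg_bind() listeners would be registered here ...
      for (;;)
        {
        // Anything queued by another task (e.g. via OvmsCommandApp::Log())
        // between iterations waits for the next pass: with a 1000 ms
        // interval that is ~500 ms of added latency on average.
        mg_mgr_poll(&mgr, 1000);
        }
      }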
There is no mechanism in Mongoose for the application to flush the
send buffer; the data is sent only when the callback for the
MG_EV_RECV event returns to Mongoose. We don't have a way to return
to mongoose part way through the command output and then get control
back again to finish the command. Without a separate task to hold it,
the state of being in the middle of a command is lost. It might work
to recurse into mongoose by calling mg_mgr_poll() again, but the code
in mongoose may not be reentrant, and this would incur a significant
risk of stack overflow.
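To illustrate the buffering behaviour, here is a hypothetical helper of
the kind the console output path boils down to (not the actual console
code; mongoose 6.x behaviour as I read it):

    #include <string.h>
    #include "mongoose.h"

    // Called once per line of command output while we are still inside the
    // MG_EV_RECV callback that delivered the command.  Each mg_send() only
    // appends to nc->send_mbuf on the heap (realloc); nothing reaches the
    // socket until that callback returns to mongoose.
    static void ConsolePuts(struct mg_connection* nc, const char* line)
      {
      mg_send(nc, line, strlen(line));
      mg_send(nc, "\r\n", 2);
      }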
Should we consider modifying Mongoose to change its policy so that
mg_send() actually sends its data? This could be a flag in the
mg_connection object, or there could just be an mg_flush() call
added. Digging through the code I found mg_mgr_handle_conn(), which
does the actual sending, but that function is not exposed in the
header file and I don't know whether it would be valid to call it from
user code while in the middle of the callback.
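To be clear about what I am suggesting, something along these lines,
where mg_flush() is purely hypothetical (it does not exist in Mongoose
today) and would presumably have to reuse the sending logic in
mg_mgr_handle_conn():

    #include "mongoose.h"

    // Hypothetical addition to Mongoose: drain nc->send_mbuf to the socket
    // immediately, even from inside an event callback.
    extern "C" void mg_flush(struct mg_connection* nc);

    static void FlushConsoleOutput(struct mg_connection* nc)
      {
      if (nc->send_mbuf.len > 0)
        mg_flush(nc);
      }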
You also said:
> >>> Perhaps a generic extension to ovms_netmanager where you give it a
> >>> tcp port and an object type. Then, it listens on that port and
> >>> launches objects of that type (pre-configured to receive mongoose
> >>> events as appropriate) for each new connection? Have a base virtual
> >>> connection class, that gets inherited by each implementation (ssh,
> >>> telnet, etc).
Right now I just have some similar code in the OvmsTelnet and OvmsSSH
classes. When this settles down we could consider factoring that into
a base virtual connection class.
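As a strawman for that factoring (names and structure here are just a
sketch of one possibility, assuming the mongoose 6.x mg_bind() and
user_data API, and assuming each connection class takes the
mg_connection in its constructor):

    #include "mongoose.h"

    // Base class for per-connection objects (ssh, telnet, etc).
    class OvmsNetConnection
      {
      public:
        virtual ~OvmsNetConnection() {}
        virtual void HandleEvent(struct mg_connection* nc, int ev, void* ev_data) = 0;
      };

    // Listens on a port and constructs one ConnType per accepted connection.
    template <class ConnType>
    class OvmsNetListener
      {
      public:
        OvmsNetListener(struct mg_mgr* mgr, const char* port)
          {
          mg_bind(mgr, port, EventHandler);
          }
      private:
        static void EventHandler(struct mg_connection* nc, int ev, void* ev_data)
          {
          if (ev == MG_EV_ACCEPT)
            nc->user_data = new ConnType(nc);
          else if (nc->user_data != NULL)
            {
            OvmsNetConnection* c = (OvmsNetConnection*)nc->user_data;
            c->HandleEvent(nc, ev, ev_data);
            if (ev == MG_EV_CLOSE) { delete c; nc->user_data = NULL; }
            }
          }
      };

    // Usage would then be e.g.:  OvmsNetListener<TelnetConnection> telnet(&mgr, "23");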
-- Steve