[Ovmsdev] Transition to new esp-idf

Stephen Casner casner at acm.org
Mon Jan 22 14:33:37 HKT 2018


WolfSSH does not implement scp; my code does.

I have a loop that reads a buffer from the file and then sends it
through WolfSSH, which builds the outgoing packets and calls my
SendCallback() one or more times, each of which issues an mg_send()
call.  But I break out of that loop if MG_EV_SEND has not yet been
received.  With my modified mongoose, if mg_send() was able to
transmit immediately, that would cause a recursive call back to my
event handler with MG_EV_SEND, so the loop would spin along merrily.
As the code currently stands, without my mods, the loop will always
break after sending one buffer.

As I reviewed the code to answer your question, I saw that I enter
that send loop at every MG_EV_POLL.  Perhaps I should add a check
before entering the loop to confirm that the previous buffer has been
sent.  That may make scp slow, though.
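
Roughly like this, as a sketch (the names and block size here are made
up for illustration; this is not the actual console_ssh code):

#include <stdio.h>
#include <stdint.h>
#include <wolfssh/ssh.h>

// Set true by the mongoose event handler when MG_EV_SEND arrives.
static volatile bool sent = true;

static void send_file(WOLFSSH* ssh, FILE* fp)
{
  uint8_t buf[1024];
  while (sent)                    // gate: previous buffer confirmed sent
  {
    size_t n = fread(buf, 1, sizeof(buf), fp);
    if (n == 0) break;            // end of file
    sent = false;                 // MG_EV_SEND will set it true again
    // wolfSSH encrypts/packetises and calls SendCallback() -> mg_send():
    wolfSSH_stream_send(ssh, buf, (uint32_t) n);
  }
}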

                                                        -- Steve

On Mon, 22 Jan 2018, Mark Webb-Johnson wrote:

> How does wolfssh deal with the scp issue? Can it send chunk by chunk, based on MG_EV_SEND? I'm not so worried about normal command output (especially with SPI RAM), but scp terrifies me.
>
> Regards, Mark.
>
> > On 22 Jan 2018, at 1:25 PM, Stephen Casner <casner at acm.org> wrote:
> >
> > My original implementation of both telnet and ssh was with separate
> > tasks.  That was straightforward.  However, you wanted them merged
> > under mongoose to avoid the cost of the separate stack spaces.  That
> > required reworking the control flow to fit into the mongoose
> > event-driven structure.  Both telnet and wolfssh are capable of
> > working with non-blocking I/O, so this was possible though it took me
> > a while to work out what I needed to do.
> >
> > The severity of the memory limit depends upon how low we try to go
> > with additional components configured in.  Even typing the '?' command
> > does a reasonable chunk of output now, and in SSH that gets amplified
> > a little by the encryption overhead.
> >
> > The send_mbuf does not get full because mongoose just keeps doing
> > realloc.
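> >
> > Roughly, that growth path amounts to the following (a simplified
> > sketch, not the verbatim mongoose source):
> >
> > #include <stdlib.h>
> > #include <string.h>
> > #include "mongoose.h"   // struct mbuf: buf, len, size
> >
> > size_t mbuf_append_sketch(struct mbuf *m, const void *data, size_t len)
> > {
> >   if (m->len + len > m->size)
> >   {
> >     char *p = (char *) realloc(m->buf, m->len + len);  // grows without bound
> >     if (p == NULL) return 0;  // allocation failure: on ESP32, a crash risk
> >     m->buf = p;
> >     m->size = m->len + len;
> >   }
> >   memcpy(m->buf + m->len, data, len);
> >   m->len += len;
> >   return len;
> > }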
> >
> >                                                        -- Steve
> >
> > On Mon, 22 Jan 2018, Mark Webb-Johnson wrote:
> >
> >> Ok, understood. I thought it was running under the console task.
> >>
> >> I guess the issue is primarily SCP? General command output should
> >> be reasonably small, but a file could be MB in size? What happens if
> >> the send_mbuf is full?
> >>
> >> Regards, Mark.
> >>
> >>> On 22 Jan 2018, at 12:53 PM, Stephen Casner <casner at acm.org> wrote:
> >>>
> >>> Umm, Mark, we're not running a separate task.  If we were, this would
> >>> not be a problem.  With one task, sleeping would do no good because
> >>> that would be causing Mongoose to sleep.  The only way we can save the
> >>> state of the "user" code that is doing the output would be to recurse
> >>> into Mongoose so it can actually send, but that is likely to lead to a
> >>> stack overflow due to repeated recursion.  After pondering many ideas,
> >>> that's why I made the change to mg_send.
> >>>
> >>> It is still not difficult to use one of several options to make a
> >>> solution that allows SSH to send immediately while MQTT does not;
> >>> it's just that some mods to mongoose.c will be required.
> >>>
> >>>                                                       -- Steve
> >>>
> >>>
> >>> On Mon, 22 Jan 2018, Mark Webb-Johnson wrote:
> >>>
> >>>>> Perhaps add a loop to compare ctx->send_mbuf.len against ctx->send_mbuf.size to see how much space is free. If there is not enough, then sleep the task (10ms or 100ms?) and try again? Or use MG_EV_SEND to set a semaphore, and pick up on that in the SendCallback? Rely on the fact that mongoose is running in a separate task and will empty the buffer when it can.
> >>>>
> >>>> To be hopefully clear(er):
> >>>>
> >>>> For event driven systems sending a big file, the usual approach is to send a block, wait for the SENT callback, then send the next block. Repeat until there is no file left. That approach minimises buffer usage.
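> >>>>
> >>>> A minimal sketch of that pattern in a mongoose event handler (the
> >>>> file handle stashed in user_data is an assumption for illustration,
> >>>> not our actual code):
> >>>>
> >>>> #include <stdio.h>
> >>>> #include "mongoose.h"
> >>>>
> >>>> static void ev_handler(struct mg_connection *nc, int ev, void *ev_data)
> >>>> {
> >>>>   if (ev == MG_EV_SEND)                 // previous block has gone out
> >>>>   {
> >>>>     FILE *fp = (FILE *) nc->user_data;
> >>>>     char block[1024];
> >>>>     size_t n = fread(block, 1, sizeof(block), fp);
> >>>>     if (n > 0)
> >>>>       mg_send(nc, block, (int) n);      // queue the next block only now
> >>>>     else
> >>>>       nc->flags |= MG_F_SEND_AND_CLOSE; // EOF: close once drained
> >>>>   }
> >>>> }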
> >>>>
> >>>> We are trying to shoe-horn SSH into this event driven system, but the wolfssh system is expecting a normal blocking API.
> >>>>
> >>>> But, we are running in a separate task, so with a semaphore / poll we can convert the events into a blocking API.
> >>>>
> >>>> Two approaches I can see:
> >>>>
> >>>> The simple one is to just check if ctx->send_mbuf has enough space. If not, sleep for a while, and check again. Rely on the mongoose task emptying the buffer.
> >>>>
> >>>> More complex is to use the mongoose MG_EV_SEND callback (which signifies that some data has been sent), and a semaphore to signal data has been sent. The OvmsSSH::EventHandler and SendCallback could then use that to co-ordinate and avoid sleeping. This is the preferred approach.
> >>>>
> >>>> Perhaps this is general enough to be put into a library? An object that could be used to encapsulate the semaphore (initialised to indicate data has been sent), method to indicate data has been sent, and method to wait for that indication. I have a similar problem (although the reverse - receive rather than transmit) in ovms_ota now, and perhaps a generic solution could solve both our problems?
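> >>>>
> >>>> Something like this, perhaps (OvmsSendGate is just a hypothetical
> >>>> name; it wraps a FreeRTOS binary semaphore):
> >>>>
> >>>> #include "freertos/FreeRTOS.h"
> >>>> #include "freertos/semphr.h"
> >>>>
> >>>> class OvmsSendGate
> >>>> {
> >>>> public:
> >>>>   OvmsSendGate()
> >>>>   {
> >>>>     m_sem = xSemaphoreCreateBinary();
> >>>>     xSemaphoreGive(m_sem);           // initialised to "data has been sent"
> >>>>   }
> >>>>   ~OvmsSendGate() { vSemaphoreDelete(m_sem); }
> >>>>   void Sent()                        // call from the MG_EV_SEND handler
> >>>>   { xSemaphoreGive(m_sem); }
> >>>>   bool WaitForSent(TickType_t timeout = portMAX_DELAY)
> >>>>   { return xSemaphoreTake(m_sem, timeout) == pdTRUE; }
> >>>> private:
> >>>>   SemaphoreHandle_t m_sem;
> >>>> };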
> >>>>
> >>>> Regards, Mark.
> >>>>
> >>>>> On 22 Jan 2018, at 12:34 PM, Mark Webb-Johnson <mark at webb-johnson.net> wrote:
> >>>>>
> >>>>> It doesn't seem as if there is a good solution. I can see two approaches:
> >>>>>
> >>>>> Use a MG_F_USER_? flag to mean 'immediate write' and extend mg_send to honour that.
> >>>>>
> >>>>> Add a separate mg_flush() call (used after mg_send) to flush the fd.
> >>>>>
> >>>>> That static function is going to be a pain to work around. Perhaps a #include for our C code in mongoose.c?
> >>>>>
> >>>>> All of this is going to be fighting the event-driven mechanism of Mongoose. Is there another way of doing this?
> >>>>>
> >>>>> For console_ssh, I think this is where it is:
> >>>>>
> >>>>> int SendCallback(WOLFSSH* ssh, void* data, uint32_t size, void* ctx)
> >>>>> {
> >>>>>   mg_send((mg_connection*)ctx, (char*)data, size);
> >>>>>   return size;
> >>>>> }
> >>>>>
> >>>>> Perhaps add a loop to compare ctx->send_mbuf.len against ctx->send_mbuf.size to see how much space is free. If there is not enough, then sleep the task (10ms or 100ms?) and try again? Or use MG_EV_SEND to set a semaphore, and pick up on that in the SendCallback? Rely on the fact that mongoose is running in a separate task and will empty the buffer when it can.
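> >>>>>
> >>>>> A minimal sketch of the sleep/poll idea, assuming mongoose runs in
> >>>>> its own task (MAX_BUFFERED is a threshold we would choose, not an
> >>>>> existing constant):
> >>>>>
> >>>>> #include "freertos/FreeRTOS.h"
> >>>>> #include "freertos/task.h"
> >>>>> #include "mongoose.h"
> >>>>>
> >>>>> #define MAX_BUFFERED 4096   // hypothetical cap on send_mbuf occupancy
> >>>>>
> >>>>> int SendCallback(WOLFSSH* ssh, void* data, uint32_t size, void* ctx)
> >>>>> {
> >>>>>   mg_connection* nc = (mg_connection*)ctx;
> >>>>>   // Block this task until the mongoose task has drained enough:
> >>>>>   while (nc->send_mbuf.len + size > MAX_BUFFERED)
> >>>>>     vTaskDelay(pdMS_TO_TICKS(10));
> >>>>>   mg_send(nc, (char*)data, size);
> >>>>>   return size;
> >>>>> }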
> >>>>>
> >>>>> Regards, Mark.
> >>>>>
> >>>>>> On 22 Jan 2018, at 3:17 AM, Stephen Casner <casner at acm.org> wrote:
> >>>>>>
> >>>>>> Mark,
> >>>>>>
> >>>>>> Well, in turn, I'm sorry for making an API change that was driving you
> >>>>>> crazy.  It would have been smarter to add it as a new function even
> >>>>>> though that would be duplicating more code.
> >>>>>>
> >>>>>> As the code currently stands, telnet and SSH will work so long as no
> >>>>>> operation does more contiguous output than available free memory can
> >>>>>> hold; otherwise an out-of-memory crash will occur.  I don't know if
> >>>>>> we consider that an acceptable risk.  Maybe with v3.1 hardware it
> >>>>>> will be.
> >>>>>>
> >>>>>> Your suggestion to put a new function into a separate module is a fine
> >>>>>> idea, but that function needs to access some functions in mongoose.c
> >>>>>> that are scoped static.  That means we can't entirely avoid modifying
> >>>>>> mongoose.
> >>>>>>
> >>>>>>                                                      -- Steve
> >>>>>>
> >>>>>> On Fri, 19 Jan 2018, Mark Webb-Johnson wrote:
> >>>>>>
> >>>>>>> Oops. Sorry. That change broke MQTT. I couldn't understand what was going on (as mg_send was sending immediately). MQTT works like this:
> >>>>>>>
> >>>>>>> void mg_mqtt_publish(struct mg_connection *nc, const char *topic,
> >>>>>>>                      uint16_t message_id, int flags, const void *data,
> >>>>>>>                      size_t len) {
> >>>>>>>   size_t old_len = nc->send_mbuf.len;
> >>>>>>>
> >>>>>>>   uint16_t topic_len = htons((uint16_t) strlen(topic));
> >>>>>>>   uint16_t message_id_net = htons(message_id);
> >>>>>>>
> >>>>>>>   mg_send(nc, &topic_len, 2);
> >>>>>>>   mg_send(nc, topic, strlen(topic));
> >>>>>>>   if (MG_MQTT_GET_QOS(flags) > 0) {
> >>>>>>>     mg_send(nc, &message_id_net, 2);
> >>>>>>>   }
> >>>>>>>   mg_send(nc, data, len);
> >>>>>>>
> >>>>>>>   mg_mqtt_prepend_header(nc, MG_MQTT_CMD_PUBLISH, flags,
> >>>>>>>                          nc->send_mbuf.len - old_len);
> >>>>>>> }
> >>>>>>>
> >>>>>>> It uses mg_send a bunch of times, then goes back and modifies the send_mbuf by inserting a header, then finishes so that the actual transmission can occur. Seems a really dumb way to do it, but such is life.
> >>>>>>>
> >>>>>>> It was driving me crazy last night, so in the end I just updated mongoose this morning and hey! everything worked. Now I know why :-(
> >>>>>>>
> >>>>>>> I see that mg_send_dns_query() does the same (it calls mg_dns_insert_header, which then calls mbuf_insert). Making mg_send transmit immediately would break that as well.
> >>>>>>>
> >>>>>>> How about introducing a new mg_send_now() that calls mg_send() and then sends the data immediately? Perhaps it could live in a separate .h/.c file, mongoose_extensions, to avoid the change getting overwritten?
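> >>>>>>>
> >>>>>>> As a rough sketch of the shape of it (this naive version bypasses
> >>>>>>> SSL and the MG_EV_SEND accounting, which is exactly why the static
> >>>>>>> internals would be needed for a proper one):
> >>>>>>>
> >>>>>>> #include "mongoose.h"
> >>>>>>>
> >>>>>>> void mg_send_now(struct mg_connection *nc, const void *buf, int len)
> >>>>>>> {
> >>>>>>>   mg_send(nc, buf, len);   // buffer into send_mbuf as usual
> >>>>>>>   // Attempt an immediate non-blocking write of whatever is buffered:
> >>>>>>>   int n = send(nc->sock, nc->send_mbuf.buf, nc->send_mbuf.len, 0);
> >>>>>>>   if (n > 0)
> >>>>>>>     mbuf_remove(&nc->send_mbuf, (size_t) n);  // drop what went out
> >>>>>>> }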
> >>>>>>>
> >>>>>>> Regards, Mark.
> >>>>>>>
> >>>>>>>> On 19 Jan 2018, at 2:36 PM, Stephen Casner <casner at acm.org> wrote:
> >>>>>>>>
> >>>>>>>> Mark,
> >>>>>>>>
> >>>>>>>> The update of Mongoose to v6.10 removed the change I had made so that
> >>>>>>>> the mg_send() call would transmit on the network immediately if the
> >>>>>>>> socket was ready.  I needed to make that change because we would
> >>>>>>>> otherwise run out of RAM with SSH because mg_send() would just buffer
> >>>>>>>> everything until the next poll.
> >>>>>>>>
> >>>>>>>>                                                     -- Steve
> >>>>>>>>
> >>>>>>>> On Fri, 19 Jan 2018, Mark Webb-Johnson wrote:
> >>>>>>>>
> >>>>>>>>> I re-worked the ovms_server_* framework, and the v2 implementation, to use MONGOOSE.
> >>>>>>>>>
> >>>>>>>>> It seems to be _basically_ working. It can connect/disconnect/etc. Some slight memory saving, but standardising the networking throughout on mongoose should simplify things.
> >>>>>>>>>
> >>>>>>>>> I am seeing problems with transmitting the FEATURES and PARAMETERS sometimes - particularly in low memory situations. I'm still trying to find out why.
> >>>>>>>>>
> >>>>>>>>> Regards, Mark.
> >>>>>>>>>
> >>>>>>>>>> On 17 Jan 2018, at 8:33 AM, Mark Webb-Johnson <mark at webb-johnson.net> wrote:
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> This is the issue Michael pointed out: the 'server response is incomplete' problem with select(). Apologies for this; I am not sure why I didn't notice it before.
> >>>>>>>>>>
> >>>>>>>>>> Gory details are here:
> >>>>>>>>>>
> >>>>>>>>>> https://github.com/espressif/esp-idf/issues/1510
> >>>>>>>>>>
> >>>>>>>>>> I think Espressif implemented this requirement in a bizarre way, one likely to break compatibility, but such is life. They did point it out as a 'breaking change' (at the bottom of the release notes for 3.0b1):
> >>>>>>>>>>
> >>>>>>>>>> https://github.com/espressif/esp-idf/releases/tag/v3.0-rc1
> >>>>>>>>>>
> >>>>>>>>>> LWIP socket file descriptors now take higher numeric values (via the LWIP LWIP_SOCKET_OFFSET macro). BSD sockets code should mostly work as expected (and, new in V3.0, some standard POSIX functions can now be used with sockets). However any code which assumes a socket file descriptor is always a low numbered integer may need modifying to account for LWIP_SOCKET_OFFSET.
> >>>>>>>>>>
> >>>>>>>>>> It sure broke us.
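> >>>>>>>>>>
> >>>>>>>>>> For illustration, the kind of code this change breaks (a
> >>>>>>>>>> hypothetical example, not our actual code):
> >>>>>>>>>>
> >>>>>>>>>> #include "lwip/sockets.h"
> >>>>>>>>>>
> >>>>>>>>>> // Indexed by socket fd, assuming fds are small and start near 0:
> >>>>>>>>>> static const char* conn_state[16];
> >>>>>>>>>>
> >>>>>>>>>> void remember_connection(void)
> >>>>>>>>>> {
> >>>>>>>>>>   int sock = socket(AF_INET, SOCK_STREAM, 0);
> >>>>>>>>>>   // Pre-v3.0 this fd might be 3; with LWIP_SOCKET_OFFSET it can
> >>>>>>>>>>   // be 54 or more, so this writes far past the end of the array:
> >>>>>>>>>>   conn_state[sock] = "connected";
> >>>>>>>>>> }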
> >>>>>>>>>>
> >>>>>>>>>> I've made a one-line workaround fix (to ovms_buffer.cpp), and ovms server v2 connections are working again for me. That is committed and pushed already.
> >>>>>>>>>>
> >>>>>>>>>> It is kind of messy to have all these different networking implementations in our code base; I intend to move ovms_server_* to mongoose networking over the next few days. That will mean we won't need a separate task/stack for server connections, and should save us 7KB internal RAM for each connection. Also ovms_ota. But that will have to wait, as I need to get the hardware complete first (some issues with 1.8v vs 3.3v logic on VDD_SDIO of the wrover module and some of our GPIOs), and that is top priority.
> >>>>>>>>>>
> >>>>>>>>>> Regards, Mark.
> >>>>>>>>>>
> >>>>>>>>>>> On 17 Jan 2018, at 7:05 AM, Greg D. <gregd2350 at gmail.com> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> But, I'm not getting much love out of the v2 server.  The connection doesn't appear to be working - "server response is incomplete".  Same error whether on wifi or modem.
> >>>>>>>>>>
> >>>>>>>>>