<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><blockquote type="cite" class="">Perhaps add a loop that compares ctx->send_mbuf.len against ctx->send_mbuf.size to see how much space is free. If not enough, then sleep the task (10ms or 100ms?) and try again? Or use MG_EV_SEND to set a semaphore, and pick up on that in the SendCallback? Rely on the fact that mongoose is running in a separate task and will empty the buffer when it can.</blockquote><div class=""><br class=""></div><div class="">To be (hopefully) clearer:</div><div class=""><br class=""></div><div class=""><ul class="MailOutline"><li class="">For event-driven systems sending a big file, the usual approach is to send a block, wait for the SENT callback, then send the next block. Repeat until no more of the file is left. That approach minimises the buffer usage.<br class=""><br class=""></li><li class="">We are trying to shoe-horn SSH into this event-driven system, but the wolfssh system is expecting a normal blocking API.<br class=""><br class=""></li><li class="">But we are running in a separate task, so with a semaphore / poll we can convert the events into a blocking API.<br class=""><br class=""></li><li class="">Two approaches I can see:<br class=""><br class=""></li><ul class=""><li class="">The simple approach is to just check if ctx->send_mbuf has enough space. If not, sleep for a while, and check again. Rely on the mongoose task emptying the buffer.<br class=""><br class=""></li><li class="">The more complex approach is to use the mongoose MG_EV_SEND callback (which signifies that some data has been sent), and a semaphore to signal that data has been sent. The OvmsSSH::EventHandler and SendCallback could then use that to co-ordinate and avoid sleeping. 
This is the preferred approach.</li></ul></ul></div><div class=""><br class=""></div><div class="">Perhaps this is general enough to be put into a library? An object that encapsulates the semaphore (initialised to indicate data has been sent), with a method to signal that data has been sent, and a method to wait for that indication. I have a similar problem (although the reverse - receive rather than transmit) in ovms_ota now, and perhaps a generic solution could solve both our problems?</div><div class=""><br class=""></div><div class="">Regards, Mark.<br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On 22 Jan 2018, at 12:34 PM, Mark Webb-Johnson <<a href="mailto:mark@webb-johnson.net" class="">mark@webb-johnson.net</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><meta http-equiv="Content-Type" content="text/html; charset=utf-8" class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">It doesn’t seem as if there is a good solution. I can see two approaches:<div class=""><br class=""></div><div class=""><ol class="MailOutline"><li class="">Use a MG_F_USER_? flag to mean ‘immediate write’ and extend mg_send to honour that.<br class=""><br class=""></li><li class="">Add a separate mg_flush() call (used after mg_send) to flush the fd.</li></ol><div class=""><br class=""></div><div class="">That static function is going to be a pain to work around. Perhaps a #include for our C code in mongoose.c?</div><div class=""><br class=""></div><div class="">All of this is going to be fighting the event-driven mechanism of Mongoose. 
Is there another way of doing this?</div><div class=""><br class=""></div><div class="">For console_ssh, I think this is where it is:</div><div class=""><br class=""></div></div><blockquote style="margin: 0 0 0 40px; border: none; padding: 0px;" class=""><div class=""><div class=""><div class=""><font face="Andale Mono" class=""><span style="font-size: 18px;" class="">int SendCallback(WOLFSSH* ssh, void* data, uint32_t size, void* ctx)</span></font></div><div class=""><font face="Andale Mono" class=""><span style="font-size: 18px;" class=""> {</span></font></div><div class=""><font face="Andale Mono" class=""><span style="font-size: 18px;" class=""> mg_send((mg_connection*)ctx, (char*)data, size);</span></font></div><div class=""><font face="Andale Mono" class=""><span style="font-size: 18px;" class=""> return size;</span></font></div><div class=""><font face="Andale Mono" class=""><span style="font-size: 18px;" class=""> }</span></font></div></div></div></blockquote><div class=""><div class=""><br class=""></div><div class="">Perhaps add a loop that compares ctx->send_mbuf.len against ctx->send_mbuf.size to see how much space is free. If not enough, then sleep the task (10ms or 100ms?) and try again? Or use MG_EV_SEND to set a semaphore, and pick up on that in the SendCallback? Rely on the fact that mongoose is running in a separate task and will empty the buffer when it can.</div><div class=""><br class=""></div><div class="">Regards, Mark.</div><div class=""><br class=""><blockquote type="cite" class=""><div class="">On 22 Jan 2018, at 3:17 AM, Stephen Casner <<a href="mailto:casner@acm.org" class="">casner@acm.org</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class="">Mark,<br class=""><br class="">Well, in turn, I'm sorry for making an API change that was driving you<br class="">crazy. 
It would have been smarter to add it as a new function even<br class="">though that would be duplicating more code.<br class=""><br class="">As the code currently stands, telnet and SSH will work so long as no<br class="">operation does more contiguous output than the amount of available<br class="">free memory can hold, otherwise an out-of-memory crash will occur. I<br class="">don't know if we consider that an acceptable risk. Maybe with v3.1<br class="">hardware it will be.<br class=""><br class="">Your suggestion to put a new function into a separate module is a fine<br class="">idea, but that function needs to access some functions in mongoose.c<br class="">that are scoped static. That means we can't entirely avoid modifying<br class="">mongoose.<br class=""><br class=""> -- Steve<br class=""><br class="">On Fri, 19 Jan 2018, Mark Webb-Johnson wrote:<br class=""><br class=""><blockquote type="cite" class="">Oops. Sorry. That change broke MQTT. I couldn’t understand what was going on (as mg_send was sending immediately). MQTT works like this:<br class=""><br class="">void mg_mqtt_publish(struct mg_connection *nc, const char *topic,<br class=""> uint16_t message_id, int flags, const void *data,<br class=""> size_t len) {<br class=""> size_t old_len = nc->send_mbuf.len;<br class=""><br class=""> uint16_t topic_len = htons((uint16_t) strlen(topic));<br class=""> uint16_t message_id_net = htons(message_id);<br class=""><br class=""> mg_send(nc, &topic_len, 2);<br class=""> mg_send(nc, topic, strlen(topic));<br class=""> if (MG_MQTT_GET_QOS(flags) > 0) {<br class=""> mg_send(nc, &message_id_net, 2);<br class=""> }<br class=""> mg_send(nc, data, len);<br class=""><br class=""> mg_mqtt_prepend_header(nc, MG_MQTT_CMD_PUBLISH, flags,<br class=""> nc->send_mbuf.len - old_len);<br class="">}<br class=""><br class="">It uses mg_send a bunch of times, then goes back and modifies the send_mbuf by inserting a header, then finishes so that the actual transmission can occur. 
Seems a really dumb way to do it, but such is life.<br class=""><br class="">It was driving me crazy last night, so in the end I just updated mongoose this morning and hey! everything worked. Now I know why :-(<br class=""><br class="">I see that mg_send_dns_query() does the same (it calls mg_dns_insert_header, which then calls mbuf_insert). Making mg_send transmit immediately would break that as well.<br class=""><br class="">How about introducing a new mg_send_now() that calls mg_send() then sends the data immediately? Perhaps it could be a separate .h/.c file mongoose_extensions to avoid the change getting overwritten?<br class=""><br class="">Regards, Mark.<br class=""><br class=""><blockquote type="cite" class="">On 19 Jan 2018, at 2:36 PM, Stephen Casner <<a href="mailto:casner@acm.org" class="">casner@acm.org</a>> wrote:<br class=""><br class="">Mark,<br class=""><br class="">The update of Mongoose to v6.10 removed the change I had made so that<br class="">the mg_send() call would transmit on the network immediately if the<br class="">socket was ready. I needed to make that change because we would<br class="">otherwise run out of RAM with SSH because mg_send() would just buffer<br class="">everything until the next poll.<br class=""><br class=""> -- Steve<br class=""><br class="">On Fri, 19 Jan 2018, Mark Webb-Johnson wrote:<br class=""><br class=""><blockquote type="cite" class="">I re-worked the ovms_server_* framework, and v2 implementation, to use MONGOOSE.<br class=""><br class="">It seems to be _basically_ working. It can connect/disconnect/etc. Some slight memory saving, but standardising the networking throughout on mongoose should simplify things.<br class=""><br class="">I am seeing problems with transmitting the FEATURES and PARAMETERS sometimes - particularly in low memory situations. 
I’m still trying to find out why.<br class=""><br class="">Regards, Mark.<br class=""><br class=""><blockquote type="cite" class="">On 17 Jan 2018, at 8:33 AM, Mark Webb-Johnson <<a href="mailto:mark@webb-johnson.net" class="">mark@webb-johnson.net</a>> wrote:<br class=""><br class=""><br class="">This is the issue Michael pointed out. The ‘server response is incomplete’ problem with select(). Apologies for this; I am not sure why I didn’t notice it before.<br class=""><br class="">Gory details are here:<br class=""><br class=""><a href="https://github.com/espressif/esp-idf/issues/1510" class="">https://github.com/espressif/esp-idf/issues/1510</a> <<a href="https://github.com/espressif/esp-idf/issues/1510" class="">https://github.com/espressif/esp-idf/issues/1510</a>><br class=""><br class="">I think Espressif implemented this requirement in a bizarre way, likely to break compatibility, but such is life. They did point it out as a ‘breaking change’ (at the bottom of the release notes for 3.0b1):<br class=""><br class=""><a href="https://github.com/espressif/esp-idf/releases/tag/v3.0-rc1" class="">https://github.com/espressif/esp-idf/releases/tag/v3.0-rc1</a> <<a href="https://github.com/espressif/esp-idf/releases/tag/v3.0-rc1" class="">https://github.com/espressif/esp-idf/releases/tag/v3.0-rc1</a>><br class=""><br class="">LWIP socket file descriptors now take higher numeric values (via the LWIP LWIP_SOCKET_OFFSET macro). BSD sockets code should mostly work as expected (and, new in V3.0, some standard POSIX functions can now be used with sockets). However any code which assumes a socket file descriptor is always a low numbered integer may need modifying to account for LWIP_SOCKET_OFFSET.<br class=""><br class="">It sure broke us.<br class=""><br class="">I’ve made a one-line workaround fix (to ovms_buffer.cpp), and ovms server v2 connections are working again for me. 
That is committed and pushed already.<br class=""><br class="">It is kind of messy to have all these different networking implementations in our code base; I intend to move ovms_server_* to mongoose networking over the next few days. That will mean we won’t need a separate task/stack for server connections, and should save us 7KB internal RAM for each connection. Also ovms_ota. But that will have to wait, as I need to get the hardware complete first (some issues with 1.8v vs 3.3v logic on VDD_SDIO of the wrover module and some of our GPIOs), and that is top priority.<br class=""><br class="">Regards, Mark.<br class=""><br class=""><blockquote type="cite" class="">On 17 Jan 2018, at 7:05 AM, Greg D. <<a href="mailto:gregd2350@gmail.com" class="">gregd2350@gmail.com</a> <<a href="mailto:gregd2350@gmail.com" class="">mailto:gregd2350@gmail.com</a>>> wrote:<br class=""><br class="">But, I'm not getting much love out of the v2 server. The connection doesn't appear to be working - "server response is incomplete". 
Same error whether on wifi or modem.<br class=""></blockquote><br class=""></blockquote><br class=""></blockquote>_______________________________________________<br class="">OvmsDev mailing list<br class=""><a href="mailto:OvmsDev@lists.teslaclub.hk" class="">OvmsDev@lists.teslaclub.hk</a><br class=""><a href="http://lists.teslaclub.hk/mailman/listinfo/ovmsdev" class="">http://lists.teslaclub.hk/mailman/listinfo/ovmsdev</a><br class=""></blockquote><br class=""></blockquote></div></div></blockquote></div><br class=""></div></div></div></blockquote></div><br class=""></div></body></html>