[Ovmsdev] Time for release 3.2.016?

Michael Balzer dexter at expeedo.de
Fri Feb 12 17:12:42 HKT 2021


Steve,

as written before, the higher base memory footprint is probably OK, but 
it needs to be tested on complex vehicles. So I suggest we do that 
before finalizing 3.2.016.

Mongoose SSL connects are also slow, so speeding them up would have a 
broader effect. But mongoose won't benefit from wolfSSL in internal RAM, 
as it uses the mbedTLS supplied by the esp-idf.

What do you think, is there a chance we could reduce overall memory 
footprint _and_ gain speed for all SSL connects if we change mongoose to 
use wolfSSL?
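
For scale: the client side that mongoose's SSL layer would have to
drive is roughly this much of the plain wolfSSL API. A minimal sketch,
untested; sockfd and ca_path are placeholders, and older wolfSSL
releases return SSL_SUCCESS instead of WOLFSSL_SUCCESS:

#include <wolfssl/ssl.h>

/* Minimal TLS client handshake over an already-connected TCP socket --
   the kind of calls a wolfSSL-backed mongoose interface would wrap. */
int tls_connect(int sockfd, const char *ca_path)
{
    wolfSSL_Init();
    WOLFSSL_CTX *ctx = wolfSSL_CTX_new(wolfTLSv1_2_client_method());
    if (ctx == NULL)
        return -1;
    if (wolfSSL_CTX_load_verify_locations(ctx, ca_path, NULL) != WOLFSSL_SUCCESS)
        return -1;
    WOLFSSL *ssl = wolfSSL_new(ctx);
    if (ssl == NULL)
        return -1;
    wolfSSL_set_fd(ssl, sockfd);
    return (wolfSSL_connect(ssl) == WOLFSSL_SUCCESS) ? 0 : -1;
}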

This comparison indicates wolfSSL is normally much more RAM-efficient 
than mbedTLS:
https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/protocols/esp_tls.html#comparison-between-mbedtls-and-wolfssl

Maybe discarding mbedTLS frees enough memory to put wolfSSL into 
internal RAM?
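
If wolfSSL stays on the heap, its allocations could at least be pinned
to internal RAM through its runtime allocator hooks plus the esp-idf
capability-based heap. A rough sketch, untested; it assumes a build
using wolfSSL's default memory hooks and without WOLFSSL_DEBUG_MEMORY,
where the callbacks have these signatures:

#include <stdlib.h>
#include <esp_heap_caps.h>
#include <wolfssl/wolfcrypt/memory.h>

/* Route wolfSSL allocations to internal RAM, falling back to the
   default heap (which may be PSRAM) when internal RAM runs out. */
static void *wolf_malloc_cb(size_t size)
{
    void *p = heap_caps_malloc(size, MALLOC_CAP_INTERNAL | MALLOC_CAP_8BIT);
    return p ? p : malloc(size);
}

static void wolf_free_cb(void *ptr)
{
    free(ptr);
}

static void *wolf_realloc_cb(void *ptr, size_t size)
{
    return heap_caps_realloc(ptr, size, MALLOC_CAP_INTERNAL | MALLOC_CAP_8BIT);
}

/* Call once at startup, before any wolfSSL use: */
void wolf_mem_init(void)
{
    wolfSSL_SetAllocators(wolf_malloc_cb, wolf_free_cb, wolf_realloc_cb);
}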

Regards,
Michael


On 12.02.21 at 08:15, Stephen Casner wrote:
> Earlier I said, after observing that the SSH connection time was about
> 4 seconds on my bench v3.0 hardware vs. 10 seconds reported by Craig:
>
>> So I think this means it's not a problem of crypto computation time,
>> so whether or not hardware acceleration is working is not the issue.
>> There must be some interaction among our network activities that is
>> causing some action to wait for a timeout or a subsequent event.
> Nope!  The difference is slow PSRAM vs. fast internal RAM.  The v3.0
> hardware has no PSRAM so I had to pare down the configuration so I
> could run SSH using internal RAM.
>
> In addition, to avoid needing to increase the NetMan stack size, I
> changed the wolfSSL configuration to reduce stack usage by allocating
> buffers with malloc instead of on the stack.  There are three
> expensive functions that do many calculations and many mallocs:
>
> wc_ecc_make_key:  2.3 seconds, 13409 mallocs
> wc_ecc_shared_secret:  2.3 seconds, 13405 mallocs
> RsaPublicEncryptEx:  1.9 seconds, 4980 mallocs
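
Timings like those can be reproduced by wrapping the wolfCrypt calls
with esp-idf's microsecond timer. A minimal sketch, not necessarily the
instrumentation used for the figures above:

#include <stdio.h>
#include <esp_timer.h>
#include <wolfssl/wolfcrypt/ecc.h>
#include <wolfssl/wolfcrypt/random.h>

/* Time one ECC key generation (32-byte key size = 256-bit curve). */
static void time_ecc_make_key(void)
{
    WC_RNG rng;
    ecc_key key;
    wc_InitRng(&rng);
    wc_ecc_init(&key);

    int64_t t0 = esp_timer_get_time();          /* microseconds */
    int ret = wc_ecc_make_key(&rng, 32, &key);
    int64_t t1 = esp_timer_get_time();

    printf("wc_ecc_make_key: ret=%d, %.1f ms\n", ret, (t1 - t0) / 1000.0);

    wc_ecc_free(&key);
    wc_FreeRng(&rng);
}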
>
> The result is that on a v3.1 module where malloc comes from PSRAM the
> connect time is 8-10 seconds.  On the v3.0 module, still doing malloc
> but from internal RAM, the connect time is 3.5-5 seconds.  If I
> increase the NetMan stack by 2kB and switch back to stack buffers
> (which are in internal RAM with no malloc overhead) then the connect
> time is 2.9-3.5 seconds.
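
Whether a given malloc lands in internal or external RAM can be checked
with an esp-idf memory-layout helper. A small sketch; the header that
provides esp_ptr_internal() varies between IDF versions:

#include <stdio.h>
#include <stdlib.h>
#include "soc/soc_memory_layout.h"   /* esp_ptr_internal() */

/* Report where the default allocator placed a test block. */
static void report_heap_placement(void)
{
    void *p = malloc(1024);
    printf("default malloc -> %s\n",
           esp_ptr_internal(p) ? "internal RAM" : "external RAM (PSRAM)");
    free(p);
}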
>
> So that explains why I didn't particularly notice the longer connect
> time before Craig reported it.  All of my development testing was done
> with the larger stack and stack allocation of buffers.  I only made
> the shift to malloc buffers as a later step to avoid the stack size
> increase.  I verified that it was working, but only connected a few
> times with my attention focused on checking the stack usage.
>
> So: Can we afford to increase our base RAM usage by 2kB for the benefit
> of reasonable SSH connect times?
>
>                                                          -- Steve

-- 
Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal
Phone 02333 / 833 5735 * Mobile 0176 / 206 989 26



