[Ovmsdev] RAM
Michael Balzer
dexter at expeedo.de
Mon Oct 30 05:45:23 HKT 2017
While implementing the set type metrics I mentioned before, I've checked
the metrics RAM usage: just adding four new metrics reduced my free
memory by 656 bytes. 112 bytes of that were names, 16 bytes were the
actual data content, and 16 bytes were pointers to the metrics.
That leaves 512 bytes for four metrics, so each metric currently needs
about 128 bytes of management overhead (plus RAM for the name in case
of dynamically built names, e.g. for an array of battery cells).
I've added 128 custom metrics now, which add up to ~20 KB of RAM --
more than 30% of the free RAM available before. That covers ~80% of my
planned metrics, so it should fit for the Twizy, but more complex
monitoring would need to use combined metrics, e.g. for cell data.
Maybe we can change the registry structure away from std::map. The fast
tree lookup is not really needed if listeners use pointers; a simple
std::forward_list would do if new metrics are inserted in sorted order.
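For illustration, a sorted forward_list registry could look roughly
like this (just a sketch with placeholder names and a minimal stand-in
for the metric class, not the actual OvmsMetrics API):

  #include <forward_list>
  #include <cstring>

  struct OvmsMetric              // minimal stand-in for the real class
    {
    const char* m_name;
    };

  std::forward_list<OvmsMetric*> m_metrics;

  void RegisterMetric(OvmsMetric* metric)
    {
    // keep the list sorted by name, so listings stay ordered without a tree
    auto prev = m_metrics.before_begin();
    for (auto it = m_metrics.begin(); it != m_metrics.end(); ++it, ++prev)
      {
      if (strcmp((*it)->m_name, metric->m_name) > 0) break;
      }
    m_metrics.insert_after(prev, metric);
    }

  OvmsMetric* FindMetric(const char* name)
    {
    // a linear scan is fine: lookups are rare once listeners hold pointers
    for (OvmsMetric* m : m_metrics)
      {
      if (strcmp(m->m_name, name) == 0) return m;
      }
    return NULL;
    }

Each list node then only costs one next pointer plus the stored pointer,
instead of the per-node overhead of the red-black tree behind std::map.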
Also, allocating RAM for name patterns like "...cell.<n>..." is of
course very wasteful; maybe we can introduce some array logic to
generate these names on demand.
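Something along these lines could do, as a rough sketch (the pattern
and buffer handling here are made up, not the real metric names):

  #include <cstdio>
  #include <cstddef>

  // Build per-cell metric names on demand into a caller-supplied buffer
  // instead of keeping ~100 individually allocated name strings in RAM.
  const char* CellMetricName(char* buf, size_t len, int cell)
    {
    snprintf(buf, len, "b.cell.%d.voltage", cell);  // hypothetical pattern
    return buf;
    }

That way only the pattern needs to be stored, and a full name exists
only while it is actually being used.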
Regards,
Michael
On 25.10.2017 at 07:09, Stephen Casner wrote:
> I'd like to add another $.02 regarding RAM usage. We need to consider
> the impact of all three modes of RAM usage:
>
> 1. static declaration of variables
>
> 2. local (automatic) variables that cause the maximum stack usage to
> increase so we need to dedicate a larger stack allocation
>
> 3. dynamic allocations from the heap
>
> I have more specific comments for each of these.
>
> 1. If you need a large buffer, declaring it as static storage means
> that it is always allocated even if your code is not being used
> (unless it is something like vehicle-specific code that is configured
> out). So, it would be better to dynamically allocate that buffer
> space from the heap (malloc) when needed and then free it when
> finished so that the usage is only temporary. That way the same space
> might be used for other purposes at other times.
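(For illustration, roughly like this; buffer size and function name are
made up:)

  #include <cstdlib>

  void DumpDiagnostics()
    {
    // allocate the working buffer only for the duration of this call
    char* buf = (char*) malloc(4096);
    if (buf == NULL) return;        // heap allocation can fail, check it
    // ... fill and transmit the buffer ...
    free(buf);                      // the space becomes available again
    }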
>
> 2. I recommend NOT USING std::string except where it is really needed
> and useful. In particular if you have a function parameter that is
> always supplied as a character constant but the type of the parameter
> is std::string then the compiler needs to expand the caller's stack
> space by 32 bytes, for each such instance of the call, to hold the
> std::string structure. Additional heap space is required for the
> characters. None of that would be required if the parameter type were
> const char*. The same problem applies to functions that return
> std::string, since the compiler must allocate stack space in the
> calling function for the return value to be copied. In particular if
> the caller is just going to print that string with .c_str(), it would
> be much better to put the .c_str() in the called function and return
> the const char* UNDER ONE IMPORTANT CONDITION: this depends on the
> std::string in the called function being stable, such as a member
> variable of the class. If the string in the called function is
> automatic (allocated on the stack), then the .c_str() of it won't be
> valid in the caller.
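(A small illustration of the difference, with made-up function names:)

  #include <string>
  #include <cstdio>

  // std::string parameter: each call with a string constant builds a
  // 32-byte std::string temporary on the caller's stack plus heap space
  // for the characters.
  void LogEventStr(std::string event)
    {
    printf("event: %s\n", event.c_str());
    }

  // const char* parameter: a string constant is passed as a plain
  // pointer, with no temporary and no heap allocation.
  void LogEventCst(const char* event)
    {
    printf("event: %s\n", event);
    }

  class Example
    {
    public:
      Example(const char* name) : m_name(name) {}
      // Returning .c_str() of a *member* string is safe, the buffer
      // stays valid as long as the object lives. Never do this with a
      // local std::string.
      const char* GetName() { return m_name.c_str(); }
    private:
      std::string m_name;
    };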
>
> I saved substantial stack space and also heap space by changing the
> command maps in OvmsCommand from map<std::string, OvmsCommand*> to
> map<const char*, OvmsCommand*, CompareCharPtr>. This was possible
> because all of the command token strings that are put in the map come
> from character constants anyway, and those are stable.
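(For reference, the comparator for such a map boils down to something
like this; sketched from the description, the actual definition in the
OvmsCommand code may differ slightly:)

  #include <map>
  #include <cstring>

  class OvmsCommand;   // forward declaration is enough, values are pointers

  // Orders const char* keys by string content instead of pointer value.
  struct CompareCharPtr
    {
    bool operator()(const char* a, const char* b) const
      {
      return strcmp(a, b) < 0;
      }
    };

  // Keys must be stable character constants; the map stores only pointers.
  std::map<const char*, OvmsCommand*, CompareCharPtr> m_children;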
>
> I think there are several more functions that could safely have their
> arguments or return values changed. Now, I don't mean to be pushing
> us back to essentially writing C code in C++ and ignoring the benefits
> of C++. For places where dynamic storage is needed, as for a class
> member, using std::string is a big advantage and not a problem. Just
> be cognizant of the costs where it is used.
>
> 3. As I mentioned in an earlier message, there is another 40K of RAM
> available for dynamic allocation by code that only requires 32-bit
> access, not byte-access. This is in IRAM (Instruction RAM). It won't
> be allocated by 'malloc' or 'new' but can be allocated explicitly with
> pvPortMallocCaps(size, MALLOC_CAP_32BIT). I'm currently using
> part of it for the storage of metadata about blocks allocated from the
> heap in the ovms_module debugging code to minimize the impact that
> using that code has on the memory available for the code to be tested.
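(Roughly like this; the header and exact API depend on the IDF version,
newer IDFs expose the same capability as heap_caps_malloc():)

  #include <stdint.h>
  #include <stddef.h>
  #include "esp_heap_alloc_caps.h"  // IDF header providing pvPortMallocCaps()

  void AllocTableFromIRAM()
    {
    // request a block from the 32-bit-accessible pool (spare IRAM);
    // every element must then be accessed as a full 32-bit word
    size_t count = 1024;
    uint32_t* table = (uint32_t*)
      pvPortMallocCaps(count * sizeof(uint32_t), MALLOC_CAP_32BIT);
    if (table == NULL) return;      // the pool may be exhausted, check it
    table[0] = 0x12345678;          // word access only, no byte access
    }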
>
> -- Steve
--
Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal
Phone 02333 / 833 5735 * Mobile 0176 / 206 989 26