[Ovmsdev] RAM

Mark Webb-Johnson mark at webb-johnson.net
Mon Oct 30 10:38:08 HKT 2017


Firstly, remember that with memory debugging turned on (menuconfig, Components, OVMS, Developer, Enabled extended RAM memory allocation statistics), there is extra overhead for each memory allocation. I tested a simple allocation of a 32 byte object, and that reduced free memory by 56 bytes, so the overhead seems to be 24 bytes per allocation. I am not sure what it is without the debugging turned on (it is much harder to see the allocations then), but it should be a lot less.
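
For reference, this is roughly how such a single-allocation test can be reproduced (a sketch only; it assumes the standard ESP-IDF esp_get_free_heap_size() rather than the ‘module memory’ instrumentation shown below):

    #include <stdint.h>
    #include "esp_system.h"
    #include "esp_log.h"

    // Illustrative sketch only: measure what one small allocation really
    // costs by comparing the free heap before and after a 'new'. With the
    // extended RAM statistics enabled, the difference includes the extra
    // per-allocation bookkeeping (~24 bytes on top of the 32 byte payload).
    static void test_alloc_overhead()
      {
      uint32_t before = esp_get_free_heap_size();
      char* p = new char[32];              // a simple 32 byte allocation
      uint32_t after = esp_get_free_heap_size();
      ESP_LOGI("ramtest", "32 byte allocation cost %u bytes",
               (unsigned)(before - after));
      delete[] p;
      }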

I added some instrumentation, and got this:

I (2778) metrics: Initialising METRICS (1810)
I (2817) metrics:   OvmsMetric is 28 bytes
I (2861) metrics:   OvmsMetricBool is 32 bytes
I (2909) metrics:   OvmsMetricInt is 32 bytes
I (2955) metrics:   OvmsMetricFloat is 32 bytes
I (3003) metrics:   OvmsMetricString is 52 bytes

Then I added a ‘test metric’ command to register a new OvmsMetricBool metric, and got this:

OVMS > test metric
OVMS > module memory
============================
Free 8-bit 102028/246616, 32-bit 18576/43548, numafter = 921
task=Asy total=      0    104  24972 change=     +0   +104     +0
============================

That is using an ‘x.test.metric’ name (which should fit within the std::string small string optimisation buffer, so the name needs no separate heap allocation).

It seems that using OvmsMetricBool (probably our smallest metric), we currently need 32 bytes for the object itself, plus 48 bytes for the OvmsMetrics map linkage, plus 24 bytes of allocation overhead: 32 + 48 + 24 = 104 bytes, which matches the +104 shown above.

Looking at the OvmsMetricBool, it adds a ‘bool m_value’ on top of the base OvmsMetric values, which are:

    const char* m_name;
    bool m_defined;
    bool m_stale;
    int m_autostale;
    metric_unit_t m_units;
    std::bitset<METRICS_MAX_MODIFIERS> m_modified;
    uint32_t m_lastmodified;

Looking at the OvmsMetric member variables, we can live with m_autostale being an unsigned 16-bit integer; shrinking it and re-arranging the members so the small ones sit together at the end (which removes the alignment padding the compiler has to insert between the bools and the int) is a simple win:

    const char* m_name;
    metric_unit_t m_units;
    std::bitset<METRICS_MAX_MODIFIERS> m_modified;
    uint32_t m_lastmodified;
    uint16_t m_autostale;
    bool m_defined;
    bool m_stale;

With that done, OvmsMetric goes from 28 bytes to 24 (with a similar 4 byte win on all the other derived types). Also, my simple registration of a new OvmsMetricBool goes from 104 bytes to 100.
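
For anyone curious why the re-ordering helps, here is a minimal standalone illustration of the padding effect (simplified structs, not the real OvmsMetric; the offsets and sizes assume a 32-bit target like the ESP32):

    #include <cstdio>
    #include <cstdint>

    // Standalone illustration of the padding effect (simplified members,
    // not the real OvmsMetric).
    struct Unordered
      {
      const char* name;        // offset 0
      bool defined;            // offset 4
      bool stale;              // offset 5, then 2 bytes of padding
      int autostale;           // offset 8 (int must be 4-byte aligned)
      uint32_t lastmodified;   // offset 12
      };                       // sizeof == 16

    struct Reordered
      {
      const char* name;        // offset 0
      uint32_t lastmodified;   // offset 4
      uint16_t autostale;      // offset 8
      bool defined;            // offset 10
      bool stale;              // offset 11
      };                       // sizeof == 12: no padding needed

    int main()
      {
      // Prints 16 and 12 on a 32-bit target: the same 4 byte per-object win
      // the re-ordering above gives OvmsMetric (28 -> 24 bytes).
      printf("%u %u\n", (unsigned)sizeof(Unordered), (unsigned)sizeof(Reordered));
      return 0;
      }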

Looking at the m_metrics map storage, I tried a simple:

#include <forward_list>

std::forward_list<uint32_t> x;

for (uint32_t k = 0; k < 100; k++)
  x.push_front(k);    // one heap allocation (one list node) per element

OVMS > test metric
OVMS > module memory
task=Asy total=      0   3228  24972 change=     +0  +3228     +0
OVMS > test metric
OVMS > module memory
task=Asy total=      0   6428  24972 change=     +0  +3200     +0

That works out at 32 bytes per std::forward_list entry: presumably the 24 bytes of allocation overhead, plus 8 bytes per node (a 4 byte pointer to the next node, plus the 4 byte uint32_t value stored in the node).

The absolute most efficient dynamic approach would be an OvmsMetric* m_next in OvmsMetric, plus an OvmsMetric* m_first in OvmsMetrics, with the map removed altogether. That would be 4 bytes for the OvmsMetrics list head, plus an extra 4 bytes for each metric.
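
Roughly, that would look like this (a simplified sketch only, not the actual ovms_metrics.h declarations):

    // Simplified sketch of that shape: each metric carries its own link, so
    // the whole registry costs one head pointer plus one extra pointer per
    // metric.
    class OvmsMetric
      {
      public:
        OvmsMetric* m_next;    // 4 bytes on the ESP32: the list linkage
        const char* m_name;
        // ... remaining members as listed above ...
      };

    class OvmsMetrics
      {
      public:
        OvmsMetric* m_first;   // 4 bytes in total for the list head
      };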

The original design made a lot of use of MyMetrics.Find(), but that has been deprecated now and I don’t see anything using it at all. The only thing we still need is a ‘metrics list’ iterator to show the details, and all that requires is an ordered list.

So, I went ahead and did that: I changed ovms_metrics to use a manually managed singly linked list.
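
The registration and iteration side is equally simple. A sketch of the idea, continuing the simplified declarations above (the committed code differs in names and details):

    #include <cstring>

    // Illustrative only: insert each new metric in name order so the
    // 'metrics list' output stays sorted, and walk the chain with a plain
    // pointer instead of a std::map iterator.
    void RegisterMetric(OvmsMetric*& first, OvmsMetric* metric)
      {
      OvmsMetric** cur = &first;
      // Skip entries whose names sort before the new metric's name
      while (*cur && strcmp((*cur)->m_name, metric->m_name) < 0)
        cur = &((*cur)->m_next);
      metric->m_next = *cur;   // splice the new metric into the chain
      *cur = metric;
      }

    void ListMetrics(OvmsMetric* first)
      {
      for (OvmsMetric* m = first; m; m = m->m_next)
        {
        // ... print m->m_name and its current value here ...
        }
      }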

With the original code, and my default set of metrics (not including the Twizy ones), this comes to:

OVMS > module memory
============================
Free 8-bit 102264/246616, 32-bit 18576/43548, numafter = 917

OVMS > vehicle module RT
I (107076) v-renaulttwizy: Renault Twizy vehicle module

OVMS > module memory
============================
Free 8-bit 75380/246616, 32-bit 18576/43548, numafter = 1000

With the new code, and my default set of metrics (not including the Twizy ones), this comes to:

OVMS > module memory
============================
Free 8-bit 106536/246632, 32-bit 18576/43548, numafter = 828

OVMS > vehicle module RT
I (55126) v-renaulttwizy: Renault Twizy vehicle module

OVMS > module memory
============================
Free 8-bit 85800/246632, 32-bit 18576/43548, numafter = 1000

So, a reasonable saving of 10KB.

I also tried with memory and task debugging turned off: m.freeram goes from 118888/98200 to 121884/104932.

With the optimised OvmsMetrics code, RT seems to be allocating memory as follows:

Free 8-bit 85812/246632, 32-bit 18576/43548, numafter = 1000
task=Asy total=      0  20616  24972 change=     +0 +20616     +0

We could use a static style allocation (std::vector, or a static array), and fixed structures (rather than dynamic objects). That would save the allocation overhead, but would make things a lot more rigid.
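
As a rough illustration of that trade-off (the names, the fixed limit, and the single value representation here are all invented for the example):

    #include <stdint.h>

    #define MAX_METRICS 256                 // hypothetical fixed limit

    // Rough illustration only: a fixed array of plain structs avoids every
    // per-metric heap allocation and its overhead, but caps the number of
    // metrics and forces one layout on all of them.
    struct metric_slot_t
      {
      const char* name;                     // NULL marks an unused slot
      uint32_t lastmodified;
      float value;                          // one fixed representation
      bool defined;
      bool stale;
      };

    static metric_slot_t s_metrics[MAX_METRICS];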

All the above committed and pushed.

Regards, Mark.

P.S. Note that Espressif are just now coming out with WROVER modules that include 32Mbit (4MB) of PSRAM, which can be mapped so that the heap size goes up by around 4MB! It will probably be another few months before code support for that stabilises and the modules are available in quantity. Perhaps in future we could switch to that, but for the moment I think we’re ok. We just need to be careful.

> On 30 Oct 2017, at 5:45 AM, Michael Balzer <dexter at expeedo.de> wrote:
> 
> While implementing the set type metrics I mentioned before, I've checked
> out metrics RAM usage: just adding four new metrics reduced my free
> memory by 656 bytes. 112 bytes of that were names, 16 bytes were actual
> data content, 16 bytes were pointers to the metrics.
> 
> So it seems each metric currently needs 128 bytes management overhead (+
> RAM for the name in case of dynamic names like for an array of battery
> cells).
> 
> I've added 128 custom metrics now, which add up to ~ 20 KB of RAM --
> more than 30% of the free RAM available before. I've got ~80% of my
> planned metrics now, so it should fit for the Twizy, but a more complex
> monitoring would need to use combined metrics, e.g. for cell data.
> 
> Maybe we can change the registry structure to not using std::map. The
> fast btree lookup is not really needed if listeners use pointers. A
> simple std::forward_list would do if new metrics are inserted sorted.
> Also, allocating RAM for name patterns like "...cell.<n>..."  is of
> course very wasteful, maybe we can introduce some array logic to
> generate these names.
> 
> Regards,
> Michael
> 
> 
> Am 25.10.2017 um 07:09 schrieb Stephen Casner:
>> I'd like to add another $.02 regarding RAM usage.  We need to consider
>> the impact of all three modes of RAM usage:
>> 
>> 1. static declaration of variables
>> 
>> 2. local (automatic) variables that cause the maximum stack usage to
>> increase so we need to dedicate a larger stack allocation
>> 
>> 3. dynamic allocations from the heap
>> 
>> I have more specific comments for each of these.
>> 
>> 1.  If you need a large buffer, declaring it as static storage means
>> that it is always allocated even if your code is not being used
>> (unless it is something like vehicle-specific code that is configured
>> out).  So, it would be better to dynamically allocate that buffer
>> space from the heap (malloc) when needed and then free it when
>> finished so that the usage is only temporary.  That way the same space
>> might be used for other purposes at other times.
>> 
>> 2.  I recommend NOT USING std::string except where it is really needed
>> and useful.  In particular if you have a function parameter that is
>> always supplied as a character constant but the type of the parameter
>> is std::string then the compiler needs to expand the caller's stack
>> space by 32 bytes, for each such instance of the call, to hold the
>> std:string structure.  Additional heap space is required for the
>> characters.  None of that would be required if the parameter type were
>> const char*.  The same problem applies to functions that return
>> std::string, since the compiler must allocate stack space in the
>> calling function for the return value to be copied.  In particular if
>> the caller is just going to print that string with .c_str(), it would
>> be much better to put the .c_str() in the called function and return
>> the const char* UNDER ONE IMPORTANT CONDITION: this depends on the
>> std::string in the called function being stable, such as a member
>> variable of the class.  If the string in the called function is
>> automatic (allocated on the stack), then the .c_str() of it won't be
>> valid in the caller.
>> 
>> I saved substantial stack space and also heap space by changing the
>> command maps in OvmsCommand from map<std::string, OvmsCommand*> to
>> map<const char*, OvmsCommand*, CompareCharPtr>.  This was possible
>> because all of the command token strings that are put in the map come
>> from character constants anyway, and those are stable.
>> 
>> I think there are several more functions that could safely have their
>> arguments or return values changed.  Now, I don't mean to be pushing
>> us back to essentially writing C code in C++ and ignoring the benefits
>> of C++.  For places where dynamic storage is needed, as for a class
>> member, using std::string is a big advantage and not a problem.  Just
>> be cognizant of the costs where it is used.
>> 
>> 3. As I mentioned in an earlier message, there is another 40K of RAM
>> available for dynamic allocation by code that only requires 32-bit
>> access, not byte-access.  This is in IRAM (Instruction RAM).  It won't
>> be allocated by 'malloc' or 'new' but can be allocated explicitly with
>> pvPortMallocCaps(size, MALLOC_CAP_32BIT).  I'm currently using
>> part of it for the storage of metadata about blocks allocated from the
>> heap in the ovms_module debugging code to minimize the impact that
>> using that code has on the memory available for the code to be tested.
>> 
>>                                                        -- Steve
> 
> -- 
> Michael Balzer * Helkenberger Weg 9 * D-58256 Ennepetal
> Fon 02333 / 833 5735 * Handy 0176 / 206 989 26
> 
> 
