[Ovmsdev] Memory leak
Stephen Casner
casner at acm.org
Sun Apr 22 07:16:45 HKT 2018
On Sat, 21 Apr 2018, Michael Balzer wrote:
> the prompt is copied from the web shell. That's also the source of
> the "mo m" task, which is the execution task of the command.
Aha.
> I merged prop-cmdtask into master some weeks ago. I kept the branch
> for documentation, guess it can be removed now.
No worries.
> >> Nothing special to see. +14k on the events task vs. a fresh boot,
> >> but nothing that can explain the loss of more than 40k.
Here is my tally based on your command captures:
 281968  100.00%  Total DRAM space
 208816   74.06%  Total DRAM allocated
 118368   41.98%  DRAM allocated by OVMS Events
  14600    5.18%  Free DRAM
  58552   20.77%  Overhead for 2927.6 blocks
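(As a cross-check, the rows add up exactly: 208816 allocated + 14600
free + 58552 overhead = 281968 total, so nothing is unaccounted for.)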
So OVMS Events is using 42% of the total DRAM, or more than half of
what has been allocated. I wonder if some of that could move to
SPIRAM?
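For illustration, a minimal sketch of what moving a buffer to SPIRAM
could look like, assuming the stock ESP-IDF heap_caps API; the helper
name below is made up for this example:

#include <stddef.h>
#include "esp_heap_caps.h"

// Hypothetical helper: prefer SPIRAM for large buffers that are never
// touched from ISR or DMA context, falling back to internal 8-bit DRAM
// if SPIRAM is absent or exhausted.
static void* AllocPreferSpiram(size_t size)
  {
  void* p = heap_caps_malloc(size, MALLOC_CAP_SPIRAM | MALLOC_CAP_8BIT);
  if (p == NULL)
    p = heap_caps_malloc(size, MALLOC_CAP_8BIT);
  return p;
  }

Whether the OVMS Events allocations qualify depends on whether any of
them are used from ISR context, so treat this as a sketch only.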
I think the more than 40K you are missing is not a leak; it is heap
overhead. With heap poisoning enabled to allow the diagnostics, the
overhead on each block is 20 bytes. Without heap poisoning the
overhead is one pointer, 4 bytes. The 58K of overhead corresponds to
about 2900 allocated blocks. There could well be that many, because
many of them are small. Try a "mo m *" command, which will dump the
details of individual blocks for all the tasks up to the buffer limit
of 1000 blocks. Here are a couple of sample sections of that output:
+ t=esp_timer s= 44 a=0x3ffca344 3F408AB4 3FFCA350 00000009 73626577 65767265 FEFE0072 FEFEFEFE 3FFCA384
+ t=esp_timer s= 12 a=0x3ffca384 4011B808 00000000 3FFB4288 BAAD5678 3FFCA3B4 ABBA1234 3FFAFAB0 0000000C
+ t=esp_timer s= 12 a=0x3ffca3a4 3FFCA164 3FFCA164 3FFCA344 BAAD5678 3FFCA3DC ABBA1234 3FFAFAB0 00000014
+ t=esp_timer s= 20 a=0x3ffca3c4 3F408F8C 3F405748 40127470 00000000 00000000 BAAD5678 3FFCA404 ABBA1234
+ t=esp_timer s= 20 a=0x3ffca3ec 3F411C30 3F411C38 40125D30 00000000 00000000 BAAD5678 3FFCA42C ABBA1234
+ t=esp_timer s= 20 a=0x3ffca414 3F411C44 3F411C4C 40125D30 00000000 00000000 BAAD5678 3FFCA448 ABBA1234
+ t=esp_timer s= 8 a=0x3ffca43c 3FFCA480 3FFCA414 BAAD5678 3FFCA470 ABBA1234 3FFAFAB0 00000014 3F411C58
+ t=esp_timer s= 20 a=0x3ffca458 3F411C58 3F411C60 40125D30 00000000 00000000 BAAD5678 3FFCA48C ABBA1234
+ t=esp_timer s= 8 a=0x3ffca480 3FFCA4C4 3FFCA458 BAAD5678 3FFCA4B4 ABBA1234 3FFAFAB0 00000014 3F411C6C
+ t=esp_timer s= 20 a=0x3ffca49c 3F411C6C 3F402EC8 40125D30 00000000 00000000 BAAD5678 3FFCA4D0 ABBA1234
+ t=OVMS Events s= 84 a=0x3ffcdd6c 3F4081F4 3F407000 3F441B9C 401276F8 3F465668 3FFCDD88 00000000 3FFD7E00
+ t=OVMS Events s= 24 a=0x3ffcddd4 00000000 3FFCDE68 00000000 00000000 3F407000 3FFCDD6C BAAD5678 3FFCDE58
+ t=OVMS Events s= 84 a=0x3ffcde00 3F4081F4 3F441B8C 3F441B9C 401276F8 3F465668 3FFCDE1C 00000000 3FFCDE00
+ t=OVMS Events s= 24 a=0x3ffcde68 00000001 3FFCDD40 3FFCDDD4 3FFCDEFC 3F441B8C 3FFCDE00 BAAD5678 3FFCDEEC
+ t=OVMS Events s= 84 a=0x3ffcde94 3F4081F4 3F442398 3F441BAC 4012764C 3F465668 3FFCDEB0 00000000 400E7A00
+ t=OVMS Events s= 24 a=0x3ffcdefc 00000000 3FFCDE68 00000000 00000000 3F442398 3FFCDE94 BAAD5678 3FFCDF50
+ t=OVMS Events s= 36 a=0x3ffcdf28 3F445990 3F40AEEC 00000001 00000017 00000013 00000012 FFFFFFFF FFFFFFFF
+ t=OVMS Events s= 24 a=0x3ffcdf60 00000000 3FFD943C 3FFDC810 3FFDA8C0 3F40AEEC 3FFCDF28 BAAD5678 3FFCDFE4
+ t=OVMS Events s= 84 a=0x3ffcdf8c 3F4081F4 3F40AEEC 3F441B9C 00000000 3F407BB8 3FFCDFA8 00000000 00000000
You can see that many of the blocks are only 8, 12, 20, or 24 bytes, so
the overhead roughly doubles the space they use. Some blocks are
larger, so overall the overhead is about 21% according to my tally.
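For anyone else squinting at these dumps, here is how one line breaks
down (my reading, assuming the stock ESP-IDF poisoning constants):

+ t=esp_timer s= 8 a=0x3ffca43c 3FFCA480 3FFCA414 BAAD5678 3FFCA470 ABBA1234 3FFAFAB0 00000014 3F411C58

t= is the owning task, s= the requested size, a= the payload address,
and the eight hex words are the memory starting at that address. For a
small block the window runs past the end of the allocation, which is
why BAAD5678 (the tail canary) and ABBA1234 (the next block's head
canary) show up in the short lines.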
The poisoning serves two purposes: checking for buffer overruns (8
bytes per block) and tracking task ownership and size (another 8
bytes per block). It might be feasible to drop the former while still
allowing the memory diagnostics to work.
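For anyone who wants to experiment, these are (as far as I know) the
relevant ESP-IDF options; treat the exact names and their interaction
as something to verify in menuconfig rather than a recipe:

# what the dumps above suggest the current build uses:
CONFIG_HEAP_POISONING_COMPREHENSIVE=y
CONFIG_HEAP_TASK_TRACKING=y
# lighter alternatives, with less per-block overhead:
# CONFIG_HEAP_POISONING_LIGHT=y
# CONFIG_HEAP_POISONING_DISABLED=y

Whether the per-task tracking used by "mo m" still works with lighter
poisoning is exactly the open question above.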
-- Steve
> Am 21.04.2018 um 22:05 schrieb Stephen Casner:
> > Michael,
> >
> > I don't have a quick explanation for the memory leak, but some aspects
> > of the sample output you show are curious. The prompt "OVMS >"
> > predates Mark's change on 3/19 to remove the space but includes my
> > change on 3/20 that put the asterisk right after the task name like
> > "main*". It also includes Mark's insertion of the "OVMS" prefix on
> > task names on 4/8 but not my fix to change the prompt to '#' for ssh
> > logins (assuming this was an ssh console). Also I had never seen
> > tasks like "mo m". Perhaps this was built in your local prop-cmdtask
> > branch or something? In which case perhaps an update would fix the
> > memory leak?
> >
> > -- Steve
> >
> > On Sat, 21 Apr 2018, Michael Balzer wrote:
> >
> >> There seems to be a memory leak. After running the module over the past hours in my
> >> car, ssh couldn't allocate 4K for a new session, and I had this status:
> >>
> >> OVMS > mo m
> >> Free 8-bit 14600/281968, 32-bit 8052/24308, SPIRAM 4129588/4194252
> >> --Task--     Total DRAM D/IRAM   IRAM SPIRAM  +/- DRAM D/IRAM   IRAM SPIRAM
> >> no task*           5348      0      0      0     +5348     +0     +0     +0
> >> main*             18084      0      0      0    +18084     +0     +0     +0
> >> esp_timer         39268      0    644  23436    +39268     +0   +644 +23436
> >> OVMS Events       53304  65064      0  10076    +53304 +65064     +0 +10076
> >> ipc0               7776      0      0      0     +7776     +0     +0     +0
> >> ipc1                 12      0      0      0       +12     +0     +0     +0
> >> tiT                 232    908      0   6376      +232   +908     +0  +6376
> >> OVMS SIMCOM           0   4512      0      0        +0  +4512     +0     +0
> >> wifi               1464     64      0   3512     +1464    +64     +0  +3512
> >> OVMS Console          0     20      0      0        +0    +20     +0     +0
> >> OVMS NetMan           0  12688      0     68        +0 +12688     +0    +68
> >> mo m                  0      0  15488  12000        +0     +0 +15488 +12000
> >>
> >> OVMS > mo t
> >> Number of Tasks = 20   Stack:   Now   Max Total    Heap 32-bit SPIRAM
> >> 3FFAFB48   1 Blk esp_timer      388   644  4096   39268    644  23436
> >> 3FFBDE54   2 Blk eventTask      448  1328  4608       0      0      0
> >> 3FFBFC50   3 Blk OVMS Events    444  3932  6144  118368      0  10076
> >> 3FFC4974   4 Blk OVMS CanRx     432   544  1024       0      0      0
> >> 3FFC9090   5 Blk ipc0           396   444  1024    7776      0      0
> >> 3FFC9690   6 Blk ipc1           396   444  1024      12      0      0
> >> 3FFCB4B8   9 Rdy IDLE           356   484  1024       0      0      0
> >> 3FFCBA4C  10 Rdy IDLE           360   488  1024       0      0      0
> >> 3FFCC7E0  11 Blk Tmr Svc        396  1820  3072       0      0      0
> >> 3FFC98FC  16 Blk tiT            496  2272  3072    1136      0   6576
> >> 3FFD7540  17 Blk OVMS SIMCOM    460  3020  4096    4512      0      0
> >> 3FFD951C  18 Blk wifi           424  2168  4096    1528      0   3512
> >> 3FFDBCB4  19 Blk pmT            416   576  2560       0      0      0
> >> 3FFDDABC  20 Blk OVMS Vehicle   456  2008  3072       0      0      0
> >> 3FFDFE38  21 Blk OVMS COrx      452   580  3072       0      0      0
> >> 3FFE1C88  22 Blk OVMS COwrk     532   532  1536       0      0      0
> >> 3FFE3E78  23 Blk OVMS Console   548  1716  6144      20      0      0
> >> 3FFE9430  24 Blk OVMS NetMan    732  5980  7168   12692      0     68
> >> 3FFEA9C8  25 Blk mdns           404  1724  4096       0      0      0
> >> 3FFFCD14  36 Rdy mo t           704  1936  5120      72      0      0
> >>
> >>
> >> Nothing special to see. +14k on the events task vs. a fresh boot, but nothing that can
> >> explain the loss of more than 40k.
> >>
> >> Odd, I'll watch this.
> >>
> >> Regards,
> >> Michael