used uncloseable resources, which would leak memory, [resulting ultimately in an
"OutOfMemoryError: Java heap space"](https://github.com/metafacture/metafacture-core/issues/666) since at least 2013 (back then the class was
called `MapFile`).

## Preconditions

Affected were all usages of Metafacture which instantiate a `Flux` multiple times
over the lifecycle of one JVM. While this is an obvious statement, we could only
experience the leaking of memory from March 2021 onwards, when we
changed how we start our ETL processes for lobid-resources: back in the early days we invoked
the ETL by starting a new JVM, running our workflow and terminating the JVM afterwards.
Coming with the [Webhook in March 2021](https://github.com/hbz/lobid-resources/issues/1159)
the JVM was not terminated after an ETL but listened further for incoming ETL

was to be deployed, but we soon discovered that restarting [our Play app](https:
just before the weekly fresh and complete ETL of > 24 million documents improved
the performance and averted some hanging processes or crashes.
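
To make concrete why this change matters for a leak, here is a minimal, hypothetical Java sketch (not the actual lobid or Metafacture code; the `EtlRunner` class, the `runEtlWorkflow()` helper and the workflow file name are placeholders): with one JVM per run, anything a workflow forgets to release disappears when the process exits, while a long-running JVM accumulates it across runs.

```java
// Hypothetical sketch, not the actual lobid/Metafacture code:
// runEtlWorkflow() and the workflow file name are placeholders.
public class EtlRunner {

    // Old setup: one JVM per ETL run. Anything the workflow forgets to close
    // is reclaimed anyway when the JVM terminates, so a leak stays invisible.
    public static void main(String[] args) {
        runEtlWorkflow("transform.flux");
        // JVM exits here; all leaked objects vanish with the process.
    }

    // New setup: one long-running JVM serving webhook requests. Every run that
    // leaves resources unclosed adds to the heap until an OutOfMemoryError hits.
    static void onWebhookTrigger() {
        runEtlWorkflow("transform.flux"); // leaked objects now accumulate across runs
    }

    static void runEtlWorkflow(String fluxFile) {
        // placeholder: compile and run the Flux workflow given in fluxFile
    }
}
```

This is why the leak could exist since at least 2013 but only became visible once the JVM stayed alive between ETL runs.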

We also have a monitoring tool installed on our servers which checks for terminated processes
and restarts them automatically. Such restarts happened often after some
more ETL processes were invoked (the "daily updates").
It was unclear why these crashes appeared, but by assigning more RAM and
(automatically) restarting the whole JVM after crashes, the ETL process became
stable enough.

## Plotting the leak

In 2025 we started the [Rheinland-Pfälzische Bibliografie (RPB)](https://github.com/hbz/rpb/).
Here, ETL processes are triggered _every time_ a record is changed or created.
This resulted very quickly in an

works - this is the garbage collector trying to find some piece of memory
to free - to little avail. The app crashes shortly after that (not visible in
the graph).
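
The graphs in this post were plotted with a profiler (see the credits below), but the ascending pattern can also be made visible without extra tooling. Here is a minimal sketch that only uses the standard `Runtime` API; the `HeapLogger` class, the background thread and the 10-second sampling interval are my own assumptions, not part of the original setup:

```java
// Minimal sketch: periodically log the JVM's used heap to see whether it keeps
// ascending across ETL runs. Class name and interval are arbitrary choices.
public class HeapLogger {

    public static void start() {
        Thread sampler = new Thread(() -> {
            Runtime rt = Runtime.getRuntime();
            while (!Thread.currentThread().isInterrupted()) {
                long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                System.out.println("used heap: " + usedMb + " MB");
                try {
                    Thread.sleep(10_000); // sample every 10 seconds
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        sampler.setDaemon(true); // don't keep the JVM alive just for logging
        sampler.start();
    }
}
```

If the reported used heap keeps climbing even right after the large garbage-collection spikes, a leak is the likely culprit.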

## One fix, several problems solved

After [fixing the memory leak](https://github.com/metafacture/metafacture-core/commit/b32609307f75187a6a3822b8a951429c7fc924f3)
the consumption of memory is normal, i.e. not ascending:
![Memory and CPU consumption of the RPB ETL in IntelliJ after the fix](FixedMemoryLeakIntellijRpb.png)
Every spike indicates that memory resources were freed, resulting in a stable
rise and fall of CPU and memory usage. The memory leak is really fixed. 😀

Fixing the memory leak in Metafacture resolved several issues we had experienced:

- lobid-resources: daily updates sometimes aborted - although this was not such a big problem because our monitoring scripts could "heal" the update process automatically (by restarting the app). However, the updates now don't take e.g. 4h (counting from triggering the update until the successful ETL) but 0.45m, which is way faster.
- Metafacture Playground: we had some [performance issues](https://github.com/metafacture/metafacture-playground/issues/194) which are now solved.
- RPB: a situation arose where we could only ever add more memory to our VMs to counteract a crash of the cataloguing app - always fearing that too many documents would be ETLed before its daily restart.

## How to and credits

It is _one_ thing to discover a memory leak, but another thing to
determine where the source of that leak _exactly_ is.
I have to thank, among others, [Chris Braithwaite for his excellent blog post concerning Java memory leaks](https://medium.com/@chrisbrat_17048/java-memory-leak-investigation-8add1314e33b), which gave me a bit more background on what a Java memory leak actually is.
Very useful for me was the built-in profiler in IntelliJ IDEA. It not only
helped to plot the graphs (see above) to see at a glance that there indeed is a
memory leak, but can also capture memory snapshots and profile the CPU usage
to find the problematic classes. It would show something like this:

If you have found the class where the memory leak most likely originates from