LookupServiceCache in GigaRegistrar has a memory leak and performance problem when multiple instances restart

Description

It seems that the cache is not cleared correctly: when spaces are restarted, the number of elements in the cache increases. See the attached graph, where a space is restarted three times. Initially the cache contains 370 items, but after the restarts the cache on lus01 contains 376, 385 and finally 391 items. This is in a controlled test environment; in production we currently have around 25,000 elements in the cache, since many spaces are registered in the same LUS cluster.

Heap dump attached.

LRMI thread pool threads are pending for a long time (thread dump from gs16-lus02, Nov 9 14:48:10):

"GS-LRMI-Connection-pool-1-thread-3" #66 daemon prio=5 os_prio=0 cpu=53720.34ms elapsed=2696.03s tid=0x00007f3098004800 nid=0x35d51 in Object.wait() [0x00007f30e1dbd000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(java.base@11.0.20/Native Method)
        - waiting on <no object reference available>
        at java.lang.Object.wait(java.base@11.0.20/Object.java:328)
        at com.sun.jini.thread.ReadersWriter.writeLock(ReadersWriter.java:83)
        - waiting to re-lock in wait() <0x00000006808e8380> (a com.sun.jini.thread.ReadersWriter)
        at com.sun.jini.reggie.GigaRegistrar.register(GigaRegistrar.java:2475)
        at com.sun.jini.reggie.RegistrarGigaspacesMethodinternalInvoke2.internalInvoke(Unknown Source)
        at com.gigaspaces.internal.reflection.fast.AbstractMethod.invoke(AbstractMethod.java:45)
        at com.gigaspaces.lrmi.LRMIRuntime.invoked(LRMIRuntime.java:447)
        at com.gigaspaces.lrmi.nio.Pivot.consumeAndHandleRequest(Pivot.java:504)
        at com.gigaspaces.lrmi.nio.Pivot.handleRequest(Pivot.java:582)
        at com.gigaspaces.lrmi.nio.Pivot$ChannelEntryTask.run(Pivot.java:188)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.20/ThreadPoolExecutor.java:1128)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.20/ThreadPoolExecutor.java:628)
        at java.lang.Thread.run(java.base@11.0.20/Thread.java:829)
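To illustrate the suspected pattern (this is an inference from the symptoms, not the actual GigaRegistrar code): if the lookup-service cache is keyed by the registering service's identity, and a restarted space registers under a fresh identity, the entry for the previous incarnation is never evicted, so the cache grows by a few entries on every restart. Below is a minimal, self-contained Java sketch of that pattern; all names in it (LeakSketch, register, restartSpace) are hypothetical.

// Minimal sketch of the suspected leak pattern -- NOT the actual GigaRegistrar
// code. All names here (LeakSketch, register, restartSpace) are hypothetical.
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class LeakSketch {

    // Hypothetical stand-in for LookupServiceCache: entries keyed by service id.
    private static final Map<UUID, String> cache = new ConcurrentHashMap<>();

    // Registration only ever adds an entry.
    static void register(UUID serviceId, String spaceName) {
        cache.put(serviceId, spaceName);
    }

    // A restarted space registers under a fresh service id, so the entry for
    // the previous incarnation is left behind and never evicted.
    static void restartSpace(String spaceName) {
        register(UUID.randomUUID(), spaceName);
    }

    public static void main(String[] args) {
        restartSpace("mySpace");   // initial start
        restartSpace("mySpace");   // restart 1
        restartSpace("mySpace");   // restart 2
        restartSpace("mySpace");   // restart 3
        // Prints 4: each restart adds an entry instead of replacing the old
        // one, mirroring the observed 370 -> 376 -> 385 -> 391 growth on lus01.
        System.out.println("cache size = " + cache.size());
    }
}

If this is what is happening, the growing cache would also make any work done under GigaRegistrar's write lock slower over time, which would be consistent with the LRMI threads above queuing in ReadersWriter.writeLock during register calls.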

Details

Participants: Ester Atsmon
Platform: All
Created November 13, 2023 at 3:09 PM
Updated August 20, 2024 at 1:32 PM