[infinispan-dev] Classloader leaks?

[infinispan-dev] Classloader leaks?

Sanne Grinovero-3
Hi all,

our documentation suggests raising the file limits to about 16K:
http://infinispan.org/docs/stable/contributing/contributing.html#running_and_writing_tests

I've had this set up for years, yet I've been noticing errors such as:

"Caused by: java.io.IOException: Too many open files"

Today I decided to finally have a look, and I see that while running
the testsuite, my system's file descriptor consumption rises
continuously, to more than 2 million.
(When not running the suite, I'm consuming about 200K - and that's
including IDEs and other FD-hungry applications like Chrome.)

Sampling some of these file descriptors, it looks like they really are
open files - jar files, to be more precise.

What puzzles me is that for a single jar - jgroups, for example - I
can count 7852 open instances, distributed among only a handful of
processes.

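To give an idea, I'm getting these rough counts with something along
these lines (the jar name pattern is only an example, and depending on
the lsof version/options the output can contain one row per thread
rather than one per descriptor):

  # very rough count of open handles on the jgroups jar, across all visible processes
  lsof 2>/dev/null | grep -c 'jgroups.*\.jar'
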
My guess is classloaders aren't being closed?

Also: why has nobody else noticed problems? Have you all
reconfigured your systems for unlimited FDs?

Thanks,
Sanne

Re: [infinispan-dev] Classloader leaks?

Dan Berindei
I've been running with a limit of 10240 file descriptors for a while
now, and I've never had problems (unless you count the time I upgraded
gnome-terminal and it started ignoring my /etc/security/limits.conf).

The CI agents also run the full build with a 9999 file descriptor limit.
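
(If in doubt, it's worth checking the limits in the actual shell that
launches Maven, since as above the terminal emulator doesn't always pick
up limits.conf - a quick sanity check, nothing Infinispan-specific:)

  # soft and hard open-file limits for the current shell and its children
  ulimit -Sn
  ulimit -Hn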

Dan

On Wed, Feb 22, 2017 at 9:25 PM, Sanne Grinovero <[hidden email]> wrote:

> Also: why has nobody else noticed problems? Have you all
> reconfigured your systems for unlimited FDs?

Re: [infinispan-dev] Classloader leaks?

Dennis Reed
In reply to this post by Sanne Grinovero-3
Are those actually 2 million *unique* descriptors?

I've seen lsof output that listed many duplicates for the same file
descriptor (one for each thread?), making the list appear much larger
than it really was.
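
(A per-process count that avoids those duplicates is to look at /proc
directly - <pid> is just a placeholder for the forked test JVM:)

  # number of FDs actually open in one process
  ls /proc/<pid>/fd | wc -l
  # how many of them point at a given jar, e.g. jgroups
  ls -l /proc/<pid>/fd | grep -c jgroups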

-Dennis


On 02/22/2017 02:25 PM, Sanne Grinovero wrote:

> What puzzles me is that for a single jar - jgroups, for example - I
> can count 7852 open instances, distributed among only a handful of
> processes.

Re: [infinispan-dev] Classloader leaks?

Sanne Grinovero-3
On 22 February 2017 at 21:20, Dennis Reed <[hidden email]> wrote:
> Are those actually 2 million *unique* descriptors?
>
> I've seen lsof output that listed many duplicates for the same file
> descriptor (one for each thread?), making the list appear much larger
> than it really was.

Good point! You're right, I verified and all instances of e.g. the
jgroups jar were using the same FD, just a different thread id.

This is the full error I'm getting when running the tests from the
"infinispan-compatibility-mode-it" Maven module:


java.lang.IllegalStateException: failed to create a child event loop
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:88)
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58)
at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:51)
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:87)
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:82)
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:63)
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:51)
at org.jboss.resteasy.plugins.server.netty.NettyJaxrsServer.start(NettyJaxrsServer.java:239)
at org.infinispan.rest.NettyRestServer.start(NettyRestServer.java:81)
at org.infinispan.it.compatibility.CompatibilityCacheFactory.createRestCache(CompatibilityCacheFactory.java:199)
at org.infinispan.it.compatibility.CompatibilityCacheFactory.createRestMemcachedCaches(CompatibilityCacheFactory.java:137)
at org.infinispan.it.compatibility.CompatibilityCacheFactory.setup(CompatibilityCacheFactory.java:123)
at org.infinispan.it.compatibility.ByteArrayKeyReplEmbeddedHotRodTest.setup(ByteArrayKeyReplEmbeddedHotRodTest.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:564)
at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:213)
at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:138)
at org.testng.internal.TestMethodWorker.invokeBeforeClassMethods(TestMethodWorker.java:175)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:107)
at org.testng.TestRunner.privateRun(TestRunner.java:767)
at org.testng.TestRunner.run(TestRunner.java:617)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: io.netty.channel.ChannelException: failed to open a new selector
at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:157)
at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:148)
at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:126)
at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:36)
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
... 32 more
Caused by: java.io.IOException: Too many open files
at sun.nio.ch.IOUtil.makePipe(Native Method)
at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:155)
... 36 more

Now that I know which metrics to look at, I see that before running
this specific module my system is consuming about 12K FDs. Occasionally
it stays around that same level and the integration tests pass without
any failure.
Most of the time, though, when I run this module I see the FD
consumption increase during the test run until it eventually fails with
the above error. The last sample I could take before the failure was
around 15.5K - not surprising, as my limit is set to 16384, so I guess
it would have tried to grow further.

Tomorrow I'll try with higher limits, to see whether I'm running with
barely enough for this testsuite or whether it keeps growing. It still
sounds like a leak though, as the increase is quite significant.
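
For reference, this is roughly what I plan to try (the soft limit can't
be raised above the hard limit, and the -pl selector is just the
artifactId mentioned above - the exact Maven goal may differ in your
setup):

  # raise the soft FD limit for this shell only, then run just the suspect module
  ulimit -n 65536
  mvn verify -pl :infinispan-compatibility-mode-it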

Thanks,
Sanne


Re: [infinispan-dev] Classloader leaks?

Dan Berindei
On my system, `cat /proc/sys/fs/file-nr` reports 24838 used FDs, and
that goes up to 29030 when running the
infinispan-compatibility-mode-it tests. Since there are only 78 tests,
a leak is quite possible, but I wouldn't say 4K open files (or
sockets, more likely) really is a deal-breaker.
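
(For the record, the three fields in file-nr are allocated file handles,
allocated-but-unused handles, and the fs.file-max limit - I'm just
watching the first one while the module runs:)

  # poll system-wide file handle usage once per second during the test run
  watch -n 1 cat /proc/sys/fs/file-nr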

Cheers
Dan


On Thu, Feb 23, 2017 at 1:31 AM, Sanne Grinovero <[hidden email]> wrote:

> Tomorrow I'll try with higher limits, to see whether I'm running with
> barely enough for this testsuite or whether it keeps growing. It still
> sounds like a leak though, as the increase is quite significant.

Re: [infinispan-dev] Classloader leaks?

Vojtech Juranek
In reply to this post by Sanne Grinovero-3
On Wednesday 22 February 2017 23:31:55 CET Sanne Grinovero wrote:
> Good point! You're right, I verified and all instances of e.g. the
> jgroups jar were using the same FD, just a different thread id.

Yes, maybe there are some stale threads? Recently (well, actually it's
been quite some time now, and it's still on my TODO list to investigate
further :-) the testsuite started to fail with the OOM error "cannot
create native thread" on some machines (rarely on RHEL7, very often on
RHEL6). I tried increasing the limit on user threads and ended up with
it unlimited, so this sounds like some kind of leak to me as well.
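
(When it happens again I plan to check it roughly like this - <pid>
being the forked test JVM, nothing more specific than that:)

  # per-user process/thread limit in effect for the test shell
  ulimit -u
  # number of native threads currently alive in the test JVM
  ls /proc/<pid>/task | wc -l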


Re: [infinispan-dev] Classloader leaks?

Dan Berindei
After looking into this some more...

TestNGTestListener does report 1262 leaked HotRodServerWorker and
MemcachedServer worker threads at the end of the compatibility-mode-it
test suite, so I'd say we probably do have a leak. The tests do seem
to shut down their servers at the end, so maybe someone more familiar
with the server could look into why this happens? And maybe change the
thread naming scheme so that we can tell from the thread name which
test didn't shut down its servers properly?
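
(For anyone who wants to double-check outside the listener, a rough
count from a thread dump works too - the name patterns here are simply
taken from the report above and may need adjusting:)

  # count leaked-looking server worker threads in the running suite JVM
  jstack <pid> | grep -cE 'HotRodServerWorker|Memcached'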

But I'm pretty sure the leak isn't the problem here, compared to the
fact that we run 15 tests in parallel, AND each test starts 2
HotRod/Memcached/REST servers and 2 clients, AND they use the default
thread pool sizes instead of the 20x smaller thread pool sizes that we
use in the core suite.

Cheers
Dan

On Thu, Feb 23, 2017 at 9:56 AM, Vojtech Juranek <[hidden email]> wrote:

> I tried increasing the limit on user threads and ended up with it
> unlimited, so this sounds like some kind of leak to me as well.

Re: [infinispan-dev] Classloader leaks?

Sanne Grinovero-3
Hi Dan,
thanks! Yes, I concur: while this module does seem to leak quite
heavily, I now suspect that's unrelated to the failure.

It looks like the "too many open files" exception can be thrown for a
wide range of reasons.

FYI these tests fail "almost consistently" for me:

Failed tests:
  ByteArrayKeyDistEmbeddedHotRodTest.setup:27 » IllegalState failed to create a ...
  ByteArrayKeyReplEmbeddedHotRodTest.setup:87 » IllegalState failed to create a ...
  ByteArrayValueDistEmbeddedHotRodTest.setup:27 » IllegalState failed to create ...
  ByteArrayValueReplEmbeddedHotRodTest.setup:87 » IllegalState failed to create ...
  DistEmbeddedHotRodBulkTest.setup:36 » IllegalState failed to create a child ev...
  DistEmbeddedRestHotRodTest.setup:25 » IllegalState failed to create a child ev...
  DistL1EmbeddedHotRodTest.setup:30 » IllegalState failed to create a child even...
  DistMemcachedEmbeddedTest.setup:39 » Transport Could not fetch transport
org.infinispan.it.compatibility.EmbeddedHotRodCacheListenerTest.setup(org.infinispan.it.compatibility.EmbeddedHotRodCacheListenerTest)
  Run 1: EmbeddedHotRodCacheListenerTest.setup:36 » IllegalState failed to create a chi...
  Run 2: PASS
  Run 3: PASS

  EmbeddedMemcachedCacheListenerTest.setup:39 » IllegalState failed to create a ...
  EmbeddedRestMemcachedHotRodTest.setup:50 » IllegalState failed to create a chi...
  ReplEmbeddedRestHotRodTest.setup:38 » Channel Unable to create Channel from cl...

Tests run: 103, Failures: 12, Errors: 0, Skipped: 49

If I run it a dozen times, I get the same report (exactly the same
tests failing) in 11 cases, and in one case it just passes all tests.
I'm pretty sure I'm not running out of FDs anymore.

I'll open a JIRA and move on, as I just needed to verify some PRs and
need to switch focus back to my own projects.
I hope someone with more Netty experience will be interested enough to
have a look; I can try out any fixes.

 - https://issues.jboss.org/browse/ISPN-7517

Thanks all!

Sanne

On 23 February 2017 at 08:29, Dan Berindei <[hidden email]> wrote:

> But I'm pretty sure the leak isn't the problem here, compared to the
> fact that we run 15 tests in parallel, AND each test starts 2
> HotRod/Memcached/REST servers and 2 clients, AND they use the default
> thread pool sizes instead of the 20x smaller thread pool sizes that we
> use in the core suite.

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev