[infinispan-dev] Faster LRU


[infinispan-dev] Faster LRU

Vladimir Blagojevic
Hey guys,

In the past few days I've looked at how to squeeze every bit of
performance out of BCHM, and particularly out of our LRU impl. What I
did not like about the current LRU is that a search for an element in
the queue is not a constant-time operation but requires a full queue
traversal if we need to find an element [1].

It would be particularly nice to have a hashmap with a constant cost
for lookup operations - something like LinkedHashMap. LinkedHashMap
seems to be a good container for LRU, as it provides constant-time
lookup and also a hook for evicting the oldest entry in the form of the
removeEldestEntry callback. So why not implement our segment eviction
policy using a LinkedHashMap [2]?
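
For illustration, here is the bare idiom in isolation (a generic,
self-contained sketch with a hypothetical class name; the actual BCHM
integration is in [2] below):

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache on top of LinkedHashMap: accessOrder=true moves an
// entry to the tail of the internal linked list on every get/put, and
// removeEldestEntry() is consulted after each insertion so the head
// (least recently used) entry is dropped once capacity is exceeded.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
   private final int maxEntries;

   SimpleLruCache(int maxEntries) {
      super(16, 0.75f, true); // true = iterate in access order
      this.maxEntries = maxEntries;
   }

   @Override
   protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
      return size() > maxEntries;
   }
}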

I've seen about a 50% performance increase for smaller caches (100K)
and even more for larger and more contended caches - about a 75%
increase. After this change BCHM performance was not that much worse
than CHM, and it was faster than a synchronized HashMap.

Should we include this impl as FAST_LRU, as I would not want to remove
the current LRU just yet? We have to prove this one is correct and that
it does not have any unforeseen issues.

Let me know what you think!

Vladimir








[1]
https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/util/concurrent/BoundedConcurrentHashMap.java#L467

[2] Source code snippet for LRU in BCHM.


static final class LRU<K, V> extends LinkedHashMap<HashEntry<K, V>, V>
      implements EvictionPolicy<K, V> {

   /** The serialVersionUID */
   private static final long serialVersionUID = -6627108081544347068L;

   /** Lock-free buffer of recent hits, drained in batches via execute(). */
   private final ConcurrentLinkedQueue<HashEntry<K, V>> accessQueue;
   private final Segment<K, V> segment;
   private final int maxBatchQueueSize;
   private final int trimDownSize;
   private final float batchThresholdFactor;
   private final Set<HashEntry<K, V>> evicted;

   public LRU(Segment<K, V> s, int capacity, float lf, int maxBatchSize,
              float batchThresholdFactor) {
      super((int) (capacity * lf));
      this.segment = s;
      this.trimDownSize = (int) (capacity * lf);
      this.maxBatchQueueSize = Math.min(maxBatchSize, MAX_BATCH_SIZE);
      this.batchThresholdFactor = batchThresholdFactor;
      this.accessQueue = new ConcurrentLinkedQueue<HashEntry<K, V>>();
      this.evicted = new HashSet<HashEntry<K, V>>();
   }

   /*
    * Drains the buffered hits into this map and returns (then resets) the
    * set of entries that removeEldestEntry() evicted along the way.
    */
   @Override
   public Set<HashEntry<K, V>> execute() {
      Set<HashEntry<K, V>> evictedCopy = new HashSet<HashEntry<K, V>>();
      for (HashEntry<K, V> e : accessQueue) {
         put(e, e.value);
      }
      evictedCopy.addAll(evicted);
      accessQueue.clear();
      evicted.clear();
      return evictedCopy;
   }

   @Override
   public Set<HashEntry<K, V>> onEntryMiss(HashEntry<K, V> e) {
      return Collections.emptySet();
   }

   /*
    * Invoked without holding a lock on Segment
    */
   @Override
   public boolean onEntryHit(HashEntry<K, V> e) {
      // Buffer the hit; signal the caller to drain the batch once the
      // buffer approaches the batch limit.
      accessQueue.add(e);
      return accessQueue.size() >= maxBatchQueueSize * batchThresholdFactor;
   }

   /*
    * Invoked without holding a lock on Segment
    */
   @Override
   public boolean thresholdExpired() {
      return accessQueue.size() >= maxBatchQueueSize;
   }

   @Override
   public void onEntryRemove(HashEntry<K, V> e) {
      remove(e);
      // We could have multiple instances of e in accessQueue; remove them all.
      while (accessQueue.remove(e)) {
         continue;
      }
   }

   @Override
   public void clear() {
      super.clear();
      accessQueue.clear();
   }

   @Override
   public Eviction strategy() {
      return Eviction.LRU;
   }

   /*
    * LinkedHashMap eviction hook, called with the eldest entry on each
    * insertion: evicts that entry from the owning segment, records it, and
    * tells LinkedHashMap to drop it from this map while we are still above
    * trimDownSize.
    */
   @Override
   protected boolean removeEldestEntry(Entry<HashEntry<K, V>, V> eldest) {
      HashEntry<K, V> evictedEntry = eldest.getKey();
      segment.evictionListener.onEntryChosenForEviction(evictedEntry.value);
      segment.remove(evictedEntry.key, evictedEntry.hash, null);
      evicted.add(evictedEntry);
      return size() > trimDownSize;
   }
}
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Faster LRU

Alex Kluge
Hi,

  I have a completely in-line limit on the cache entries, built on a clock cache that approximates an LRU cache and is extremely fast (O(1)). It is not a strict LRU, but it chooses a not-recently-used item for removal. I'll provide some more details soon.

  I'm not sure how far along you are, as some of this is in the future tense, and some in the past. But I'll dig up this code - it's been a while since I worked on the cache. :)

                                                                       Alex
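
For readers who haven't met it, the clock (second-chance) algorithm
Alex describes works roughly like this - a generic textbook sketch with
hypothetical names, not Alex's code:

// Clock eviction: each slot has a reference bit set on access; a "hand"
// sweeps the ring, clearing bits, and evicts the first slot whose bit is
// already clear. Hits are O(1) and victim selection is amortized O(1),
// with no queue reordering on access.
final class ClockRing {
   private final boolean[] referenced;
   private int hand;

   ClockRing(int capacity) {
      referenced = new boolean[capacity];
   }

   /** Cache hit: just set the slot's reference bit. */
   void touch(int slot) {
      referenced[slot] = true;
   }

   /** Pick a victim slot, giving recently touched slots a second chance. */
   int evict() {
      while (referenced[hand]) {
         referenced[hand] = false;
         hand = (hand + 1) % referenced.length;
      }
      int victim = hand;
      hand = (hand + 1) % referenced.length;
      return victim;
   }
}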


Re: [infinispan-dev] Faster LRU

Dan Berindei
On Tue, Jul 5, 2011 at 7:23 PM, Vladimir Blagojevic <[hidden email]> wrote:

> Hey guys,
>
> In the past few days I've looked at how to squeeze every bit of
> performance out of BCHM, and particularly out of our LRU impl. What I
> did not like about the current LRU is that a search for an element in
> the queue is not a constant-time operation but requires a full queue
> traversal if we need to find an element [1].
>
> It would be particularly nice to have a hashmap with a constant cost
> for lookup operations - something like LinkedHashMap. LinkedHashMap
> seems to be a good container for LRU, as it provides constant-time
> lookup and also a hook for evicting the oldest entry in the form of the
> removeEldestEntry callback. So why not implement our segment eviction
> policy using a LinkedHashMap [2]?
>

+1 Vladimir, I had a similar idea to implement a BCHM segment entirely
using a single LinkedHashMap - obviously that requires a lot more work
to integrate with the existing BCHM than using a LHM inside the
eviction policy, but it should also be more memory efficient.

> I've seen about a 50% performance increase for smaller caches (100K)
> and even more for larger and more contended caches - about a 75%
> increase. After this change BCHM performance was not that much worse
> than CHM, and it was faster than a synchronized HashMap.
>
> Should we include this impl as FAST_LRU, as I would not want to remove
> the current LRU just yet? We have to prove this one is correct and that
> it does not have any unforeseen issues.
>

I would definitely remove the old LRU: it's much harder to understand
because of the batching, and the JDK already has tests to guarantee
that LinkedHashMap is correct.

One idea for testing I was discussing with Galder on my pull request's
page was to simulate a real cache workload, with get misses triggering
a put and also a small delay, in order to evaluate how good an
eviction policy is at keeping the most used keys in.
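
Such a probe might look roughly like this (a hypothetical sketch - the
class, names, and skew function are illustrative, not the actual
MapStressTest changes):

import java.util.Map;
import java.util.Random;
import java.util.concurrent.TimeUnit;

// Measures how well an eviction policy keeps hot keys in: get misses pay
// a simulated load delay and re-populate the cache; the hit ratio over a
// skewed access pattern is the quality metric.
final class HitRatioProbe {
   static double measure(Map<Integer, Integer> cache, int keySpace, int ops,
                         long missDelayMicros) throws InterruptedException {
      Random rnd = new Random(42);
      int hits = 0;
      for (int i = 0; i < ops; i++) {
         // Skewed access pattern: low keys are much hotter than high keys.
         int key = (int) (keySpace * Math.pow(rnd.nextDouble(), 3));
         if (cache.get(key) != null) {
            hits++;
         } else {
            TimeUnit.MICROSECONDS.sleep(missDelayMicros); // simulated load cost
            cache.put(key, key);
         }
      }
      return (double) hits / ops;
   }
}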

We definitely need to do *some* testing for this - we can't afford to
wait for another community member to come along in a year's time and
prove to us that we're rubbish :)

Dan

Re: [infinispan-dev] Faster LRU

Vladimir Blagojevic
On 11-07-05 4:58 PM, Dan Berindei wrote:
> +1 Vladimir, I had a similar idea to implement a BCHM segment entirely
> using a single LinkedHashMap - obviously that requires a lot more work
> to integrate with the existing BCHM than using a LHM inside the
> eviction policy, but it should also be more memory efficient.
OK, great! I still want to keep BCHM's lock amortization unless someone
can prove that they have a better solution.
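
For context, the lock amortization in question is the
accessQueue/execute() pairing visible in the snippet earlier in the
thread; a generic sketch of the pattern, with hypothetical names:

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantLock;

// The hot path stays lock-free: hits are buffered in a concurrent queue,
// and only when the buffer passes a threshold does one thread take the
// lock and apply the whole batch to the eviction ordering.
final class AmortizedAccessLog<E> {
   private final ConcurrentLinkedQueue<E> buffer = new ConcurrentLinkedQueue<E>();
   private final ReentrantLock lock = new ReentrantLock();
   private final int threshold;

   AmortizedAccessLog(int threshold) {
      this.threshold = threshold;
   }

   void recordAccess(E entry) {
      buffer.add(entry);
      if (buffer.size() >= threshold) {
         drain();
      }
   }

   private void drain() {
      if (lock.tryLock()) { // skip if another thread is already draining
         try {
            E e;
            while ((e = buffer.poll()) != null) {
               // In BCHM this would refresh the entry's position in the
               // LRU structure; omitted in this sketch.
            }
         } finally {
            lock.unlock();
         }
      }
   }
}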

> I would definitely remove the old LRU: it's much harder to understand
> because of the batching, and the JDK already has tests to guarantee
> that LinkedHashMap is correct.
>
> One idea for testing I was discussing with Galder on my pull request's
> page was to simulate a real cache workload, with get misses triggering
> a put and also a small delay, in order to evaluate how good an
> eviction policy is at keeping the most used keys in.
>
> We definitely need to do *some* testing for this - we can't afford to
> wait for another community member to come along in a year's time and
> prove to us that we're rubbish :)
>
I think I've nailed it now, as the results I am getting are correct in
terms of segment sizing and everything else. Performance looks very good
as well.

How about we do the following: I'll issue a pull request for an updated
LRU along with my updated MapStressTest. I will use the LRU name for
this faster version and rename the current LRU to OLD_LRU; if all the
correctness, performance, and other tests we have prove the new LRU
works as it should, we'll drop OLD_LRU from the code altogether prior
to 5.0.Final. As soon as this pull request is integrated, you can add
your eviction correctness code to MapStressTest or make a new test -
whatever suits you better!

Any objections?

Regards,
Vladimir





Re: [infinispan-dev] Faster LRU

Vladimir Blagojevic
On 11-07-05 12:47 PM, Alex Kluge wrote:
> Hi,
>
>   I have a completely in-line limit on the cache entries, built on a clock cache that approximates an LRU cache and is extremely fast (O(1)). It is not a strict LRU, but it chooses a not-recently-used item for removal. I'll provide some more details soon.
>
>   I'm not sure how far along you are, as some of this is in the future tense, and some in the past. But I'll dig up this code - it's been a while since I worked on the cache. :)
>
>                                                                       Alex
Alex,

I've seen references to the clock algorithm in the research literature. If you could adapt it to BCHM, that would be great!

Thanks,
Vladimir


Re: [infinispan-dev] Faster LRU

Dan Berindei

On Tue, Jul 5, 2011 at 7:47 PM, Alex Kluge <[hidden email]> wrote:
> Hi,
>
>   I have a completely in-line limit on the cache entries, built on a clock cache that approximates an LRU cache and is extremely fast (O(1)). It is not a strict LRU, but it chooses a not-recently-used item for removal. I'll provide some more details soon.
>
>   I'm not sure how far along you are, as some of this is in the future tense, and some in the past. But I'll dig up this code - it's been a while since I worked on the cache. :)

This sounds great, Alex. We're not strictly LRU either, because the policy is enforced at the map-segment level, so we don't always evict the oldest element in the map.

Maybe you also have a good cache/eviction policy test? ;-)

Cheers
Dan



Re: [infinispan-dev] Faster LRU

Vladimir Blagojevic
In reply to this post by Vladimir Blagojevic
Hey,

OK, I've issued a pull request - please review it:
https://github.com/infinispan/infinispan/pull/418
I am very happy with the performance increase observed in the new LRU.
It significantly beats a synchronized Map, and performance is very close
to ConcurrentHashMap itself.
I ran the tests overnight for 12 hours (each test run is 20 minutes).

Dan, please have a look at the new MapStressTest and add the
enhancements you discussed with Galder and Sanne. Once you integrate
these changes, let's do another round of testing of both LRU impls, and
if all goes well we can drop the old LRU entirely.

Cheers,
Vladimir



[ec2-user@ip-10-38-110-25 infinispan]$ ps -e | grep java
[ec2-user@ip-10-38-110-25 infinispan]$ cat perf_new_lru.log
<snip Maven build output>

MapStressTest results - average ops/ms and final Size per container, max
capacity, and [numReaders,numWriters,numRemovers] thread mix:

Container                 Capacity  Threads    get  put  remove    Size
BoundedConcurrentHashMap   1048576  [8,2,1]    188  126     128  743454
BoundedConcurrentHashMap   1048576  [32,4,2]    54   33      41  762775
BoundedConcurrentHashMap   1048576  [64,8,3]    26   15      19  758876
BoundedConcurrentHashMap    131072  [8,2,1]    305   99     213   98200
BoundedConcurrentHashMap    131072  [32,4,2]    89   30      55   96159
BoundedConcurrentHashMap    131072  [64,8,3]    47   12      19   96977
BoundedConcurrentHashMap    524288  [8,2,1]    234  112     158  393215
BoundedConcurrentHashMap    524288  [32,4,2]    68   32      47  391512
BoundedConcurrentHashMap    524288  [64,8,3]    35   15      21  390856
CacheImpl                  1048576  [8,2,1]    104   48      58  738030
CacheImpl                  1048576  [32,4,2]    30   13      22  774764
CacheImpl                  1048576  [64,8,3]    16    6      13  776668
CacheImpl                   131072  [8,2,1]    220   44     123   98304
CacheImpl                   131072  [32,4,2]    66   12      39   97832
CacheImpl                   131072  [64,8,3]    34    5      21   97802
CacheImpl                   524288  [8,2,1]    144   44      77  393216
CacheImpl                   524288  [32,4,2]    42   12      28  392794
CacheImpl                   524288  [64,8,3]    22    6      16  387007
ConcurrentHashMap          1048576  [8,2,1]    303  235     240  711798
ConcurrentHashMap          1048576  [32,4,2]    88   68      70  669381
ConcurrentHashMap          1048576  [64,8,3]    45   29      31  758993
ConcurrentHashMap           131072  [8,2,1]    261  198     197  667905
ConcurrentHashMap           131072  [32,4,2]    75   55      56  614548
ConcurrentHashMap           131072  [64,8,3]    37   24      25  789004
ConcurrentHashMap           524288  [8,2,1]    305  231     236  656558
ConcurrentHashMap           524288  [32,4,2]    88   66      68  716811
ConcurrentHashMap           524288  [64,8,3]    44   30      31  757864
SynchronizedMap            1048576  [8,2,1]    171  171     142  679283
SynchronizedMap            1048576  [32,4,2]    52   52      38  806142
SynchronizedMap            1048576  [64,8,3]    26   27      19  860559
SynchronizedMap             131072  [8,2,1]    171  172     151  722315
SynchronizedMap             131072  [32,4,2]    51   54      47  810163
SynchronizedMap             131072  [64,8,3]    25   26      22  848856
SynchronizedMap             524288  [8,2,1]    169  173     151  802294
SynchronizedMap             524288  [32,4,2]    51   53      47  666371
SynchronizedMap             524288  [64,8,3]    26   26      21  829093

Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed:
43,209.511 sec
[INFO] Total time: 720 minutes 23 seconds
[INFO] Finished at: Wed Jul 06 17:13:51 UTC 2011
[ec2-user@ip-10-38-110-25 infinispan]$

Re: [infinispan-dev] Faster LRU

Sanne Grinovero

Awesome!
Are you really sure about eradicating the old implementation?

Cheers,
Sanne


Re: [infinispan-dev] Faster LRU

Vladimir Blagojevic
No :-) We can leave it; it all depends on how many more CR cycles we have before the Final release to field-test the new LRU!

On 11-07-06 2:46 PM, Sanne Grinovero wrote:

> Awesome!
> Are you really sure about eradicating the old implementation?
>
> Cheers,
> Sanne

> Average get ops/ms 52
> Average put ops/ms 52
> Average remove ops/ms 38
> Size = 806142
> Performance for container SynchronizedMap max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 26
> Average put ops/ms 27
> Average remove ops/ms 19
> Size = 860559
> [testng-MapStressTest] Test
> testHashMap(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 10, failed: 0, skipped: 0.
> Performance for container SynchronizedMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 171
> Average put ops/ms 172
> Average remove ops/ms 151
> Size = 722315
> Performance for container SynchronizedMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 51
> Average put ops/ms 54
> Average remove ops/ms 47
> Size = 810163
> Performance for container SynchronizedMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 25
> Average put ops/ms 26
> Average remove ops/ms 22
> Size = 848856
> [testng-MapStressTest] Test
> testHashMap(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 11, failed: 0, skipped: 0.
> Performance for container SynchronizedMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 169
> Average put ops/ms 173
> Average remove ops/ms 151
> Size = 802294
> Performance for container SynchronizedMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 51
> Average put ops/ms 53
> Average remove ops/ms 47
> Size = 666371
> Performance for container SynchronizedMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 26
> Average put ops/ms 26
> Average remove ops/ms 21
> Size = 829093
> [testng-MapStressTest] Test
> testHashMap(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 12, failed: 0, skipped: 0.
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed:
> 43,209.511 sec
>
> Results :
>
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Building Infinispan Tools
> [INFO] task-segment: [test]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] snapshot org.infinispan:infinispan-core:5.0.0-SNAPSHOT: checking
> for updates from jboss-public-repository
> [INFO] [enforcer:enforce {execution: enforce-java}]
> [INFO] [resources:resources {execution: default-resources}]
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 2 resources
> [INFO] [compiler:compile {execution: default-compile}]
> [INFO] Nothing to compile - all classes are up to date
> [INFO] [resources:testResources {execution: default-testResources}]
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] skip non existing resourceDirectory
> /home/ec2-user/infinispan/tools/src/test/resources
> [INFO] [compiler:testCompile {execution: default-testCompile}]
> [INFO] Nothing to compile - all classes are up to date
> [INFO] [surefire:test {execution: default-test}]
> [INFO] Surefire report directory:
> /home/ec2-user/infinispan/tools/target/surefire-reports
>
> -------------------------------------------------------
> T E S T S
> -------------------------------------------------------
> There are no tests to run.
>
> Results :
>
> Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
>
> [INFO]
> ------------------------------------------------------------------------
> [ERROR] BUILD FAILURE
> [INFO]
> ------------------------------------------------------------------------
> [INFO] No tests were executed! (Set -DfailIfNoTests=false to ignore
> this error.)
> [INFO]
> ------------------------------------------------------------------------
> [INFO] For more information, run Maven with the -e switch
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 720 minutes 23 seconds
> [INFO] Finished at: Wed Jul 06 17:13:51 UTC 2011
> [INFO] Final Memory: 206M/2001M
> [INFO]
> ------------------------------------------------------------------------
> [ec2-user@ip-10-38-110-25 infinispan]$
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Reply | Threaded
Open this post in threaded view
|

Re: [infinispan-dev] Faster LRU

Manik Surtani
I think we should leave the old LRU as LRU_OLD and mark it as deprecated.

Vladimir, does this apply to LIRS as well?


On 6 Jul 2011, at 21:08, Vladimir Blagojevic wrote:

No :-) We can leave it; it all depends on how many more CR cycles we have before the Final release to field-test the new LRU!

On 11-07-06 2:46 PM, Sanne Grinovero wrote:

Awesome!
Are you really sure about eradicating the old implementation?

Cheers,
Sanne

On 6 Jul 2011 19:39, "Vladimir Blagojevic" <[hidden email]> wrote:
> Hey,
>
> Ok, I've issued a pull request; please review it:
> https://github.com/infinispan/infinispan/pull/418
> I am very happy with the performance increase observed in the new LRU. It
> significantly beats a synchronized map, and its performance is very close to
> ConcurrentHashMap itself.
> I ran the tests overnight for 12 hours (each test run is 20 minutes).
>
> Dan, please have a look at the new MapStressTest and add the enhancements you
> discussed with Galder and Sanne. Once you integrate these changes, let's do
> another round of testing of both LRU impls, and if all goes well we can drop
> the old LRU entirely.
>
> Cheers,
> Vladimir
>
>
>
> [ec2-user@ip-10-38-110-25 infinispan]$ ps -e | grep java
> [ec2-user@ip-10-38-110-25 infinispan]$ cat perf_new_lru.log
> [INFO] Scanning for projects...
> [INFO] Reactor build order:
> [INFO] Infinispan Common Parent
> [INFO] Infinispan Core
> [INFO] Infinispan Tools
> [INFO] Infinispan Query API
> [INFO] Infinispan Tree API
> [INFO] Parent pom for cachestore modules
> [INFO] Infinispan JDBC CacheStore
> [INFO] Infinispan Lucene Directory Implementation
> [INFO] Infinispan JDBM CacheStore
> [INFO] Infinispan BDBJE CacheStore
> [INFO] Infinispan CloudCacheStore
> [INFO] Parent pom for server modules
> [INFO] Infinispan Server Core Module
> [INFO] Infinispan Server Hotrod Module
> [INFO] Infinispan Client Hotrod Module
> [INFO] Infinispan remote CacheStore
> [INFO] Infinispan CassandraCacheStore
> [INFO] Infinispan Server Memcached Module
> [INFO] Infinispan WebSocket Server
> [INFO] Infinispan REST Server
> [INFO] Infinispan RHQ Plugin
> [INFO] Infinispan Spring Integration
> [INFO] Infinispan GUI Demo
> [INFO] Infinispan EC2 Demo
> [INFO] Infinispan Distributed Executors and Map/Reduce Demo
> [INFO] Infinispan EC2 Demo UI
> [INFO] Infinispan Directory Demo
> [INFO] Infinispan Lucene Directory Demo
> [INFO] Infinispan GridFileSystem WebDAV interface
> [INFO] Infinispan Distribution
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Building Infinispan Common Parent
> [INFO] task-segment: [test]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] snapshot org.jboss.ws:jbossws-api:1.0.0-SNAPSHOT: checking for
> updates from jboss-public-repository
> [INFO] snapshot org.jboss.ws:jbossws-api:1.0.0-SNAPSHOT: checking for
> updates from jboss-public-repository-group
> [INFO] snapshot org.jboss.ws:jbossws-parent:1.0.10-SNAPSHOT: checking
> for updates from jboss-public-repository
> [INFO] snapshot org.jboss.ws:jbossws-parent:1.0.10-SNAPSHOT: checking
> for updates from jboss-public-repository-group
> [INFO] [enforcer:enforce {execution: enforce-java}]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Building Infinispan Core
> [INFO] task-segment: [test]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] [enforcer:enforce {execution: enforce-java}]
> [INFO] [resources:resources {execution: default-resources}]
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 17 resources
> [INFO] [compiler:compile {execution: default-compile}]
> [INFO] Compiling 52 source files to
> /home/ec2-user/infinispan/core/target/classes
> [INFO] Preparing exec:java
> [WARNING] Removing: java from forked lifecycle, to prevent recursive
> invocation.
> [INFO] [enforcer:enforce {execution: enforce-java}]
> [INFO] [exec:java {execution: default}]
> Generating schema file in
> /home/ec2-user/infinispan/core/src/main/resources/schema
> Using file name infinispan-config-5.0.xsd for schema
> Generated schema file successfully
> [INFO] [resources:testResources {execution: default-testResources}]
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 24 resources
> [INFO] [compiler:testCompile {execution: default-testCompile}]
> [INFO] Nothing to compile - all classes are up to date
> [INFO] [surefire:test {execution: default-test}]
> [INFO] Surefire report directory:
> /home/ec2-user/infinispan/core/target/surefire-reports
>
> -------------------------------------------------------
> T E S T S
> -------------------------------------------------------
> Running TestSuite
> Performance for container BoundedConcurrentHashMap max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 188
> Average put ops/ms 126
> Average remove ops/ms 128
> Size = 743454
> Performance for container BoundedConcurrentHashMap max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 54
> Average put ops/ms 33
> Average remove ops/ms 41
> Size = 762775
> Performance for container BoundedConcurrentHashMap max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 26
> Average put ops/ms 15
> Average remove ops/ms 19
> Size = 758876
> [testng-MapStressTest] Test
> testBufferedConcurrentHashMapLRU(org.infinispan.stress.MapStressTest)
> succeeded.
> Test suite progress: tests succeeded: 1, failed: 0, skipped: 0.
> Performance for container BoundedConcurrentHashMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 305
> Average put ops/ms 99
> Average remove ops/ms 213
> Size = 98200
> Performance for container BoundedConcurrentHashMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 89
> Average put ops/ms 30
> Average remove ops/ms 55
> Size = 96159
> Performance for container BoundedConcurrentHashMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 47
> Average put ops/ms 12
> Average remove ops/ms 19
> Size = 96977
> [testng-MapStressTest] Test
> testBufferedConcurrentHashMapLRU(org.infinispan.stress.MapStressTest)
> succeeded.
> Test suite progress: tests succeeded: 2, failed: 0, skipped: 0.
> Performance for container BoundedConcurrentHashMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 234
> Average put ops/ms 112
> Average remove ops/ms 158
> Size = 393215
> Performance for container BoundedConcurrentHashMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 68
> Average put ops/ms 32
> Average remove ops/ms 47
> Size = 391512
> Performance for container BoundedConcurrentHashMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 35
> Average put ops/ms 15
> Average remove ops/ms 21
> Size = 390856
> [testng-MapStressTest] Test
> testBufferedConcurrentHashMapLRU(org.infinispan.stress.MapStressTest)
> succeeded.
> Test suite progress: tests succeeded: 3, failed: 0, skipped: 0.
> Performance for container CacheImpl max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 104
> Average put ops/ms 48
> Average remove ops/ms 58
> Size = 738030
> Performance for container CacheImpl max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 30
> Average put ops/ms 13
> Average remove ops/ms 22
> Size = 774764
> Performance for container CacheImpl max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 16
> Average put ops/ms 6
> Average remove ops/ms 13
> Size = 776668
> [testng-MapStressTest] Test
> testCache(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 4, failed: 0, skipped: 0.
> Performance for container CacheImpl max capacity is
> 131072[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 220
> Average put ops/ms 44
> Average remove ops/ms 123
> Size = 98304
> Performance for container CacheImpl max capacity is
> 131072[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 66
> Average put ops/ms 12
> Average remove ops/ms 39
> Size = 97832
> Performance for container CacheImpl max capacity is
> 131072[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 34
> Average put ops/ms 5
> Average remove ops/ms 21
> Size = 97802
> [testng-MapStressTest] Test
> testCache(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 5, failed: 0, skipped: 0.
> Performance for container CacheImpl max capacity is
> 524288[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 144
> Average put ops/ms 44
> Average remove ops/ms 77
> Size = 393216
> Performance for container CacheImpl max capacity is
> 524288[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 42
> Average put ops/ms 12
> Average remove ops/ms 28
> Size = 392794
> Performance for container CacheImpl max capacity is
> 524288[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 22
> Average put ops/ms 6
> Average remove ops/ms 16
> Size = 387007
> [testng-MapStressTest] Test
> testCache(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 6, failed: 0, skipped: 0.
> Performance for container ConcurrentHashMap max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 303
> Average put ops/ms 235
> Average remove ops/ms 240
> Size = 711798
> Performance for container ConcurrentHashMap max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 88
> Average put ops/ms 68
> Average remove ops/ms 70
> Size = 669381
> Performance for container ConcurrentHashMap max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 45
> Average put ops/ms 29
> Average remove ops/ms 31
> Size = 758993
> [testng-MapStressTest] Test
> testConcurrentHashMap(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 7, failed: 0, skipped: 0.
> Performance for container ConcurrentHashMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 261
> Average put ops/ms 198
> Average remove ops/ms 197
> Size = 667905
> Performance for container ConcurrentHashMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 75
> Average put ops/ms 55
> Average remove ops/ms 56
> Size = 614548
> Performance for container ConcurrentHashMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 37
> Average put ops/ms 24
> Average remove ops/ms 25
> Size = 789004
> [testng-MapStressTest] Test
> testConcurrentHashMap(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 8, failed: 0, skipped: 0.
> Performance for container ConcurrentHashMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 305
> Average put ops/ms 231
> Average remove ops/ms 236
> Size = 656558
> Performance for container ConcurrentHashMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 88
> Average put ops/ms 66
> Average remove ops/ms 68
> Size = 716811
> Performance for container ConcurrentHashMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 44
> Average put ops/ms 30
> Average remove ops/ms 31
> Size = 757864
> [testng-MapStressTest] Test
> testConcurrentHashMap(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 9, failed: 0, skipped: 0.
> Performance for container SynchronizedMap max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 171
> Average put ops/ms 171
> Average remove ops/ms 142
> Size = 679283
> Performance for container SynchronizedMap max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 52
> Average put ops/ms 52
> Average remove ops/ms 38
> Size = 806142
> Performance for container SynchronizedMap max capacity is
> 1048576[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 26
> Average put ops/ms 27
> Average remove ops/ms 19
> Size = 860559
> [testng-MapStressTest] Test
> testHashMap(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 10, failed: 0, skipped: 0.
> Performance for container SynchronizedMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 171
> Average put ops/ms 172
> Average remove ops/ms 151
> Size = 722315
> Performance for container SynchronizedMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 51
> Average put ops/ms 54
> Average remove ops/ms 47
> Size = 810163
> Performance for container SynchronizedMap max capacity is
> 131072[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 25
> Average put ops/ms 26
> Average remove ops/ms 22
> Size = 848856
> [testng-MapStressTest] Test
> testHashMap(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 11, failed: 0, skipped: 0.
> Performance for container SynchronizedMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[8,2,1]
> Average get ops/ms 169
> Average put ops/ms 173
> Average remove ops/ms 151
> Size = 802294
> Performance for container SynchronizedMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[32,4,2]
> Average get ops/ms 51
> Average put ops/ms 53
> Average remove ops/ms 47
> Size = 666371
> Performance for container SynchronizedMap max capacity is
> 524288[numReaders,numWriters,numRemovers]=[64,8,3]
> Average get ops/ms 26
> Average put ops/ms 26
> Average remove ops/ms 21
> Size = 829093
> [testng-MapStressTest] Test
> testHashMap(org.infinispan.stress.MapStressTest) succeeded.
> Test suite progress: tests succeeded: 12, failed: 0, skipped: 0.
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed:
> 43,209.511 sec
>
> Results :
>
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Building Infinispan Tools
> [INFO] task-segment: [test]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] snapshot org.infinispan:infinispan-core:5.0.0-SNAPSHOT: checking
> for updates from jboss-public-repository
> [INFO] [enforcer:enforce {execution: enforce-java}]
> [INFO] [resources:resources {execution: default-resources}]
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 2 resources
> [INFO] [compiler:compile {execution: default-compile}]
> [INFO] Nothing to compile - all classes are up to date
> [INFO] [resources:testResources {execution: default-testResources}]
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] skip non existing resourceDirectory
> /home/ec2-user/infinispan/tools/src/test/resources
> [INFO] [compiler:testCompile {execution: default-testCompile}]
> [INFO] Nothing to compile - all classes are up to date
> [INFO] [surefire:test {execution: default-test}]
> [INFO] Surefire report directory:
> /home/ec2-user/infinispan/tools/target/surefire-reports
>
> -------------------------------------------------------
> T E S T S
> -------------------------------------------------------
> There are no tests to run.
>
> Results :
>
> Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
>
> [INFO]
> ------------------------------------------------------------------------
> [ERROR] BUILD FAILURE
> [INFO]
> ------------------------------------------------------------------------
> [INFO] No tests were executed! (Set -DfailIfNoTests=false to ignore
> this error.)
> [INFO]
> ------------------------------------------------------------------------
> [INFO] For more information, run Maven with the -e switch
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 720 minutes 23 seconds
> [INFO] Finished at: Wed Jul 06 17:13:51 UTC 2011
> [INFO] Final Memory: 206M/2001M
> [INFO]
> ------------------------------------------------------------------------
> [ec2-user@ip-10-38-110-25 infinispan]$


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Reply | Threaded
Open this post in threaded view
|

Re: [infinispan-dev] Faster LRU

Vladimir Blagojevic
On 11-07-07 6:21 AM, Manik Surtani wrote:
> I think we leave the old LRU as LRU_OLD and mark it as deprecated.
>
> Vladimir, does this apply to LIRS as well?

No, not LIRS; this was an LRU optimization overlooked from the beginning :-)
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
Reply | Threaded
Open this post in threaded view
|

Re: [infinispan-dev] Faster LRU

Dan Berindei
In reply to this post by Manik Surtani
On Thu, Jul 7, 2011 at 1:21 PM, Manik Surtani <[hidden email]> wrote:
> I think we leave the old LRU as LRU_OLD and mark it as deprecated.

I for one am against keeping the old policy around, as the new LRU
policy implements exactly the same algorithm, only in O(1) instead of
O(n).
It would have made sense to keep it if there were a difference in the
algorithm, but Vladimir even kept the batching from the old LRU
policy.

We should test it as thoroughly as we can, and to that end I've been
working on some additions to MapStressTest that try to measure how
good the eviction algorithm's choices are. I think they have already
helped me find a problem with the new LRU.
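
A minimal sketch of the kind of measurement I mean (the names and
counters below are made up for illustration, not the actual
MapStressTest code):

   private final AtomicLong hits = new AtomicLong();    // java.util.concurrent.atomic
   private final AtomicLong misses = new AtomicLong();

   void readOne(ConcurrentMap<Integer, Integer> map, Random rnd, int numKeys) {
      // the key space is larger than the map's capacity, so every miss means
      // the policy evicted a key that was needed again
      if (map.get(rnd.nextInt(numKeys)) != null) {
         hits.incrementAndGet();
      } else {
         misses.incrementAndGet();
      }
   }

   // hit ratio = hits / (hits + misses); a better policy keeps the hotter keys resident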

I've updated pull #414
(https://github.com/infinispan/infinispan/pull/414) to work on top of
Vladimir's pull request, in case you want to have a look. You might
want to adjust the number of keys and/or disable some of the options
in the data providers before running it, though; it takes a lot of
time to run (and it also needs -Xmx2000m).
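
For reference, those knobs live in TestNG data providers, roughly of
this shape (the provider name and rows below are invented for
illustration; the real one has more combinations):

   @DataProvider(name = "mapParams")
   public Object[][] mapParams() {
      return new Object[][] {
         // numKeys, capacity, concurrencyLevel -- comment out rows to skip them
         { 2000000, 500000, 32 },
         { 2000000, 500000, 128 },
      };
   }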

I've left it running overnight on the test cluster (cluster01 and
cluster10), I'll send an update with the results in the morning.

Cheers
Dan
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
Reply | Threaded
Open this post in threaded view
|

Re: [infinispan-dev] Faster LRU

Dan Berindei
On Fri, Jul 8, 2011 at 2:53 AM, Dan Berindei <[hidden email]> wrote:

> I've updated pull #414
> (https://github.com/infinispan/infinispan/pull/414) to work on top of
> Vladimir's pull request, in case you want to have a look. You might
> want to adjust the number of keys and/or disable some of the options
> in the data providers before running it though, it takes a lot of time
> to run (and it also needs -Xmx2000m).
>
> I've left it running overnight on the test cluster (cluster01 and
> cluster10), I'll send an update with the results in the morning.
>

Morning update:
Ok, apparently -Xmx2000m wasn't enough for 2 million keys, so I had to
start the tests again in the morning, running each scenario on a
different machine.

I haven't run the tests with concurrency level 512, as the total
number of threads is only 100, but I suspect the old LRU still won't
catch up with the new LRU's performance.

It's interesting that in the writeOnMiss test the new LRU's performance
dropped when I increased the concurrency level from 32 to 128. I think
it might be because the eviction.thresholdExpired() check in
BCHM.attemptEviction() is done without a lock, so it can return
true simultaneously for multiple threads - which will then all queue up
on the segment lock and attempt eviction one after another.
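
In code shape, the suspect is roughly this (a hedged sketch of my
reading, not the exact BCHM source):

   void attemptEviction() {
      if (eviction.thresholdExpired()) {  // checked with no lock held: several
         lock();                          // threads can see 'true' for one batch...
         try {
            eviction.execute();           // ...and each runs execute() in turn
         } finally {
            unlock();
         }
      }
   }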

Another strange pattern is that none of the eviction policies respects
the capacity parameter exactly. LIRS rounds up the capacity to the next
power of 2, and LRU/LRUOld do the same rounding and then multiply by
0.75.
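
Back-of-the-envelope (a sketch of the rounding as I read it, not the
exact BCHM code):

   int requested = 500000;
   int pow2 = Integer.highestOneBit(requested - 1) << 1;  // 524288, next power of 2
   int lirsEntries = pow2;                                // LIRS settles around 524288
   int lruEntries = (int) (pow2 * 0.75f);                 // LRU/LRUOld settle at 393216

which also matches the Size ~ 393216 figures for the 524288-capacity
runs in Vladimir's log.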

I'll report again once I've fixed these and updated the reporting - I
think the total number of misses might be more relevant than the
standard deviation of the keys at the end.

Cheers
Dan

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[attachment: mapStressTestResults_20min.txt (7K)]
Reply | Threaded
Open this post in threaded view
|

Re: [infinispan-dev] Faster LRU

Vladimir Blagojevic
In reply to this post by Dan Berindei
Dan,

Great work! Why not update my forked tree
https://github.com/vblagoje/infinispan/tree/t_bchm with your work, as
explained in the section "Multi-step coordination between developers using
forked repositories" of
https://docs.jboss.org/author/display/ISPN/Contributing+-+Source+Control
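
Something like this, I think (a sketch only - the remote/branch names
match my fork, and you'd need push access to it):

   git remote add vblagoje https://github.com/vblagoje/infinispan.git
   git fetch vblagoje
   git checkout -b t_bchm vblagoje/t_bchm
   # ...apply and commit your MapStressTest changes...
   git push vblagoje t_bchm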

Then we can merge this into master on Monday! Both your work and my
LRU change will be under the same pull request,
https://github.com/infinispan/infinispan/pull/418. You run the tests in
the cluster lab and I'll repeat them on EC2.

Vladimir


On 11-07-07 7:53 PM, Dan Berindei wrote:

> [Dan's message quoted in full snipped; see above]

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
Reply | Threaded
Open this post in threaded view
|

Re: [infinispan-dev] Faster LRU

Dan Berindei
In reply to this post by Dan Berindei
On Fri, Jul 8, 2011 at 12:44 PM, Dan Berindei <[hidden email]> wrote:
>
> It's interesting that in the writeOnMiss test the new LRU performance
> dropped when I increased the concurrency level from 32 to 128. I think
> it might be because the eviction.thresholdExpired() check in
> BCHM.attemptEviction() is done without a lock and so it could return
> true simultaneously for multiple threads - which will all proceed to
> wait on the segment lock and attempt eviction at the same time.
>

I added another batch-threshold check while holding the lock, and the
performance anomaly disappeared.
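
The shape of the fix (a hedged sketch of the double-check, not the
exact patch):

   void attemptEviction() {
      if (eviction.thresholdExpired()) {        // cheap check, no lock held
         lock();
         try {
            if (eviction.thresholdExpired()) {  // re-check under the segment lock
               eviction.execute();              // only one thread drains each batch
            }
         } finally {
            unlock();
         }
      }
   }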

> Another strange pattern is that neither eviction policy respects the
> capacity parameter exactly. LIRS rounds up the capacity to the next
> power of 2, and LRU/LRUOld do the same rounding and then multiply by
> 0.75.
>

I updated BCHM to pass the initial capacity to the eviction policy,
and now all the policies keep the same number of entries.

> I'll report again once I fixed these and once I update the reporting -
> I think the total number of misses might be more relevant than the
> standard deviation of the keys at the end.
>

I got the tests running on the test cluster again, and the new LRU is
clearly better than the old LRU in every respect. In fact, the only
"problem" in the test results is that in the new "writeOnMiss"
scenario the hit ratio of the new LRU is much better than that of all
the other policies. There is probably a mistake somewhere in the test;
if it were a random effect, I don't think it would have been so visible
in a 20-minute run:

MapStressTest configuration: capacity 500000, test running time 1200 seconds
Container BCHM:LIRS     Ops/s  11155.49  HitRatio  96.54  Size  499968  stdDev  193558.30
Container BCHM:LRU      Ops/s  31292.06  HitRatio  97.84  Size  500000  stdDev  193168.07
Container BCHM:LRU_OLD  Ops/s    116.89  HitRatio  76.11  Size  500032  stdDev  197974.87

Testing write on miss performance with capacity 500000, keys 2000000, concurrency level 32, threads 100
Container BCHM:LIRS     Ops/s   1684.01  HitRatio  63.13  Size  499968  stdDev  338637.40
Container BCHM:LRU      Ops/s   4884.57  HitRatio  84.47  Size  500000  stdDev  353336.31
Container BCHM:LRU_OLD  Ops/s     50.69  HitRatio  41.34  Size  500032  stdDev  361239.68

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[attachment: mapStressTestResults_20min.txt (15K)]
Reply | Threaded
Open this post in threaded view
|

Re: [infinispan-dev] Faster LRU

Sanne Grinovero-3
Amazing, congratulations Vladimir & Dan!
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
Reply | Threaded
Open this post in threaded view
|

Re: [infinispan-dev] Faster LRU

Manik Surtani
Yeah, nice work guys.  :)

On 11 Jul 2011, at 17:10, Sanne Grinovero wrote:

> Amazing, congratulations Vladimir & Dan!

--
Manik Surtani
[hidden email]
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org



_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
Reply | Threaded
Open this post in threaded view
|

Re: [infinispan-dev] Faster LRU

Vladimir Blagojevic
In reply to this post by Dan Berindei
On 11-07-08 5:44 AM, Dan Berindei wrote:

> [...]
> It's interesting that in the writeOnMiss test the new LRU performance
> dropped when I increased the concurrency level from 32 to 128. I think
> it might be because the eviction.thresholdExpired() check in
> BCHM.attemptEviction() is done without a lock and so it could return
> true simultaneously for multiple threads - which will all proceed to
> wait on the segment lock and attempt eviction at the same time.
>

I am not sure about this, Dan. I looked at this code for hours! I do not
see how two threads can call eviction#execute() concurrently.



_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
Reply | Threaded
Open this post in threaded view
|

Re: [infinispan-dev] Faster LRU

Dan Berindei
On Mon, Jul 11, 2011 at 7:29 PM, Vladimir Blagojevic
<[hidden email]> wrote:

> On 11-07-08 5:44 AM, Dan Berindei wrote:
>> [snip - my earlier explanation of the suspected race in attemptEviction]
>
> I am not sure about this, Dan. I looked at this code for hours! I do not see
> how two threads can call eviction#execute() concurrently.
>

Sorry, I wasn't very clear: two threads can enter attemptEviction
simultaneously. One will get the lock and perform eviction; the other
will also try to get the lock, and when it gets it, it will proceed to
call eviction#execute() again.
So eviction#execute() is not called concurrently, but it is called
twice when it should have been called only once, and I think this
dilutes the advantages of batching.
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev