[infinispan-dev] Distributed Counter Discussion

[infinispan-dev] Distributed Counter Discussion

Pedro Ruivo-2
Hi everybody,

Discussion about distributed counters.

== Public API ==

interface Counter

String getName() //counter name
long get()       //current value. may return a stale value due to
concurrent operations on other nodes.
void increment() //async or sync increment. default add(1)
void decrement() //async or sync decrement. default add(-1)
void add(long)   //async or sync add.
void reset()     //resets to the initial value

Note: I tried to make the interface as simple as possible while supporting
sync and async operations. To avoid any confusion, I consider an async
operation as happening at some point in the future, i.e. it eventually
increments/decrements.
A sync operation takes effect during the method execution.

interface AtomicCounter extends Counter

long addAndGet(long)   //adds and returns the new value. sync operation
long incrementAndGet() //increments and returns the new value. sync
operation. default addAndGet(1)
long decrementAndGet() //decrements and returns the new value. sync
operation. default addAndGet(-1)

interface AdvancedCounter extends Counter

long getMin/MaxThreshold() //returns the min and max threshold values
void add/removeListener()  //adds a listener that is invoked when the
value changes. Can be extended to notify when the counter is reset and
when a threshold is reached.

Note: should this interface be split?
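
As a minimal Java sketch, the proposed interfaces might look like this
(illustrative only, not a committed design; the CounterListener type is
assumed here, not defined anywhere in the thread):

public interface Counter {
    String getName();
    long get();            // may be stale under concurrent remote updates
    void increment();      // default: add(1)
    void decrement();      // default: add(-1)
    void add(long delta);
    void reset();          // back to the initial value
}

public interface AtomicCounter extends Counter {
    long addAndGet(long delta);   // sync: adds and returns the new value
    default long incrementAndGet() { return addAndGet(1); }
    default long decrementAndGet() { return addAndGet(-1); }
}

public interface AdvancedCounter extends Counter {
    long getMinThreshold();
    long getMaxThreshold();
    void addListener(CounterListener listener);
    void removeListener(CounterListener listener);
}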

== Details ==

This is what I have in mind. Two counter managers: one based on JGroups
counter and another one based on Infinispan cache.
The first one creates AtomicCounters and it fits perfectly. All
counters are created with an initial value (zero by default).
The second generates counters with all the options available. It can mix
sync/async operations and all counters will live in the same cache. The
cache will be configured by us and it would be an internal cache. This
will use all the features available in the cache.

Configuration-wise, I'm thinking about 2 parameters: number of backups
and timeout (for sync operations).

So, comment below and let me know about alternatives, improvements, or if
I missed something.

ps. I also considered implementing a counter based on JGroups-raft but I
believe it is overkill.
ps2. sorry for the long email :( I tried to be as short as possible.

Cheers,
Pedro

Re: [infinispan-dev] Distributed Counter Discussion

Sanne Grinovero-3
Great starting point!
Some comments inline:

On 14 March 2016 at 19:14, Pedro Ruivo <[hidden email]> wrote:
>
> interface Counter
As a user, how do I get a Counter instance? From the CacheContainer interface?

Will they have their own configuration section in the configuration file?

>
> String getName() //counter name
> long get()       //current value. may return a stale value due to
> concurrent operations on other nodes.

This is what puzzles me the most. I'm not sure if the feature is
actually useful, unless we can clearly state how far outdated the
value could be.

I think a slightly more formal definition would be in order. For
example I think it would be acceptable to say that this will return a
value from the range of values the primary owner of this counter was
holding in the timeframe between the method being invoked and the
time the value is returned.

Could it optionally be integrated with Total Order? Transactions?

> interface AdvancedCounter extends Counter
>
> long getMin/MaxThreshold() //returns the min and max threshold value

"threshold" ??

> void add/removeListener()  //adds a listener that is invoked when the
> value change. Can be extended to notify when it is "reseted" and when
> the threshold is reached.
>
> Note: should this interface be splitted?

I'd prefer a single interface, with reduced redundancy.
For example, is there really a benefit in having a "void increment()"
and also a "long addAndGet()"? [Besides the fact that only the first
one can benefit from an async option]

Besides, I am no longer sure that it's a good thing that methods in
Infinispan can be async vs sync depending on configuration switches;
I'd rather make it explicit in the signature and simplify the
configuration by removing such a flag.

Making the methods which are async-capable look "explicitly async"
should also allow us to add completable futures & similar.

>
> == Details ==
>
> This is what I have in mind. Two counter managers: one based on JGroups
> counter and another one based on Infinispan cache.
> The first one creates AtomicCounters and it fits perfectly. All
> counters are created with an initial value (zero by default).
> The second generates counters with all the options available. It can mix
> sync/async operations and all counters will live in the same cache. The
> cache will be configured by us and it would be an internal cache. This
> will use all the features available in the cache.

Rather than a split organized on the backing implementation details,
I'd prefer to see a split based on purpose of the counter.

For example JGroups has different styles of counters: one might need
an atomic increment which is monotonically increasing across
the whole cluster - from the point of view of an omniscient observer
- but in many cases we just need the generation of a guaranteed
monotonically unique number: that allows, for example, each node to
pre-allocate a range of values and pre-fetch a new block while it is
running out of values.

For some systems this is not acceptable because a failing server might
result in some of its allocated numbers never being used, creating
gaps; for others it's not acceptable that the figures
returned by the "monotonic counter" are not monotonic across each
other when comparing results from different cluster nodes, but for
many use cases that's acceptable. My point being that we need to be
able to choose between these possible semantics.

As an Infinispan consumer, being able to express such details is
essential, while having to choose between "backed by JGroups counters"
or not is irrelevant and actually makes it harder.
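
A quick sketch of the block pre-allocation idea Sanne describes (purely
illustrative; ClusterCounter stands in for whatever cluster-wide atomic
counter backs it, and prefetching a new block before exhaustion is left
out for brevity):

final class BlockSequence {
    private static final long BLOCK_SIZE = 1024;
    private final ClusterCounter cluster; // hypothetical cluster-wide counter
    private long next, limit;             // current local block: [next, limit)

    BlockSequence(ClusterCounter cluster) { this.cluster = cluster; }

    // Unique across the cluster, but not globally monotonic; a crashed
    // node leaves the unused remainder of its block as a gap.
    synchronized long next() {
        if (next == limit) { // block exhausted: reserve a new one atomically
            limit = cluster.addAndGet(BLOCK_SIZE);
            next = limit - BLOCK_SIZE;
        }
        return next++;
    }

    interface ClusterCounter {
        long addAndGet(long delta); // atomic cluster-wide add
    }
}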

In terms of Infinispan integration, the most important point I'd like
to see is persistence: i.e. make sure you can store them in a
CacheStore.
How we do that efficiently will also affect design; for example I was
expecting to see a Counter as something which is stored in a Cache, so
inheriting details such as configured CacheStore(s) and number of
owners.

Thanks for starting this!

Sanne


Re: [infinispan-dev] Distributed Counter Discussion

Randall Hauch
In reply to this post by Pedro Ruivo-2
What are the requirements? What are the distributed counters for? Is the counter to be monotonically increasing? Can there be any missed values? Does the counter need to increment and decrement? What is the *smallest* API you need initially?

There are two choices when implementing a distributed counter: use central coordination (like JGroups counters), or use independent counters on separate machines that will eventually converge to the correct value (CRDTs). Coordinated counters are expensive and therefore slow, and can suffer during network partitions or cluster failures. For example, what happens during a split brain? OTOH, CRDTs are decentralized and therefore very fast, easily merged, and fault tolerant; they’re excellent for counting things that occur independently and so may be better suited for monitoring/metrics/accumulators/etc. The two have very different behaviors under ideal and failure scenarios, different performance and consistency guarantees, and are useful in different scenarios. Make sure you choose accordingly.

For information about CRDTs, make sure you’ve read the CRDT paper by Shapiro: http://hal.upmc.fr/inria-00555588/document 
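
To make the convergence idea concrete, here is a minimal state-based
PN-counter sketch (the standard CRDT construction from the Shapiro paper,
not any Infinispan API): each node only ever bumps its own tallies, and
replicas merge by taking per-node maxima, so merges are commutative,
associative and idempotent.

import java.util.HashMap;
import java.util.Map;

final class PNCounter {
    private final String nodeId;
    private final Map<String, Long> incs = new HashMap<>(); // per-node increments
    private final Map<String, Long> decs = new HashMap<>(); // per-node decrements

    PNCounter(String nodeId) { this.nodeId = nodeId; }

    void increment() { incs.merge(nodeId, 1L, Long::sum); }
    void decrement() { decs.merge(nodeId, 1L, Long::sum); }

    // Value = all increments minus all decrements, across all nodes.
    long value() {
        long up = incs.values().stream().mapToLong(Long::longValue).sum();
        long down = decs.values().stream().mapToLong(Long::longValue).sum();
        return up - down;
    }

    // Merge another replica's state by taking the per-node maximum.
    void merge(PNCounter other) {
        other.incs.forEach((n, v) -> incs.merge(n, v, Math::max));
        other.decs.forEach((n, v) -> decs.merge(n, v, Math::max));
    }
}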

Randall


Re: [infinispan-dev] Distributed Counter Discussion

Bela Ban
In reply to this post by Pedro Ruivo-2
What are the requirements? To offer a cluster wide counter from an
Infinispan cache? Or is this going to be used internally, by Infinispan?

I've implemented 2 counters, in JGroups and in jgroups-raft. Here are
their properties:

#1 JGroups counter:
- Cluster wide _named_ counters with atomicity guarantees
- A counter is always managed by the current coordinator
- In a cluster split, you can end up with different counter instances
(one per coordinator)
- When a cluster partition heals, the current coordinator's counter
'wins', i.e. values from the other counters are lost, even if they were
updated last

#2 jgroups-raft counter:
- Counter is managed by the leader
- A counter will not be accessible when there's no leader (no majority):
this is CP in CAP
- Counters cannot diverge in split brain scenarios; there's always at
most 1 counter (or 0)

If you can live with multiple counters being present in light of network
partitions, then pick #1 as this is faster than #2 (which requires disk
writes for updates), and is always available, whereas #2 can be
unavailable.

I'm thinking about changing the way #1 works. Currently, it uses the
next-in-line to back up a counter value on each update. I'd like to
remove this and replace it with a reconciliation round when the
coordinator changes: the new coordinator asks all members for their
current counter values. This requires a bit more work on a view change,
but less in steady state. Perhaps I can make it configurable.
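
A rough sketch of such a reconciliation round, assuming the new
coordinator simply adopts the highest reported value - one plausible merge
rule, since the actual policy isn't specified here:

import java.util.List;

final class CounterReconciliation {
    // Hypothetical merge on coordinator change: the new coordinator asks
    // all members for their current value of a counter and adopts the
    // maximum. Plausible for increment-heavy counters; other policies
    // (e.g. latest update wins) would need extra metadata.
    static long reconcile(List<Long> reportedValues) {
        return reportedValues.stream()
                             .mapToLong(Long::longValue)
                             .max()
                             .orElse(0L); // no reports: fall back to the initial value
    }
}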



--
Bela Ban, JGroups lead (http://www.jgroups.org)


Re: [infinispan-dev] Distributed Counter Discussion

Bela Ban
In reply to this post by Randall Hauch


On 14/03/16 23:17, Randall Hauch wrote:

> What are the requirements? What are the distributed counters for? Is the
> counter to be monotonically increasing? Can there be any missed values?
> Does the counter need to increment and decrement? What is the *smallest*
> API you need initially?
>
> There are two choices when implementing a distributed counter: use
> central coordination (like JGroups counters), or use independent
> counters on separate machines that will eventually converge to the
> correct value (CRDTs). Coordinated counters are expensive and therefore
> slow, and can suffer during network partitions or cluster failures.

The question is what do you get for this? If your app can't afford
duplicate counter values during a network partition, then - yes - there
is some overhead. CRDTs won't be able to guarantee this property. OTOH
CRDTs are fast when you only care about some sort of eventual
consistency, and don't need 'hard' consistency.


> For example, what happens during a split brain? OTOH, CRDTs are
> decentralized so therefore are very fast, easily merged, and fault
> tolerant;

Yes, CRDTs are AP whereas jgroups-raft counters are CP. JGroups
counters, otoh, are CRAP (consistent, reliable, available and
partition-aware).

Take the last sentence with a grain of salt :-)


--
Bela Ban, JGroups lead (http://www.jgroups.org)


Re: [infinispan-dev] Distributed Counter Discussion

Tristan Tarrant-2
In reply to this post by Bela Ban
On 15/03/2016 08:00, Bela Ban wrote:
> What are the requirements? To offer a cluster wide counter from an
> Infinispan cache? Or is this going to be used internally, by Infinispan?

The starting point is a requirement from apiman as a way to manage
throttling/quotas. Currently their requirement is not particularly
strict (i.e. it need not be synchronous), but they would like the ability
to reset a counter and possibly to receive an event when a certain
threshold is reached.

Obviously we'd like to cater to more use-cases, so the design should
take that into account.

Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

Re: [infinispan-dev] Distributed Counter Discussion

Radim Vansa
In reply to this post by Sanne Grinovero-3
I second Sanne's opinion about the sync/async API: make the API express
the synchronicity directly. I would even propose that for the would-be
synchronous methods there is only the CompletableFuture<Void> or
CompletableFuture<Long> variant; we are already reaching the concurrency
limits of applications that let threads block, so let's give users a hint
that they should use such a notification API. If anyone prefers the sync
variant, they can always call get().

Let's settle on some nomenclature, too, because these JGroups-counters
and RAFT-counters don't have commonly known properties. It almost seems
that the term *counter* is so overloaded that we shouldn't use it at all.

There is a widely known j.u.c.AtomicLong, so if we want to implement
that (strict JMM-like properties), let's call it AtomicLong. It does not
have to follow the j.u.c.AtomicLong API, but it's definitely CP.

Something providing unique values should be called Sequence. This will
probably be the one batching ranges (therefore possibly with gaps). By
default non-monotonic, but monotonicity could be a ctor arg (in a
similar way as fairness is set for the j.u.concurrent.* classes).

As for CRDTs, I can't imagine how this could be easily built on top of
the current 'passive' cache (without any syncing). But as for names, *CRDT
counter* is explanatory enough.

If someone needs quotas, let's create Quota according to their needs
(soft and hard thresholds, fluent operation below the soft one, some
jitter when above it). It seems that this will be closest to Sequence,
by reserving some ranges. Don't let them shoot themselves in the foot
with some liger counter.

And I hope you'll build these on top of the functional API, with at most
one RPC per operation.

Radim
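
Radim's nomenclature, sketched as async-first Java interfaces (names and
shapes are illustrative, not a committed API; a caller who really wants to
block can still join() the future):

import java.util.concurrent.CompletableFuture;

// CP, j.u.c.AtomicLong-like counter: the CompletableFuture variant is
// the only one offered, nudging callers toward non-blocking use.
interface ClusteredAtomicLong {
    CompletableFuture<Long> addAndGet(long delta);
    CompletableFuture<Long> get();
    CompletableFuture<Void> reset();
}

// Unique-value generator, possibly batching ranges and therefore gappy;
// monotonicity would be a construction-time option.
interface Sequence {
    CompletableFuture<Long> next();
}

// Non-blocking use:
//   atomicLong.addAndGet(1).thenAccept(v -> System.out.println("now " + v));
// Blocking use, if unavoidable:
//   long v = atomicLong.addAndGet(1).join();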



--
Radim Vansa <[hidden email]>
JBoss Performance Team


Re: [infinispan-dev] Distributed Counter Discussion

Gustavo Fernandes-2
In reply to this post by Randall Hauch

> For information about CRDTs, make sure you’ve read the CRDT paper by
> Shapiro: http://hal.upmc.fr/inria-00555588/document


On that topic, I recently came across the paper "Scalable Eventually
Consistent Counters over Unreliable Networks" [1], which highlights some
of the scalability problems with CRDT counters at large scale: namely the
propagation and maintenance of a version vector that keeps growing over
time and requires broadcasts across all members. The paper then describes
how some implementors of CRDT counters tackled this limitation by using
"server side only" counters, and the inherent issues with that approach.
Finally it proposes an alternative called "Handoff counters" to overcome
the scalability issue.

[1] http://arxiv.org/pdf/1307.3207v1.pdf


Re: [infinispan-dev] Distributed Counter Discussion

Pedro Ruivo-2
In reply to this post by Sanne Grinovero-3
Comments inline

On 03/14/2016 09:27 PM, Sanne Grinovero wrote:
> On 14 March 2016 at 19:14, Pedro Ruivo <[hidden email]> wrote:
>
> As a user, how do I get a Counter instance? From a the CacheContainer interface?

You can get a Counter from a CounterManager. The CounterManager is
associated with the CacheManager.

I have two options to get the CounterManager:

CacheManager.as(CounterManager.class), as discussed during the Rome
meeting (2015). This will change the API in core but it allows us to
introduce the other building blocks (locks, sequences, latches, ...).

The other alternative I have in mind is CounterManager.get(CacheManager).
The only advantage I see is that there is no need to change the API in core.

guys... comment on which alternative you prefer...
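
The two options side by side, as a hypothetical usage sketch (neither
CounterManager nor the as() hook exists yet; getCounter() is likewise
assumed):

// Option 1: generic capability accessor on the CacheManager, as discussed
// in Rome; the same hook would serve locks, sequences, latches, ...
CounterManager counters = cacheManager.as(CounterManager.class);

// Option 2: static factory, leaving the core API untouched:
CounterManager counters2 = CounterManager.get(cacheManager);

Counter hits = counters.getCounter("hits"); // assumed lookup method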

>
> Will they have their own configuration section in the configuration file?

I have no idea here... probably something like the cache store
configurations would work.

>> long get()       //current value. may return a stale value due to
>> concurrent operations on other nodes.
>
> This is what puzzles me the most. I'm not sure if the feature is
> actually useful, unless we can clearly state how far outdated the
> value could be.
>
> I think a slightly more formal definition would be in order. For
> example I think it would be acceptable to say that this will return a
> value from the range of values the primary owner of this counter was
> holding in the timeframe between the method being invoked and the
> time the value is returned.
>
> Could it optionally be integrated with Total Order? Transactions?

That's why we are having this discussion. If someone heard some
requirements from a user, it would be good to write them here.

IMO, at the very least, get() must include all the operations finished on
the local node.

>>
>> long getMin/MaxThreshold() //returns the min and max threshold value
>
> "threshold" ??

See Tristan's reply.

>
> I'd prefer a single interface, with reduced redundancy.
> For example, is there really a benefit in having a "void increment()"
> and also a "long addAndGet()"? [Besides the fact that only the first
> one can benefit from an async option]
>
> Besides, I am no longer sure that it's a good thing that methods in
> Infinispan can be async vs sync depending on configuration switches;
> I'd rather make it explicit in the signature and simplify the
> configuration by removing such a flag.
>
> Making the methods which are async-capable look "explicitly async"
> should also allow us to add completable futures & similar.

So, let's revamp the API. We can drop the AtomicCounter and create a
simple Counter interface like:

String getName()
CompletableFuture<Long> addAndGet(long)
long get()
void reset()

It is the simplest we can get and it covers all the semantics we need so
far. If something does not fit, we can start improving it.
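
As a compilable Java sketch (the long parameter on addAndGet is assumed
here, since increment/decrement are folded into it):

import java.util.concurrent.CompletableFuture;

public interface Counter {
    String getName();
    CompletableFuture<Long> addAndGet(long delta); // negative delta decrements
    long get();   // local, possibly stale read
    void reset(); // back to the initial value
}

// Async use:  counter.addAndGet(1).thenAccept(v -> ...);
// Sync use:   long v = counter.addAndGet(1).join();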

>
> Rather than a split organized on the backing implementation details,
> I'd prefer to see a split based on purpose of the counter.

So, what about multiple CounterManagers, where each implementation
specifies what semantics the Counters it creates ensure?

>
> For example JGroups has different styles of counters: one might need
> an atomic increment which is monotonically increasing across
> the whole cluster - from the point of view of an omniscient observer
> - but in many cases we just need the generation of a guaranteed
> monotonically unique number: that allows, for example, each node to
> pre-allocate a range of values and pre-fetch a new block while it is
> running out of values.

You lost me on the pre-allocated range of values. I think you are talking
about sequences (we also discussed those in Rome) but I don't think they
should be the primary objective of a counter.

> In terms of Infinispan integration, the most important point I'd like
> to see is persistence: i.e. make sure you can store them in a
> CacheStore.
> How we do that efficiently will also affect design; for example I was
> expecting to see a Counter as something which is stored in a Cache, so
> inheriting details such as configured CacheStore(s) and number of
> owners.

I'm OK with supporting persistence but I don't see any advantage. Does
anyone have a use case for it?

Re: [infinispan-dev] Distributed Counter Discussion

Radim Vansa
On 03/15/2016 12:58 PM, Pedro Ruivo wrote:
>
> On 03/14/2016 09:27 PM, Sanne Grinovero wrote:
>> Rather than a split organized on the backing implementation details,
>> I'd prefer to see a split based on purpose of the counter.
> So, what about multiple CounterManager where in each implementation we
> specified what semantic the Counter created ensures?
>

Please, don't do that. Different implementation = does the same thing
but has different internals. If it has different semantics (does
something different), use a different interface. Otherwise users will
just swap in the 'more performant' implementation without thinking about
the consequences.

My 2c

R.

--
Radim Vansa <[hidden email]>
JBoss Performance Team


Re: [infinispan-dev] Distributed Counter Discussion

Randall Hauch
In reply to this post by Radim Vansa

> On Mar 15, 2016, at 3:26 AM, Radim Vansa <[hidden email]> wrote:
>
> Let's settle on some nomenclature, too, because these JGroups-counters
> and RAFT-counters don't have commonly known properties. It almost seems
> that the term *counter* is so overloaded that we shouldn't use it at all.
>
> There is a widely known j.u.c.AtomicLong, so if we want to implement
> that (strict JMM-like properties), let's call it AtomicLong. Does not
> have to follow j.u.c.AtomicLong API, but it's definitely CP.
>
> Something providing unique values should be called Sequence. This will
> probably be the one batching ranges (therefore possibly with gaps). By
> default non-monotonic, but monotonicity could be a ctor arg (in a
> similar way as fairness is set for j.u.concurrent.* classes).

+1 for differentiating in the API between a counter that a client increments (or resets) and a sequence generator that a client can use to obtain (non)monotonic values.

> If someone needs quotas, let's create Quota according to their needs
> (soft and hard threshold, fluent operation below soft, some jitter when
> above that). It seems that this will be closest to Sequence by reserving
> some ranges. Don't let them shoot themselves into foot with some liger
> counter.

+1 for creating a specific interface to enable quotas. Perhaps it will be implemented with some more general-purpose functionality that might be exposed in the future, but for now why not keep it narrowly focused?


Re: [infinispan-dev] Distributed Counter Discussion

Randall Hauch
In reply to this post by Bela Ban

On Mar 15, 2016, at 2:12 AM, Bela Ban <[hidden email]> wrote:

> The question is what do you get for this? If your app can't afford
> duplicate counter values during a network partition, then - yes - there
> is some overhead. CRDTs won't be able to guarantee this property. OTOH
> CRDTs are fast when you only care about some sort of eventual
> consistency, and don't need 'hard' consistency.

To be clear, I’m not saying they are interchangeable. They have very different properties, which is why the requirements will help determine which of them (if any) are applicable.




Re: [infinispan-dev] Distributed Counter Discussion

Eric Wittmann
In reply to this post by Pedro Ruivo-2
Greetings.  Apologies for coming in a bit late on this conversation.
Tristan pointed me to it a couple of days ago and unfortunately I'm just
now getting time to reply.

I can try to quickly give an overview of apiman's (JBoss API Management
Gateway) requirements.

What we're trying to do is implement support for Limiting policies:

* Rate Limiting/Throttling (e.g. limit of 100 requests per second)
* Quotas (e.g. limit of 100,000,000 requests per month)
* Transfer Quotas (e.g. limit of 2.5GB of data downloaded per day)

We will need to support multiple backing implementations of the Rate
Limiter, and we're trying to get Infinispan to be one of those
implementations.

In no particular order, we would need the following characteristics:

- Can be "squishy" for quotas and transfer quotas:  If you
   get 100,001,017 requests that's OK
- Strict would be cool as an option:  Hard-fail when the
   counter reaches the limit - no chance it will go over.
- Lots of individual counters:  users may publish 100s of
   APIs to the Gateway, and each API may be consumed by
   100s or 1000s of users/clients.  Depending on configuration
   of the policy, *each* user/client has a separate limit.
- Counters need to be created dynamically:  users can
   add APIs via the Management UI, configure them to add
   policies (e.g. a Quota policy) and then publish them to
   a running Gateway, at which point end users can invoke
   the API through the Gateway, which will use a counter
   to enforce the Quota.
- Counter values reset at the end of a time boundary:  for
   example, at the end of the month the counter value for
   the example quota above would reset to 0.
- Don't care (right now) what the counter value is: at the
   moment we simply need to know if some counter max value
   has been reached.  In the future we would like to know
   when a max value is being "approached" (e.g. to notify a
   user)
- Should be persistent: it would not be ideal for e.g. per-
   month quota values to be lost on server restart.

That's all the high level requirements I can think of off the top of my
head, and after reading all of the current messages in this thread. :)

-Eric
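
For illustration, the "squishy" quota check could sit on top of an async
counter along these lines (every name below is hypothetical;
ClusteredAtomicLong refers to the sketch earlier in the thread):

import java.util.concurrent.CompletableFuture;

// Hypothetical gateway-side quota policy: one counter per API/user pair.
final class QuotaPolicy {
    private final ClusteredAtomicLong counter;
    private final long maxPerPeriod;

    QuotaPolicy(ClusteredAtomicLong counter, long maxPerPeriod) {
        this.counter = counter;
        this.maxPerPeriod = maxPerPeriod;
    }

    // "Squishy" enforcement: increment first, compare after, so a few
    // requests may slip past the limit under heavy concurrency.
    CompletableFuture<Boolean> tryAcquire() {
        return counter.addAndGet(1).thenApply(v -> v <= maxPerPeriod);
    }

    // A scheduled task at the time boundary (e.g. monthly) would call
    // counter.reset() to start the next period.
}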

Re: [infinispan-dev] Distributed Counter Discussion

Bela Ban
Stupid question: why do you need a distributed counter for this? Is the
service you're monitoring replicated in a cluster?


--
Bela Ban, JGroups lead (http://www.jgroups.org)

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Distributed Counter Discussion

Eric Wittmann
Yes, precisely.  The API Gateway itself is clustered.  It services a
large volume of inbound traffic which it reverse-proxies to appropriate
back-end APIs after applying policies such as security, rate limiting,
caching, etc.

-Eric

On 3/18/2016 2:32 AM, Bela Ban wrote:

> Stupid question: why do you need a distributed counter for this? Is the
> service you're monitoring replicated in a cluster?
>
> [...]
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Distributed Counter Discussion

Bela Ban
So actually you don't care if you end up with multiple counters during a
network split, but you do care that the values of the different counters
get reconciled when the network partition heals.

Example:
- C1: 1000
- Network split: C1: 1000, C2: 1000
- Different clients update counters on both sides of the partition:
  C1: 1500, C2: 1600
- Network split heals, reconciling the counter to 2100: 1000 + 500 + 600.
  This means the 500 added to C1 should also be applied to C2, and the
  600 added to C2 should also be applied to C1.

If such behavior is acceptable, then we could do without CP and
live with AP.
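
In code, that reconciliation is just summing each partition's delta over
the pre-split base. A minimal sketch with the values from the example
above:

// Minimal sketch of the reconciliation arithmetic: the merged value is
// the pre-split base plus each partition's delta since the split.
long base = 1000;                      // counter value when the split happened
long[] partitionValues = {1500, 1600}; // C1 and C2 at merge time

long merged = base;
for (long value : partitionValues) {
    merged += value - base;            // add this partition's delta
}
// merged == 2100 (1000 + 500 + 600)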

On 18/03/16 14:19, Eric Wittmann wrote:

> [...]

--
Bela Ban, JGroups lead (http://www.jgroups.org)

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Distributed Counter Discussion

Eric Wittmann
Agreed. :)

On 3/18/2016 9:31 AM, Bela Ban wrote:

> [...]
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Distributed Counter Discussion

Pedro Ruivo-2
Hi all,

@Eric, thanks for the requirements.

@Bela, does the JGroups counter support those semantics (AP)? Infinispan
does not have eventual consistency (yet), nor an update log, so it can't
reconcile the counter and you will lose one of the partitions' updates.

On 03/18/2016 02:19 PM, Eric Wittmann wrote:

> [...]
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Distributed Counter Discussion

Bela Ban


On 21/03/16 11:12, Pedro Ruivo wrote:
> Hi all,
>
> @Eric, thanks for the requirements.
>
> @Bela, does the JGroups counter support those semantics (AP)?

No. You'd have to catch the MergeView and do this manually.
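
Catching it would look something like the minimal sketch below, against
the JGroups 3.x API; the reconciliation itself is only hinted at in a
comment.

import org.jgroups.MergeView;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

// Minimal sketch: detect that a partition has healed. Each subgroup of
// the MergeView is one side of the former partition.
public class MergeAwareReceiver extends ReceiverAdapter {
    @Override
    public void viewAccepted(View view) {
        if (view instanceof MergeView) {
            for (View subgroup : ((MergeView) view).getSubgroups()) {
                System.out.println("Merged partition: " + subgroup.getMembers());
            }
            // ...reconcile the counters here, e.g. sum per-partition deltas
        }
    }
}
// Usage: channel.setReceiver(new MergeAwareReceiver());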

> Infinispan does not have eventual consistency (yet), nor an update log, so it
> can't reconcile the counter and you will lose one of the partitions' updates.

Same for the JGroups counter service. The jgroups-raft CounterService
provides strong consistency, but at the expense of availability.

> [...]

--
Bela Ban, JGroups lead (http://www.jgroups.org)

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Distributed Counter Discussion

Dan Berindei
On Mon, Mar 21, 2016 at 1:43 PM, Bela Ban <[hidden email]> wrote:

>
>
> On 21/03/16 11:12, Pedro Ruivo wrote:
>> Hi all,
>>
>> @Eric, thanks for the requirements.
>>
>> @Bela, does the JGroups counter support those semantics (AP)?
>
> No. You'd have to catch the MergeView and do this manually.

I should also mention that you don't get a "cluster split" event. When a
cluster ABC splits into A and BC and merges back, you could get a view
sequence like this (nodes on the left, the view they install on the
right):

A, B, C: A|3 [A, B, C]
A: A|4 [A, B]
A: A|5 [A] (could be missing)
B, C: B|4 [B, C]
A, B, C: B|6 [B, C, A] (merge view)

So it's not that easy to keep track of counter additions "since the split".
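
A rough sketch of the bookkeeping this implies, purely illustrative:
re-baseline a local snapshot on every view change and report the delta
when a MergeView arrives. Given the asymmetric view sequences above, the
snapshot only approximates "since the split".

import java.util.concurrent.atomic.AtomicLong;
import org.jgroups.MergeView;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

// Illustrative only: snapshot the local counter at every view change; on
// a merge, the difference is (approximately) what this side added since
// the split. Shipping the delta to a coordinator is left as a comment.
public class PartitionDeltaTracker extends ReceiverAdapter {
    private final AtomicLong localCounter = new AtomicLong();
    private volatile long snapshot;

    @Override
    public void viewAccepted(View view) {
        if (view instanceof MergeView) {
            long delta = localCounter.get() - snapshot;
            // send 'delta' to the new coordinator, which sums all sides' deltas
        }
        snapshot = localCounter.get(); // re-baseline on every view change
    }
}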

> [...]
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev