[infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

[infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

Thomas SEGISMONT
Hi,

This email follows up on my testing of the Infinispan Cluster Manager for Vert.x on Kubernetes.

In one of the tests, we want to make sure that, after a rolling update of the application, the data put into a Vert.x AsyncMap is still present. I found that when the underlying cache is predefined in infinispan.xml the data survives; otherwise it does not.

I pushed a simple reproducer on GitHub: https://github.com/tsegismont/cachedataloss

The code does this:
- a first node is started, and creates data
- new nodes are started, but they don't invoke cacheManager.getCache
- the initial member is killed
- a "testing" member is started, printing out the data in the console

Here are my findings.

1/ Even when caches are declared in infinispan.xml, the data is lost after the initial member goes away.

A little digging showed that the caches are really distributed only after you invoke cacheManager.getCache
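For illustration, forcing this on a node boils down to touching every declared cache. A minimal sketch using only the public EmbeddedCacheManager API (the configuration file name is assumed, and the class name is illustrative):

```java
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class TouchAllCaches {
   public static void main(String[] args) throws Exception {
      EmbeddedCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
      // getCache() starts the cache locally; only then does this node
      // join the distribution for that cache and receive its share of entries.
      for (String name : cacheManager.getCacheNames()) {
         cacheManager.getCache(name);
      }
   }
}
```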

2/ Checking cluster status "starts" the caches and triggers distribution

I was wondering why the behavior was not the same as in my Vert.x testing on OpenShift, and then realized the only difference was the cluster readiness check, which reads the cluster health. So I updated the reproducer code to add such a check (still without invoking cacheManager.getCache). After that, the caches defined in infinispan.xml had their data distributed.

So,

1/ How can I make sure caches are distributed on all nodes, even if some nodes never try to get a reference with cacheManager.getCache, or don't check cluster health?
2/ Are we doing something wrong in the way we declare the default configuration for our caches [1][2]?

Thanks,
Thomas

[1] https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10
[2] https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

Tristan Tarrant-2
You need to use the brand new CacheAdmin API:

http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches
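
A sketch of what that administration API looks like in Infinispan 9.2+; the cache name and configuration here are illustrative, not taken from the reproducer:

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class AdminApiExample {
   public static void main(String[] args) throws Exception {
      EmbeddedCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
      Configuration cfg = new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.DIST_SYNC)
            .build();
      // Defines and starts the cache on every member of the cluster,
      // not just on the local node.
      cacheManager.administration().getOrCreateCache("my-cache", cfg);
   }
}
```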


Tristan


--
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat

Re: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

Thomas SEGISMONT

2018-03-01 16:36 GMT+01:00 Tristan Tarrant <[hidden email]>:
You need to use the brand new CacheAdmin API:

http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches

I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2.

Is there any way to achieve these goals with 9.1.x?





Re: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

William Burns-3


On Thu, Mar 1, 2018 at 11:14 AM Thomas SEGISMONT <[hidden email]> wrote:
I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2.

Is there any way to achieve these goals with 9.1.x?

You could try using the ClusterExecutor to invoke getCache across all nodes. Note that the lambda has to return null, since a Cache is not Serializable.

String cacheName = ...;
cache.getCacheManager().executor().submitConsumer(cm -> {
   cm.getCache(cacheName);
   return null;
}, (a, v, t) -> {
   if (v != null) {
      System.out.println("There was an exception retrieving " + cacheName + " from node: " + a);
   }
});
 






Re: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

Tristan Tarrant-2
In reply to this post by Thomas SEGISMONT
Why not just prestart the caches?
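
One way to prestart in embedded mode, sketched against the EmbeddedCacheManager API (startCaches only affects the local node, so every node would run this at startup; the configuration file name is assumed):

```java
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class PrestartCaches {
   public static void main(String[] args) throws Exception {
      EmbeddedCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
      // Eagerly start every cache declared in the configuration file.
      cacheManager.startCaches(cacheManager.getCacheNames().toArray(new String[0]));
   }
}
```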


Re: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

William Burns-3
In reply to this post by William Burns-3


Fixed typo below, sorry 
 
String cacheName = ...;
cache.getCacheManager().executor().submitConsumer(cm -> {
   cm.getCache(cacheName);
   return null;
}, (a, v, t) -> {
   if (t != null) {
      System.out.println("There was an exception " + t + " retrieving " + cacheName + " from node: " + a);
   }
});
 






Re: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

Thomas SEGISMONT
In reply to this post by Tristan Tarrant-2


2018-03-01 17:25 GMT+01:00 Tristan Tarrant <[hidden email]>:
Why not just prestart the caches?


How can you do that? Will it work for caches created after the node has started?
 