[infinispan-dev] Using load balancers for Infinispan in Kubernetes

[infinispan-dev] Using load balancers for Infinispan in Kubernetes

Sebastian Laskawiec
Hey guys!

Over the past few weeks I've been working on accessing an Infinispan cluster deployed inside Kubernetes from the outside world. The POC diagram looks like the following:

[image: pasted1.png]

As a reminder, the easiest (though not the most effective) way to do it is to expose a LoadBalancer Service (or a NodePort Service) and access it using a client with basic intelligence (so that it doesn't try to update its server list based on topology information). As you might expect, this won't give you much performance, but at least you can access the cluster. Another approach is to use TLS/SNI, but again, the performance would be even worse.
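
For illustration, a Hot Rod client restricted to basic intelligence and pointed at such a single external address can be configured roughly like this (a minimal sketch; the host, port, and cache name are placeholders):

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ClientIntelligence;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class BasicIntelligenceClient {
       public static void main(String[] args) {
          ConfigurationBuilder builder = new ConfigurationBuilder();
          // Single externally visible address (LoadBalancer/NodePort); placeholder host.
          builder.addServer().host("infinispan.example.com").port(11222);
          // BASIC intelligence: the client never requests topology updates,
          // so every request goes through the single exposed endpoint.
          builder.clientIntelligence(ClientIntelligence.BASIC);
          RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());
          try {
             RemoteCache<String, String> cache = remoteCacheManager.getCache("default");
             cache.put("hello", "world");
             System.out.println(cache.get("hello"));
          } finally {
             remoteCacheManager.stop();
          }
       }
    }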

During this research I tried to address this problem and created an "External IP Controller" [1] (and a corresponding Pull Request for mapping internal/external addresses [2]). The main idea is to have a controller deployed inside Kubernetes which creates (and destroys when no longer needed) a load balancer per Infinispan Pod. Additionally, the controller exposes the mapping between internal and external addresses, which allows the client to properly update its server list as well as consistent hash information. A full working example is located here [3].
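
The controller itself [1] is written in Go, but its core idea - create a LoadBalancer Service that selects exactly one Infinispan pod - can be sketched roughly as follows (an illustrative sketch only, using the fabric8 Kubernetes client; the namespace, labels, and port are assumptions, and the real controller also watches for pod changes and publishes the address mapping):

    import io.fabric8.kubernetes.api.model.Pod;
    import io.fabric8.kubernetes.api.model.Service;
    import io.fabric8.kubernetes.api.model.ServiceBuilder;
    import io.fabric8.kubernetes.client.DefaultKubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClient;

    public class ExternalIpControllerSketch {
       public static void main(String[] args) {
          try (KubernetesClient client = new DefaultKubernetesClient()) {
             for (Pod pod : client.pods().inNamespace("myproject")
                   .withLabel("app", "infinispan").list().getItems()) {
                String podName = pod.getMetadata().getName();
                // One LoadBalancer Service per pod: the selector pins it to a single pod,
                // so the "load balancer" effectively acts as an external IP for that pod.
                Service perPodLb = new ServiceBuilder()
                      .withNewMetadata().withName("lb-" + podName).endMetadata()
                      .withNewSpec()
                         .withType("LoadBalancer")
                         // assumes a per-pod label; StatefulSets add this one automatically
                         .addToSelector("statefulset.kubernetes.io/pod-name", podName)
                         .addNewPort().withPort(11222).endPort()
                      .endSpec()
                      .build();
                client.services().inNamespace("myproject").createOrReplace(perPodLb);
             }
          }
       }
    }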

The biggest question is whether it's worth it. The short answer is yes. Here are some benchmark results of performing 10k puts and 10k puts&gets (please take them with a big grain of salt; I didn't optimize any server settings):
  • Benchmark app deployed inside Kubernetes and using internal addresses (baseline):
    • 10k puts: 674.244 ±  16.654
    • 10k puts&gets: 1288.437 ± 136.207
  • Benchmarking app deployed in a VM outside of Kubernetes with basic intelligence:
    • 10k puts: 1465.567 ± 176.349
    • 10k puts&gets: 2684.984 ± 114.993
  • Benchmarking app deployed in a VM outside of Kubernetes with address mapping and topology-aware hashing:
    • 10k puts: 1052.891 ±  31.218
    • 10k puts&gets: 2465.586 ±  85.034
Note that benchmarking Infinispan from a VM might be very misleading, since the results depend on the data center configuration. The benchmarks above definitely include some delay between the Google Compute Engine VM and the Kubernetes cluster deployed in Google Container Engine. How big is the delay? Hard to tell. What counts is the difference between a client using basic intelligence and one using topology-aware intelligence, and as you can see it's not that small.
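
For context, each measurement is essentially a timed loop of 10k Hot Rod operations against a remote cache. A simplified JMH-style sketch of the put scenario (not the exact code from [3]; the server address and cache name are placeholders) looks like this:

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.TearDown;

    @State(Scope.Benchmark)
    public class TenThousandPutsBenchmark {

       RemoteCacheManager remoteCacheManager;
       RemoteCache<String, String> cache;

       @Setup
       public void setup() {
          ConfigurationBuilder builder = new ConfigurationBuilder();
          builder.addServer().host("infinispan.example.com").port(11222);
          remoteCacheManager = new RemoteCacheManager(builder.build());
          cache = remoteCacheManager.getCache("default");
       }

       @TearDown
       public void tearDown() {
          remoteCacheManager.stop();
       }

       @Benchmark
       public void tenThousandPuts() {
          // One benchmark invocation = 10k puts; JMH reports the time per invocation.
          for (int i = 0; i < 10_000; i++) {
             cache.put("key-" + i, "value-" + i);
          }
       }
    }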

So the bottom line: if you can, deploy your application along with the Infinispan cluster inside Kubernetes. That's the fastest configuration, since only iptables is involved. Otherwise, use a load balancer per pod with the External IP Controller. If you don't care about performance, just use basic client intelligence and expose everything using a single load balancer.

Thanks,
Sebastian

[1] https://github.com/slaskawi/external-ip-proxy
[2] https://github.com/infinispan/infinispan/pull/5164
[3] https://github.com/slaskawi/external-ip-proxy/tree/master/benchmark

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Using load balancers for Infinispan in Kubernetes

Sanne Grinovero-3
Hi Sebastian,

the "intelligent routing" of Hot Rod being one of - if not the main - reason to use Hot Rod, I wonder if we shouldn't rather suggest people to stick with HTTP (REST) in such architectures.

Several people have suggested in the past the need for an HTTP smart load balancer which would be able to route the external REST requests to the right node. Essentially, have people use REST over the wider network, up to the Infinispan cluster, where the service endpoint (the load balancer) can convert the requests into optimised Hot Rod calls, or just leave them in the same format but route them with the same intelligence to the right nodes.

I realise my proposal requires some work on several fronts; at the very least we would need:
 - feature parity between Hot Rod and REST, so that people can actually use it
 - a REST load balancer

But I think the output of such a direction would be far more reusable, as both of these points are high on the wish list anyway.

Not least, having a "REST load balancer" would allow deploying Infinispan as an HTTP cache; just honouring the HTTP caching protocols and existing standards would allow people to use any client of their liking, without us having to maintain Hot Rod clients and support them on many exotic platforms. We would still have Hot Rod clients, but we'd be able to pick a smaller set of strategic platforms (e.g. Windows doesn't have to be on that list).

Such a load balancer could be written in Java (recent WildFly versions are able to do this efficiently) or it could be written in another language; all it takes is to integrate a Hot Rod client - or just the intelligence of it - as an extension into an existing load balancer of our choice.
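
To illustrate the kind of intelligence that would need to be embedded in such a balancer (a rough sketch only, not a proposal for the actual implementation): the Java Hot Rod client already exposes per-server segment ownership through getCacheTopologyInfo(), so the missing piece is mainly the key-to-segment mapping. The segmentOf() helper below is hypothetical and would have to mirror the server's hash-to-segment function.

    import java.net.SocketAddress;
    import java.util.Map;
    import java.util.Set;

    import org.infinispan.client.hotrod.CacheTopologyInfo;
    import org.infinispan.client.hotrod.RemoteCache;

    public class SegmentRouterSketch {

       // Returns the address of a server owning the given key's segment;
       // an HTTP load balancer could use this as its forwarding target.
       public static SocketAddress ownerOf(RemoteCache<?, ?> cache, byte[] keyBytes) {
          CacheTopologyInfo topology = cache.getCacheTopologyInfo();
          int segment = segmentOf(keyBytes, topology.getNumSegments());
          for (Map.Entry<SocketAddress, Set<Integer>> entry : topology.getSegmentsPerServer().entrySet()) {
             if (entry.getValue().contains(segment)) {
                return entry.getKey();
             }
          }
          return null; // unknown owner: fall back to round robin
       }

       // Hypothetical: must reproduce the server's key-to-segment mapping
       // (a MurmurHash3-based function over the marshalled key in Infinispan).
       private static int segmentOf(byte[] keyBytes, int numSegments) {
          throw new UnsupportedOperationException("placeholder");
       }
    }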

Allow me a bit more nit-picking on your benchmarks ;)
As you pointed out yourself, there are several flaws in your setup: "didn't tune", "running in a VM", "benchmarked on a mac mini"... If you know it's a flawed setup, I'd rather not publish figures, and especially not suggest making decisions based on such results.
At this level of design we need to focus on getting the architecture right; it should be self-evident that your proposal of actually using intelligent routing in some way should be better than not using it. Once we have an agreement on a sound architecture, we'll be able to make the implementation efficient.

Thanks,
Sanne





Re: [infinispan-dev] Using load balancers for Infinispan in Kubernetes

Emmanuel Bernard
In reply to this post by Sebastian Laskawiec
To Sanne’s point, I think HTTP(/2) would be a better longer-term path if we think we can make it as efficient as the current Hot Rod. But let’s evaluate the number of cycles needed to reach that point. Doing Seb’s approach might be a good first step.
Speaking of Sebastian: I have been discussing with Burr and Edson the idea of a *node* sidecar (as opposed to a *pod* sidecar). For your problem, could you use a DaemonSet to enforce one load balancer per node, or at least per project, instead of one per pod deployed with Infinispan in it?

WDYT, is it possible?


Re: [infinispan-dev] Using load balancers for Infinispan in Kubernetes

Radim Vansa
In reply to this post by Sanne Grinovero-3
On 05/30/2017 04:46 PM, Sanne Grinovero wrote:

> Hi Sebastian,
>
> the "intelligent routing" of Hot Rod being one of - if not the main -
> reason to use Hot Rod, I wonder if we shouldn't rather suggest people
> to stick with HTTP (REST) in such architectures.
>
> Several people have suggested in the past the need to have an HTTP
> smart load balancer which would be able to route the external REST
> requests to the right node. Essentially have people use REST over the
> wider network, up to reaching the Infinispan cluster where the service
> endpoint (the load balancer) can convert them to optimised Hot Rod
> calls, or just leave them in the same format but routing them with the
> same intelligence to the right nodes.
>
> I realise my proposal requires some work on several fronts, at very
> least we would need:
>  - feature parity Hot Rod / REST so that people can actually use it
>  - a REST load balancer
>
> But I think the output of such a direction would be far more reusable,
> as both these points are high on the wish list anyway.

You could already create this architecture: expose the REST interface on
a node with capacity factor 0, and this node will convert the REST calls
into 'optimized JGroups calls'. You could have multiple such nodes to
eliminate a single point of failure. There could be a very short hiccup
when you remove/add these 'routers', but since they don't contain any
data, it would be very short. Or you could even keep data on these nodes,
and then some of the operations would be even faster. Problem solved?
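
For example, such a 'router' node could join the cluster with a zero capacity
factor for the distributed cache it exposes, roughly along these lines (a
sketch using programmatic configuration; the cache name and mode are
illustrative, and the REST endpoint wiring is omitted):

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class RestRouterNode {
       public static void main(String[] args) {
          GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
          Configuration routerCacheConfig = new ConfigurationBuilder()
                .clustering().cacheMode(CacheMode.DIST_SYNC)
                // Capacity factor 0: this node joins the distributed cache but owns no segments,
                // so it only forwards requests to the data-owning nodes.
                .hash().capacityFactor(0f)
                .build();
          DefaultCacheManager cacheManager = new DefaultCacheManager(global.build());
          cacheManager.defineConfiguration("default", routerCacheConfig);
          cacheManager.getCache("default"); // join the cluster; expose REST on this node
       }
    }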


--
Radim Vansa <[hidden email]>
JBoss Performance Team


Re: [infinispan-dev] Using load balancers for Infinispan in Kubernetes

Sebastian Laskawiec
In reply to this post by Sanne Grinovero-3
Hey Sanne,

Comments inlined.

Thanks,
Sebastian

On Tue, May 30, 2017 at 5:58 PM Sanne Grinovero <[hidden email]> wrote:
> Hi Sebastian,
>
> the "intelligent routing" of Hot Rod being one of - if not the main - reason to use Hot Rod, I wonder if we shouldn't rather suggest people to stick with HTTP (REST) in such architectures.
>
> Several people have suggested in the past the need to have an HTTP smart load balancer which would be able to route the external REST requests to the right node. Essentially have people use REST over the wider network, up to reaching the Infinispan cluster where the service endpoint (the load balancer) can convert them to optimised Hot Rod calls, or just leave them in the same format but routing them with the same intelligence to the right nodes.
>
> I realise my proposal requires some work on several fronts, at very least we would need:
>  - feature parity Hot Rod / REST so that people can actually use it
>  - a REST load balancer
>
> But I think the output of such a direction would be far more reusable, as both these points are high on the wish list anyway.

Unfortunately, I'm not convinced by this idea. Let me elaborate...

It goes without saying that an HTTP payload is simply larger and requires much more processing. That alone makes it slower than Hot Rod (I believe Martin could provide you some numbers on that). The second argument is that switching/routing inside Kubernetes is bloody fast (since it's based on iptables), and some cloud vendors optimize it even further (e.g. Google Andromeda [1][2]; I would be surprised if AWS didn't have anything similar). During the work on this prototype I wrote a simple async binary proxy [3] and measured the GCP load balancer against my proxy. The GCP load balancer was twice as fast [4][5]. You may argue that I could write a better proxy. Probably I could, but the bottom line is that another performance hit is inevitable. Cloud load balancers are really fast and they operate on their own infrastructure (load balancers are something provided by the cloud vendor to Kubernetes, not the other way around).

So with all that in mind, are we going to get better results compared to my proposal for Hot Rod? I dare to doubt it, even with HTTP/2 support (which I hope comes really soon). The second question is whether this new "REST load balancer" would work better than a standard load balancer using a round-robin strategy. Again, I dare to doubt it; even if you're faster at routing requests to the proper node, you introduce another layer of latency.

Of course the priority of this is up to Tristan, but I definitely wouldn't place it high on the todo list. Before even looking at it, I would recommend taking a Netty HTTP proxy, putting it in the middle between a real load balancer and the Infinispan app, and measuring performance with and without it. Another test could be with 1 and 10 replicas, to check the performance penalty of hitting the proper node for 100% vs. 10% of requests.

[1] https://cloudplatform.googleblog.com/2014/08/containers-vms-kubernetes-and-vmware.html
[2] https://cloudplatform.googleblog.com/2014/04/enter-andromeda-zone-google-cloud-platforms-latest-networking-stack.html
[3] https://github.com/slaskawi/external-ip-proxy/blob/Benchmark_with_proxy/Proxy/Proxy.go
[4] https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/results%20with%20proxy.txt
[5] https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/results%20with%20loadbalancer.txt

> Not least having a "REST load balancer" would allow to deploy Infinispan as an HTTP cache; just honouring the HTTP caching protocols and existing standards would allow people to use any client to their liking,

Could you please give me an example of how this could work? The only way I know of is to plug a cache into a reverse proxy. NGINX supports pluggable Redis, for example [6].

[6] https://www.nginx.com/resources/wiki/modules/redis/

> without us having to maintain Hot Rod clients and support it on many exotic platforms - we would still have Hot Rod clients but we'd be able to pick a smaller set of strategical platforms (e.g. Windows doesn't have to be in that list).

As I mentioned before, I really doubt HTTP will be faster than Hot Rod in any scenario.
 
> Such a load balancer could be written in Java (recent WildFly versions are able to do this efficiently) or it could be written in another language, all it takes is to integrate an Hot Rod client - or just the intelligence of it- as an extension into an existing load balancer of our choice.

As I mentioned before, with a custom load balancer you're introducing another layer of latency. It's not a free ride.
 
> Allow me a bit more nit-picking on your benchmarks ;)
> As you pointed out yourself there are several flaws in your setup: "didn't tune", "running in a VM", "benchmarked on a mac mini", ...if you know it's a flawed setup I'd rather not publish figures, especially not suggest to make decisions based on such results.

Why not? Infinispan is a public project, anyone can benchmark it using JMH, and taking decisions based on figures is always better than relying on intuition. Even though there were multiple unknown factors involved in this benchmark (which is why I pointed them out and asked to take the results with a grain of salt), the test conditions for all scenarios were the same. For me this is sufficient to give a general recommendation, as I did. BTW, this recommendation fits my expectations exactly (communication inside Kube the fastest, an LB per Pod a bit slower, and no advanced routing the slowest). Finally, the recommendation is based on a POC, which by definition means it doesn't fit all scenarios. You should always measure your own system!

So unless you can prove that the benchmark results are fundamentally wrong and that I have drawn the wrong conclusions (e.g. that a simple client is the fastest solution whereas inside-Kubernetes communication is the slowest), please don't use the "naaah, that's wrong" argument. It's rude.
 
> At this level of design need to focus on getting the architecture right; it should be self-speaking that your proposal of actually using intelligent routing in some way should be better than not using it.

My benchmark confirmed this. But as always, I would be happy to discuss some alternatives. Before trying to convince me of a "REST Router", though, please prove that introducing one (or just a simple async proxy for a start) gives similar or better performance than a simple load balancer with a round-robin strategy.
 
> Once we'll have an agreement on a sound architecture, then we'll be able to make the implementation efficient.

> Thanks,
> Sanne




--

SEBASTIAN ŁASKAWIEC

INFINISPAN DEVELOPER



Re: [infinispan-dev] Using load balancers for Infinispan in Kubernetes

Galder Zamarreño
Cool down peoples!

http://www.quickmeme.com/meme/35ovcy

Sebastian, don't think Sanne was being rude, he's just blunt and we need his bluntness :)

Sanne, be nice to Sebastian and get him a beer next time around ;)

Peace out! :)
--
Galder Zamarreño
Infinispan, Red Hat




Re: [infinispan-dev] Using load balancers for Infinispan in Kubernetes

Sanne Grinovero-3
On 31 May 2017 at 17:48, Galder Zamarreño <[hidden email]> wrote:
> Cool down peoples!
>
> http://www.quickmeme.com/meme/35ovcy
>
> Sebastian, don't think Sanne was being rude, he's just blunt and we need his bluntness :)
>
> Sanne, be nice to Sebastian and get him a beer next time around ;)

Hey, he started it! His email was formatted with HTML !? ;)

But seriously, I didn't mean to be rude or disrespectful; if it came across
like that, I'm sorry. FWIW the answers seemed cool to me too.

Let me actually clarify that I love Sebastian's attitude of trying the
various approaches and coming up with some measurements to help with
important design decisions. It's good that we spend some time
evaluating the alternatives, and it's equally good that we debate the
trade-offs here.

As warned in my email, I'm "nit-picking" on the benchmark methodology,
probably more than usual, because I care!

I am highlighting what I believe to be useful advice though: the
absolute metrics of such tests should not be taken as the primary
(exclusive?) decision factor. Which doesn't mean that performing such
tests is not useful; they certainly provide a lot to think about.
Yet the interpretation of such results must not be generalised, and
the interpretation process is more important than the absolute
ballpark figures they provide; for example, it's paramount to figure
out which factors of the test could theoretically invert the results.
Using them to produce a binary faster/slower, yes/no answer to prove or
disprove a design decision is a dangerous fallacy... and I'm not picking on
Sebastian specifically, just reminding us all of it, as we've all been
guilty of it: confirmation bias, etc.

The best advice I've ever had myself in performance analysis is to not
try to figure out which implementation is faster "on my machine", but
to understand why it's producing a specific result, and what is
preventing it from producing a higher figure.
Once you know that, you have very valuable information, as it will tell you
either what needs fixing in the benchmark or what needs to be done to
improve the performance of your implementation ;)
That's why I personally don't publish figures often; but hey, I
still run such tests too and spend a lot of time analysing them, to
eventually share what I figure out in the process...

Thanks,
Sanne





Re: [infinispan-dev] Using load balancers for Infinispan in Kubernetes

Sebastian Laskawiec
In reply to this post by Emmanuel Bernard
Hey Emmanuel,

Comments inlined.

Thanks,
Sebastian

On Wed, May 31, 2017 at 2:56 PM Emmanuel Bernard <[hidden email]> wrote:
> To Sanne’s point, I think HTTP(/2) would be a better longer term path if we think we can make it as efficient as current HR. But let’s evaluate the numbers of cycles to reach that point. Doing Seb’s approach might be a good first step.

I will be looking into the HTTP/2 implementation starting today/tomorrow, so it should be there soon. And of course I will do some benchmarks (or even help Jiri upgrade perfCheck to run benchmarks using HTTP/1.1 and HTTP/2).

Also, please bear in mind that there will probably be two ways of switching protocols: using the HTTP/1.1 Upgrade header, and TLS/ALPN negotiation. As you might expect, the latter requires TLS (and therefore the throughput will be lower).
 
> Speaking of Sebastian, I have been discussing with Burr, Edson on the idea of a *node* sidecar (as opposed to a *pod* sidecar). To your problem, could you use Daemonset to enforce one Load Balancer per node or at least per project instead of one per pod deployed with Infinispan in it?

Unless I missed something, it won't buy us anything. The idea behind the POC is to make all Infinispan nodes directly accessible from the outside world. The client must be able to access whichever node it wishes. This is achieved by creating a load balancer per Infinispan pod, so the load balancer works more like an external IP than a "real" load balancer.

Just FYI, another round of comments on L3/L4 TCP Ingress has just started: https://github.com/kubernetes/kubernetes/issues/23291
The rough estimate is to get it into Kube 1.8. Once this is implemented, we could use a TCP Ingress per pod (instead of a load balancer per pod). The main difference will probably be in $$$; load balancers are pretty expensive.
 

--

SEBASTIAN ŁASKAWIEC

INFINISPAN DEVELOPER



Re: [infinispan-dev] Using load balancers for Infinispan in Kubernetes

Sebastian Laskawiec
In reply to this post by Sanne Grinovero-3
I'm calm!! I'm calm!!
<pasted1.png>
Being serious now, no offence taken, and thanks for the clarification, Sanne.
Next time I'll just publish percentage differences, which will probably express what I mean better (for example, from the 10k-put figures in my original mail, the topology-aware client comes out roughly 28% faster than the basic one). And if you see anything wrong with the test methodology that could influence the relationship between those tests, just let me know.

Don't you think, guys, that we should have a reference OpenShift/Kubernetes environment for testing (minikube/minishift are great, but it's always an "it works on my machine" kind of thing)? I expect there will be more and more discussions around cloud topics. If you are interested in this, please let me know off-list and I will try to arrange something.

Thanks,
Sebastian

On Thu, Jun 1, 2017 at 4:49 AM Sanne Grinovero <[hidden email]> wrote:
On 31 May 2017 at 17:48, Galder Zamarreño <[hidden email]> wrote:
> Cool down peoples!
>
> http://www.quickmeme.com/meme/35ovcy
>
> Sebastian, don't think Sanne was being rude, he's just blunt and we need his bluntness :)
>
> Sanne, be nice to Sebastian and get him a beer next time around ;)

Hey, he started it! His email was formatted with HTML !? ;)

But seriously, I didn't mean to be rude or disrespectful; if it
came across like that, I'm sorry. FWIW the answers seemed cool to me too.

Let me actually clarify that I love Sebastian's attitude of trying
the various approaches and coming back with some measurements to
help with important design decisions. It's good that we spend some
time evaluating the alternatives, and it's equally good that we
debate the trade-offs here.

As warned in my email, I'm "nit-picking" on the benchmark
methodology, probably more than usual, because I care!

I am highlighting what I believe to be useful advice though: the
absolute metrics of such tests need not be taken as the primary
(exclusive?) decision factor. Which doesn't mean that performing
such tests is not useful; they certainly provide a lot to think
about. Yet the interpretation of such results must not be
generalised, and the interpretation process is more important than
the absolute ballpark figures they provide; for example, it's
paramount to figure out which factors of the test could
theoretically invert the results. Using them as a binary
faster/slower, yes/no answer to prove or disprove a design decision
is a dangerous fallacy... and I'm not picking on Sebastian
specifically, just reminding about it as we've all been guilty of
it: confirmation bias, etc.

The best advice I've ever had in performance analysis is not to try
to figure out which implementation is faster "on my machine", but to
understand why it's producing a specific result, and what is
preventing it from producing a higher figure.
Once you know that, it's very valuable information, as it will tell
you either what needs fixing in the benchmark, or what needs to be
done to improve the performance of your implementation ;)
So that's why I personally don't publish figures often, but hey, I
still run such tests too and spend a lot of time analysing them, to
eventually share what I figure out in the process...

Thanks,
Sanne



>
> Peace out! :)
> --
> Galder Zamarreño
> Infinispan, Red Hat
>
>> On 31 May 2017, at 09:38, Sebastian Laskawiec <[hidden email]> wrote:
>>
>> Hey Sanne,
>>
>> Comments inlined.
>>
>> Thanks,
>> Sebastian
>>
>> On Tue, May 30, 2017 at 5:58 PM Sanne Grinovero <[hidden email]> wrote:
>> Hi Sebastian,
>>
>> the "intelligent routing" of Hot Rod being one of - if not the main - reason to use Hot Rod, I wonder if we shouldn't rather suggest people to stick with HTTP (REST) in such architectures.
>>
>> Several people have suggested in the past the need to have a smart HTTP load balancer which would be able to route the external REST requests to the right node. Essentially have people use REST over the wider network, up to reaching the Infinispan cluster, where the service endpoint (the load balancer) can convert them to optimised Hot Rod calls, or just leave them in the same format but route them with the same intelligence to the right nodes.
>>
>> I realise my proposal requires some work on several fronts; at the very least we would need:
>>  - feature parity Hot Rod / REST so that people can actually use it
>>  - a REST load balancer
>>
>> But I think the output of such a direction would be far more reusable, as both these points are high on the wish list anyway.
>>
>> Unfortunately I'm not convinced by this idea. Let me elaborate...
>>
>> It goes without saying that an HTTP payload is simply larger and requires much more processing. That alone makes it slower than Hot Rod (I believe Martin could provide you some numbers on that). The second argument is that switching/routing inside Kubernetes is bloody fast (since it's based on iptables) and some cloud vendors optimize it even further (e.g. Google Andromeda [1][2]; I would be surprised if AWS didn't have anything similar). During the work on this prototype I wrote a simple async binary proxy [3] and measured the GCP load balancer vs my proxy's performance. The GCP load balancer was twice as fast [4][5]. You may argue whether I could write a better proxy. Probably I could, but the bottom line is that another performance hit is inevitable. Those load balancers are really fast and they operate on their own infrastructure (load balancers are something provided by the cloud vendor to Kubernetes, not the other way around).
>>
>> So with all that in mind, are we going to get better results compared to my proposal for Hot Rod? I doubt it, even with HTTP/2 support (which I hope comes really soon). The second question is whether this new "REST load balancer" would work better than a standard load balancer using a round-robin strategy. Again I doubt it; even if you're faster at routing requests to the proper node, you introduce another layer of latency.
>>
>> Of course the priority of this is up to Tristan, but I definitely wouldn't place it high on the todo list. And before even looking at it I would recommend taking a Netty HTTP proxy, putting it between the real load balancer and the Infinispan app, and measuring performance with and without it. Another test could be with 1 and 10 replicas, to check the performance penalty when 100% vs. 10% of requests hit the proper node.
>>
>> [1] https://cloudplatform.googleblog.com/2014/08/containers-vms-kubernetes-and-vmware.html
>> [2] https://cloudplatform.googleblog.com/2014/04/enter-andromeda-zone-google-cloud-platforms-latest-networking-stack.html
>> [3] https://github.com/slaskawi/external-ip-proxy/blob/Benchmark_with_proxy/Proxy/Proxy.go
>> [4] https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/results%20with%20proxy.txt
>> [5] https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/results%20with%20loadbalancer.txt
>>
>> Not least, having a "REST load balancer" would allow deploying Infinispan as an HTTP cache; just honouring the HTTP caching protocols and existing standards would allow people to use any client to their liking,
>>
>> Could you please give me an example of how this could work? The only way that I know of is to plug a cache into a reverse proxy. NGINX supports pluggable Redis, for example [6].
>>
>> [6] https://www.nginx.com/resources/wiki/modules/redis/
>>
>> without us having to maintain Hot Rod clients and support them on many exotic platforms - we would still have Hot Rod clients, but we'd be able to pick a smaller set of strategic platforms (e.g. Windows doesn't have to be in that list).
>>
>> As I mentioned before, I really doubt HTTP will be faster than Hot Rod in any scenario.
>>
>> Such a load balancer could be written in Java (recent WildFly versions are able to do this efficiently) or in another language; all it takes is integrating a Hot Rod client - or just its intelligence - as an extension into an existing load balancer of our choice.
>>
>> As I mentioned before, with a custom load balancer you're introducing another layer of latency. It's not a free ride.
>>
>> Allow me a bit more nit-picking on your benchmarks ;)
>> As you pointed out yourself, there are several flaws in your setup: "didn't tune", "running in a VM", "benchmarked on a mac mini"... if you know it's a flawed setup I'd rather not publish figures, and especially not suggest making decisions based on such results.
>>
>> Why not? Infinispan is a public project, anyone can benchmark it using JMH, and making decisions based on figures is always better than relying on intuition. Even though there were multiple unknown factors involved in this benchmark (which is why I pointed them out and asked to take the results with a grain of salt), the test conditions for all scenarios were the same. For me this is sufficient to give a general recommendation, as I did. BTW, this recommendation exactly matches my expectations (communication inside Kube is the fastest, an LB per pod is a bit slower, and no advanced routing is the slowest). Finally, the recommendation is based on a POC, which by definition means it doesn't fit all scenarios. You should always measure your own system!
>>
>> So unless you can prove that the benchmark results are fundamentally wrong and that I have drawn the wrong conclusions (e.g. that a simple client is the fastest solution whereas inside-Kubernetes communication is the slowest), please don't use the "naaah, that's wrong" argument. It's rude.
>>
>> At this level of design we need to focus on getting the architecture right; it should be self-evident that your proposal of actually using intelligent routing in some way should be better than not using it.
>>
>> My benchmark confirmed this. But as always, I would be happy to discuss alternatives. Before trying to convince me of the "REST Router", please prove that introducing it (or just a simple async proxy for a start) gives similar or better performance than a simple load balancer with a round-robin strategy.
>>
>> Once we have agreement on a sound architecture, we'll be able to make the implementation efficient.
>>
>> Thanks,
>> Sanne
>>
>>
>>
>>
--

SEBASTIAN ŁASKAWIEC

INFINISPAN DEVELOPER



Re: [infinispan-dev] Using load balancers for Infinispan in Kubernetes

Sebastian Laskawiec
Hey guys,

I'm receiving more and more queries about this solution, so I marked [1] for the 9.1.0.Beta1 milestone.

Please let me know if that's NOT ok [3] :)

Thanks,
Sebastian

[3] Just checking Sanne's and Will's spam filters :)

--

SEBASTIAN ŁASKAWIEC

INFINISPAN DEVELOPER


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev