[infinispan-dev] Exposing cluster deployed in the cloud


[infinispan-dev] Exposing cluster deployed in the cloud

Sebastian Laskawiec
Hey guys!

A while ago I started working on exposing an Infinispan cluster hosted in Kubernetes to the outside world:

[inline image: diagram of the Kubernetes-hosted cluster and external clients]

I'm currently struggling to get a solution like this into the platform [1], but in the meantime I created a very simple POC and I'm testing it locally [2].

There are two main problems with the scenario described above:
  1. The Infinispan server announces internal addresses (172.17.x.x) to the client. The client needs to remap them into external ones (172.29.x.x).
  2. A custom Consistent Hash needs to be supplied to the Hot Rod client. When accessing the cache, the Hot Rod client needs to calculate the server id for the internal address and then map it to the external one.
If there are no strong objections to this, I plan to implement it shortly. There will be an additional method in the Hot Rod client configuration (ConfigurationBuilder#addServerMapping(String mappingClass)) which will be responsible for mapping external addresses to internal ones and vice versa.

[1] https://github.com/kubernetes/community/pull/446
[2] https://github.com/slaskawi/external-ip-proxy
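To make this concrete, here is a minimal sketch of what such a mapping hook might look like. The interface name and shape are my assumptions; only the ConfigurationBuilder#addServerMapping(String mappingClass) entry point comes from the proposal above:

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical callback translating cluster-internal addresses to externally reachable ones. */
interface ServerMapping {
    InetSocketAddress toExternal(InetSocketAddress internal);
    InetSocketAddress toInternal(InetSocketAddress external);
}

/** Simple implementation backed by a static table, e.g. populated from a discovery service. */
class StaticServerMapping implements ServerMapping {
    private final Map<InetSocketAddress, InetSocketAddress> internalToExternal = new HashMap<>();
    private final Map<InetSocketAddress, InetSocketAddress> externalToInternal = new HashMap<>();

    void add(InetSocketAddress internal, InetSocketAddress external) {
        internalToExternal.put(internal, external);
        externalToInternal.put(external, internal);
    }

    @Override
    public InetSocketAddress toExternal(InetSocketAddress internal) {
        // Identity fallback keeps non-cloud deployments working unchanged
        return internalToExternal.getOrDefault(internal, internal);
    }

    @Override
    public InetSocketAddress toInternal(InetSocketAddress external) {
        return externalToInternal.getOrDefault(external, external);
    }
}
```

The client would presumably instantiate the configured class reflectively and consult it right before opening a connection.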

Thoughts?

Thanks,
Sebastian


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Exposing cluster deployed in the cloud

Gustavo Fernandes
Questions inlined:

On Mon, May 8, 2017 at 8:57 AM, Sebastian Laskawiec <[hidden email]> wrote:
Hey guys!

A while ago I started working on exposing Infinispan Cluster which is hosted in Kubernetes to the outside world:


What about SNI? Wasn't this scenario the reason why it was implemented, i.e. to allow Hot Rod clients to access an Infinispan server hosted in the cloud?

 

[inline image: diagram of the Kubernetes-hosted cluster and external clients]

I'm currently struggling to get solution like this into the platform [1] but in the meantime I created a very simple POC and I'm testing it locally [2]. 

What does "application" mean in the diagram? Are those different pods, or single containers part of a pod?

There isn't much doc available at [2], how does it work?
 

There are two main problems with the scenario described above:
  1. Infinispan server announces internal addresses (172.17.x.x) to the client. The client needs to remap them into external ones (172.29.x.x).

How would the external address be allocated, e.g. during scaling up and down, and how would the HR client know how to map them correctly?
 

Re: [infinispan-dev] Exposing cluster deployed in the cloud

Sebastian Laskawiec
Hey Gustavo,

Comments inlined.

Thanks,
Sebastian

On Mon, May 8, 2017 at 11:13 AM Gustavo Fernandes <[hidden email]> wrote:
Questions inlined:

On Mon, May 8, 2017 at 8:57 AM, Sebastian Laskawiec <[hidden email]> wrote:
Hey guys!

A while ago I started working on exposing Infinispan Cluster which is hosted in Kubernetes to the outside world:


What about SNI, wasn't this scenario the reason why it was implemented, IOW to allow HR clients to access an ispn hosted in the cloud?

The short answer is: no.

There are at least two major disadvantages of using SNI to connect to a Pod:
  1. You still need to pass an FQDN in the SNI field. An FQDN looks like this [1]: transactions-repository-1-myproject.192.168.0.17.nip.io. This allows you to send TCP packets to a desired Route. In order to reach a specific Pod (assuming one among many), you need to go through a Route and a Service. So it seems you would need a "Pod <-> Service <-> Route" combination for each Pod. Ouch!!
  2. TLS slows everything down (by ~50% in my benchmarks).
Also, your statement that SNI is needed to access an Infinispan server hosted in the cloud is misleading. I think it originated about a year ago, and even then it wasn't quite accurate. You can create a Service per Pod and expose it using a LoadBalancer or a NodePort. In my experience, creating a LoadBalancer per Pod is much simpler than creating a Clustered Service + Route combination and enforcing TLS/SNI.
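For illustration, a per-Pod LoadBalancer Service could look roughly like this (the names and the unique per-Pod label are my assumptions; in practice something, e.g. a StatefulSet or a deployment template, has to stamp a distinct label on each Pod):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: transactions-repository-0      # one Service per Pod
spec:
  type: LoadBalancer                   # or NodePort
  selector:
    pod-name: transactions-repository-0   # hypothetical label unique to this Pod
  ports:
  - port: 11222                        # Hot Rod endpoint
    targetPort: 11222
```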

 

 

[inline image: diagram of the Kubernetes-hosted cluster and external clients]

I'm currently struggling to get solution like this into the platform [1] but in the meantime I created a very simple POC and I'm testing it locally [2]. 

What does "application" mean in the diagram? Are those different pods, or single containers part of a pod?

Those are Pods. Sorry, I made this image too generic.
 

There isn't much doc available at [2], how does it work?

What I'm trying to solve here is accessing the data using the shortest possible path - a "single hop", as we used to call it.

In order to do that, the client and all the servers need to have the same consistent hash (which the client obtains from one of the servers). The problem is that this consistent hash contains the internal IP addresses the servers use to form a cluster. Those addresses are not reachable by the client - it needs to use the external ones. So the idea is to let the client use the consistent hash with internal addresses, but right before sending a get request, remap the internal address to the external one. I haven't tried it, but looking at the code it shouldn't be that hard.
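That flow could be sketched like this (the modular hash used to pick an owner is a deliberate simplification of the real segment-based consistent hash, and the mapping table is an assumption):

```java
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

/** Sketch: pick the key's owner with the internal-address topology, remap just before connecting. */
class SingleHopRouter {
    private final List<InetSocketAddress> internalOwners;               // topology as announced by the servers
    private final Map<InetSocketAddress, InetSocketAddress> toExternal; // internal -> external translation

    SingleHopRouter(List<InetSocketAddress> internalOwners,
                    Map<InetSocketAddress, InetSocketAddress> toExternal) {
        this.internalOwners = internalOwners;
        this.toExternal = toExternal;
    }

    InetSocketAddress resolve(byte[] key) {
        // Simplified owner selection; the real client hashes keys into segments
        int owner = Math.floorMod(Arrays.hashCode(key), internalOwners.size());
        InetSocketAddress internal = internalOwners.get(owner);
        // The remap happens last, so the consistent hash itself stays untouched
        return toExternal.getOrDefault(internal, internal);
    }
}
```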
 
 

There are two main problems with the scenario described above:
  1. Infinispan server announces internal addresses (172.17.x.x) to the client. The client needs to remap them into external ones (172.29.x.x).

How would the external address be allocated, e.g. during scaling up and down and how the HR client would know how to map them correctly?

This is the discovery part of the problem, and it is pretty hard to solve. For Kubernetes we can expose a third-party REST service which will provide this information. I'm experimenting with this approach in my solution: https://github.com/slaskawi/external-ip-proxy/blob/master/Main.go#L57 (later this week I plan to also expose runtime configuration with the internal <-> external mapping).

Unfortunately, the same problem also exists in some OpenStack configurations (OpenStack also uses internal/external addresses), so a custom REST service would be needed there as well. But this is very low priority for me.
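As an illustration of the discovery side, the client could fetch and parse a mapping payload from such a REST service. The line-oriented internal=external format below is purely an assumption, not the actual format served by external-ip-proxy:

```java
import java.util.HashMap;
import java.util.Map;

/** Parses a hypothetical "internal=external" line-oriented payload from a discovery endpoint. */
class MappingPayloadParser {
    static Map<String, String> parse(String body) {
        Map<String, String> mapping = new HashMap<>();
        for (String line : body.split("\n")) {
            line = line.trim();
            // Skip blanks and comments
            if (line.isEmpty() || line.startsWith("#")) continue;
            String[] parts = line.split("=", 2);
            if (parts.length == 2) {
                mapping.put(parts[0].trim(), parts[1].trim());
            }
        }
        return mapping;
    }
}
```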
 
 
--

SEBASTIAN ŁASKAWIEC

INFINISPAN DEVELOPER



Re: [infinispan-dev] Exposing cluster deployed in the cloud

Tristan Tarrant
In reply to this post by Sebastian Laskawiec
Sebastian,
are you familiar with Hot Rod's proxyHost/proxyPort [1]? On the server it is
configured using the external-host / external-port attributes on the
topology-state-transfer element [2]



[1]
https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/configuration/HotRodServerConfigurationBuilder.java#L43
[2]
https://github.com/infinispan/infinispan/blob/master/server/integration/endpoint/src/main/resources/schema/jboss-infinispan-endpoint_9_0.xsd#L203
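For completeness, a sketch of what that looks like in the server's endpoint configuration. The external-host / external-port attribute names come from the schema in [2]; the surrounding element and attribute values here are placeholders:

```xml
<hotrod-connector socket-binding="hotrod" cache-container="clustered">
  <!-- Advertise this address to clients instead of the internal bind address -->
  <topology-state-transfer external-host="203.0.113.10" external-port="11222"/>
</hotrod-connector>
```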


--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

Re: [infinispan-dev] Exposing cluster deployed in the cloud

Sebastian Laskawiec
Hey Tristan!

I checked this part and it won't do the trick. The problem is that the server does not know which address is used to expose its services. Moreover, this address can change over time.

Thanks,
Sebastian

On Tue, May 9, 2017 at 3:28 PM Tristan Tarrant <[hidden email]> wrote:
Sebastian,
are you familiar with Hot Rod's proxyHost/proxyPort [1]. In server it is
configured using external-host / external-port attributes on the
topology-state-transfer element [2]



[1]
https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/configuration/HotRodServerConfigurationBuilder.java#L43
[2]
https://github.com/infinispan/infinispan/blob/master/server/integration/endpoint/src/main/resources/schema/jboss-infinispan-endpoint_9_0.xsd#L203


--

SEBASTIAN ŁASKAWIEC

INFINISPAN DEVELOPER



Re: [infinispan-dev] Exposing cluster deployed in the cloud

Tristan Tarrant
We would need to provide a way to supply the external address at
runtime, e.g. via JMX.
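A runtime-settable external address could be sketched as a standard MBean (all names here are hypothetical; only the "supply it at runtime via JMX" idea comes from the message above):

```java
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicReference;

/** Hypothetical management interface for updating the advertised external address at runtime. */
interface ExternalAddressMBean {
    String getExternalHost();
    int getExternalPort();
    void setExternalAddress(String host, int port);
}

/** Thread-safe holder the server could consult when announcing topology to clients. */
class ExternalAddress implements ExternalAddressMBean {
    private final AtomicReference<InetSocketAddress> address =
            new AtomicReference<>(InetSocketAddress.createUnresolved("0.0.0.0", 11222));

    @Override public String getExternalHost() { return address.get().getHostString(); }
    @Override public int getExternalPort() { return address.get().getPort(); }

    @Override public void setExternalAddress(String host, int port) {
        // createUnresolved avoids a DNS lookup; the address is only advertised, not dialed here
        address.set(InetSocketAddress.createUnresolved(host, port));
    }
}
```

Registered under a well-known ObjectName via ManagementFactory.getPlatformMBeanServer(), an operator could update the advertised address whenever the cloud provider reassigns it.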

Tristan

On 5/22/17 2:50 PM, Sebastian Laskawiec wrote:

> Hey Tristan!
>
> I checked this part and it won't do the trick. The problem is that the
> server does not know which address is used for exposing its services.
> Moreover, this address can change with time.
>
> Thanks,
> Sebastian

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

Re: [infinispan-dev] Exposing cluster deployed in the cloud

Sebastian Laskawiec
I think the external/internal address translation should be provided by the user. I'm working on a prototype here: https://github.com/slaskawi/infinispan/commit/eeeeae7b567fd84946cba90153d7abf2dd0d6641

I will tidy it up and send a pull request later this week.

On Mon, May 22, 2017 at 4:49 PM Tristan Tarrant <[hidden email]> wrote:
We would need to provide a way to supply the external address at
runtime, e.g. via JMX.

Tristan

--

SEBASTIAN ŁASKAWIEC

INFINISPAN DEVELOPER

