
[infinispan-dev] Multi tenancy support for Infinispan


[infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
Dear Community,

Please have a look at the design of Multi tenancy support for Infinispan [1]. I would be more than happy to get some feedback from you.

Highlights:
  • The implementation will be based on a Router (which will be built based on Netty)
  • Multiple Hot Rod and REST servers will be attached to the router which in turn will be attached to the endpoint
  • The router will operate on a binary protocol when using Hot Rod clients and path-based routing when using REST
  • Memcached will be out of scope
  • The router will support SSL+SNI
Thanks
Sebastian


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Emmanuel Bernard
Is the router a software component of all nodes in the cluster ?
Does the router then redirect all requests to the same cache-container for all tenants? How is the isolation done then?
Or does each tenant effectively have a different cache container and is thus "physically" isolated?
Or is that config dependent (from an endpoint to the cache-container), and could some tenants share the same cache container? In which case, will they see the same data?

Finally, I think the design should allow for "dynamic" tenant configuration, meaning that I don't have to change the config manually when I add a new customer / tenant. 

That's all, and sorry for the naive questions :)

On 29 Apr 2016, at 17:29, Sebastian Laskawiec <[hidden email]> wrote:

Dear Community,

Please have a look at the design of Multi tenancy support for Infinispan [1]. I would be more than happy to get some feedback from you.

Highlights:
  • The implementation will be based on a Router (which will be built based on Netty)
  • Multiple Hot Rod and REST servers will be attached to the router which in turn will be attached to the endpoint
  • The router will operate on a binary protocol when using Hot Rod clients and path-based routing when using REST
  • Memcached will be out of scope
  • The router will support SSL+SNI
Thanks
Sebastian

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
Hey Emmanuel!

Comments inlined.

There is one more thing to discuss - how SNI [1] for the Hot Rod server fits into the Router design. Obviously there is some overlap, and the support for SSL+SNI also needs to be implemented in the Router [2] (it potentially needs to decrypt an encrypted "switch-to-tenant" command). Moreover, if the client sends its SNI Host Name with the request, we can connect it to the proper CacheContainer even without the "switch-to-tenant" command. Of course there is some overhead here as well - if someone has only one Hot Rod server and wants to use SNI, he would need to configure a Router which always sends everything to that single server. 

Thanks
Sebastian


On Fri, May 6, 2016 at 8:37 PM, Emmanuel Bernard <[hidden email]> wrote:
Is the router a software component of all nodes in the cluster ?

Yes
 
Does the router then redirect all requests to the same cache-container for all tenants? How is the isolation done then?

Each tenant has its own Cache Container, so they are fully isolated. As a matter of fact this is how it is done now - you can run multiple Hot Rod servers in one node (but each of them is attached to a different port). The router takes this concept one step further and offers "one entry point" for all embedded Hot Rod servers.
 
Or does each tenant effectively have a different cache container and is thus "physically" isolated?
Or is that config dependent (from an endpoint to the cache-container), and could some tenants share the same cache container? In which case, will they see the same data?

All tenants operate on their own Cache Containers, so they will not see each other's data. However, if you create 2 CacheContainers with the same cluster name (//subsystem/cache-container/transport/@cluster), they will join the same cluster and see each other's data. I think this should be the recommended way of handling that kind of scenario. 
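To illustrate the cluster-name point with the embedded API (a minimal sketch; the cluster names are placeholders, not taken from the design document): two CacheManagers configured with the same cluster name join the same cluster and share data, while different names keep them isolated.

```java
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class ClusterNameSketch {
    public static void main(String[] args) {
        // Same cluster name -> the managers discover each other and share data.
        GlobalConfigurationBuilder shared = GlobalConfigurationBuilder.defaultClusteredBuilder();
        shared.transport().clusterName("tenant-shared");

        // Different cluster name -> fully isolated, as recommended per tenant.
        GlobalConfigurationBuilder isolated = GlobalConfigurationBuilder.defaultClusteredBuilder();
        isolated.transport().clusterName("tenant-b");

        DefaultCacheManager cmShared = new DefaultCacheManager(shared.build());
        DefaultCacheManager cmIsolated = new DefaultCacheManager(isolated.build());
    }
}
```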
 

Finally I think the design should allow for "dynamic" tenant configuration. Meaning that I don't have to change the config manually when I add a new customer / tenant. 

I totally agree. @Tristan - could you please tell me how dynamic reconfiguration via CLI works? I should probably fit the router configuration into that (I assume all existing Protocol Server and Endpoint configurations support it).
 

That's all, and sorry for the naive questions :)

No problem - they were very good questions.
 

On 29 Apr 2016, at 17:29, Sebastian Laskawiec <[hidden email]> wrote:

Dear Community,

Please have a look at the design of Multi tenancy support for Infinispan [1]. I would be more than happy to get some feedback from you.

Highlights:
  • The implementation will be based on a Router (which will be built based on Netty)
  • Multiple Hot Rod and REST servers will be attached to the router which in turn will be attached to the endpoint
  • The router will operate on a binary protocol when using Hot Rod clients and path-based routing when using REST
  • Memcached will be out of scope
  • The router will support SSL+SNI
Thanks
Sebastian

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Radim Vansa
In reply to this post by Sebastian Laskawiec
As for the questions:
* Is SSL required for SNI? I can imagine that multi-tenancy would make
sense even in situations where the connection does not need to be
encrypted. Moreover, if we plan to eventually have HR clients with an async
API (and using async I/O), SSL is even more of a PITA. Btw., do we have any
numbers on how much SSL affects perf? (that's a question for QA, though)

* I don't think that dynamic switching of tenants would make sense,
since that would require invalidating all RemoteCache instances, near
caches, connection pools, everything. So it's the same as starting from
scratch.

R.





On 04/29/2016 05:29 PM, Sebastian Laskawiec wrote:

> Dear Community,
>
> Please have a look at the design of Multi tenancy support for
> Infinispan [1]. I would be more than happy to get some feedback from you.
>
> Highlights:
>
>   * The implementation will be based on a Router (which will be built
>     based on Netty)
>   * Multiple Hot Rod and REST servers will be attached to the router
>     which in turn will be attached to the endpoint
>   * The router will operate on a binary protocol when using Hot Rod
>     clients and path-based routing when using REST
>   * Memcached will be out of scope
>   * The router will support SSL+SNI
>
> Thanks
> Sebastian
>
> [1]
> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


--
Radim Vansa <[hidden email]>
JBoss Performance Team

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
Hey Radim!

Comments inlined.

Thanks
Sebastian

On Mon, May 9, 2016 at 12:55 PM, Radim Vansa <[hidden email]> wrote:
As for the questions:
* Is SSL required for SNI? I can imagine that multi-tenancy would make
sense even in situations when the connection does not need to be
encrypted. Moreover, if we plan to eventually have HR clients with async
API (and using async I/O), SSL is even more PITA. Btw., do we have any
numbers how much SSL affects perf? (that's a question for QA, though)

Unfortunately no. SNI is an extension of TLS [2], which is the successor of SSL. In Java, SNI Host Names are specified in SSLParameters [3].
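For illustration, here is a minimal client-side sketch of setting the SNI Host Name through SSLParameters (the address, port and host name "tenant-1" are just placeholders):

```java
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import java.util.Collections;
import java.util.List;

public class SniClientSketch {
    public static void main(String[] args) throws Exception {
        SSLContext context = SSLContext.getDefault();
        try (SSLSocket socket = (SSLSocket) context.getSocketFactory()
                .createSocket("infinispan.example.com", 11222)) {
            SSLParameters params = socket.getSSLParameters();
            // The SNI host name travels in the TLS ClientHello, so the server side
            // can pick the matching certificate (and tenant) before the handshake completes.
            List<SNIServerName> serverNames = Collections.singletonList(new SNIHostName("tenant-1"));
            params.setServerNames(serverNames);
            socket.setSSLParameters(params);
            socket.startHandshake();
        }
    }
}
```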

Of course SSL slows things down a bit; that's why we also need a "switch-to-tenant" command, which would be used by clients that do not want SSL. However, if someone uses SSL with SNI (and only then), we can switch them to the proper tenant automatically (because we have enough information at that point).
 

* I don't think that dynamic switching of tenants would make sense,
since that would require to invalidate all RemoteCache instances, near
caches, connection pools, everything. So it's the same as starting from
scratch.

Frankly, I also have mixed feelings about this feature. I think it would be much nicer if we switched to another tenant by doing a disconnect/connect sequence (rather than switching dynamically).
 

R.





On 04/29/2016 05:29 PM, Sebastian Laskawiec wrote:
> Dear Community,
>
> Please have a look at the design of Multi tenancy support for
> Infinispan [1]. I would be more than happy to get some feedback from you.
>
> Highlights:
>
>   * The implementation will be based on a Router (which will be built
>     based on Netty)
>   * Multiple Hot Rod and REST servers will be attached to the router
>     which in turn will be attached to the endpoint
>   * The router will operate on a binary protocol when using Hot Rod
>     clients and path-based routing when using REST
>   * Memcached will be out of scope
>   * The router will support SSL+SNI
>
> Thanks
> Sebastian
>
> [1]
> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server

>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


--
Radim Vansa <[hidden email]>
JBoss Performance Team

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Radim Vansa
On 05/09/2016 07:52 AM, Sebastian Laskawiec wrote:

> Hey Radim!
>
> Comments inlined.
>
> Thanks
> Sebastian
>
> On Mon, May 9, 2016 at 12:55 PM, Radim Vansa <[hidden email]
> <mailto:[hidden email]>> wrote:
>
>     As for the questions:
>     * Is SSL required for SNI? I can imagine that multi-tenancy would make
>     sense even in situations when the connection does not need to be
>     encrypted. Moreover, if we plan to eventually have HR clients with
>     async
>     API (and using async I/O), SSL is even more PITA. Btw., do we have any
>     numbers how much SSL affects perf? (that's a question for QA, though)
>
>
> Unfortunately no. SNI is an extension of TLS [2] which is an upgrade
> of SSL. In Java SNI Host names are specified in SSLParameters [3].
>
> Of course SSL slows things down a bit, that's why we also need a
> "switch-to-tenant" command which would be used by the clients who do
> not want SSL. However if someone uses SNI and SSL (and only then) we
> can switch him to proper tenant automatically (because we have enough
> information at that point).

So you can initiate connection with SSL (+SNI) and then downgrade it to
plain-text?

>
>     * I don't think that dynamic switching of tenants would make sense,
>     since that would require to invalidate all RemoteCache instances, near
>     caches, connection pools, everything. So it's the same as starting
>     from
>     scratch.
>
>
> Frankly I also have a mixed feelings about this feature. I think it
> would be much nicer if we switched to another tenant by doing
> disconnect/connect sequence (and not switching dynamically).
>
>
>     R.
>
>
>
>
>
>     On 04/29/2016 05:29 PM, Sebastian Laskawiec wrote:
>     > Dear Community,
>     >
>     > Please have a look at the design of Multi tenancy support for
>     > Infinispan [1]. I would be more than happy to get some feedback
>     from you.
>     >
>     > Highlights:
>     >
>     >   * The implementation will be based on a Router (which will be
>     built
>     >     based on Netty)
>     >   * Multiple Hot Rod and REST servers will be attached to the router
>     >     which in turn will be attached to the endpoint
>     >   * The router will operate on a binary protocol when using Hot Rod
>     >     clients and path-based routing when using REST
>     >   * Memcached will be out of scope
>     >   * The router will support SSL+SNI
>     >
>     > Thanks
>     > Sebastian
>     >
>     > [1]
>     >
>     https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
>
> [2] https://tools.ietf.org/html/rfc6066#page-6
> [3]
> https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.html#getServerNames--
>
>
>     >
>     >
>     > _______________________________________________
>     > infinispan-dev mailing list
>     > [hidden email]
>     <mailto:[hidden email]>
>     > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
>     --
>     Radim Vansa <[hidden email] <mailto:[hidden email]>>
>     JBoss Performance Team
>
>     _______________________________________________
>     infinispan-dev mailing list
>     [hidden email] <mailto:[hidden email]>
>     https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


--
Radim Vansa <[hidden email]>
JBoss Performance Team

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec

On Mon, May 9, 2016 at 3:30 PM, Radim Vansa <[hidden email]> wrote:
So you can initiate connection with SSL (+SNI) and then downgrade it to
plain-text?

No, that's not possible. The SNI Host Name is used to match the proper certificate from the KeyStore. After a successful handshake, you keep communicating over SSL/TLS.
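As a rough sketch of how a Netty-based router can use SniHandler to pick the per-tenant certificate during the handshake (host names and certificate files are placeholders, not the actual router code):

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.ssl.SniHandler;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.util.DomainNameMapping;

import java.io.File;

public class SniRouterInitializer extends ChannelInitializer<SocketChannel> {

    private final DomainNameMapping<SslContext> tenants;

    public SniRouterInitializer() throws Exception {
        // Each SNI host name maps to the certificate of the matching tenant.
        SslContext tenant1 = SslContextBuilder
                .forServer(new File("tenant1.crt"), new File("tenant1.key")).build();
        SslContext tenant2 = SslContextBuilder
                .forServer(new File("tenant2.crt"), new File("tenant2.key")).build();
        tenants = new DomainNameMapping<>(tenant1); // default when no SNI name matches
        tenants.add("tenant1.example.com", tenant1);
        tenants.add("tenant2.example.com", tenant2);
    }

    @Override
    protected void initChannel(SocketChannel ch) {
        // SniHandler peeks at the TLS ClientHello, picks the SslContext for the
        // requested host name and only then completes the handshake.
        ch.pipeline().addLast(new SniHandler(tenants));
        // ...after the handshake the router would append the Hot Rod / REST
        // handlers "borrowed" from the matching ProtocolServer here.
    }
}
```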

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Tristan Tarrant-2
In reply to this post by Sebastian Laskawiec
Not sure I like the introduction of another component at the front.

My original idea for allowing the client to choose the container was:

- with TLS: use SNI to choose the container
- without TLS: enhance the PING operation of the Hot Rod protocol to
also take the server name. This would need to be a requirement when
exposing multiple containers over the same endpoint.

From a client API perspective, there would be no difference between the
above two approaches: just specify the server name and depending on the
transport, select the right one.
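A hypothetical sketch of what that could look like on the client side (serverName() is not an existing Hot Rod client option; host, port and cache name are placeholders that only illustrate the idea):

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class TenantClientSketch {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("infinispan.example.com").port(11222);
        // Hypothetical option: the tenant/server name would be conveyed via SNI when
        // TLS is enabled, or via the enhanced PING operation on a plain connection.
        // builder.serverName("tenant-1");
        RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());
        remoteCacheManager.getCache("default").put("key", "value");
    }
}
```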

Tristan

On 29/04/2016 17:29, Sebastian Laskawiec wrote:

> Dear Community,
>
> Please have a look at the design of Multi tenancy support for Infinispan
> [1]. I would be more than happy to get some feedback from you.
>
> Highlights:
>
>   * The implementation will be based on a Router (which will be built
>     based on Netty)
>   * Multiple Hot Rod and REST servers will be attached to the router
>     which in turn will be attached to the endpoint
>   * The router will operate on a binary protocol when using Hot Rod
>     clients and path-based routing when using REST
>   * Memcached will be out of scope
>   * The router will support SSL+SNI
>
> Thanks
> Sebastian
>
> [1]
> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
Hey Tristan!

If I understood you correctly, you're suggesting enhancing the ProtocolServer to support multiple EmbeddedCacheManagers (probably with a shared transport, by which I mean started on the same Netty server).

Yes, that could also work, but I'm not convinced we won't lose some configuration flexibility. 

Let's consider a configuration file - https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9 - how would one, for example, use authentication for CacheContainer cc1 (and not for cc2) and encryption for cc1 (and not for cc2)? Both are tied to the hotrod-connector. I think supporting this kind of per-tenant options makes sense in terms of multi tenancy. And please note that if we start a new Netty server for each CacheContainer, we almost end up with the router I proposed.

The second argument for using a router is extracting the routing logic into a separate module. Otherwise we would probably end up with several if(isMultiTenant()) statements in the Hot Rod as well as the REST server. Extracting this also has the additional advantage of limiting the changes in those modules (actually there will probably be 2 changes: #1 we should be able to start a ProtocolServer without starting a Netty server (the Router will do it in a multi tenant configuration) and #2 collect the Netty handlers from a ProtocolServer).
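A rough sketch of those two changes (hypothetical interfaces and names, not the actual Infinispan SPI - they only illustrate the idea):

```java
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelPipeline;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical SPI sketch of the two changes mentioned above.
interface TenantProtocolServer {
    void startWithoutTransport();          // #1: no Netty server of its own
    List<ChannelHandler> getHandlers();    // #2: handlers the router can reuse
}

class MultiTenantRouter {
    private final Map<String, TenantProtocolServer> tenants = new HashMap<>();

    void register(String tenant, TenantProtocolServer server) {
        server.startWithoutTransport();
        tenants.put(tenant, server);
    }

    // Invoked once the tenant is known (SNI host name for Hot Rod, path/Host header for REST).
    void attach(String tenant, ChannelPipeline pipeline) {
        for (ChannelHandler handler : tenants.get(tenant).getHandlers()) {
            pipeline.addLast(handler);
        }
    }
}
```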

To sum it up - the router's implementation seems to be more complicated but in the long run I think it might be worth it. 


@Galder - you wrote a huge part of the Hot Rod server - I would love to hear your opinion as well.

Thanks
Sebastian



On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant <[hidden email]> wrote:
Not sure I like the introduction of another component at the front.

My original idea for allowing the client to choose the container was:

- with TLS: use SNI to choose the container
- without TLS: enhance the PING operation of the Hot Rod protocol to
also take the server name. This would need to be a requirement when
exposing multiple containers over the same endpoint.

 From a client API perspective, there would be no difference between the
above two approaches: just specify the server name and depending on the
transport, select the right one.

Tristan

On 29/04/2016 17:29, Sebastian Laskawiec wrote:
> Dear Community,
>
> Please have a look at the design of Multi tenancy support for Infinispan
> [1]. I would be more than happy to get some feedback from you.
>
> Highlights:
>
>   * The implementation will be based on a Router (which will be built
>     based on Netty)
>   * Multiple Hot Rod and REST servers will be attached to the router
>     which in turn will be attached to the endpoint
>   * The router will operate on a binary protocol when using Hot Rod
>     clients and path-based routing when using REST
>   * Memcached will be out of scope
>   * The router will support SSL+SNI
>
> Thanks
> Sebastian
>
> [1]
> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
Hey guys!

Any last call on this? I'm going to start the implementation on Monday.

Thanks
Sebastian

On Wed, May 11, 2016 at 10:38 AM, Sebastian Laskawiec <[hidden email]> wrote:
Hey Tristan!

If I understood you correctly, you're suggesting to enhance the ProtocolServer to support multiple EmbeddedCacheManagers (probably with shared transport and by that I mean started on the same Netty server).

Yes, that also could work but I'm not convinced if we won't loose some configuration flexibility. 

Let's consider a configuration file - https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how for example use authentication for CacheContainer cc1 (and not for cc2) and encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I think using this kind of different options makes sense in terms of multi tenancy. And please note that if we start a new Netty server for each CacheContainer - we almost ended up with the router I proposed.

The second argument for using a router is extracting the routing logic into a separate module. Otherwise we would probably end up with several if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting this has also additional advantage that we limit changes in those modules (actually there will be probably 2 changes #1 we should be able to start a ProtocolServer without starting a Netty server (the Router will do it in multi tenant configuration) and #2 collect Netty handlers from ProtocolServer).

To sum it up - the router's implementation seems to be more complicated but in the long run I think it might be worth it. 


@Galder - you wrote a huge part of the Hot Rod server - I would love to hear your opinion as well.

Thanks
Sebastian



On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant <[hidden email]> wrote:
Not sure I like the introduction of another component at the front.

My original idea for allowing the client to choose the container was:

- with TLS: use SNI to choose the container
- without TLS: enhance the PING operation of the Hot Rod protocol to
also take the server name. This would need to be a requirement when
exposing multiple containers over the same endpoint.

 From a client API perspective, there would be no difference between the
above two approaches: just specify the server name and depending on the
transport, select the right one.

Tristan

On 29/04/2016 17:29, Sebastian Laskawiec wrote:
> Dear Community,
>
> Please have a look at the design of Multi tenancy support for Infinispan
> [1]. I would be more than happy to get some feedback from you.
>
> Highlights:
>
>   * The implementation will be based on a Router (which will be built
>     based on Netty)
>   * Multiple Hot Rod and REST servers will be attached to the router
>     which in turn will be attached to the endpoint
>   * The router will operate on a binary protocol when using Hot Rod
>     clients and path-based routing when using REST
>   * Memcached will be out of scope
>   * The router will support SSL+SNI
>
> Thanks
> Sebastian
>
> [1]
> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev



_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sanne Grinovero-3
Hi Sebastian,

the design seems to assume that what people want is to have multiple
cache containers, one per tenant.
Did you consider the tradeoffs compared to a solution in which you
have a single cache container to manage all caches, but isolate
tenants by having each one access only the subset of caches it owns?

I haven't thought about all implications, but it seems desirable that
all caches - from all tenants - could be managed as a whole. For
example, in the future one might want to know how the memory consumption
is being balanced across different tenants and caches, and have some
smart policies around such concepts. Where would such logic live? It
seems like there is a need for global coordination of resources
across all caches, and so far this has been the CacheManager.
You could change this, but then a higher-level component will be
needed to orchestrate the various CacheManager instances at the server
level.

Similarly, different Caches will need to share some resources; I would
expect, for example, that when you want to run "Infinispan as a
Service", you'd want to also provide the option of enabling some
popular CacheStores in an easy way for the end user (like a checkbox,
as simple as "enable JDBC cachestore" or even higher level "enable
persistent backup").
Taking for example the JDBC CacheStore, I think you'd not want to
create a new database instance dynamically for each tenant, but
rather have them all share the same one - adding a tenant-id to the key,
but also having the JDBC connection pool to this database shared
across all tenants.
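A hypothetical sketch of the "tenant-id in the key" idea (not an existing Infinispan class): a shared JDBC store could persist all tenants' entries in one table by qualifying every stored key with its tenant id.

```java
import java.io.Serializable;
import java.util.Objects;

// Hypothetical key wrapper, only to illustrate the idea above.
final class TenantQualifiedKey implements Serializable {
    private final String tenantId;
    private final Object key;

    TenantQualifiedKey(String tenantId, Object key) {
        this.tenantId = tenantId;
        this.key = key;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof TenantQualifiedKey)) return false;
        TenantQualifiedKey other = (TenantQualifiedKey) o;
        return tenantId.equals(other.tenantId) && key.equals(other.key);
    }

    @Override
    public int hashCode() {
        return Objects.hash(tenantId, key);
    }
}
```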

I realize that this alternative approach will have you face some other
issues - like adding tenant-aware capabilities to some CacheStore
implementations - but sharing and managing the resources is crucial to
implementing multi-tenancy: if we don't, why would you not rather start
separate instances of the Infinispan server?

Thanks,
Sanne



On 13 May 2016 at 14:51, Sebastian Laskawiec <[hidden email]> wrote:

> Hey guys!
>
> Any last call on this? I'm going to start the implementation on Monday.
>
> Thanks
> Sebastian
>
> On Wed, May 11, 2016 at 10:38 AM, Sebastian Laskawiec <[hidden email]>
> wrote:
>>
>> Hey Tristan!
>>
>> If I understood you correctly, you're suggesting to enhance the
>> ProtocolServer to support multiple EmbeddedCacheManagers (probably with
>> shared transport and by that I mean started on the same Netty server).
>>
>> Yes, that also could work but I'm not convinced if we won't loose some
>> configuration flexibility.
>>
>> Let's consider a configuration file -
>> https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how for
>> example use authentication for CacheContainer cc1 (and not for cc2) and
>> encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I
>> think using this kind of different options makes sense in terms of multi
>> tenancy. And please note that if we start a new Netty server for each
>> CacheContainer - we almost ended up with the router I proposed.
>>
>> The second argument for using a router is extracting the routing logic
>> into a separate module. Otherwise we would probably end up with several
>> if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting
>> this has also additional advantage that we limit changes in those modules
>> (actually there will be probably 2 changes #1 we should be able to start a
>> ProtocolServer without starting a Netty server (the Router will do it in
>> multi tenant configuration) and #2 collect Netty handlers from
>> ProtocolServer).
>>
>> To sum it up - the router's implementation seems to be more complicated
>> but in the long run I think it might be worth it.
>>
>> I also wrote the summary of the above here:
>> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach
>>
>> @Galder - you wrote a huge part of the Hot Rod server - I would love to
>> hear your opinion as well.
>>
>> Thanks
>> Sebastian
>>
>>
>>
>> On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant <[hidden email]>
>> wrote:
>>>
>>> Not sure I like the introduction of another component at the front.
>>>
>>> My original idea for allowing the client to choose the container was:
>>>
>>> - with TLS: use SNI to choose the container
>>> - without TLS: enhance the PING operation of the Hot Rod protocol to
>>> also take the server name. This would need to be a requirement when
>>> exposing multiple containers over the same endpoint.
>>>
>>>  From a client API perspective, there would be no difference between the
>>> above two approaches: just specify the server name and depending on the
>>> transport, select the right one.
>>>
>>> Tristan
>>>
>>> On 29/04/2016 17:29, Sebastian Laskawiec wrote:
>>> > Dear Community,
>>> >
>>> > Please have a look at the design of Multi tenancy support for
>>> > Infinispan
>>> > [1]. I would be more than happy to get some feedback from you.
>>> >
>>> > Highlights:
>>> >
>>> >   * The implementation will be based on a Router (which will be built
>>> >     based on Netty)
>>> >   * Multiple Hot Rod and REST servers will be attached to the router
>>> >     which in turn will be attached to the endpoint
>>> >   * The router will operate on a binary protocol when using Hot Rod
>>> >     clients and path-based routing when using REST
>>> >   * Memcached will be out of scope
>>> >   * The router will support SSL+SNI
>>> >
>>> > Thanks
>>> > Sebastian
>>> >
>>> > [1]
>>> >
>>> > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
>>> >
>>> >
>>> > _______________________________________________
>>> > infinispan-dev mailing list
>>> > [hidden email]
>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>> >
>>>
>>> --
>>> Tristan Tarrant
>>> Infinispan Lead
>>> JBoss, a division of Red Hat
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> [hidden email]
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>>
>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
Hey Sanne!

Comments inlined.

Thanks
Sebastian

On Sun, May 15, 2016 at 11:27 PM, Sanne Grinovero <[hidden email]> wrote:
Hi Sebastian,

the design seems to assume that what people want is to have multiple
cache containers, one per tenant.
Did you consider the tradeoffs comparing to a solution in which you
have a single cache container to manage all caches, but isolate
tenants by having each one access only the subset of caches it is
owning?

This approach was the first I crossed out from my list, mainly due to isolation, name clashes (though they are easy to solve - just prefix the cache name with the tenant) and configuration at the Cache Manager level (some tenants might want to use different authentication settings, marshallers etc.).
 
I haven't thought about all implications, but it seems desirable that
all caches - from all tenants - could be managed as a whole. For
example in future one might want to know how the memory consumption is
being balanced across different tenants and caches, and have some
smart policies around such concepts. Where would such logic live? It
seems like there is a need for a global coordination of resources
across all caches, and so far this has been the CacheManager.

Our roadmap contains a health check endpoint. We might aggregate its results and create some scaling policies based on data from all CacheManagers. 

Regarding memory consumption, I've seen it implemented the other way around (you can measure how much memory you have with `cat /sys/fs/cgroup/memory/memory.limit_in_bytes` and use that to construct -Xmx). This way your container will never go beyond the memory limit. I believe this is not an ideal way, but it is definitely the easiest. 
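A minimal sketch of that approach (the cgroup v1 path is the one mentioned above; the 75% heap ratio is just an assumed headroom factor for non-heap memory):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class CgroupXmxSketch {
    public static void main(String[] args) throws Exception {
        // Read the container's memory limit from the cgroup v1 file system.
        long limitBytes = Long.parseLong(
                Files.readAllLines(Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes"))
                        .get(0).trim());
        // Leave headroom for metaspace, direct buffers and thread stacks (assumption: 75%).
        long heapMb = (long) (limitBytes * 0.75) / (1024 * 1024);
        System.out.println("-Xmx" + heapMb + "m");
    }
}
```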
 
You could change this, but then a higher-level component will be
needed to orchestrate the various CacheManager instances at server
level.

Yes, the Router could do that. But I would consider that part of the health check feature rather than multi tenancy. 
 
Similarly, different Caches will need to share some resources; I would
expect for example that when you want to run "Infinispan as a
Service", you'd want to also provide the option of enabling some
popular CacheStores in an easy way for the end user (like a checkbox,
as simple as "enable JDBC cachestore" or even higher level "enable
persistent backup"). 
Taking for example the JDBC CacheStore, I think you'd not want to
create a new database instance dynamically for each instance but
rather have them all share the same, so adding a tenant-id to the key,
but also having the JDBC connection pool to this database shared
across all tenants.

Those points seem valid, but again we would assume a similar configuration for all clients in a hosted Infinispan service. This may not always be true (as I pointed out - settings at the CacheManager level) and I would prefer to keep configuration flexibility here. We may also address some of the resource consumption/performance issues at the Cloud layer, e.g. add a MySQL DB to each Infinispan pod - this way all DB connections will be local to the machine which runs the containers.

I realize that this alternative approach will have you face some other
issues - like adding tenant-aware capabilities to some CacheStore
implementations - but sharing and managing the resources is crucial to
implement multi-tenancy: if we don't, why would you not rather start
separate instances of the Infinispan server?

I think Tristan had a similar question about starting separate Infinispan server instances, and maybe I didn't emphasize this enough in the design. 

The goal of adding a router is to allow configuring and starting a CacheManager without starting a Netty server. The Netty server will be started only in the Router, and it will "borrow" handlers from the given Protocol Server.

However, point granted for Cache Store resource utilization - if all our tenants want to have a JDBC Cache Store, then we might create lots of connections to the database. But please note that we will have some problems at the Cache Store level one way or another (if we decided to implement multi tenancy by sharing the same CacheManager, then we would need to solve the data isolation problem). The router approach at least guarantees us isolation, which is the #1 priority for me in multi tenancy. I'm just thinking - maybe we should enhance the Cloud Cache Store (the name fits ideally here) to deal with such a situation and recommend it to our clients as the best tool for storing multi tenant data?
 

Thanks,
Sanne



On 13 May 2016 at 14:51, Sebastian Laskawiec <[hidden email]> wrote:
> Hey guys!
>
> Any last call on this? I'm going to start the implementation on Monday.
>
> Thanks
> Sebastian
>
> On Wed, May 11, 2016 at 10:38 AM, Sebastian Laskawiec <[hidden email]>
> wrote:
>>
>> Hey Tristan!
>>
>> If I understood you correctly, you're suggesting to enhance the
>> ProtocolServer to support multiple EmbeddedCacheManagers (probably with
>> shared transport and by that I mean started on the same Netty server).
>>
>> Yes, that also could work but I'm not convinced if we won't loose some
>> configuration flexibility.
>>
>> Let's consider a configuration file -
>> https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how for
>> example use authentication for CacheContainer cc1 (and not for cc2) and
>> encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I
>> think using this kind of different options makes sense in terms of multi
>> tenancy. And please note that if we start a new Netty server for each
>> CacheContainer - we almost ended up with the router I proposed.
>>
>> The second argument for using a router is extracting the routing logic
>> into a separate module. Otherwise we would probably end up with several
>> if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting
>> this has also additional advantage that we limit changes in those modules
>> (actually there will be probably 2 changes #1 we should be able to start a
>> ProtocolServer without starting a Netty server (the Router will do it in
>> multi tenant configuration) and #2 collect Netty handlers from
>> ProtocolServer).
>>
>> To sum it up - the router's implementation seems to be more complicated
>> but in the long run I think it might be worth it.
>>
>> I also wrote the summary of the above here:
>> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach
>>
>> @Galder - you wrote a huge part of the Hot Rod server - I would love to
>> hear your opinion as well.
>>
>> Thanks
>> Sebastian
>>
>>
>>
>> On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant <[hidden email]>
>> wrote:
>>>
>>> Not sure I like the introduction of another component at the front.
>>>
>>> My original idea for allowing the client to choose the container was:
>>>
>>> - with TLS: use SNI to choose the container
>>> - without TLS: enhance the PING operation of the Hot Rod protocol to
>>> also take the server name. This would need to be a requirement when
>>> exposing multiple containers over the same endpoint.
>>>
>>>  From a client API perspective, there would be no difference between the
>>> above two approaches: just specify the server name and depending on the
>>> transport, select the right one.
>>>
>>> Tristan
>>>
>>> On 29/04/2016 17:29, Sebastian Laskawiec wrote:
>>> > Dear Community,
>>> >
>>> > Please have a look at the design of Multi tenancy support for
>>> > Infinispan
>>> > [1]. I would be more than happy to get some feedback from you.
>>> >
>>> > Highlights:
>>> >
>>> >   * The implementation will be based on a Router (which will be built
>>> >     based on Netty)
>>> >   * Multiple Hot Rod and REST servers will be attached to the router
>>> >     which in turn will be attached to the endpoint
>>> >   * The router will operate on a binary protocol when using Hot Rod
>>> >     clients and path-based routing when using REST
>>> >   * Memcached will be out of scope
>>> >   * The router will support SSL+SNI
>>> >
>>> > Thanks
>>> > Sebastian
>>> >
>>> > [1]
>>> >
>>> > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
>>> >
>>> >
>>> > _______________________________________________
>>> > infinispan-dev mailing list
>>> > [hidden email]
>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>> >
>>>
>>> --
>>> Tristan Tarrant
>>> Infinispan Lead
>>> JBoss, a division of Red Hat
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> [hidden email]
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>>
>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Galder Zamarreño
In reply to this post by Sebastian Laskawiec
Hi all,

Sorry for the delay getting back on this.

The addition of a new component does not worry me so much. It has the advantage of implementing the routing once, independently of the backend endpoint, whether HR or REST.

What I'm struggling to understand is what protocol the clients will use to talk to the router. It seems wasteful having to build two protocols at this level, e.g. one at TCP level and one at REST level. If you're going to end up building two protocols, the benefit of the router component disappears and then you might as well embed the two routing protocols within REST and HR directly.

In other words, for the router component to make sense, I think it should:

1. Clients, no matter whether HR or REST, use a single protocol to talk to the router. The natural choice here would be HTTP/2 or a similar protocol.
2. The router then talks HR or REST to the backend. Here the router uses the TCP or HTTP protocol, based on the backend's needs.

^ The above implies that the HR client has to talk TCP when using the HR server directly, or HTTP/2 when using it via the router, but I don't think this is too bad and it gives us some experience working with HTTP/2, besides the work Anton is carrying out as part of GSoC.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 11 May 2016, at 10:38, Sebastian Laskawiec <[hidden email]> wrote:
>
> Hey Tristan!
>
> If I understood you correctly, you're suggesting to enhance the ProtocolServer to support multiple EmbeddedCacheManagers (probably with shared transport and by that I mean started on the same Netty server).
>
> Yes, that also could work but I'm not convinced if we won't loose some configuration flexibility.
>
> Let's consider a configuration file - https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how for example use authentication for CacheContainer cc1 (and not for cc2) and encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I think using this kind of different options makes sense in terms of multi tenancy. And please note that if we start a new Netty server for each CacheContainer - we almost ended up with the router I proposed.
>
> The second argument for using a router is extracting the routing logic into a separate module. Otherwise we would probably end up with several if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting this has also additional advantage that we limit changes in those modules (actually there will be probably 2 changes #1 we should be able to start a ProtocolServer without starting a Netty server (the Router will do it in multi tenant configuration) and #2 collect Netty handlers from ProtocolServer).
>
> To sum it up - the router's implementation seems to be more complicated but in the long run I think it might be worth it.
>
> I also wrote the summary of the above here: https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach
>
> @Galder - you wrote a huge part of the Hot Rod server - I would love to hear your opinion as well.
>
> Thanks
> Sebastian
>
>
>
> On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant <[hidden email]> wrote:
> Not sure I like the introduction of another component at the front.
>
> My original idea for allowing the client to choose the container was:
>
> - with TLS: use SNI to choose the container
> - without TLS: enhance the PING operation of the Hot Rod protocol to
> also take the server name. This would need to be a requirement when
> exposing multiple containers over the same endpoint.
>
>  From a client API perspective, there would be no difference between the
> above two approaches: just specify the server name and depending on the
> transport, select the right one.
>
> Tristan
>
> On 29/04/2016 17:29, Sebastian Laskawiec wrote:
> > Dear Community,
> >
> > Please have a look at the design of Multi tenancy support for Infinispan
> > [1]. I would be more than happy to get some feedback from you.
> >
> > Highlights:
> >
> >   * The implementation will be based on a Router (which will be built
> >     based on Netty)
> >   * Multiple Hot Rod and REST servers will be attached to the router
> >     which in turn will be attached to the endpoint
> >   * The router will operate on a binary protocol when using Hot Rod
> >     clients and path-based routing when using REST
> >   * Memcached will be out of scope
> >   * The router will support SSL+SNI
> >
> > Thanks
> > Sebastian
> >
> > [1]
> > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
> >
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > [hidden email]
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
>
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
Hey Galder!

Comments inlined.

Thanks
Sebastian

On Wed, May 25, 2016 at 10:52 AM, Galder Zamarreño <[hidden email]> wrote:
Hi all,

Sorry for the delay getting back on this.

The addition of a new component does not worry me so much. It has the advantage of implementing it once independent of the backend endpoint, whether HR or Rest.

What I'm struggling to understand is what protocol the clients will use to talk to the router. It seems wasteful having to build two protocols at this level, e.g. one at TCP level and one at REST level. If you're going to end up building two protocols, the benefit of the router component dissapears and then you might as well embedded the two routing protocols within REST and HR directly.

I think I wasn't clear enough in the design about how the routing works...

In your scenario, both servers (Hot Rod and REST) will start EmbeddedCacheManagers internally, but none of them will start a Netty transport. The only transport that will be turned on is the router's. The router will be responsible for recognizing the request type (if HTTP - find the proper REST server, if Hot Rod protocol - find the proper Hot Rod server) and attaching its handlers at the end of the pipeline.
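An illustrative sketch of that request-type recognition (not the actual router code - it only shows the idea of peeking at the first bytes before installing the selected tenant's handlers):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class ProtocolSniffer extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 4) {
            return; // wait for more bytes before deciding
        }
        String prefix = in.toString(in.readerIndex(), 4, StandardCharsets.US_ASCII);
        if (prefix.startsWith("GET") || prefix.startsWith("PUT")
                || prefix.startsWith("POST") || prefix.startsWith("DELE")) {
            // looks like HTTP -> install the REST handlers of the resolved tenant
        } else {
            // assume the Hot Rod binary protocol -> install the Hot Rod handlers
        }
        ctx.pipeline().remove(this); // routing decision made, remove the sniffer
    }
}
```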

Regarding the custom protocol (this use case applies to Hot Rod clients which do not use SSL, so SNI routing is not possible), you and Tristan got me thinking about whether we really need it. Maybe we should require SSL+SNI when using the Hot Rod protocol, with no exceptions? The thing that bothers me is that SSL makes the whole setup twice as slow: https://gist.github.com/slaskawi/51f76b0658b9ee0c9351bd17224b1ba2#file-gistfile1-txt-L1753-L1754
 

In other words, for the router component to make sense, I think it should:

1. Clients, no matter whether HR or REST, to use 1 single protocol to the router. The natural thing here would be HTTP/2 or similar protocol.

Yes, that's the goal.
 
2. The router then talks HR or REST to the backend. Here the router uses TCP or HTTP protocol based on the backend needs.

It's even simpler - it just uses the backend's Netty Handlers.

Since the SNI implementation is ready, please have a look: https://github.com/infinispan/infinispan/pull/4348
 

^ The above implies that HR client has to talk TCP when using HR server directly or HTTP/2 when using it via router, but I don't think this is too bad and it gives us some experience working with HTTP/2 besides the work Anton is carrying out as part of GSoC. 

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 11 May 2016, at 10:38, Sebastian Laskawiec <[hidden email]> wrote:
>
> Hey Tristan!
>
> If I understood you correctly, you're suggesting to enhance the ProtocolServer to support multiple EmbeddedCacheManagers (probably with shared transport and by that I mean started on the same Netty server).
>
> Yes, that also could work but I'm not convinced if we won't loose some configuration flexibility.
>
> Let's consider a configuration file - https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how for example use authentication for CacheContainer cc1 (and not for cc2) and encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I think using this kind of different options makes sense in terms of multi tenancy. And please note that if we start a new Netty server for each CacheContainer - we almost ended up with the router I proposed.
>
> The second argument for using a router is extracting the routing logic into a separate module. Otherwise we would probably end up with several if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting this has also additional advantage that we limit changes in those modules (actually there will be probably 2 changes #1 we should be able to start a ProtocolServer without starting a Netty server (the Router will do it in multi tenant configuration) and #2 collect Netty handlers from ProtocolServer).
>
> To sum it up - the router's implementation seems to be more complicated but in the long run I think it might be worth it.
>
> I also wrote the summary of the above here: https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach
>
> @Galder - you wrote a huge part of the Hot Rod server - I would love to hear your opinion as well.
>
> Thanks
> Sebastian
>
>
>
> On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant <[hidden email]> wrote:
> Not sure I like the introduction of another component at the front.
>
> My original idea for allowing the client to choose the container was:
>
> - with TLS: use SNI to choose the container
> - without TLS: enhance the PING operation of the Hot Rod protocol to
> also take the server name. This would need to be a requirement when
> exposing multiple containers over the same endpoint.
>
>  From a client API perspective, there would be no difference between the
> above two approaches: just specify the server name and depending on the
> transport, select the right one.
>
> Tristan
>
> On 29/04/2016 17:29, Sebastian Laskawiec wrote:
> > Dear Community,
> >
> > Please have a look at the design of Multi tenancy support for Infinispan
> > [1]. I would be more than happy to get some feedback from you.
> >
> > Highlights:
> >
> >   * The implementation will be based on a Router (which will be built
> >     based on Netty)
> >   * Multiple Hot Rod and REST servers will be attached to the router
> >     which in turn will be attached to the endpoint
> >   * The router will operate on a binary protocol when using Hot Rod
> >     clients and path-based routing when using REST
> >   * Memcached will be out of scope
> >   * The router will support SSL+SNI
> >
> > Thanks
> > Sebastian
> >
> > [1]
> > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
> >
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > [hidden email]
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
>
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
Hey!

The multi-tenancy support for Hot Rod and REST has been implemented [2]. Since the PR is gigantic, I marked some interesting places for review, so you might want to skip the boilerplate parts.

The Memcached and WebSockets implementations are currently out of scope. If you would like us to implement them, please vote on the following tickets:
Thanks
Sebastian


On Thu, May 26, 2016 at 4:51 PM, Sebastian Laskawiec <[hidden email]> wrote:
Hey Galder!

Comments inlined.

Thanks
Sebastian

On Wed, May 25, 2016 at 10:52 AM, Galder Zamarreño <[hidden email]> wrote:
Hi all,

Sorry for the delay getting back on this.

The addition of a new component does not worry me so much. It has the advantage of implementing it once independent of the backend endpoint, whether HR or Rest.

What I'm struggling to understand is what protocol the clients will use to talk to the router. It seems wasteful having to build two protocols at this level, e.g. one at TCP level and one at REST level. If you're going to end up building two protocols, the benefit of the router component dissapears and then you might as well embedded the two routing protocols within REST and HR directly.

I think I wasn't clear enough in the design how the routing works...

In your scenario - both servers (hotrod and rest) will start EmbeddedCacheManagers internally but none of them will start Netty transport. The only transport that will be turned on is the router. The router will be responsible for recognizing the request type (if HTTP - find proper REST server, if HotRod protocol - find proper HotRod) and attaching handlers at the end of the pipeline.

Regarding to custom protocol (this usecase could be used with Hotrod clients which do not use SSL (so SNI routing is not possible)), you and Tristan got me thinking whether we really need it. Maybe we should require SSL+SNI when using HotRod protocol with no exceptions? The thing that bothers me is that SSL makes the whole setup twice slower: https://gist.github.com/slaskawi/51f76b0658b9ee0c9351bd17224b1ba2#file-gistfile1-txt-L1753-L1754
 

In other words, for the router component to make sense, I think it should:

1. Clients, no matter whether HR or REST, to use 1 single protocol to the router. The natural thing here would be HTTP/2 or similar protocol.

Yes, that's the goal.
 
2. The router then talks HR or REST to the backend. Here the router uses TCP or HTTP protocol based on the backend needs.

It's even simpler - it just uses the backend's Netty Handlers.

Since the SNI implementation is ready, please have a look: https://github.com/infinispan/infinispan/pull/4348
 

^ The above implies that HR client has to talk TCP when using HR server directly or HTTP/2 when using it via router, but I don't think this is too bad and it gives us some experience working with HTTP/2 besides the work Anton is carrying out as part of GSoC. 

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 11 May 2016, at 10:38, Sebastian Laskawiec <[hidden email]> wrote:
>
> Hey Tristan!
>
> If I understood you correctly, you're suggesting to enhance the ProtocolServer to support multiple EmbeddedCacheManagers (probably with shared transport and by that I mean started on the same Netty server).
>
> Yes, that also could work but I'm not convinced if we won't loose some configuration flexibility.
>
> Let's consider a configuration file - https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how for example use authentication for CacheContainer cc1 (and not for cc2) and encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I think using this kind of different options makes sense in terms of multi tenancy. And please note that if we start a new Netty server for each CacheContainer - we almost ended up with the router I proposed.
>
> The second argument for using a router is extracting the routing logic into a separate module. Otherwise we would probably end up with several if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting this has also additional advantage that we limit changes in those modules (actually there will be probably 2 changes #1 we should be able to start a ProtocolServer without starting a Netty server (the Router will do it in multi tenant configuration) and #2 collect Netty handlers from ProtocolServer).
>
> To sum it up - the router's implementation seems to be more complicated but in the long run I think it might be worth it.
>
> I also wrote the summary of the above here: https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach
>
> @Galder - you wrote a huge part of the Hot Rod server - I would love to hear your opinion as well.
>
> Thanks
> Sebastian
>
>
>
> On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant <[hidden email]> wrote:
> Not sure I like the introduction of another component at the front.
>
> My original idea for allowing the client to choose the container was:
>
> - with TLS: use SNI to choose the container
> - without TLS: enhance the PING operation of the Hot Rod protocol to
> also take the server name. This would need to be a requirement when
> exposing multiple containers over the same endpoint.
>
>  From a client API perspective, there would be no difference between the
> above two approaches: just specify the server name and depending on the
> transport, select the right one.
>
> Tristan
>
> On 29/04/2016 17:29, Sebastian Laskawiec wrote:
> > Dear Community,
> >
> > Please have a look at the design of Multi tenancy support for Infinispan
> > [1]. I would be more than happy to get some feedback from you.
> >
> > Highlights:
> >
> >   * The implementation will be based on a Router (which will be built
> >     based on Netty)
> >   * Multiple Hot Rod and REST servers will be attached to the router
> >     which in turn will be attached to the endpoint
> >   * The router will operate on a binary protocol when using Hot Rod
> >     clients and path-based routing when using REST
> >   * Memcached will be out of scope
> >   * The router will support SSL+SNI
> >
> > Thanks
> > Sebastian
> >
> > [1]
> > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
> >
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > [hidden email]
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
>
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
Hey!

Dan pointed out a very interesting thing [1] - we could use the Host header for multi-tenant REST endpoints. Although I really like the idea (this header was introduced to support exactly this kind of use case), it might be a bit problematic from a security point of view (if someone forgets to set it, he'll be talking to someone else's Cache Container).

What do you think about this? Should we implement this (now or later)?

I vote for yes, and for implementing it in 9.1 (or 9.0 if there is enough time).
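For illustration only, a minimal sketch of fail-closed Host-header routing in a Netty handler - the tenant map and the TENANT attribute are assumptions made up for this example, not the actual REST server code:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.AttributeKey;
import java.util.Map;

public class HostHeaderRoutingHandler extends SimpleChannelInboundHandler<HttpRequest> {

   // Channel attribute that downstream handlers read to find the selected tenant
   public static final AttributeKey<String> TENANT = AttributeKey.valueOf("tenant");

   private final Map<String, String> hostToCacheContainer; // e.g. "tenant1.example.com" -> "cc1"

   public HostHeaderRoutingHandler(Map<String, String> hostToCacheContainer) {
      super(false); // do not auto-release; the request is forwarded down the pipeline
      this.hostToCacheContainer = hostToCacheContainer;
   }

   @Override
   protected void channelRead0(ChannelHandlerContext ctx, HttpRequest request) {
      String host = request.headers().get(HttpHeaderNames.HOST);
      String container = host == null ? null : hostToCacheContainer.get(host);
      if (container == null) {
         // Fail closed: a missing or unknown Host header never falls through
         // to someone else's Cache Container.
         FullHttpResponse notFound = new DefaultFullHttpResponse(
               HttpVersion.HTTP_1_1, HttpResponseStatus.NOT_FOUND);
         ctx.writeAndFlush(notFound);
         ctx.close();
         return;
      }
      ctx.channel().attr(TENANT).set(container);
      ctx.fireChannelRead(request);
   }
}

The important bit is that an absent or unrecognised Host header is rejected rather than silently mapped to a default tenant.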

Thanks
Sebastian

On Wed, Jun 29, 2016 at 8:55 AM, Sebastian Laskawiec <[hidden email]> wrote:
Hey!

The multi-tenancy support for Hot Rod and REST has been implemented [2]. Since the PR is gigantic, I marked some interesting places for review, so you might want to skip the boilerplate parts.

The Memcached and WebSockets implementations are currently out of scope. If you would like us to implement them, please vote on the following tickets:
Thanks
Sebastian


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
After investigating ALPN [1] and HTTP/2 [2] support I revisited this feature to see how everything fits together.

Just as a reminder - the idea behind the multi-tenant router is to implement a component which will have references to all deployed Hot Rod and REST servers (Memcached and WebSockets are out of scope at this point) [3] and will be able to forward requests to the proper instance.

Since we'd like to create an ALPN-based, polyglot client at some point, I believe the router concept should be a little bit more generic. It should be able to use SNI for routing as well as negotiate the protocol using ALPN, or even switch to a different protocol using the HTTP/1.1 Upgrade header. Having this in mind, I would like to rebase the multi-tenancy feature and slightly modify the router endpoint configuration to something like this:

<router-connector socket-binding="router">
  <default-encryption security-realm="other"/>
  <multi-tenancy>
    <hotrod name="hotrod1">
      <sni host-name="sni1" security-realm="other"/>
    </hotrod>
    <rest name="rest1">
      <path prefix="rest1"/>
      <!-- to be implemented in the future - HTTP + host header as Dan suggested -->
      <host name="test" />
    </rest>
  </multi-tenancy>
  <!-- to be implemented in the future -->
  <polyglot>
    <hotrod name="hotrod1">
      <priority />
    </hotrod>
  </polyglot>
</router-connector>

With this configuration, the router should be really flexible and extendable.
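As a side note, the <sni .../> part maps quite naturally onto Netty's built-in SNI support. A rough sketch under my own assumptions (the certificate files, the fallback context and the class name are placeholders, not the actual router code):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.ssl.SniHandler;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.util.DomainNameMapping;
import io.netty.util.DomainNameMappingBuilder;
import java.io.File;

public class SniRoutingInitializer extends ChannelInitializer<SocketChannel> {

   private final DomainNameMapping<SslContext> sniMapping;

   public SniRoutingInitializer() throws Exception {
      // One SslContext per tenant/security realm; certificate files are placeholders
      SslContext tenant1 = SslContextBuilder
            .forServer(new File("tenant1-cert.pem"), new File("tenant1-key.pem")).build();
      SslContext fallback = SslContextBuilder
            .forServer(new File("default-cert.pem"), new File("default-key.pem")).build();
      sniMapping = new DomainNameMappingBuilder<SslContext>(fallback)
            .add("sni1", tenant1) // matches <sni host-name="sni1"/> above
            .build();
   }

   @Override
   protected void initChannel(SocketChannel ch) {
      // SniHandler selects the SslContext from the TLS SNI extension; the chosen
      // host name (SniHandler.hostname()) can then be used to pick the Hot Rod backend.
      ch.pipeline().addLast(new SniHandler(sniMapping));
   }
}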

If there are no negative comments, I'll start working on this tomorrow.

Thanks
Sebastian

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Anton Gabov
Sebastian, correct me if I'm wrong.

As I understand it, the client will have a Router instance, which has info about the servers, the caches on these servers, and the supported protocols (Hot Rod, HTTP/1, HTTP/2).

So, I have some questions:
1) Will the Router keep all connections up, or close a connection after the request? For instance, the client needs to make a request to some server. It creates a connection, makes the request and closes the connection (or do we keep the connection and leave it open)?
2) How can an upgrade from HTTP/2 to Hot Rod be done? I cannot imagine this situation, but I would like to know it :)
3) Can the Router be configured programmatically, or only via XML configuration?

Best wishes,
Anton.

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Multi tenancy support for Infinispan

Sebastian Laskawiec
Hey Anton!

Just to clarify - the router is a concept implemented in Infinispan *Server* (in the endpoint, to be 100% precise). Each server will have this component up and running, and it will take the incoming TCP connection and pass it to the proper NettyServer or RestServer instance (after choosing the proper tenant, or negotiating the protocol with ALPN once we have that implementation).

On the client side we will need something a little bit different. A polyglot client will need to look at the available protocol implementations (let's imagine we have a client which supports the Hot Rod and HTTP/2 protocols) during the TLS handshake and pick the best one. For the sake of this example, Hot Rod could have a higher priority because it's faster.

I assume your questions slightly missed the mark (since they assume the router is on the client side), but let me try to answer them...

Thanks
Sebastian

On Mon, Sep 12, 2016 at 11:10 AM, Anton Gabov <[hidden email]> wrote:
Sebastian, correct me if I'm wrong.

As I understand it, the client will have a Router instance, which has info about the servers, the caches on these servers, and the supported protocols (Hot Rod, HTTP/1, HTTP/2).

So, I have some questions:
1) Will the Router keep all connections up, or close a connection after the request? For instance, the client needs to make a request to some server. It creates a connection, makes the request and closes the connection (or do we keep the connection and leave it open)?

I believe it should keep (or pool) them. 

Moreover, when considering Kubernetes, we need to go through an Ingress [1]. Plus there are also PetSets [2]. I've heard some rumors that the routing for them might use SNI, so we might need to use TLS/SNI differently depending on the scenario, and possibly hold more than one connection per server. Unfortunately I cannot confirm this at this stage.

 
2) How can an upgrade from HTTP/2 to Hot Rod be done? I cannot imagine this situation, but I would like to know it :)

We cannot upgrade from HTTP/2 since it doesn't support the upgrade procedure.

However you can upgrade from HTTP/1.1 using the Upgrade header [3], or negotiate HTTP/2 using ALPN [4]. The same approach might be used to upgrade to (or negotiate) any TCP-based protocol (including HTTP for REST, Memcached since it's plain text, or Hot Rod).
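To make [4] a bit more concrete, here is a sketch of the server-side ALPN setup with Netty's SslContextBuilder. The "h2" and "http/1.1" identifiers are the standard ones; adding a custom identifier for another protocol (say, Hot Rod) is purely my speculation, not something that exists today:

import io.netty.handler.ssl.ApplicationProtocolConfig;
import io.netty.handler.ssl.ApplicationProtocolConfig.Protocol;
import io.netty.handler.ssl.ApplicationProtocolConfig.SelectedListenerFailureBehavior;
import io.netty.handler.ssl.ApplicationProtocolConfig.SelectorFailureBehavior;
import io.netty.handler.ssl.ApplicationProtocolNames;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import java.io.File;

public final class AlpnSslContextFactory {

   public static SslContext forServer(File certChain, File privateKey) throws Exception {
      return SslContextBuilder.forServer(certChain, privateKey)
            .applicationProtocolConfig(new ApplicationProtocolConfig(
                  Protocol.ALPN,
                  // fall back gracefully for clients that do not send ALPN at all
                  SelectorFailureBehavior.NO_ADVERTISE,
                  SelectedListenerFailureBehavior.ACCEPT,
                  // standard identifiers; a hypothetical "hotrod" token could be
                  // appended here if we ever negotiated Hot Rod this way
                  ApplicationProtocolNames.HTTP_2,
                  ApplicationProtocolNames.HTTP_1_1))
            .build();
   }
}

After the handshake, Netty's ApplicationProtocolNegotiationHandler can install the handlers of whichever protocol was selected.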

 
3) Can the Router be configured programmatically, or only via XML configuration?

Since this is a server component, only XML configuration will be exposed to the user*.

[*] But if you look carefully, the implementation allows you to bootstrap everything from Java using the proper ConfigurationBuilders. Of course, they should be used only internally.
 

Best wishes,
Anton.

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev