[infinispan-dev] HTTP/2 Upgrade [0] ideas and thoughts


Sebastian Laskawiec

I started sketching some ideas about how an HTTP/2 client and the upgrade procedure could work in a cloud environment. Before digging in, let me explain some basic concepts and facts:
  • Kubernetes as well as OpenShift operate on a very simple architecture. We have a group of Pods (Docker containers), a Service (which acts as a load balancer) and a Route (which acts as a proxy serving requests to the outside world).
  • Communication between components deployed on the same Kubernetes/OpenShift cluster looks like this: MyApp -> Service (target app) -> one of the Pods
  • Communication from the outside world looks like this: MyApp -> the Internet -> Route (target app) -> Service (target app) -> one of the Pods [1]
  • Currently, Kubernetes/OpenShift Services use a round-robin strategy for load balancing; they can also use Client IP affinity or HTTP Cookies for session stickiness.
  • An OpenShift Route (or Kubernetes Ingress) can support TLS. It can downgrade HTTPS to HTTP (in other words, terminate it), pass an encrypted request through without inspecting the content, or re-encrypt it with a different certificate [2].
  • HTTP/2 over TLS does not use an upgrade header. It relies on ALPN to negotiate which protocol should be used [3].
  • HTTP/2 can support custom protocols (which allows writing custom Frames, Settings and Error Codes) [4].
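As a side note, the ALPN negotiation mentioned above can be driven from plain JDK APIs (Java 9+). The sketch below advertises a hypothetical "hotrod" ALPN token alongside "h2"; the host name, port and the "hotrod" token are assumptions for illustration, not existing Infinispan code (the Hot Rod server would have to be taught to select that token during the TLS handshake):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

public class AlpnSketch {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        // Hypothetical endpoint; in OpenShift this would be the Route host.
        SSLEngine engine = ctx.createSSLEngine("datagrid.example.com", 443);
        engine.setUseClientMode(true);

        SSLParameters params = engine.getSSLParameters();
        // Advertise protocols in preference order. "hotrod" is a made-up
        // ALPN token that the Hot Rod server would have to register.
        params.setApplicationProtocols(new String[] { "hotrod", "h2", "http/1.1" });
        engine.setSSLParameters(params);

        // After a real handshake, engine.getApplicationProtocol() would
        // return whichever token the server selected.
        System.out.println(String.join(",", engine.getSSLParameters().getApplicationProtocols()));
    }
}
```

The important bit is that the selection happens inside the TLS handshake itself, which is exactly why a terminating Route can't see or influence it.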
The initial idea of using HTTP 1.1/Upgrade (as I mentioned above, HTTP/2 doesn't have this concept; it uses ALPN) is to support Hot Rod clients from outside the Kubernetes/OpenShift cluster. The client connects to a random Pod (through a Route) using the HTTP protocol and upgrades the connection to the Hot Rod protocol.

After thinking about it for a while, several things don't fit. A Hot Rod client uses topology information to minimize the number of hops. When we access the data through a Route (or Ingress) and a Service, we don't control which Pod we connect to. Moreover, if we switch from HTTP to the Hot Rod protocol (which is based on raw TCP), we lose the HTTP Headers that could otherwise be used for routing inside Kubernetes/OpenShift. Switching protocols is also problematic since HTTP/2 does not support an upgrade header (as I mentioned above, it uses ALPN). ALPN support needs to be implemented in the Hot Rod Server (this is the only component that has enough data to say which protocols are supported; OpenShift Routes and Kubernetes Ingresses don't have this knowledge). This means that Routes and Services see only encrypted traffic and won't be able to help us with routing.

So what can we do about it? There are a couple of ideas for solving those problems.

The first one is to enhance the Hot Rod client to initialize a connection pool. The client could periodically open a new connection and send a PING operation. If a connection to that Pod is already in the pool, close the new one; otherwise add it to the pool. We can call it brute-force-cluster-discovery :) It should work with any round-robin-like load balancer.
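A minimal sketch of that discovery loop, with the Service's round-robin behaviour simulated by a Supplier (the connect function, the pod identifiers and the idle-attempt cutoff are all assumptions for illustration, not real client code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Supplier;

// Sketch of "brute-force cluster discovery": keep opening connections through
// the load-balancing Service until no new pod identity shows up for a while.
public class BruteForceDiscovery {
    // connect stands in for "open a connection + send PING"; it returns the
    // identity of whichever Pod the load balancer happened to pick.
    public static Set<String> discover(Supplier<String> connect, int maxIdleAttempts) {
        Set<String> pool = new HashSet<>();
        int idle = 0;
        while (idle < maxIdleAttempts) {
            String pod = connect.get();
            if (pool.add(pod)) {
                idle = 0;   // new cluster member found, keep probing
            } else {
                idle++;     // duplicate: a real client would close this connection
            }
        }
        return pool;
    }

    public static void main(String[] args) {
        // Simulated round-robin balancer over three pods
        List<String> pods = Arrays.asList("pod-a", "pod-b", "pod-c");
        int[] i = {0};
        Set<String> found = discover(() -> pods.get(i[0]++ % pods.size()), 5);
        System.out.println(found);
    }
}
```

The cutoff parameter is the weak point: with an affinity-based (rather than round-robin) balancer, the loop could give up before seeing every Pod.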

The second idea is to implement a fully fledged HTTP/2 client for Hot Rod (@Anton - I think you're working on that, aren't you?). We could use HTTP Headers to control which Pod we connect to (this would require adding some code to Kubernetes/OpenShift, but it shouldn't be very hard). After the TLS handshake we could use topology information and HTTP Headers to initialize connections to all cluster members. In this scenario ALPN won't be needed (since we would implement a separate server and client), which is also firewall friendly (it operates on the standard HTTP/HTTPS ports).
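To make the header-based routing concrete, here is a sketch using the JDK HttpClient API. The "X-Target-Pod" header name, the pod name and the URL are hypothetical; an enhanced Router/Ingress would have to read such a header and pin the request to the named Pod:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class HeaderRoutedRequest {
    // Builds an HTTP/2 request carrying a hypothetical routing header that
    // a (modified) Route/Ingress could use to target a specific Pod.
    public static HttpRequest forPod(String podName) {
        return HttpRequest.newBuilder(
                URI.create("https://datagrid.example.com/cache/default/key1"))
            .version(HttpClient.Version.HTTP_2)
            .header("X-Target-Pod", podName)   // assumption, not a real convention
            .GET()
            .build();
    }

    public static void main(String[] args) {
        HttpRequest req = forPod("infinispan-server-2");
        System.out.println(req.headers().firstValue("X-Target-Pod").orElse("none"));
    }
}
```

Since the header travels inside the request (not inside the TLS handshake), a terminating or re-encrypting Route could actually inspect it, which is what makes this option routing-friendly.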

Given the rise of stateful apps in the cloud, we could also propose a PetSets enhancement and donate extra code for supporting them from the Ingress/Route perspective. The result should be similar to the previous option (donating some code to Services, Ingresses and Routes).

The above are only some ideas about how all the pieces could work together. I'll also consult the OpenShift Team - maybe they'll give us some more hints.

Think about it for a while and let me know if you have any ideas...


infinispan-dev mailing list
[hidden email]