[infinispan-dev] The future of Infinispan Docker image

[infinispan-dev] The future of Infinispan Docker image

Sebastian Laskawiec
Hey!

Together with Ryan we are thinking about the future of Infinispan Docker image [1].

Currently we use a single Dockerfile and a bootstrap script which is responsible for setting up memory limits and creating/generating credentials if necessary. Our build pipeline uses Docker Hub integration hooks, so whenever we push a new commit (or a tag) our images are rebuilt. This is a very simple to understand and very powerful setup.
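As an illustration, such a bootstrap script typically boils down to something like the following sketch (the function names, the 50% heap ratio, and the APP_PASS variable are made up for illustration, not taken from the actual image):

```shell
#!/bin/sh
# Hypothetical sketch of a bootstrap script as described above: derive the
# JVM max heap from the container memory limit and generate credentials
# if none were supplied. Names and the heap ratio are illustrative.

# Return half of the given memory limit (in bytes) as a heap size in MB.
compute_max_heap() {
    limit_bytes=$1
    echo $(( limit_bytes / 1024 / 1024 / 2 ))
}

# Generate a random password unless one was supplied via the environment.
ensure_password() {
    if [ -z "$APP_PASS" ]; then
        APP_PASS=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
    fi
    echo "$APP_PASS"
}

MAX_HEAP_MB=$(compute_max_heap 1073741824)   # 1 GiB limit -> 512 MB heap
echo "JAVA_OPTS=-Xmx${MAX_HEAP_MB}m"
```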

However, we are thinking about bringing the product and project images closer together and possibly reusing some bits (a common example might be Jolokia - those bits could easily be reused without touching the core server distribution). This, however, requires converting our image to a framework called Concreate [2]. Concreate divides setup scripts into modules which are later assembled into a single Dockerfile and built. Modules can also be pulled from other public git repositories, which I consider the most powerful option. It is also worth mentioning that Concreate is based on a YAML file - here's an example from the JDG image [3].
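For illustration, a Concreate descriptor is a single YAML file roughly of this shape (the field names are approximate, and the module names and repository URL below are made up, not taken from the real images):

```yaml
schema_version: 1
name: "jboss/infinispan-server"
version: "9.1.0"
from: "centos:7"
modules:
  repositories:
    # Modules can live in the image repository itself...
    - path: modules
    # ...or be pulled from an external public git repository.
    - git:
        url: https://github.com/example/shared-modules.git   # hypothetical
        ref: master
  install:
    - name: org.example.jolokia     # hypothetical module names
    - name: org.example.bootstrap
```

Concreate assembles the install scripts of the listed modules into a single generated Dockerfile, which is then built as usual.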

As you can see, this can be quite a change, so I would like to reach out for some opinions. The biggest issue I can see is that we will lose our Docker Hub build pipeline and will need to build and push images on our CI (which already does this locally for Online Services).

WDYT?

Thanks,
Sebastian

[1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server
[2] http://concreate.readthedocs.io/en/latest/
[3] https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] The future of Infinispan Docker image

Gustavo Fernandes
IMHO we should ship things like scripts, external modules, drivers, etc. with the server itself, leaving the least amount of logic in the Docker image.

What you are proposing is the opposite: introducing a templating engine that adds a level of indirection to the Docker image (the Dockerfile is generated), and that grabs jars, modules, scripts, XML files, etc. from potentially external sources and applies several patches to the server at Docker image creation time.

WRT Docker Hub, I think it could still be used with Concreate via hooks - I did a quick experiment with a Docker Hub automated build that dynamically generates a Dockerfile [1] - but I guess the biggest question is whether the added overall complexity is worth it. I'm leaning towards a -1, but would like to hear more opinions :)

[1] https://hub.docker.com/r/gustavonalle/dockerhub-test/
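For reference, Docker Hub automated builds support custom build hooks: an executable hooks/build script placed next to the Dockerfile replaces the default build step. A sketch of how such a hook could drive a generator (the concreate invocation and the output path are assumptions, not verified against the experiment above):

```shell
#!/bin/sh
# hooks/build -- executed by Docker Hub instead of the default `docker build`.
# IMAGE_NAME and DOCKERFILE_PATH are provided by Docker Hub's build environment.
set -e
concreate generate                     # assumed CLI: emit the Dockerfile into target/image
docker build -t "$IMAGE_NAME" target/image
```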


Thanks,
Gustavo

On Tue, Nov 7, 2017 at 3:14 PM, Sebastian Laskawiec <[hidden email]> wrote:

Re: [infinispan-dev] The future of Infinispan Docker image

Sebastian Laskawiec
That's a very good point, Gustavo.

Let me try to iterate on the pros and cons of each approach:
  • Putting all bits into the distribution:
    • Pros:
      • Unified approach for both project and product
      • Supporting all platforms with a single distribution
    • Cons:
      • Long turnaround from community to the product-based bits (like Online Services)
      • Some work has already been done in the Concreate-based approach (like Jolokia) and battle-tested (e.g. with EAP)
  • Putting all additional bits into integration layers (Concreate-based approach):
    • Pros:
      • Short turnaround; in most cases we only need to patch the integration bits
      • Some integration bits have already been implemented for us (Jolokia, DB drivers, etc.)
    • Cons:
      • Some integration bits need to be reimplemented, e.g. KUBE_PING
      • Each integration layer needs to have its own code (e.g. community Docker image, xPaaS images, Online Services)
I must admit that in the past I was a pretty big fan of putting all bits into the community distribution and driving it forward from there. But this actually changed once the Concreate tool appeared. It allows us to externalize modules into separate repositories, which promotes code reuse (e.g. we could easily use the Jolokia integration implemented for EAP and at the same time provide our own custom configuration for it). Of course, most of the bits assume that the underlying OS is RHEL, which is not true for the community (community images use CentOS), so there might be some mismatch there, but it's definitely something to start with. The final argument that made me change my mind was the turnaround loop. Going through all those releases is quite time-consuming and sometimes we just need to update a micro version to fix something. A nice example of this is KUBE_PING, which had a memory leak - with a Concreate-based approach we could fix it in one day, but as long as it is in the distribution, we need to wait a whole release cycle.

Thanks,
Sebastian

On Tue, Nov 7, 2017 at 8:07 PM Gustavo Fernandes <[hidden email]> wrote:

Re: [infinispan-dev] The future of Infinispan Docker image

Gustavo Fernandes
IMHO the cons are much more significant than the pros; here are a few more:

- Increases the barrier for users/contributors, forcing them to learn a new tool if they need to customize the image;
- Prevents usage of new/existing features in the Dockerfile, such as [1], at least until the generator supports it;
- Makes the integration with Docker Hub harder.
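For context, [1] refers to multi-stage builds; a minimal sketch of the kind of Dockerfile feature a generator would need to catch up with (image names and paths are illustrative):

```dockerfile
# Build stage: compile/assemble in a heavyweight image...
FROM maven:3.5-jdk-8 AS build
COPY . /src
RUN mvn -f /src/pom.xml package

# Runtime stage: copy only the artifact into a slim image.
FROM openjdk:8-jre-alpine
COPY --from=build /src/target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]
```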

Furthermore, integrating Jolokia and DB drivers is a trivial task; it hardly justifies migrating the image completely just to be able to reuse some external scripts to patch the server at Docker build time.
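For example, in a plain Dockerfile the Jolokia JVM agent can be pulled in with a couple of lines (the version and paths below are illustrative, not taken from the actual image):

```dockerfile
# Fetch the Jolokia JVM agent from Maven Central and enable it via JAVA_OPTS.
ENV JOLOKIA_VERSION=1.3.7
ADD https://repo1.maven.org/maven2/org/jolokia/jolokia-jvm/${JOLOKIA_VERSION}/jolokia-jvm-${JOLOKIA_VERSION}-agent.jar /opt/jolokia/jolokia.jar
ENV JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/jolokia/jolokia.jar=host=0.0.0.0"
```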

Regarding the release cycle - well, that is another discussion. As far as Infinispan is concerned, it takes roughly 1h to release both the project and the Docker image :)

So my vote is -1.

[1] https://docs.docker.com/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds

Thanks,
Gustavo 

On Thu, Nov 9, 2017 at 11:33 AM, Sebastian Laskawiec <[hidden email]> wrote:

Re: [infinispan-dev] The future of Infinispan Docker image

Galder Zamarreño
I lean towards Gustavo's arguments, so -1 from me.

> On 10 Nov 2017, at 18:31, Gustavo Fernandes <[hidden email]> wrote:

--
Galder Zamarreño
Infinispan, Red Hat



Re: [infinispan-dev] The future of Infinispan Docker image

Tristan Tarrant
I tend to agree with Gustavo.
The Docker image should be as straightforward as possible. All the fancy build tools and layerings just create multiple levels of indirection, and they also make things more brittle.

So -1 from me.

Tristan

On 11/10/17 6:31 PM, Gustavo Fernandes wrote:


--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

Re: [infinispan-dev] The future of Infinispan Docker image

Sebastian Laskawiec
Agreed then. We'll stick with a plain Dockerfile.

Thanks everyone for the good discussion and for putting good arguments on the table.

On Mon, Nov 20, 2017 at 10:28 AM Tristan Tarrant <[hidden email]> wrote:
I tend to agree with Gustavo.
The docker image should be as straightforward as possible. All the fancy
build tools and layerings just create multiple levels of indirection. It
also makes things more brittle.

So -1 from me.

Tristan

On 11/10/17 6:31 PM, Gustavo Fernandes wrote:
> IMHO the cons are much more significant than the pros, here's a few more:
>
> - Increase the barrier to users/contributors, forcing them to learn a
> new tool if they need to customize the image;
> - Prevents usage of new/existent features in the Dockerfile, such as
> [1], at least until the generator supports it;
> - Makes the integration with Dockerhub harder.
>
> Furthermore, integrating Jolokia and DB drivers are trivial tasks, it
> hardly justifies migrating the image completely just to be able to
> re-use some external scripts to patch the server at Docker build time.
>
> As for the release cycle, well, that is another discussion. As
> far as Infinispan is concerned, it takes roughly 1h to release both the
> project and the Docker image :)
>
> So my vote is -1
>
> [1]
> https://docs.docker.com/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds
> <https://docs.docker.com/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds>
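[Editorial note: for context on [1] above, multi-stage builds let one Dockerfile produce artifacts in a throwaway build stage and copy only the results into the final image. A minimal, hypothetical sketch; the image names and paths are illustrative, not taken from the Infinispan image.]

```dockerfile
# Build stage: compile and package in a full JDK/Maven image.
FROM maven:3-jdk-8 AS build
COPY . /src
RUN mvn -f /src/pom.xml -DskipTests package

# Runtime stage: copy only the packaged server into a slim base image.
FROM centos:7
COPY --from=build /src/target/infinispan-server /opt/infinispan-server
CMD ["/opt/infinispan-server/bin/standalone.sh"]
```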
>
> Thanks,
> Gustavo
>
> On Thu, Nov 9, 2017 at 11:33 AM, Sebastian Laskawiec
> <[hidden email] <mailto:[hidden email]>> wrote:
>
>     That's a very good point Gustavo.
>
>     Let me try to iterate on pros and cons of each approach:
>
>       * Putting all bits into distribution:
>           o Pros:
>               + Unified approach for both project and product
>               + Supporting all platforms with a single distribution
>           o Cons:
>               + Long turnaround from community to the product-based bits
>                 (like Online Services)
>               + Some work has already been done in the Concreate-based
>                 approach (like Jolokia) and battle-tested (e.g. with EAP).
>       * Putting all additional bits into integration layers
>         (Concreate-based approach):
>           o Pros:
>               + Short turnaround; in most cases we only need to patch
>                 the integration bits
>               + Some integration bits have already been implemented for
>                 us (Jolokia, DB drivers, etc.)
>           o Cons:
>               + Some integration bits need to be reimplemented, e.g.
>                 KUBE_PING
>               + Each integration layer needs to have its own code (e.g.
>                 community Docker image, xPaaS images, Online Services)
>
>     I must admit that in the past I was a pretty big fan of putting all
>     bits into the community distribution and driving it forward from
>     there. But that changed once the Concreate tool appeared. It makes
>     it possible to externalize modules into separate repositories, which
>     promotes code reuse (e.g. we could easily use the Jolokia
>     integration implemented for EAP and at the same time provide our own
>     custom configuration for it). Of course most of the bits assume that
>     the underlying OS is RHEL, which is not true for the community
>     (community images use CentOS), so there might be some mismatch
>     there, but it's definitely something to start with. The final
>     argument that made me change my mind was the turnaround loop. Going
>     through all those releases is quite time-consuming, and sometimes we
>     just need to bump a micro version to fix something. A nice example
>     of this is KUBE_PING, which had a memory leak: with the
>     Concreate-based approach we could fix it in one day, but as long as
>     it lives in the distribution, we need to wait for a whole release
>     cycle.
>
>     Thanks,
>     Sebastian
>
>     On Tue, Nov 7, 2017 at 8:07 PM Gustavo Fernandes
>     <[hidden email] <mailto:[hidden email]>> wrote:
>
>         IMHO we should ship things like scripts, external modules,
>         drivers, etc. with the server itself, leaving the least amount
>         of logic in the Docker image.
>
>         What you are proposing is the opposite: introducing a templating
>         engine that adds a level of indirection to the Docker image (the
>         Dockerfile is generated), and that grabs jars, modules, scripts,
>         XML files, etc. from potentially external sources and applies
>         several patches to the server at Docker image creation time.
>
>         WRT Docker Hub, I think it could be used with Concreate via
>         hooks. I did a quick experiment with a Docker Hub automated
>         build that dynamically generates a Dockerfile in [1], but I
>         guess the biggest question is whether the added overall
>         complexity is worth it. I'm leaning towards a -1, but would
>         like to hear more opinions :)
>
>         [1] https://hub.docker.com/r/gustavonalle/dockerhub-test/
>         <https://hub.docker.com/r/gustavonalle/dockerhub-test/>
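[Editorial note: as background on the hooks idea mentioned above, Docker Hub automated builds can run a custom `hooks/build` script instead of the default `docker build`, which is the point where a Dockerfile could be generated on the fly. The following is a hypothetical sketch, not the experiment from [1]; the hard-coded `printf` is a stand-in for running a generator such as Concreate, and `IMAGE_NAME` is the variable Docker Hub exposes to hook scripts.]

```shell
#!/bin/sh
# hooks/build -- executed by Docker Hub in place of the default build.
set -e

# Generate the Dockerfile dynamically; a real setup would invoke the
# Concreate generator here instead of this hard-coded stand-in.
printf 'FROM centos:7\nCMD ["bin/server.sh"]\n' > Dockerfile

# Docker Hub sets IMAGE_NAME for hook scripts; only build when we are
# actually in such an environment and docker is available.
if [ -n "${IMAGE_NAME:-}" ] && command -v docker >/dev/null 2>&1; then
  docker build -t "$IMAGE_NAME" .
fi
```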
>
>         Thanks,
>         Gustavo
>
>         On Tue, Nov 7, 2017 at 3:14 PM, Sebastian Laskawiec
>         <[hidden email] <mailto:[hidden email]>> wrote:
>
>             Hey!
>
>             Together with Ryan, we have been thinking about the future
>             of the Infinispan Docker image [1].
>
>             Currently we use a single Dockerfile and a bootstrap script
>             which is responsible for setting up memory limits and for
>             creating/generating (if necessary) credentials. Our build
>             pipeline uses Docker Hub integration hooks, so whenever we
>             push a new commit (or a tag), our images are rebuilt. This
>             is a very simple to understand, yet very powerful, setup.
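[Editorial note: a rough sketch of what such a bootstrap script can look like. This is illustrative only; the variable names, the cgroup path, and the "half the memory limit" heuristic are assumptions, not the actual Infinispan script.]

```shell
#!/bin/sh
# Illustrative bootstrap sketch: size the JVM heap from the container
# memory limit and generate credentials when none are supplied.

# Use roughly half of the memory limit (in bytes) as the heap size in MB.
compute_heap_mb() {
  echo $(( $1 / 1024 / 1024 / 2 ))
}

# Read the cgroup v1 memory limit if available; fall back to 512 MB.
LIMIT_FILE=/sys/fs/cgroup/memory/memory.limit_in_bytes
if [ -r "$LIMIT_FILE" ]; then
  MEM_BYTES=$(cat "$LIMIT_FILE")
else
  MEM_BYTES=$((512 * 1024 * 1024))
fi
JAVA_OPTS="-Xmx$(compute_heap_mb "$MEM_BYTES")m"

# Generate a random password unless the user provided APP_PASS
# (a hypothetical variable name used for this sketch).
APP_PASS="${APP_PASS:-$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')}"
```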
>
>             However, we are thinking about bringing the product and
>             project images closer together and possibly reusing some
>             bits (a common example might be Jolokia: those bits could
>             easily be reused without touching the core server
>             distribution). This, however, requires converting our image
>             to a framework called Concreate [2]. Concreate divides
>             setup scripts into modules which are later assembled into a
>             single Dockerfile and built. Modules can also be pulled
>             from other public git repositories, which I consider the
>             most powerful option. It is also worth mentioning that
>             Concreate is driven by a YAML file; here's an example for
>             the JDG image [3].
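[Editorial note: for readers unfamiliar with Concreate, its image descriptor looks roughly like the following. This is a hypothetical, abbreviated sketch modeled on the docs in [2] and the JDG example in [3]; the name, version, module names, and repository URL are illustrative, not the real image definition.]

```yaml
schema_version: 1
name: "jboss/infinispan-server"
version: "9.1.0"
from: "centos:7"
modules:
  repositories:
    # Modules can be pulled from external git repositories and reused.
    - git:
        url: https://github.com/jboss-openshift/cct_module.git
        ref: master
  install:
    - name: jolokia
    - name: os.java
```

Concreate assembles the selected modules into a single Dockerfile, which is then built as usual.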
>
>             As you can see, this would be quite a change, so I would
>             like to reach out for some opinions. The biggest issue I
>             can see is that we would lose our Docker Hub build pipeline
>             and would need to build and push images on our CI (which
>             already does this locally for Online Services).
>
>             WDYT?
>
>             Thanks,
>             Sebastian
>
>             [1]
>             https://github.com/jboss-dockerfiles/infinispan/tree/master/server
>             <https://github.com/jboss-dockerfiles/infinispan/tree/master/server>
>             [2] http://concreate.readthedocs.io/en/latest/
>             <http://concreate.readthedocs.io/en/latest/>
>             [3]
>             https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml
>             <https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml>
>

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev