[infinispan-dev] Adjusting memory settings in template


[infinispan-dev] Adjusting memory settings in template

Galder Zamarreno
Hi Sebastian,

How do you change memory settings for Infinispan started via service catalog?

The memory settings seem to be defined in [1], but they are not one of the supported parameters.

I guess we want this as a parameter?

Cheers,

[1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
--
Galder Zamarreño
Infinispan, Red Hat


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Sebastian Laskawiec
It's very tricky...

Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you use off-heap, in which case you can squeeze much, much more out of Infinispan).

Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in the burstable memory category, so if there is additional memory on the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU).

Thanks,
Sebastian

[1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
[2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
[3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
[4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
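
For reference, the automatic adjustment mentioned in [1] essentially derives -Xmx from the cgroup memory limit inside the container. A minimal sketch of the idea (the variable names and rounding are assumptions; the real logic lives in the entrypoint script linked in [2]):

    # Sketch only: derive the heap from the cgroup (v1) memory limit,
    # using the ~50% rule of thumb discussed above.
    CGROUP_LIMIT_FILE=/sys/fs/cgroup/memory/memory.limit_in_bytes

    if [ -f "$CGROUP_LIMIT_FILE" ]; then
        LIMIT_BYTES=$(cat "$CGROUP_LIMIT_FILE")
        HEAP_MB=$(( LIMIT_BYTES / 1024 / 1024 / 2 ))
        JAVA_OPTS="$JAVA_OPTS -Xms${HEAP_MB}m -Xmx${HEAP_MB}m"
    fi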

On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
Hi Sebastian,

How do you change memory settings for Infinispan started via service catalog?

The memory settings seem defined in [1], but this is not one of the parameters supported.

I guess we want this as parameter?

Cheers,

[1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
--
Galder Zamarreño
Infinispan, Red Hat


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Sanne Grinovero-3
On 22 September 2017 at 13:49, Sebastian Laskawiec <[hidden email]> wrote:

> It's very tricky...
>
> Memory is adjusted automatically to the container size [1] (of course you
> may override it by supplying Xmx or "-n" as parameters [2]). The safe limit
> is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap,
> that you can squeeze Infinispan much, much more).
>
> Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in
> bustable memory category so if there is additional memory in the node, we'll
> get it. But if not, we won't go below 512 MB (and 500 mCPU).

I hope that's a temporary choice of the work in progress?

That doesn't sound acceptable for addressing real-world requirements...
Infinispan expects users to estimate how much memory they will need -
which is hard enough - and then we should at least be able to start a
cluster to address the specified need. Being able to rely on only 512MB
per node would require lots of nodes even for small data sets,
leading to extreme resource waste, as each node would consume some
non-negligible portion of memory just to run the thing.

Thanks,
Sanne

>
> Thanks,
> Sebastian
>
> [1]
> https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> [2]
> https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> [4]
> https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>
> On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
>>
>> Hi Sebastian,
>>
>> How do you change memory settings for Infinispan started via service
>> catalog?
>>
>> The memory settings seem defined in [1], but this is not one of the
>> parameters supported.
>>
>> I guess we want this as parameter?
>>
>> Cheers,
>>
>> [1]
>> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Sebastian Laskawiec


On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero <[hidden email]> wrote:
On 22 September 2017 at 13:49, Sebastian Laskawiec <[hidden email]> wrote:
> It's very tricky...
>
> Memory is adjusted automatically to the container size [1] (of course you
> may override it by supplying Xmx or "-n" as parameters [2]). The safe limit
> is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap,
> that you can squeeze Infinispan much, much more).
>
> Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in
> bustable memory category so if there is additional memory in the node, we'll
> get it. But if not, we won't go below 512 MB (and 500 mCPU).

I hope that's a temporary choice of the work in process?

Doesn't sound acceptable to address real world requirements..
Infinispan expects users to estimate how much memory they will need -
which is hard enough - and then we should at least be able to start a
cluster to address the specified need. Being able to rely on 512MB
only per node would require lots of nodes even for small data sets,
leading to extreme resource waste as each node would consume some non
negligible portion of memory just to run the thing.

Hmmm, yeah - it's finished.

I'm not exactly sure where the problem is. Is it the 512 MB RAM/500 mCPUs? Or setting Xmx to 50% of the container memory?

If the former and you set nothing, you will get the worst QoS and Kubernetes will shut your container down first whenever it runs out of resources (I really recommend reading [4] and watching [3]). If the latter, yeah, I guess we can tune it a little with off-heap but, as my latest tests showed, if you enable the RocksDB Cache Store, allocating even 50% is too much (the container got killed by the OOM Killer). That's probably the reason why setting the MaxRAM JVM parameter sets Xmx to 25% (!!!) of the MaxRAM value. So even setting it to 50% means that we take a risk...

So TBH, I see no silver bullet here and I'm open to suggestions. IMO, if you really know what you're doing, you should set Xmx yourself (this turns off the automatic Xmx calculation done by the bootstrap script) and possibly set limits (and adjust requests) in your Deployment Configuration (if you set both requests and limits you will have the best QoS).
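
As a concrete illustration of that advice (the deployment config name, the sizes and the JAVA_OPTS variable are assumptions for the sketch, not documented defaults - how the image actually picks up an explicit Xmx is described in [2] above):

    # Pin the heap explicitly so the bootstrap script does not calculate it:
    oc set env dc/infinispan-app JAVA_OPTS="-Xms2g -Xmx2g"

    # Set requests = limits to land in the Guaranteed QoS class:
    oc set resources dc/infinispan-app \
      --requests=memory=4Gi,cpu=1 \
      --limits=memory=4Gi,cpu=1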


Thanks,
Sanne

>
> Thanks,
> Sebastian
>
> [1]
> https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> [2]
> https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> [4]
> https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>
> On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
>>
>> Hi Sebastian,
>>
>> How do you change memory settings for Infinispan started via service
>> catalog?
>>
>> The memory settings seem defined in [1], but this is not one of the
>> parameters supported.
>>
>> I guess we want this as parameter?
>>
>> Cheers,
>>
>> [1]
>> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Galder Zamarreno
In reply to this post by Sebastian Laskawiec
I don't understand your reply here... are you talking about Infinispan instances deployed on OpenShift Online? Or on-premise?

I can understand having some limits for OpenShift Online, but these templates should also be applicable on-premise, in which case I should be able to easily define how much memory I want for the data grid, and the rest of the parameters would be worked out by OpenShift/Kubernetes?

Demanding that on-premise users go and change their template just to adjust the memory settings seems to me to go against all the usability improvements we're trying to achieve.

Cheers,

> On 22 Sep 2017, at 14:49, Sebastian Laskawiec <[hidden email]> wrote:
>
> It's very tricky...
>
> Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more).
>
> Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU).
>
> Thanks,
> Sebastian
>
> [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>
> On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
> Hi Sebastian,
>
> How do you change memory settings for Infinispan started via service catalog?
>
> The memory settings seem defined in [1], but this is not one of the parameters supported.
>
> I guess we want this as parameter?
>
> Cheers,
>
> [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> --
> Galder Zamarreño
> Infinispan, Red Hat
>

--
Galder Zamarreño
Infinispan, Red Hat


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Galder Zamarreno
In reply to this post by Sebastian Laskawiec


> On 22 Sep 2017, at 17:58, Sebastian Laskawiec <[hidden email]> wrote:
>
>
>
> On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero <[hidden email]> wrote:
> On 22 September 2017 at 13:49, Sebastian Laskawiec <[hidden email]> wrote:
> > It's very tricky...
> >
> > Memory is adjusted automatically to the container size [1] (of course you
> > may override it by supplying Xmx or "-n" as parameters [2]). The safe limit
> > is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap,
> > that you can squeeze Infinispan much, much more).
> >
> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in
> > bustable memory category so if there is additional memory in the node, we'll
> > get it. But if not, we won't go below 512 MB (and 500 mCPU).
>
> I hope that's a temporary choice of the work in process?
>
> Doesn't sound acceptable to address real world requirements..
> Infinispan expects users to estimate how much memory they will need -
> which is hard enough - and then we should at least be able to start a
> cluster to address the specified need. Being able to rely on 512MB
> only per node would require lots of nodes even for small data sets,
> leading to extreme resource waste as each node would consume some non
> negligible portion of memory just to run the thing.
>
> hmmm yeah - its finished.
>
> I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? Or setting 50% of container memory?
>
> If the former and you set nothing, you will get the worse QoS and Kubernetes will shut your container in first order whenever it gets out of resources (I really recommend reading [4] and watching [3]). If the latter, yeah I guess we can tune it a little with off-heap but, as my the latest tests showed, if you enable RocksDB Cache Store, allocating even 50% is too much (the container got killed by OOM Killer). That's probably the reason why setting MaxRAM JVM parameters sets Xmx to 25% (!!!) of MaxRAM value. So even setting it to 50% means that we take the risk...
>
> So TBH, I see no silver bullet here and I'm open for suggestions. IMO if you're really know what you're doing, you should set Xmx yourself (this will turn off setting Xmx automatically by the bootstrap script) and possibly set limits (and adjust requests) in your Deployment Configuration (if you set both requests and limits you will have the best QoS).

Let me try to put it this way:

I've just started an Infinispan ephemeral instance, I'm trying to load some data, and it's running out of memory. What knobs/settings does the template offer to make sure I have big enough Infinispan instance(s) to handle my data?

(Don't reply with: make your data smaller)

Cheers,

>
>
> Thanks,
> Sanne
>
> >
> > Thanks,
> > Sebastian
> >
> > [1]
> > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> > [2]
> > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> > [4]
> > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
> >
> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
> >>
> >> Hi Sebastian,
> >>
> >> How do you change memory settings for Infinispan started via service
> >> catalog?
> >>
> >> The memory settings seem defined in [1], but this is not one of the
> >> parameters supported.
> >>
> >> I guess we want this as parameter?
> >>
> >> Cheers,
> >>
> >> [1]
> >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> >> --
> >> Galder Zamarreño
> >> Infinispan, Red Hat
> >>
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > [hidden email]
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Sebastian Laskawiec
In reply to this post by Galder Zamarreno


On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarreño <[hidden email]> wrote:
I don't understand your reply here... are you talking about Infinispan instances deployed on OpenShift Online? Or on premise?

TBH - I think there is no difference, so I'm thinking about both.
 
I can understand having some limits for OpenShift Online, but these templates should also be applicable on premise, in which case I should be able to easily define how much memory I want for the data grid, and the rest of the parameters would be worked out by OpenShift/Kubernetes?

I have written a couple of emails about this on the internal mailing list. Let me just point out some bits here:
  • We need to set either Xmx or MaxRAM to tell the JVM how much memory it can allocate. As you probably know, JDK8 is not CGroups-aware by default (there are some experimental options, but they set the MaxRAM parameter equal to the CGroups limit; this translates to Xmx = MaxRAM (CGroups limit) / 4 - see the sketch after this list). I guess allocating Xmx = (CGroups limit)/4 is too conservative for us, so we need to set it explicitly.
  • In our Docker image we set Xmx = 50% of the CGroups limit. This is better than the setting above, but there is some risk in certain scenarios.
  • As I mentioned in my previous email, in the templates we are setting Requests (not Limits!!!). So you will probably get more memory than specified in the template, but it depends on the node you're running on. The key point is that you won't get less than those 512 MB.
  • You can always edit your DeploymentConfig (after creating your application from the template) and adjust Limits (or even Requests).
  • For simple scenarios and bigger containers (like 4 GB) we can go above 50% (see the internal mailing list for details).
And as I said before - if you guys think we should do it differently, I'm open to suggestions. I think it's quite a standard way of configuring this sort of thing.
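
To illustrate the first bullet, the JDK 8 experimental switches and the resulting default heap can be inspected like this (a sketch; the exact output format varies between JDK builds):

    # With the experimental cgroup support (JDK 8u131+) the JVM derives MaxRAM
    # from the cgroup memory limit; the default MaxRAMFraction=4 then gives Xmx = MaxRAM/4.
    java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap \
         -XX:+PrintFlagsFinal -version | grep -E 'MaxHeapSize|MaxRAM'

    # Setting MaxRAM explicitly has the same effect: MaxRAM=2g yields a ~512 MB default heap.
    java -XX:MaxRAM=2g -XX:+PrintFlagsFinal -version | grep MaxHeapSize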

To demand on premise users to go and change their template just to adjust the memory settings seems to me goes against all the usability improvements we're trying to achieve.

At some point you need to define how much memory you will need. Whether it's in the template, your DeploymentConfiguration (created from the template using oc process), a Quota - it doesn't matter. You must write it somewhere, don't you? With the current approach, the best way to do it is in the Deployment Configuration Requests. This sets the CGroups limit, and based on that, the Infinispan bootstrap scripts will calculate Xmx. 
 

Cheers,

> On 22 Sep 2017, at 14:49, Sebastian Laskawiec <[hidden email]> wrote:
>
> It's very tricky...
>
> Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more).
>
> Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU).
>
> Thanks,
> Sebastian
>
> [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>
> On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
> Hi Sebastian,
>
> How do you change memory settings for Infinispan started via service catalog?
>
> The memory settings seem defined in [1], but this is not one of the parameters supported.
>
> I guess we want this as parameter?
>
> Cheers,
>
> [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> --
> Galder Zamarreño
> Infinispan, Red Hat
>

--
Galder Zamarreño
Infinispan, Red Hat


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Sebastian Laskawiec
In reply to this post by Galder Zamarreno


On Mon, Sep 25, 2017 at 11:58 AM Galder Zamarreño <[hidden email]> wrote:


> On 22 Sep 2017, at 17:58, Sebastian Laskawiec <[hidden email]> wrote:
>
>
>
> On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero <[hidden email]> wrote:
> On 22 September 2017 at 13:49, Sebastian Laskawiec <[hidden email]> wrote:
> > It's very tricky...
> >
> > Memory is adjusted automatically to the container size [1] (of course you
> > may override it by supplying Xmx or "-n" as parameters [2]). The safe limit
> > is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap,
> > that you can squeeze Infinispan much, much more).
> >
> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in
> > bustable memory category so if there is additional memory in the node, we'll
> > get it. But if not, we won't go below 512 MB (and 500 mCPU).
>
> I hope that's a temporary choice of the work in process?
>
> Doesn't sound acceptable to address real world requirements..
> Infinispan expects users to estimate how much memory they will need -
> which is hard enough - and then we should at least be able to start a
> cluster to address the specified need. Being able to rely on 512MB
> only per node would require lots of nodes even for small data sets,
> leading to extreme resource waste as each node would consume some non
> negligible portion of memory just to run the thing.
>
> hmmm yeah - its finished.
>
> I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? Or setting 50% of container memory?
>
> If the former and you set nothing, you will get the worse QoS and Kubernetes will shut your container in first order whenever it gets out of resources (I really recommend reading [4] and watching [3]). If the latter, yeah I guess we can tune it a little with off-heap but, as my the latest tests showed, if you enable RocksDB Cache Store, allocating even 50% is too much (the container got killed by OOM Killer). That's probably the reason why setting MaxRAM JVM parameters sets Xmx to 25% (!!!) of MaxRAM value. So even setting it to 50% means that we take the risk...
>
> So TBH, I see no silver bullet here and I'm open for suggestions. IMO if you're really know what you're doing, you should set Xmx yourself (this will turn off setting Xmx automatically by the bootstrap script) and possibly set limits (and adjust requests) in your Deployment Configuration (if you set both requests and limits you will have the best QoS).

Try put it this way:

I've just started an Infinispan ephermeral instance and trying to load some data and it's running out of memory. What knobs/settings does the template offer to make sure I have a big enough Infinispan instance(s) to handle my data?

Unfortunately, calculating the number of instances based on input (e.g. "I want to have 10 GB of space for my data, please calculate how many 1 GB instances I need to create and adjust my app") is something that cannot be done with templates. Templates are pretty simple and they do not support any calculations. You will probably need an Ansible Service Broker or Service Broker SDK to do it.

So assuming you did the math on paper and you need 10 replicas, 1 GB each - just type oc edit dc/<your_app>, modify the number of replicas and increase the memory request. That should do the trick. Alternatively you can edit the ConfigMap and turn eviction on (but it really depends on your use case).

BTW, the number of replicas is a parameter in the template [1]. I can also expose the memory request if you want me to (in that case just shoot me a ticket: https://github.com/infinispan/infinispan-openshift-templates/issues). And let me say it one more time - I'm open to suggestions (and pull requests) if you think this is not the way it should be done.

[1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L382
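
Concretely, that oc workflow could look something like the following (the application name is a placeholder and the parameter names are illustrative - check the template [1] for the real ones):

    # Create the app from the template, overriding the replica-count parameter:
    oc process infinispan-ephemeral -p APPLICATION_NAME=my-grid -p NUMBER_OF_INSTANCES=10 \
      | oc create -f -

    # Or adjust an existing deployment: scale out and raise the memory request.
    oc scale dc/my-grid --replicas=10
    oc set resources dc/my-grid --requests=memory=1Gi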

 

(Don't reply with: make your data smaller)

Cheers,

>
>
> Thanks,
> Sanne
>
> >
> > Thanks,
> > Sebastian
> >
> > [1]
> > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> > [2]
> > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> > [4]
> > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
> >
> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
> >>
> >> Hi Sebastian,
> >>
> >> How do you change memory settings for Infinispan started via service
> >> catalog?
> >>
> >> The memory settings seem defined in [1], but this is not one of the
> >> parameters supported.
> >>
> >> I guess we want this as parameter?
> >>
> >> Cheers,
> >>
> >> [1]
> >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> >> --
> >> Galder Zamarreño
> >> Infinispan, Red Hat
> >>
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > [hidden email]
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Galder Zamarreno


> On 25 Sep 2017, at 12:37, Sebastian Laskawiec <[hidden email]> wrote:
>
>
>
> On Mon, Sep 25, 2017 at 11:58 AM Galder Zamarreño <[hidden email]> wrote:
>
>
> > On 22 Sep 2017, at 17:58, Sebastian Laskawiec <[hidden email]> wrote:
> >
> >
> >
> > On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero <[hidden email]> wrote:
> > On 22 September 2017 at 13:49, Sebastian Laskawiec <[hidden email]> wrote:
> > > It's very tricky...
> > >
> > > Memory is adjusted automatically to the container size [1] (of course you
> > > may override it by supplying Xmx or "-n" as parameters [2]). The safe limit
> > > is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap,
> > > that you can squeeze Infinispan much, much more).
> > >
> > > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in
> > > bustable memory category so if there is additional memory in the node, we'll
> > > get it. But if not, we won't go below 512 MB (and 500 mCPU).
> >
> > I hope that's a temporary choice of the work in process?
> >
> > Doesn't sound acceptable to address real world requirements..
> > Infinispan expects users to estimate how much memory they will need -
> > which is hard enough - and then we should at least be able to start a
> > cluster to address the specified need. Being able to rely on 512MB
> > only per node would require lots of nodes even for small data sets,
> > leading to extreme resource waste as each node would consume some non
> > negligible portion of memory just to run the thing.
> >
> > hmmm yeah - its finished.
> >
> > I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? Or setting 50% of container memory?
> >
> > If the former and you set nothing, you will get the worse QoS and Kubernetes will shut your container in first order whenever it gets out of resources (I really recommend reading [4] and watching [3]). If the latter, yeah I guess we can tune it a little with off-heap but, as my the latest tests showed, if you enable RocksDB Cache Store, allocating even 50% is too much (the container got killed by OOM Killer). That's probably the reason why setting MaxRAM JVM parameters sets Xmx to 25% (!!!) of MaxRAM value. So even setting it to 50% means that we take the risk...
> >
> > So TBH, I see no silver bullet here and I'm open for suggestions. IMO if you're really know what you're doing, you should set Xmx yourself (this will turn off setting Xmx automatically by the bootstrap script) and possibly set limits (and adjust requests) in your Deployment Configuration (if you set both requests and limits you will have the best QoS).
>
> Try put it this way:
>
> I've just started an Infinispan ephermeral instance and trying to load some data and it's running out of memory. What knobs/settings does the template offer to make sure I have a big enough Infinispan instance(s) to handle my data?
>
> Unfortunately calculating the number of instances based on input (e.g. "I want to have 10 GB of space for my data, please calculate how many 1 GB instances I need to create and adjust my app") is something that can not be done with templates. Templates are pretty simple and they do not support any calculations. You will probably need an Ansible Service Broker or Service Broker SDK to do it.
>
> So assuming you did the math on paper and you need 10 replicas, 1 GB each - just type oc edit dc/<your_app> and modify number of replicas and increase memory request. That should do the trick. Alternatively you can edit the ConfigMap and turn eviction on (but it really depends on your use case).
>
> BTW, the number of replicas is a parameter in template [1]. I can also expose memory request if you want me to (in that case just shoot me a ticket: https://github.com/infinispan/infinispan-openshift-templates/issues). And let me say it one more time - I'm open for suggestions (and pull requests) if you think this is not the way it should be done.

I don't know how the overarching OpenShift caching or shared memory services will be exposed, but as an OpenShift user that wants to store data in Infinispan, I should be able to say how much (total) data I will put in it, and optionally how many backups I want for the data, and OpenShift should maybe offer some options on how to do this:

User: I want 2gb of data
OpenShift: Assuming a default of 1 backup (2 copies of the data), I can offer you (assuming at least 25% overhead):

a) 2 nodes of 2gb
b) 4 nodes of 1gb
c) 8 nodes of 512mb

And the user decides...
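
A back-of-the-envelope sketch of the arithmetic behind those options (all numbers illustrative; the 25% overhead would then come on top of each node's usable heap):

    DATA_MB=2048      # the user wants to store 2 GB of data
    COPIES=2          # default of 1 backup => 2 copies of every entry
    TOTAL_MB=$(( DATA_MB * COPIES ))                      # 4096 MB of usable heap needed

    for NODE_MB in 2048 1024 512; do
        NODES=$(( (TOTAL_MB + NODE_MB - 1) / NODE_MB ))   # round up
        echo "${NODES} nodes with ${NODE_MB} MB of usable heap each"
    done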

Assuming those higher-level OpenShift services consume the Infinispan OpenShift templates, and you try to implement a situation like the above, where the user specifies the total amount of data and you decide what options to offer them..., then the template would need to expose the number of instances (done already) and the memory for each of those instances (not there yet).

Still, I'll try to see if I can get my use case working with only 512mb per node, and use the number of instances as a way to add more memory. However, I feel that only exposing the number of instances is not enough...

Btw, this is something that needs to be agreed on and should be part of our Infinispan OpenShift integration specification/plan.

Cheers,

>
> [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L382
>  
>
> (Don't reply with: make your data smaller)
>
> Cheers,
>
> >
> >
> > Thanks,
> > Sanne
> >
> > >
> > > Thanks,
> > > Sebastian
> > >
> > > [1]
> > > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> > > [2]
> > > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> > > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> > > [4]
> > > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
> > >
> > > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
> > >>
> > >> Hi Sebastian,
> > >>
> > >> How do you change memory settings for Infinispan started via service
> > >> catalog?
> > >>
> > >> The memory settings seem defined in [1], but this is not one of the
> > >> parameters supported.
> > >>
> > >> I guess we want this as parameter?
> > >>
> > >> Cheers,
> > >>
> > >> [1]
> > >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> > >> --
> > >> Galder Zamarreño
> > >> Infinispan, Red Hat
> > >>
> > >
> > > _______________________________________________
> > > infinispan-dev mailing list
> > > [hidden email]
> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > [hidden email]
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> > _______________________________________________
> > infinispan-dev mailing list
> > [hidden email]
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> --
> Galder Zamarreño
> Infinispan, Red Hat
>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Infinispan, Red Hat


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Sanne Grinovero-3
In reply to this post by Sebastian Laskawiec
On 22 September 2017 at 16:58, Sebastian Laskawiec <[hidden email]> wrote:

>
>
> On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero <[hidden email]>
> wrote:
>>
>> On 22 September 2017 at 13:49, Sebastian Laskawiec <[hidden email]>
>> wrote:
>> > It's very tricky...
>> >
>> > Memory is adjusted automatically to the container size [1] (of course
>> > you
>> > may override it by supplying Xmx or "-n" as parameters [2]). The safe
>> > limit
>> > is roughly Xmx=Xms=50% of container capacity (unless you do the
>> > off-heap,
>> > that you can squeeze Infinispan much, much more).
>> >
>> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in
>> > bustable memory category so if there is additional memory in the node,
>> > we'll
>> > get it. But if not, we won't go below 512 MB (and 500 mCPU).
>>
>> I hope that's a temporary choice of the work in process?
>>
>> Doesn't sound acceptable to address real world requirements..
>> Infinispan expects users to estimate how much memory they will need -
>> which is hard enough - and then we should at least be able to start a
>> cluster to address the specified need. Being able to rely on 512MB
>> only per node would require lots of nodes even for small data sets,
>> leading to extreme resource waste as each node would consume some non
>> negligible portion of memory just to run the thing.
>
>
> hmmm yeah - its finished.
>
> I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? Or
> setting 50% of container memory?

If the orchestrator "might" give us more than 512MB but this is not
guaranteed, we can't rely on it and we'll have to assume we have only
512MB.
I see no use in getting some heap size which was not explicitly set;
if there's extra available memory, it's not too bad to use it as native
memory (e.g. for buffering RocksDB IO operations), so you might as well
not assign it to the JVM - since we can't rely on it, we won't make
effective use of it.

Secondly, yes, we should make sure it's easy enough to request nodes
with more than 512MB each, as Infinispan gets way more useful with
larger heaps. The ROI on 512MB would make me want to use a different
technology!

>
> If the former and you set nothing, you will get the worse QoS and Kubernetes
> will shut your container in first order whenever it gets out of resources (I
> really recommend reading [4] and watching [3]). If the latter, yeah I guess
> we can tune it a little with off-heap but, as my the latest tests showed, if
> you enable RocksDB Cache Store, allocating even 50% is too much (the
> container got killed by OOM Killer). That's probably the reason why setting
> MaxRAM JVM parameters sets Xmx to 25% (!!!) of MaxRAM value. So even setting
> it to 50% means that we take the risk...
>
> So TBH, I see no silver bullet here and I'm open for suggestions. IMO if
> you're really know what you're doing, you should set Xmx yourself (this will
> turn off setting Xmx automatically by the bootstrap script) and possibly set
> limits (and adjust requests) in your Deployment Configuration (if you set
> both requests and limits you will have the best QoS).

+1 Let's recommend this approach, and discourage the automated sizing,
at least until we can implement some of the things Galder is also
suggesting. I'd just remove that option, as it's going to cause more
trouble than it's worth.

You are the OpenShift expert and I have no idea how this could be done :)
I'm just highlighting that Infinispan can't deal with having a
variable heap size; having this would make right-size tuning
much more complex for users - heck, I wouldn't know how to do it
myself.

+1 to Galder's suggestions; I particularly like the idea of creating
various templates specifically tuned for some fixed heap values; for
example we could create one for each of the common machine types on
popular cloud providers. I'm not suggesting to have a template for each
of them, but we could pick some reasonable configurations so that we
can help match the template to the physical machine. I guess this
doesn't translate directly to OpenShift resource limits, but that's
something you could figure out? After all, an OS container has to run
on some cloud, so it would still help people to have a template
"suited" to each popular, actually existing machine type.
Incidentally, this approach would also produce helpful configuration
templates for people running on clouds directly.

Thanks,
Sanne

>
>>
>> Thanks,
>> Sanne
>>
>> >
>> > Thanks,
>> > Sebastian
>> >
>> > [1]
>> >
>> > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
>> > [2]
>> >
>> > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
>> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
>> > [4]
>> >
>> > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>> >
>> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]>
>> > wrote:
>> >>
>> >> Hi Sebastian,
>> >>
>> >> How do you change memory settings for Infinispan started via service
>> >> catalog?
>> >>
>> >> The memory settings seem defined in [1], but this is not one of the
>> >> parameters supported.
>> >>
>> >> I guess we want this as parameter?
>> >>
>> >> Cheers,
>> >>
>> >> [1]
>> >>
>> >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
>> >> --
>> >> Galder Zamarreño
>> >> Infinispan, Red Hat
>> >>
>> >
>> > _______________________________________________
>> > infinispan-dev mailing list
>> > [hidden email]
>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> [hidden email]
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
> _______________________________________________
> infinispan-dev mailing list
> [hidden email]
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Emmanuel Bernard
In reply to this post by Sebastian Laskawiec
Sebastian,

What Galder, Sanne and others are saying is that in OpenShift on-prem, there is no limit - or at least a higher one - on the container memory you can ask for. And in these deployments, Infinispan should target multiple GBs, not 512 MB.

Of course, *if* you ask for a guaranteed 512MB, then it would be silly to try and consume more.

On 25 Sep 2017, at 12:30, Sebastian Laskawiec <[hidden email]> wrote:



On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarreño <[hidden email]> wrote:
I don't understand your reply here... are you talking about Infinispan instances deployed on OpenShift Online? Or on premise?

TBH - I think there is no difference, so I'm thinking about both.
 
I can understand having some limits for OpenShift Online, but these templates should also be applicable on premise, in which case I should be able to easily define how much memory I want for the data grid, and the rest of the parameters would be worked out by OpenShift/Kubernetes?

I have written a couple of emails about this on internal mailing list. Let me just point of some bits here:
  • We need to set either Xmx or MaxRAM to tell the JVM how much memory it can allocate. As you probably know JDK8 is not CGroups aware by default (there are some experimental options but they set MaxRAM parameter equal to CGroups limit; this translates to Xmx=MaxRAM(CGroups limit) / 4. I guess allocating Xmx=(CGroups limit)/4 is too high for us, so we need to set it explicitly.
  • in our Docker image we set Xmx = 50% of CGroups limit. This is better than settings above but there is some risk in certain scenarios.
  • As I mentioned in my previous email, in the templates we are setting Requests (not Limits!!!). So you will probably get more memory than specified in the template but it depends on the node you're running on. The key point is that you won't get less than those 512 MB.
  • You can always edit your DeploymentConfig (after creating your application from template) and adjust Limits (or even requests).
  • For simple scenarios and bigger containers (like 4 GB) we can go more than 50% (see internal mailing list for details).
And as I said before - if you guys think we should do it differently, I'm open for suggestions. I think it's quite standard way of configuring this sort of stuff.

To demand on premise users to go and change their template just to adjust the memory settings seems to me goes against all the usability improvements we're trying to achieve.

At some point you need to define how much memory you will need. Whether it's in the template, your DeploymentConfiguration (created from template using oc process), Quota - it doesn't matter. You must write it somewhere - don't you? With current approach, the best way to do it is in Deployment Configuration Requests. This sets CGroups limit, and based on that, Infinispan bootstrap scripts will calculate Xmx. 
 

Cheers,

> On 22 Sep 2017, at 14:49, Sebastian Laskawiec <[hidden email]> wrote:
>
> It's very tricky...
>
> Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more).
>
> Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU).
>
> Thanks,
> Sebastian
>
> [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>
> On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
> Hi Sebastian,
>
> How do you change memory settings for Infinispan started via service catalog?
>
> The memory settings seem defined in [1], but this is not one of the parameters supported.
>
> I guess we want this as parameter?
>
> Cheers,
>
> [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> --
> Galder Zamarreño
> Infinispan, Red Hat
>

--
Galder Zamarreño
Infinispan, Red Hat

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Sebastian Laskawiec
So how about exposing two parameters - Xms/Xmx and the total amount of memory for the Pod (Request = Limit in that case)? Would that work for you?
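
From the consumer's side that could end up looking roughly like this (the parameter names below are purely hypothetical - nothing like them exists in the template yet):

    # Hypothetical parameters, for illustration only:
    oc process infinispan-ephemeral \
      -p APPLICATION_NAME=my-grid \
      -p JVM_XMX=2g \
      -p TOTAL_CONTAINER_MEMORY=4Gi \
      | oc create -f -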

On Thu, Sep 28, 2017 at 8:38 AM Emmanuel Bernard <[hidden email]> wrote:
Sebastian,

What Galder, Sanne and others are saying is that in OpenShift on prem, there is no or at least a higher limit in the minimal container memory you can ask. And in these deployment, Infinispan should target the multi GB, not 512 MB.

Of course, *if* you ask for a guaranteed 512MB, then it would be silly to try and consume more.

On 25 Sep 2017, at 12:30, Sebastian Laskawiec <[hidden email]> wrote:



On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarreño <[hidden email]> wrote:
I don't understand your reply here... are you talking about Infinispan instances deployed on OpenShift Online? Or on premise?

TBH - I think there is no difference, so I'm thinking about both.
 
I can understand having some limits for OpenShift Online, but these templates should also be applicable on premise, in which case I should be able to easily define how much memory I want for the data grid, and the rest of the parameters would be worked out by OpenShift/Kubernetes?

I have written a couple of emails about this on internal mailing list. Let me just point of some bits here:
  • We need to set either Xmx or MaxRAM to tell the JVM how much memory it can allocate. As you probably know JDK8 is not CGroups aware by default (there are some experimental options but they set MaxRAM parameter equal to CGroups limit; this translates to Xmx=MaxRAM(CGroups limit) / 4. I guess allocating Xmx=(CGroups limit)/4 is too high for us, so we need to set it explicitly.
  • in our Docker image we set Xmx = 50% of CGroups limit. This is better than settings above but there is some risk in certain scenarios.
  • As I mentioned in my previous email, in the templates we are setting Requests (not Limits!!!). So you will probably get more memory than specified in the template but it depends on the node you're running on. The key point is that you won't get less than those 512 MB.
  • You can always edit your DeploymentConfig (after creating your application from template) and adjust Limits (or even requests).
  • For simple scenarios and bigger containers (like 4 GB) we can go more than 50% (see internal mailing list for details).
And as I said before - if you guys think we should do it differently, I'm open for suggestions. I think it's quite standard way of configuring this sort of stuff.

To demand on premise users to go and change their template just to adjust the memory settings seems to me goes against all the usability improvements we're trying to achieve.

At some point you need to define how much memory you will need. Whether it's in the template, your DeploymentConfiguration (created from template using oc process), Quota - it doesn't matter. You must write it somewhere - don't you? With current approach, the best way to do it is in Deployment Configuration Requests. This sets CGroups limit, and based on that, Infinispan bootstrap scripts will calculate Xmx. 
 

Cheers,

> On 22 Sep 2017, at 14:49, Sebastian Laskawiec <[hidden email]> wrote:
>
> It's very tricky...
>
> Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more).
>
> Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU).
>
> Thanks,
> Sebastian
>
> [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>
> On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
> Hi Sebastian,
>
> How do you change memory settings for Infinispan started via service catalog?
>
> The memory settings seem defined in [1], but this is not one of the parameters supported.
>
> I guess we want this as parameter?
>
> Cheers,
>
> [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> --
> Galder Zamarreño
> Infinispan, Red Hat
>

--
Galder Zamarreño
Infinispan, Red Hat

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Emmanuel Bernard
I am personally content if you provide the total amount of memory for the Pod and you, as the OSB designer, decide on the -Xms/-Xmx for the services. Unlike what Sanne said, I think Amazon and the like don't give you x GB of cache. They give you an instance of Redis or Memcached within a VM that has x GB allocated. What you can stuff into it is left as an exercise for the reader.

Not ideal, but I think they went for the practical option in this case.

For the plain JDG, then, more options is fine.

On 28 Sep 2017, at 12:00, Sebastian Laskawiec <[hidden email]> wrote:

So how about exposing two parameters - Xms/Xmx and Total amount of memory for Pod (Request = Limit in that case). Would it work for you?

On Thu, Sep 28, 2017 at 8:38 AM Emmanuel Bernard <[hidden email]> wrote:
Sebastian,

What Galder, Sanne and others are saying is that in OpenShift on prem, there is no or at least a higher limit in the minimal container memory you can ask. And in these deployment, Infinispan should target the multi GB, not 512 MB.

Of course, *if* you ask for a guaranteed 512MB, then it would be silly to try and consume more.

On 25 Sep 2017, at 12:30, Sebastian Laskawiec <[hidden email]> wrote:



On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarreño <[hidden email]> wrote:
I don't understand your reply here... are you talking about Infinispan instances deployed on OpenShift Online? Or on premise?

TBH - I think there is no difference, so I'm thinking about both.
 
I can understand having some limits for OpenShift Online, but these templates should also be applicable on premise, in which case I should be able to easily define how much memory I want for the data grid, and the rest of the parameters would be worked out by OpenShift/Kubernetes?

I have written a couple of emails about this on internal mailing list. Let me just point of some bits here:
  • We need to set either Xmx or MaxRAM to tell the JVM how much memory it can allocate. As you probably know JDK8 is not CGroups aware by default (there are some experimental options but they set MaxRAM parameter equal to CGroups limit; this translates to Xmx=MaxRAM(CGroups limit) / 4. I guess allocating Xmx=(CGroups limit)/4 is too high for us, so we need to set it explicitly.
  • in our Docker image we set Xmx = 50% of CGroups limit. This is better than settings above but there is some risk in certain scenarios.
  • As I mentioned in my previous email, in the templates we are setting Requests (not Limits!!!). So you will probably get more memory than specified in the template but it depends on the node you're running on. The key point is that you won't get less than those 512 MB.
  • You can always edit your DeploymentConfig (after creating your application from template) and adjust Limits (or even requests).
  • For simple scenarios and bigger containers (like 4 GB) we can go more than 50% (see internal mailing list for details).
And as I said before - if you guys think we should do it differently, I'm open for suggestions. I think it's quite standard way of configuring this sort of stuff.

To demand on premise users to go and change their template just to adjust the memory settings seems to me goes against all the usability improvements we're trying to achieve.

At some point you need to define how much memory you will need. Whether it's in the template, your DeploymentConfiguration (created from template using oc process), Quota - it doesn't matter. You must write it somewhere - don't you? With current approach, the best way to do it is in Deployment Configuration Requests. This sets CGroups limit, and based on that, Infinispan bootstrap scripts will calculate Xmx. 
 

Cheers,

> On 22 Sep 2017, at 14:49, Sebastian Laskawiec <[hidden email]> wrote:
>
> It's very tricky...
>
> Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more).
>
> Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU).
>
> Thanks,
> Sebastian
>
> [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>
> On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
> Hi Sebastian,
>
> How do you change memory settings for Infinispan started via service catalog?
>
> The memory settings seem defined in [1], but this is not one of the parameters supported.
>
> I guess we want this as parameter?
>
> Cheers,
>
> [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> --
> Galder Zamarreño
> Infinispan, Red Hat
>

--
Galder Zamarreño
Infinispan, Red Hat

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Adjusting memory settings in template

Emmanuel Bernard
Just to clarify,

What the user should be able to set is the memory request, according to the definition here, and we choose a memory limit with a reasonable margin (20%?) but aim at never going over the memory request.
To achieve that, we will build estimates based on the test work Sebastian has been doing around the Xmx / memory request ratio. Each usage type will have to have revised estimates. These will be "hardcoded" for a given memory request size, at least for a given usage and version of Infinispan.
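
For illustration, with made-up numbers (a 2 GiB request, a ~20% margin on the limit, and an assumed 0.5 Xmx/request ratio - the real ratios would come from Sebastian's tests):

    # Illustrative values only; the app name and env var are assumptions as well.
    oc set resources dc/my-grid --requests=memory=2Gi --limits=memory=2458Mi   # ~2Gi + 20%
    oc set env dc/my-grid JAVA_OPTS="-Xms1g -Xmx1g"                            # 0.5 * request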

I like Sanne’s idea of a calculator where you input your data size needs and it offers pod number / pod size options. But we will have to offer that in the doc or something. Not as part of the service catalog UI in its current incarnation.

Emmanuel

On 28 Sep 2017, at 15:13, Emmanuel Bernard <[hidden email]> wrote:

I am personally content if you provide the total amount of memory for Pod and you as OSB designer decide of the -Xms/Xmx for the services. Unlike what Sanne said I think, Amazon and the like they don’t give you x GB of cache. They give you an instance of Redis or Memcached within a VM that has x amount of GB allocated. What you can stuck in is left as an exercise for the reader.

Not ideal but I think they went for the practical in this case.

For the pain JDG, then more options is fine.

On 28 Sep 2017, at 12:00, Sebastian Laskawiec <[hidden email]> wrote:

So how about exposing two parameters - Xms/Xmx and Total amount of memory for Pod (Request = Limit in that case). Would it work for you?

On Thu, Sep 28, 2017 at 8:38 AM Emmanuel Bernard <[hidden email]> wrote:
Sebastian,

What Galder, Sanne and others are saying is that in OpenShift on prem, there is no or at least a higher limit in the minimal container memory you can ask. And in these deployment, Infinispan should target the multi GB, not 512 MB.

Of course, *if* you ask for a guaranteed 512MB, then it would be silly to try and consume more.

On 25 Sep 2017, at 12:30, Sebastian Laskawiec <[hidden email]> wrote:



On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarreño <[hidden email]> wrote:
I don't understand your reply here... are you talking about Infinispan instances deployed on OpenShift Online? Or on premise?

TBH - I think there is no difference, so I'm thinking about both.
 
I can understand having some limits for OpenShift Online, but these templates should also be applicable on premise, in which case I should be able to easily define how much memory I want for the data grid, and the rest of the parameters would be worked out by OpenShift/Kubernetes?

I have written a couple of emails about this on internal mailing list. Let me just point of some bits here:
  • We need to set either Xmx or MaxRAM to tell the JVM how much memory it can allocate. As you probably know JDK8 is not CGroups aware by default (there are some experimental options but they set MaxRAM parameter equal to CGroups limit; this translates to Xmx=MaxRAM(CGroups limit) / 4. I guess allocating Xmx=(CGroups limit)/4 is too high for us, so we need to set it explicitly.
  • in our Docker image we set Xmx = 50% of CGroups limit. This is better than settings above but there is some risk in certain scenarios.
  • As I mentioned in my previous email, in the templates we are setting Requests (not Limits!!!). So you will probably get more memory than specified in the template but it depends on the node you're running on. The key point is that you won't get less than those 512 MB.
  • You can always edit your DeploymentConfig (after creating your application from template) and adjust Limits (or even requests).
  • For simple scenarios and bigger containers (like 4 GB) we can go more than 50% (see internal mailing list for details).
And as I said before - if you guys think we should do it differently, I'm open for suggestions. I think it's quite standard way of configuring this sort of stuff.

To demand on premise users to go and change their template just to adjust the memory settings seems to me goes against all the usability improvements we're trying to achieve.

At some point you need to define how much memory you will need. Whether it's in the template, your DeploymentConfiguration (created from template using oc process), Quota - it doesn't matter. You must write it somewhere - don't you? With current approach, the best way to do it is in Deployment Configuration Requests. This sets CGroups limit, and based on that, Infinispan bootstrap scripts will calculate Xmx. 
 

Cheers,

> On 22 Sep 2017, at 14:49, Sebastian Laskawiec <[hidden email]> wrote:
>
> It's very tricky...
>
> Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more).
>
> Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU).
>
> Thanks,
> Sebastian
>
> [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>
> On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <[hidden email]> wrote:
> Hi Sebastian,
>
> How do you change memory settings for Infinispan started via service catalog?
>
> The memory settings seem defined in [1], but this is not one of the parameters supported.
>
> I guess we want this as parameter?
>
> Cheers,
>
> [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> --
> Galder Zamarreño
> Infinispan, Red Hat
>

--
Galder Zamarreño
Infinispan, Red Hat

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev


_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev