[infinispan-dev] Ceph cache store

[infinispan-dev] Ceph cache store

Vojtech Juranek
Hi,
I've implemented an initial version of a Ceph [1] cache store [2]. Cache entries
are stored in Ceph pools [3], one pool per cache unless configured otherwise.
The cache store uses the librados [4] Java binding to communicate directly with
the Ceph cluster/RADOS (see e.g. the Ceph architecture overview [5] for a
high-level explanation of the difference between accessing RADOS via the RADOS
gateway, via a POSIX file system client, or via librados).
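
In case it helps to picture what the store does, here is a minimal sketch of the
kind of librados calls involved, assuming the rados-java binding (com.ceph.rados);
the pool name, object key and config path are illustrative, not taken from the
actual store, and method signatures may differ slightly between binding versions:

import com.ceph.rados.IoCTX;
import com.ceph.rados.Rados;

import java.io.File;

public class CephStoreSketch {
    public static void main(String[] args) throws Exception {
        // Connect to the Ceph cluster as a given user, reading the usual config file.
        Rados cluster = new Rados("admin");
        cluster.confReadFile(new File("/etc/ceph/ceph.conf"));
        cluster.connect();

        // One pool per cache (illustrative pool name).
        IoCTX ioctx = cluster.ioCtxCreate("my-cache");
        try {
            // Store a cache entry as a RADOS object keyed by the cache key...
            byte[] value = "serialized-entry".getBytes();
            ioctx.write("cache-key", value);

            // ...read it back...
            byte[] buf = new byte[value.length];
            ioctx.read("cache-key", value.length, 0, buf);

            // ...and remove it again.
            ioctx.remove("cache-key");
        } finally {
            cluster.ioCtxDestroy(ioctx);
        }
    }
}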

Would there be any interest in such a cache store? If so, any recommendations
for improvements are welcome.

Thanks
Vojta

[1] http://ceph.com/
[2] https://github.com/vjuranek/infinispan-cachestore-ceph
[3] http://docs.ceph.com/docs/jewel/rados/operations/pools/
[4] http://docs.ceph.com/docs/hammer/rados/api/librados-intro/
[5] http://docs.ceph.com/docs/hammer/architecture/

Re: [infinispan-dev] Ceph cache store

Tristan Tarrant
On 22/09/16 15:25, Vojtech Juranek wrote:
> Hi,
> I've implemented initial version of Ceph [1] cache store [2]. Cache entries
Nice one Vojtech !
Now you have given me a reason to install and learn about Ceph, although
I don't think I have the exabyte-scale capacity :)

Are there any recommendations / patterns on how Ceph should be used to
make better use of its features?

Tristan

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

Re: [infinispan-dev] Ceph cache store

Vojtech Juranek
> I don't think I have the exabyte-scale capacity :)

AFAIK that is not mandatory; a cluster with a capacity of a dozen petabytes
should be fine for initial testing and learning :-)
 
> Are there any recommendations / patterns on how Ceph should be used to
> make better use of its features?

You can find some general performance tuning tips like [1], but I'm not aware
of any recommended usage patterns. However, I'm a Ceph beginner, so maybe it's
just my ignorance.
As for ceph-ispn specifically, I'd like to learn more about the CRUSH algorithm
and the CRUSH map options [2] to see whether it would be possible to map an ISPN
segment to a specific Ceph primary OSD. That would allow us to run an ISPN node
and its corresponding primary OSD on the same machine (similar to what we do in
the ISPN-Spark integration), which should result in better performance.
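
As an aside on the co-location idea: Ceph can already tell you which OSDs (and
which primary) an object maps to via the "ceph osd map <pool> <object>" command,
so one way to experiment would be to compare that mapping with the local node.
A rough, purely illustrative Java sketch that just shells out to the CLI; the
pool and object names are made up and the exact output format depends on the
Ceph version (the first OSD in the acting set is the primary):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class OsdMapCheck {
    public static void main(String[] args) throws Exception {
        // Ask Ceph where CRUSH places a given object, equivalent to running:
        //   ceph osd map my-cache cache-key
        // which prints the placement group and the up/acting OSD sets.
        Process p = new ProcessBuilder("ceph", "osd", "map", "my-cache", "cache-key")
                .redirectErrorStream(true)
                .start();

        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                // The first OSD listed in the acting set is the primary for this object.
                System.out.println(line);
            }
        }
        p.waitFor();
    }
}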

[1]
http://tracker.ceph.com/projects/ceph/wiki/7_Best_Practices_to_Maximize_Your_Ceph_Cluster's_Performance
[2] http://docs.ceph.com/docs/jewel/rados/operations/crush-map/

Re: [infinispan-dev] Ceph cache store

Sebastian Laskawiec
In reply to this post by Vojtech Juranek
Great job Vojtech!

The only thing that comes to mind is to test it with Kubernetes/OpenShift Ceph volumes [6].

But I guess this is a matter of OpenShift/Kubernetes configuration rather than of the Ceph cache store itself.

Thanks
Sebastian


Re: [infinispan-dev] Ceph cache store

Vojtech Juranek
Hi Sebastian,
sorry for the late reply.

> The only thing that comes into my mind is to test it with
> Kubernetes/OpenShift Ceph volumes [6].

I'm not very familiar with k8s and its doc page doesn't provide any detail on
how it works under the hood, but AFAICT (looking at the source code [1]), it
uses CephFS rather than librados, which is what the cache store uses directly,
so IMHO there's not much to test there. The single file store or the soft-index
file store would be more appropriate to test with a k8s Ceph volume.
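
For reference, pointing a single file store at a CephFS-backed directory (e.g. a
k8s Ceph volume mounted into the container) would look roughly like this with the
Infinispan programmatic configuration API; the mount path is illustrative:

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class SingleFileStoreOnCephFs {
    public static void main(String[] args) {
        // Persist cache data into a directory that happens to live on a CephFS
        // mount (e.g. a Kubernetes Ceph volume); the path is illustrative.
        Configuration config = new ConfigurationBuilder()
                .persistence()
                    .addSingleFileStore()
                        .location("/mnt/cephfs/infinispan-store")
                .build();

        System.out.println(config.persistence().stores());
    }
}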

But what I'd definitely like to do in the future is a performance comparison
between the Ceph store using librados directly, the cloud store using Ceph via
RadosGW, and the single file store using Ceph via CephFS.

Thanks
Vojta

[1]
https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cephfs/cephfs.go

Re: [infinispan-dev] Ceph cache store

Sebastian Laskawiec
Ok, sounds good. Thanks Vojtech!
