[infinispan-dev] Hot Rod testing

[infinispan-dev] Hot Rod testing

Tristan Tarrant
Recently I've had a chat with Galder, Will and Vittorio about how we
test the Hot Rod server module and the various clients. We also
discussed some of this in the past, but we now need to move forward with
a better strategy.

First up is the Hot Rod server module testsuite: it is the only part of
the code which still uses Scala. Will has a partial port of it to Java,
but we're wondering whether it is worth completing that work, since most
of the tests in that testsuite, in particular those related to the
protocol itself, are duplicated by the Java Hot Rod client's testsuite,
which also happens to be our reference client implementation and is much
more extensive.
The only downside of removing it is that verification would require
running the client testsuite instead of being self-contained.

Next up is how we test clients.

The Java client, mentioned above, runs all of its tests against ad-hoc
embedded servers. Some of these tests, in particular those related to
topology, start and stop new servers on the fly.

The server integration testsuite performs yet another set of tests, some
of which overlap with the above, but against the actual full-blown
server. It doesn't test for topology changes.

The C++ client wraps the native client in a Java wrapper generated by
SWIG and runs the Java client testsuite. It then checks against a
blacklist of known failures. It also has a small number of native tests
which use the server distribution.

The Node.js client has its own home-grown testsuite which also uses the
server distribution.

Setting aside the duplication, which in some cases is unavoidable, it is
impossible to say with confidence that each client is properly tested.

Since complete unification is impossible because of the different
testing harnesses used by the various platforms/languages, I propose the
following:

- we identify and group the tests depending on their scope (basic
protocol ops, bulk ops, topology/failover, security, etc). A client
which implements the functionality of a group MUST pass all of the tests
in that group with NO exceptions
- we assign a unique identifier to each group/test combination (e.g.
HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be
collected in a "test book" (some kind of structured file) for comparison
with client test runs (see the sketch after this list)
- we refactor the Java client testsuite according to the above grouping
/ naming strategy so that testsuites which use the wrapping approach
(i.e. C++ with SWIG) can consume it by directly specifying the supported
groups
- other clients get reorganized so that they support the above grouping
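
As an illustration, here is a minimal sketch of how the identifiers could
map onto the Java client testsuite, assuming TestNG (which the Infinispan
testsuite already uses); the test class, the reuse of the identifiers as
group names and the trivial test body are all hypothetical:

import org.testng.annotations.Test;

import static org.testng.Assert.assertEquals;

// Test-book identifiers reused verbatim as TestNG group names, so a
// wrapping testsuite can select exactly the groups it claims to support.
public class HotRodBasicOpsTest {

   static final String BASIC_PUT = "HR.BASIC.PUT";

   @Test(groups = BASIC_PUT)
   public void put() {
      // A real test would exercise a RemoteCache against a live server;
      // a plain map stands in here to keep the sketch self-contained.
      java.util.Map<String, String> cache = new java.util.HashMap<>();
      cache.put("key", "value");
      assertEquals(cache.get("key"), "value");
   }
}

A wrapped client (e.g. C++ via SWIG) would then run only its supported
groups, e.g. "mvn test -Dgroups=HR.BASIC.PUT", and the resulting pass
list could be diffed against the test book.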

I understand this is quite some work, but the current situation isn't
really sustainable.

Let me know what your thoughts are.


Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
_______________________________________________
infinispan-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Hot Rod testing

Sanne Grinovero

I was actually planning to start a similar topic, but from the point of view of users' testing needs.

I've recently created Hibernate OGM support for Hot Rod, and it wasn't as easy to test as other NoSQL databases; luckily I have some knowledge and contacts on Infinispan ;) but I had to develop several helpers and refine the approach to testing over multiple iterations.

I ended up developing a JUnit rule - handy for individual test runs in the IDE - as well as a Maven lifecycle extension and an Arquillian extension, which I needed in order to run the Hot Rod server and start a WildFly instance to host my client app.
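
For illustration, a minimal sketch of what such a rule can look like,
built on Infinispan's embedded HotRodServer API (the rule class and the
cache name are hypothetical, and the exact builder methods may vary
between versions):

import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.server.hotrod.HotRodServer;
import org.infinispan.server.hotrod.configuration.HotRodServerConfigurationBuilder;
import org.junit.rules.ExternalResource;

// Starts an embedded Hot Rod server before the tests run and tears it
// down afterwards; intended to be declared as a JUnit @ClassRule.
public class HotRodServerRule extends ExternalResource {

   private EmbeddedCacheManager cacheManager;
   private HotRodServer server;

   @Override
   protected void before() {
      cacheManager = new DefaultCacheManager();
      cacheManager.defineConfiguration("testCache", new ConfigurationBuilder().build());
      server = new HotRodServer();
      server.start(new HotRodServerConfigurationBuilder().port(11222).build(), cacheManager);
   }

   @Override
   protected void after() {
      if (server != null) server.stop();
      if (cacheManager != null) cacheManager.stop();
   }
}

A test class then declares it once:
@ClassRule public static HotRodServerRule SERVER = new HotRodServerRule();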

At some point I was also in trouble with conflicting dependencies, so I considered making a Maven plugin to manage the server lifecycle as a proper IT phase. I ultimately didn't build it, as I found an easier solution, but it would be great if Infinispan could provide such helpers to end users too.. Forking the ANT scripts from the Infinispan project to assemble and start my own server (as you do..) seems quite cumbersome for users ;)

Especially the server is not even available via Maven coordinates.

I'm of course happy to contribute my battle-tested test helpers to Infinispan, but they are meant for JUnit users.

Finally, compared to developing OGM integrations for other NoSQL stores.. it's really hard work when there is no "viewer" for the cache content.

We need some kind of interactive console to explore the stored data; I felt like I was driving blind, developing against a black box. When something doesn't work as expected it's challenging to figure out whether the bug is in the storage method or in the reading method, or maybe the encoding isn't quite right, or it's the query options being used.. sometimes it's the flags or the configuration properties (hell, I've been swearing a lot at some of these flags!)

Thanks,
Sanne

Re: [infinispan-dev] Hot Rod testing

Gustavo Fernandes


On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero <[hidden email]> wrote:

> Especially the server is not even available via Maven coordinates.

The server is available at [1]

[1] http://central.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.0.0.Alpha4/


Re: [infinispan-dev] Hot Rod testing

Sebastian Laskawiec
How about turning the problem upside down and creating a TCK suite which runs on JUnit and has pluggable clients? The TCK suite would be responsible for bootstrapping servers, tearing them down and validating the results.

The biggest advantage of this approach is that all those things are pretty well known in the Java world (e.g. using Arquillian for managing the server lifecycle or JUnit for assertions). But the biggest challenge is how to plug, for example, a JavaScript client into the suite - how would we call it from Java?
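
One possible shape for that plug point, sketched under the assumption
that non-Java clients are driven as external processes speaking a tiny
line-oriented command protocol (the SPI, the wire format and all names
are hypothetical):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;

// SPI the TCK programs against: the Java client implements it directly,
// while other clients are wrapped in a small driver process.
interface TckClient {
   void put(String cache, String key, String value) throws IOException;
   String get(String cache, String key) throws IOException;
}

// Drives a non-Java client through a per-client helper script that reads
// commands such as "PUT cache key value" on stdin and answers on stdout.
class ProcessTckClient implements TckClient {
   private final PrintWriter out;
   private final BufferedReader in;

   ProcessTckClient(String... command) throws IOException {
      Process process = new ProcessBuilder(command).start();
      out = new PrintWriter(process.getOutputStream(), true);
      in = new BufferedReader(new InputStreamReader(process.getInputStream()));
   }

   @Override
   public void put(String cache, String key, String value) throws IOException {
      out.println(String.join(" ", "PUT", cache, key, value));
      in.readLine(); // wait for the driver's "OK"
   }

   @Override
   public String get(String cache, String key) throws IOException {
      out.println(String.join(" ", "GET", cache, key));
      return in.readLine();
   }
}

The suite would then create, say, new ProcessTckClient("node",
"tck-driver.js") for the Node.js client, leaving a small driver script
as the per-client porting cost.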

Thanks
Sebastian


Re: [infinispan-dev] Hot Rod testing

Tristan Tarrant
In reply to this post by Sanne Grinovero
On 15/09/16 13:33, Sanne Grinovero wrote:
> Especially the server is not even available via Maven coordinates.

You didn't try hard enough:

org.infinispan.server:infinispan-server:9.0.0.Alpha4:zip:bin

<dependency>
  <groupId>org.infinispan.server</groupId>
  <artifactId>infinispan-server</artifactId>
  <version>9.0.0.Alpha4</version>
  <type>zip</type>
  <classifier>bin</classifier>
</dependency>

:)

Tristan

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Hot Rod testing

Tristan Tarrant
In reply to this post by Sebastian Laskawiec
Whatever we choose, this solves only half of the problem: enumerating
and classifying the tests is the hard part.

Tristan


Re: [infinispan-dev] Hot Rod testing

Tristan Tarrant
Anyway, I like the idea. Can we sketch out a POC?

Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Hot Rod testing

Alan Field
I also like this idea of a JUnit-based TCK for all clients, if this is possible.

> - we identify and group the tests depending on their scope (basic
> protocol ops, bulk ops, topology/failover, security, etc). A client
> which implements the functionality of a group MUST pass all of the tests
> in that group with NO exceptions

This makes sense to me, but I also agree that the hard part will be categorizing the tests into these buckets. Should the groups be divided by client intelligence as well? I'm just wondering about "dumb" clients like REST and Memcached.

> - we assign a unique identifier to each group/test combination (e.g.
> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be
> collected in a "test book" (some kind of structured file) for comparison
> with client test runs

Are these identifiers just used as the JUnit test group names?

> - we refactor the Java client testsuite according to the above grouping
> / naming strategy so that testsuites which use the wrapping approach
> (i.e. C++ with SWIG) can consume it by directly specifying the supported
> groups

This makes sense to me as well.

I think the other requirements here are that the client tests must use a real server distribution and not the embedded server, and that any non-duplicated tests from the server integration testsuite are migrated to the client testsuite as well. I think this is also an opportunity to inventory the client testsuite and reduce it to the minimal set of tests that verify adherence to the protocol and to the expected behavior beyond the protocol.

Thanks,
Alan


Re: [infinispan-dev] Hot Rod testing

Vittorio Rigamonti
In reply to this post by Tristan Tarrant
I feel, but I'm not sure, that we first need to define what we want to test: enumerating and organizing the requirements would probably be the right starting point.


Of course, Sebastian's approach could be right if we can imagine a tool that enforces such an organizational model on the requirements.

Vittorio


----- Original Message -----
From: "Tristan Tarrant" <[hidden email]>
To: [hidden email]
Sent: Thursday, September 15, 2016 6:27:54 PM
Subject: Re: [infinispan-dev] Hot Rod testing

Anyway, I like the idea. Can we sketch a POC ?

Tristan


On 15/09/16 14:24, Tristan Tarrant wrote:

> Whatever we choose, this solves only half of the problem: enumerating
> and classifying the tests is the hard part.
>
> Tristan
>
> On 15/09/16 13:58, Sebastian Laskawiec wrote:
>> How about turning the problem upside down and creating a TCK suite
>> which runs on JUnit and has pluggable clients? The TCK suite would be
>> responsible for bootstrapping servers, turning them down and
>> validating the results.
>>
>> The biggest advantage of this approach is that all those things are
>> pretty well known in Java world (e.g. using Arquillian for managing
>> server lifecycle or JUnit for assertions). But the biggest challenge
>> is how to plug for example a JavaScript client into the suite? How to
>> call it from Java.
>>
>> Thanks
>> Sebastian
>>
>> On Thu, Sep 15, 2016 at 1:52 PM, Gustavo Fernandes
>> <[hidden email] <mailto:[hidden email]>> wrote:
>>
>>
>>
>>     On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero
>>     <[hidden email] <mailto:[hidden email]>> wrote:
>>
>>         I was actually planning to start a similar topic, but from the
>>         point of view of user's testing needs.
>>
>>         I've recently created Hibernate OGM support for Hot Rod, and
>>         it wasn't as easy as other NoSQL databases to test; luckily I
>>         have some knowledge and contact on Infinispan ;) but I had to
>>         develop several helpers and refine the approach to testing
>>         over multiple iterations.
>>
>>         I ended up developing a JUnit rule - handy for individual test
>>         runs in the IDE - and with a Maven life cycle extension and
>>         also with an Arquillian extension, which I needed to run both
>>         the Hot Rod server and start a Wildfly instance to host my
>>         client app.
>>
>>         At some point I was also in trouble with conflicting
>>         dependencies so considered making a Maven plugin to manage the
>>         server lifecycle as a proper IT phase - I didn't ultimately
>>         make this as I found an easier solution but it would be great
>>         if Infinispan could provide such helpers to end users too..
>>         Forking the ANT scripts from the Infinispan project to
>>         assemble and start my own (as you do..) seems quite cumbersome
>>         for users ;)
>>

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Hot Rod testing

Dan Berindei
In reply to this post by Alan Field
On Thu, Sep 15, 2016 at 7:42 PM, Alan Field <[hidden email]> wrote:

> I also like this idea for a Unit-Based TCK for all clients, if this is possible.
>
>> - we identify and group the tests depending on their scope (basic
>> protocol ops, bulk ops, topology/failover, security, etc). A client
>> which implements the functionality of a group MUST pass all of the tests
>> in that group with NO exceptions
>
> This makes sense to me, but I also agree that the hard part will be in categorizing the tests into these buckets. Should the groups be divided by intelligence as well? I'm just wondering about "dumb" clients like REST and Memcached.
>
>> - we assign a unique identifier to each group/test combination (e.g.
>> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be
>> collected in a "test book" (some kind of structured file) for comparison
>> with client test runs
>
> Are these identifiers just used as the JUnit test group names?
>
>> - we refactor the Java client testsuite according to the above grouping
>> / naming strategy so that testsuite which use the wrapping approach
>> (i.e. C++ with SWIG) can consume it by directly specifying the supported
>> groups
>
> This makes sense to me as well.
>
> I think the other requirements here are that the client tests must use a real server distribution and not the embedded server. Any non-duplicated tests from the server integration test suite have to be migrated to the client test suite as well. I think this also is an opportunity to inventory the client test suite and reduce it to the most minimal number of tests that verify the adherence to the protocol and expected behavior beyond the protocol.
>

Reducing the number of tests may not be so easy... remember that we
need to test all versions of the protocol, not just the latest one.
And we still need to test stuff that's not explicitly in the protocol,
especially around state transfer/server crashes and around query
(which the protocol says almost nothing about).
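
One way to keep the count down without losing version coverage is to parameterize each scenario over the protocol versions it applies to, instead of duplicating the test per version. A rough JUnit 4 sketch (TestClient and the version list are hypothetical placeholders, not existing APIs):

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Runs the same scenario once per wire-protocol version instead of
// keeping a separate copy of the test for each version.
@RunWith(Parameterized.class)
public class PutAcrossProtocolVersionsTest {

    @Parameters(name = "protocol {0}")
    public static Collection<Object[]> versions() {
        // the exact list would come from the protocol spec / test book
        return Arrays.asList(new Object[][] { {"2.0"}, {"2.1"}, {"2.2"} });
    }

    private final String version;

    public PutAcrossProtocolVersionsTest(String version) {
        this.version = version;
    }

    @Test
    public void putAndGet() {
        // TestClient is a hypothetical helper that pins the negotiated
        // protocol version; the assertions stay identical across versions.
        // TestClient client = TestClient.connect("localhost", 11222, version);
        // client.put("k", "v");
        // assertEquals("v", client.get("k"));
    }
}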

More importantly, if I have to rebuild the entire server distribution
every time I make a change in the HR server, then I'm pretty sure I
won't touch the HR server again :)

Cheers
Dan


Re: [infinispan-dev] Hot Rod testing

Galder Zamarreño
In reply to this post by Sebastian Laskawiec

--
Galder Zamarreño
Infinispan, Red Hat

> On 15 Sep 2016, at 13:58, Sebastian Laskawiec <[hidden email]> wrote:
>
> How about turning the problem upside down and creating a TCK suite which runs on JUnit and has pluggable clients? The TCK suite would be responsible for bootstrapping servers, tearing them down and validating the results.
>
> The biggest advantage of this approach is that all those things are pretty well known in the Java world (e.g. using Arquillian for managing server lifecycle or JUnit for assertions). But the biggest challenge is how to plug, for example, a JavaScript client into the suite? How to call it from Java?

^ I thought about all of this when working on the JS client and, like you, I thought this was the biggest hurdle; eventually I realised that there are bigger issues than that:

1. How do you verify that a JavaScript client works the way a JavaScript program would use it?
IOW, even if you could call JS from Java, what you'd be verifying is that whichever contorted way of calling JS from Java works, which doesn't necessarily mean it works when a real JS program calls it.

2. Development workflow

The other problem is related to workflow: when you develop in a scripting, dynamically typed language, the way you go about testing is slightly different. Since you don't have the type checker to help, you're almost forced to run your testsuite continuously, and the JS client tests I developed were geared to make this possible.

To give an example: to make continuous runs possible, the JS client assumes you have a running server node for local tests and a set of servers for clustered tests (we provide a script for it). By having a running set of servers, I can very quickly run tests continuously. This is very different to how Java-based testsuites work, where each test or testsuite starts the required servers and then shuts them down. I'd be very upset if developing my JS client required that kind of waste of time. Moreover, the JS client tests are designed so that whatever they do, they go back to the initial state when they finish. This happens for example with the failover tests [1], where I could not simply kill the running servers; instead, the failover test starts a bunch of extra servers which it kills as it goes along to test failover. The result is that none of the servers started by the failover tests survive when the test finishes.

Maybe some day we'll have a Java-based testsuite that more easily allows continuous testing. Scala, through SBT, does have something along these lines, so I don't think it's necessarily impossible, but we're not there yet. And, as I said above, you always have the first issue: testing how the user will use things.

Cheers,

[1] https://github.com/infinispan/js-client/blob/master/spec/infinispan_failover_spec.js


Re: [infinispan-dev] Hot Rod testing

Alan Field
Hey Galder,

> > On 15 Sep 2016, at 13:58, Sebastian Laskawiec <[hidden email]> wrote:
> >
> > How about turning the problem upside down and creating a TCK suite which
> > runs on JUnit and has pluggable clients? The TCK suite would be
> > responsible for bootstrapping servers, tearing them down and validating
> > the results.
> >
> > The biggest advantage of this approach is that all those things are pretty
> > well known in the Java world (e.g. using Arquillian for managing server
> > lifecycle or JUnit for assertions). But the biggest challenge is how to
> > plug, for example, a JavaScript client into the suite? How to call it from
> > Java?
>
> ^ I thought about all of this when working on the JS client and, like
> you, I thought this was the biggest hurdle; eventually I realised that
> there are bigger issues than that:
>
> 1. How do you verify that a JavaScript client works the way a JavaScript
> program would use it?
> IOW, even if you could call JS from Java, what you'd be verifying is that
> whichever contorted way of calling JS from Java works, which doesn't
> necessarily mean it works when a real JS program calls it.

I think the user workflow can be verified separately. Being able to verify the functional behavior of clients written in multiple languages using a single test suite would be a huge win, IMO. I agree with you, though, that this should be coupled with an actual end-user test where the JavaScript client is run against a real Node.js server, a C++ client is installed from RPMs and built into an application, etc., for a complete certification of a client.

> 2. Development workflow

I can't really argue with this point. Any solution that uses a single test suite to test all clients will by definition not feel native to developers. The question is whether it makes sense to recreate the test suite in every language, which just doesn't feel like it can scale.

Thanks,
Alan


Re: [infinispan-dev] Hot Rod testing

Emmanuel Bernard
>> 1. How do you verify that a JavaScript client works the way a JavaScript
>> program would use it?
>> IOW, even if you could call JS from Java, what you'd be verifying is that
>> whichever contorted way of calling JS from Java works, which doesn't
>> necessarily mean it works when a real JS program calls it.
>
> I think the user workflow can be verified separately. Being able to verify the functional behavior of clients written in multiple languages using a single test suite would be a huge win, IMO. I agree with you, though, that this should be coupled with an actual end-user test where the JavaScript client is run against a real Node.js server, a C++ client is installed from RPMs and built into an application, etc., for a complete certification of a client.

That was my thinking too; TCK-based tools often also have a separate
test suite. You could have a common TCK for behavior and a separate test
suite for each client to make sure it works as expected between the
chair and the API.

Re: [infinispan-dev] Hot Rod testing

Emmanuel Bernard
In reply to this post by Galder Zamarreño
On Fri 2016-09-23 17:33, Galder Zamarreño wrote:
> Maybe some day we'll have a Java-based testsuite that more easily allows continuous testing. Scala, through SBT, does have something along these lines, so I don't think it's necessarily impossible, but we're not there yet. And, as I said above, you always have the first issue: testing how the user will use things.

This reminded me of Infinitest (https://infinitest.github.io),
which brings continuous testing to your IDE (for Java).

Re: [infinispan-dev] Hot Rod testing

Tristan Tarrant-2
In reply to this post by Galder Zamarreño
On 23/09/16 17:33, Galder Zamarreño wrote:
> ^ I thought about all of this when working on the JS client and, like you, I thought this was the biggest hurdle; eventually I realised that there are bigger issues than that:
>
> 1. How do you verify that a JavaScript client works the way a JavaScript program would use it?
> IOW, even if you could call JS from Java, what you'd be verifying is that whichever contorted way of calling JS from Java works, which doesn't necessarily mean it works when a real JS program calls it.
If a specific language API wants to "feel native" in its environment
that is fine, and there should be local tests to exercise that, but from
a protocol compliance point of view this is irrelevant. We need to
verify that:

- for each Hot Rod operation and variant (e.g. flags, metadata), the
client sends the correct request
- the client correctly processes the response, again with different
variations (result, not found, errors, metadata)
- for the different client intelligence levels, the client correctly
processes the returned headers (topology, hashing, etc.)
- the client correctly reacts to topology changes and failover
- the client correctly reacts to events and fires the appropriate
listeners
- the client correctly handles encryption handshaking and reports
error situations properly
- the client correctly handles authentication and reports error
situations properly for the client-supported mechanisms

Additionally, clients might wish to test for the following, but this is
not part of the protocol specification:

- marshalling
- async methods
- site failover
- language-specific syntactic sugar

Also, to provide a common ground for the server configuration used by
both types of tests (TCK and client-specific), we should really use
Docker containers with appropriately named configs, together with
common scripts that recreate the test scenarios, so that each
testsuite doesn't have to reinvent the wheel.
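
As a rough sketch of how the group identifiers could map onto a JUnit-based suite (the marker interfaces and the test are hypothetical, not existing code; in practice each public class would live in its own file):

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Hypothetical marker interfaces, one per test-book group
// (HR.BASIC, HR.TOPOLOGY, ...).
interface HrBasic {}
interface HrTopology {}

public class HotRodPutTest {

    // corresponds to HR.BASIC.PUT in the test book
    @Test
    @Category(HrBasic.class)
    public void put() {
        // exercise put() through the client binding under test against
        // a running server, then read the value back (setup omitted)
    }
}

// A wrapped client (e.g. the SWIG-wrapped C++ one) would declare the
// groups it supports and run a suite like this, instead of maintaining
// a blacklist of known failures.
@RunWith(Categories.class)
@Categories.IncludeCategory(HrBasic.class)
@Suite.SuiteClasses(HotRodPutTest.class)
public class BasicGroupSuite {}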

Tristan

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Hot Rod testing

Gustavo Fernandes-2
On Fri, Sep 30, 2016 at 9:40 AM, Tristan Tarrant <[hidden email]> wrote:
> [Galder's questions and Tristan's protocol-compliance checklist snipped;
> quoted in full in the previous message]


I wonder if something like Haxe [1] could help here in defining a language-agnostic
TCK (maybe a skeleton?) that gets compiled to several platforms. Each platform's
testsuite would then "implement" the spec and of course would be free to add
'native' tests as well. There's also a unit test framework built on top of [1], worth exploring.

[1] https://haxe.org

> Also, to provide a common ground for the server configuration used by
> both types of tests (TCK and client-specific), we should really use
> Docker containers with appropriately named configs, together with
> common scripts that recreate the test scenarios, so that each
> testsuite doesn't have to reinvent the wheel.


+1 for Docker, as it no longer requires the hack of having VirtualBox on non-Linux platforms.
From my experience, most testing cases don't even need huge pre-canned XMLs; all
configurations can be achieved by runtime manipulation of the server model.

Cheers,
Gustavo



Re: [infinispan-dev] Hot Rod testing

Ion Savin-2
In reply to this post by Tristan Tarrant-2
Hi all,

> [Tristan's protocol-compliance checklist snipped; quoted in full in his
> message above]

At least for some of these cases, the following approach could work for
protocol-level client tests:

Implement a tool (single process) which mocks the server side, can
accept multiple connections from clients to simulate a cluster, and can
verify that the interaction with the client matches a predefined script.

There could be a separate script for each HR version / intelligence level.

The script is interpreted by the mock and is not dependent on any of the
languages in which the clients are implemented. All assertions are done
in this tool and not in the client (e.g. to test get(), generate a random
value and expect the client to do a put() on another key with the value
it got using get()).

For each HR client, implement a client app in that language which
interacts with the mock as prescribed by the script.

This is very similar to how financial institutions automate certification
for FIX protocol implementations / integration work.
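
As a rough Java sketch of the first step of such a scripted mock (greatly simplified: the real Hot Rod request header also carries a vLong message id, the cache name, flags and so on, and a real mock would drive a whole script rather than a single assertion):

import java.io.DataInputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Accepts one client connection and asserts that the first request
// starts with the Hot Rod request magic (0xA0) and the opcode the
// current script step expects.
public class ScriptedHotRodMock {

    public static void main(String[] args) throws IOException {
        int expectedOpcode = 0x01; // script step: expect a put request

        try (ServerSocket server = new ServerSocket(11222);
             Socket client = server.accept();
             DataInputStream in = new DataInputStream(client.getInputStream())) {

            if (in.readUnsignedByte() != 0xA0) {
                throw new AssertionError("Not a Hot Rod request");
            }
            in.readUnsignedByte(); // message id (simplified; really a vLong)
            in.readUnsignedByte(); // protocol version
            int opcode = in.readUnsignedByte();
            if (opcode != expectedOpcode) {
                throw new AssertionError("Expected opcode 0x"
                        + Integer.toHexString(expectedOpcode)
                        + ", got 0x" + Integer.toHexString(opcode));
            }
            // ...continue down the script: decode the rest of the request,
            // send a canned response, advance to the next step
        }
    }
}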

--
Ion Savin


Re: [infinispan-dev] Hot Rod testing

Gustavo Fernandes-2
In reply to this post by Gustavo Fernandes-2


> I wonder if something like Haxe [1] could help here in defining a
> language-agnostic TCK (maybe a skeleton?) that gets compiled to
> several platforms.

This is an idea of how to use it:

1) Define an interface using the Haxe language (just assume syntax is correct):

interface IHotRodClient {
   function get(k:Dynamic):Dynamic;
   function put(k:Dynamic, value:Dynamic):Void;
   // etc.
}

2) Write the TCK in terms of that interface. The Haxe ecosystem has lots of libraries, including unit test frameworks:

class TCK {

   var client:IHotRodClient;

   public function new(client:IHotRodClient) {
      this.client = client;
   }

   function test1() { /* ... */ }
   function test2() { /* ... */ }
   // etc.

   public function run() {
      test1();
      test2();
   }
}

3) Cross compile the TCK and distribute it as jar, dll, js, etc
4) Each Hot Rod client consumes the artifact above
5) Each Hot Rod client runs the TCK passing its implementation of IHotRodClient
6) Profit
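
As an illustration of steps 4 and 5 for the Java target (a hypothetical adapter: RemoteCache is the existing Java Hot Rod client API, but the IHotRodClient signatures and everything else, including the Object types the Haxe-to-Java output would produce, are assumed):

import org.infinispan.client.hotrod.RemoteCache;

// Plugs the existing Java Hot Rod client into the cross-compiled TCK
// interface from step 1.
public class JavaHotRodAdapter implements IHotRodClient {

    private final RemoteCache<Object, Object> cache;

    public JavaHotRodAdapter(RemoteCache<Object, Object> cache) {
        this.cache = cache;
    }

    public Object get(Object k) {
        return cache.get(k);
    }

    public void put(Object k, Object value) {
        cache.put(k, value);
    }
}

// step 5, from Java:
//   new TCK(new JavaHotRodAdapter(remoteCache)).run();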

My 2p,
Gustavo

 



Re: [infinispan-dev] Hot Rod testing

Alan Field



From: "Gustavo Fernandes" <[hidden email]>
To: "infinispan -Dev List" <[hidden email]>
Sent: Friday, September 30, 2016 5:58:27 AM
Subject: Re: [infinispan-dev] Hot Rod testing



I wonder if something like Haxe [1] could help here in defining a language agnostic
TCK (maybe an skeleton?) that gets compiled to several platforms. Each platform's
testsuite would them "implement" the spec and of course would be free to add
'native' tests as well. There's also a unit test framework built on top of [1], worth exploring

This is an idea of how to use it:

1) Define an interface using the Haxe language (just assume syntax is correct):

interface IHotRodClient {
   get(Object k)
   put(Object k, Object value)
   etc
}

2) Write the TCK in terms of that interface. The Haxe language has lots of libraries, including unit tests:

class TCK {

   test1( ) { ... }
   test2( ) { ... }
   etc

   void Main(IHotRodClient client)
        new TCK(client).run()
}

3) Cross compile the TCK and distribute it as jar, dll, js, etc
4) Each Hot Rod client consumes the artifact above
5) Each Hot Rod runs the TCK passing its implementation of IHotRodClient
6) Profit
It takes 6 steps to profit?!

I think the idea of writing the TCK once and generating the code in the native language of each client is a great one. The issue will be when we have a Hot Rod client in a language that Haxe doesn't support (Go?).

Thanks,
Alan

