[BEEPbuilders] Proposal for Getting Started

Justin Warren daedalus@eigenmagic.com
Wed, 23 Oct 2002 09:49:57 +1000


On Tue, Oct 22, 2002 at 09:28:48AM -0700, Gabe Wachob wrote:
> Greetings, fellow beepniks.
> Here's my proposal to get us bootstrapped and going for interoperability
> testing.
> First, I propose setting up a sourceforge project (or something similar)
> with a web presence and a cvs repository. The web site will host general
> information and a document(s) showing the result of interoperability
> tests. I'm thinking something very similar, with a table for each "test"
> and a table of implementations along each axis where each square
> contains the results of the conformance test (passed/failed/other). I'll
> volunteer to maintain this document.

Sounds good. A nice, easy, at-a-glance reference of what works with what
would be extremely useful. It might also encourage people to get their
implementations up to spec, increasing the mass of BEEP code that works.

> Second, I propose everything be kept in CVS: test definitions (just text
> files probably), the results table document, and source code used to
> implement each test for each beep library implementation. Obviously,
> nobody can be forced to submit their interoperability testing code, but
> it aids the process greatly.


> Third, I propose the following CVS structure:
> /web
> /testX (contains formalish definition of testX)
> /testX/impl1/ (contains README and source code for testX)
> /testX/impl2/
> .
> .
> /testY
> /testY/impl1/ (contains README and source code for testY)
> .
> .
> and so on.

Hmm... I disagree; more below.

> The idea here, of course, is that someone who is coming along and would
> like to run the tests themselves has an easy way to get the current
> version of the tests, test definitions, and test results. A secondary
> benefit is that these code snippets should provide simple examples of how
> each beep library works - examples which are semantically equivalent and
> therefore are good for comparing libraries from an app developer's POV.

Yes, all true. My concern with the above structure is that it makes it
more difficult to snarf the entire tree of tests for a given architecture,
which is a likely task. Perhaps a structure like this:

/web
/definitions/testX   (contains formalish definition of testX)
/definitions/testY
/beepcore-c/testX/   (contains README and source code for testX)
/beepcore-c/testY/
/impl2/testX/
.
.
and so on.

Then you can simply grab the /definitions/ tree if all you want is the
specs. If you want to test a particular implementation, say beepcore-c,
then you grab the beepcore-c set of tests.

Hmm, a thought: If one were to say Implementation-X interoperates with
Implementation-Y, does that mean that both implementations have passed
the same set (possibly a subset of the whole) of tests? As in, two
implementations may be partially complete and pass some given subset of
the total test suite. Those two implementations would be said to
interoperate. However, they would not be said to interoperate with an
implementation that passes all tests.
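That definition can be sketched in a few lines. The implementation and test names below are hypothetical, purely for illustration; "interoperate" here just means the set of tests both implementations pass:

```python
# Hypothetical test results: the set of tests each implementation passes.
passed = {
    "impl_x": {"framing", "greeting", "close"},
    "impl_y": {"framing", "greeting"},
    "impl_z": {"framing", "greeting", "close", "tuning"},
}

def interoperate(a, b):
    """Two implementations interoperate on the subset of tests both pass."""
    return passed[a] & passed[b]

# impl_x and impl_y interoperate on a partial subset of the suite;
# neither passes everything impl_z does.
print(sorted(interoperate("impl_x", "impl_y")))
```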

There is the possibility of ranking a test as 'mandatory' for
interoperability (such as those testing framing) and others as 'desirable'
for optional characteristics such as the xml:lang attribute of the
Management Profile <error> tag. You could then have partial interoperability
and know which features aren't supported by a given implementation.
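A ranking like that could be folded into the comparison directly. Again, the test names and the three-level result are just an assumed sketch of the mandatory/desirable idea, not a proposed standard:

```python
# Hypothetical ranking: every mandatory test must pass for any
# interoperability claim; desirable tests upgrade partial to full.
MANDATORY = {"framing", "greeting"}
DESIRABLE = {"xml_lang_error"}

def interop_level(passed_a, passed_b):
    """Classify two implementations' interoperability from shared passes."""
    common = passed_a & passed_b
    if not MANDATORY <= common:
        return "none"
    if DESIRABLE <= common:
        return "full"
    return "partial"

# Both pass the mandatory tests, but only one supports xml:lang.
print(interop_level({"framing", "greeting"},
                    {"framing", "greeting", "xml_lang_error"}))
```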

Sorry for the waffle. Getting it straight in my head.

One more thing: the results. To maximise automation, all test suites
should spit out their results in a common format. You could then
theoretically attach an uber-test front end that runs all interop
tests, collates the results and generates the results matrix. This
may become important as the number of test suites grows large.
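As a sketch of what that could look like, suppose the common format were simply one "TESTNAME PASS|FAIL" line per test (an assumed format, not anything agreed on). An uber-test front end would only need a trivial parser per suite:

```python
# Assumed common output format: one "TESTNAME PASS|FAIL" line per test.
sample_output = """\
framing PASS
greeting PASS
xml_lang_error FAIL
"""

def parse_results(text):
    """Parse common-format suite output into {test_name: status}."""
    results = {}
    for line in text.splitlines():
        name, status = line.split()
        results[name] = status
    return results

# The front end would collate one such dict per implementation into
# the pass/fail results matrix published on the web site.
print(parse_results(sample_output))
```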

> If I don't hear objections, I'll start the sf registration process in the
> next day or two. I'd like to use SF because this will allow multiple
> people to maintain the tests, code, and results. (I'm thinking the beep
> library authors here). If someone else thinks there is a better place to
> host, plz speak up. (We can point interop.beepcore.org to the sf web page
> if we want).

SF is fine with me.

"I think your cats need tuning - according to a couple of quick measurements
 on a recently calibrated reference cat, the dominant frequency of a correctly
 adjusted cat should be 12Hz +/-20%." -- Lionel Lauer in a.s.r
