[BEEPbuilders] Proposal for Getting Started

Gabe Wachob gwachob@wachob.com
Wed, 23 Oct 2002 19:39:44 -0700 (PDT)

The beepbuilders sourceforge project has been created, using Justin's
suggested directory structure.


If you are a BEEP library author and would like to participate, please let
me know and we can add you as a contributor, or at whatever level of
participation you prefer. All BEEP library authors are strongly
encouraged to participate!

I haven't heard any other comments, negative or positive; those would
still be welcome.


On Wed, 23 Oct 2002, Justin Warren wrote:

> On Tue, Oct 22, 2002 at 09:28:48AM -0700, Gabe Wachob wrote:
> > Greetings, fellow beepniks.
> >
> > Here's my proposal to get us bootstrapped and going for interoperability
> > testing.
> >
> > First, I propose setting up a sourceforge project (or something similar)
> > with a web presence and a cvs repository. The web site will host general
> > information and a document(s) showing the result of interoperability
> > tests. I'm thinking something very similar: a matrix for each "test",
> > with implementations along each axis, where each cell contains the
> > result of the conformance test (passed/failed/other). I'll volunteer
> > to maintain this document.
> Sounds good. A nice, easy, at-a-glance reference of what works with what
> would be extremely useful. It might also encourage people to get their
> implementations up to spec, increasing the mass of BEEP code that works
> well.
> > Second, I propose everything be kept in CVS: test definitions (just text
> > files probably), the results table document, and source code used to
> > implement each test for each beep library implementation. Obviously,
> > nobody can be forced to submit their interoperability testing code, but
> > it aids the process greatly.
> Definitely.
> > Third, I propose the following CVS structure:
> >
> > /web
> > /testX (contains formalish definition of testX)
> > /testX/impl1/ (contains README and source code for testX)
> > /testX/impl2/
> > .
> > .
> > /testY
> > /testY/impl1/ (contains README and source code for testY)
> > .
> > .
> >
> > and so on.
> Hmm.. I disagree, more below..
> > The idea here, of course, is that someone who is coming along and would
> > like to run the tests themselves has an easy way to get the current
> > version of the tests, test definitions, and test results. A secondary
> > benefit is that these code snippets should provide simple examples of how
> > each beep library works - examples which are semantically equivalent and
> > therefore are good for comparing libraries from an app developer's POV.
> Yes, all true. My concern with the above structure is that it makes it
> more difficult to snarf the entire tree of tests for a given architecture,
> which is a likely task. Perhaps a structure like this:
> /web/
> /definitions/
> /definitions/Test-X
> /definitions/Test-Y
> .
> .
> /impl-1/
> /impl-1/Test-X
> /impl-1/Test-Y
> .
> .
> /impl-2/
> /impl-2/Test-X
> .
> etc
> Then you can simply grab the /definitions/ tree if all you want is the
> specs. If you want to test a particular implementation, say beepcore-c,
> then you grab the beepcore-c set of tests.
> Hmm, a thought: If one were to say Implementation-X interoperates with
> Implementation-Y, does that mean that both implementations have passed
> the same set (possibly a subset of the whole) of tests? As in, two
> implementations may be partially complete and pass some given subset of
> the total test suite. Those two implementations would be said to
> interoperate. However, they would not be said to interoperate with an
> implementation that passes all tests.
> There is the possibility of ranking a test as 'mandatory' for
> interoperability (such as those testing framing) and others as 'desirable'
> for optional characteristics such as the xml:lang attribute of the
> Management Profile <error> tag. You could then have partial interoperability
> and know which features aren't supported by a given implementation.
> Sorry for the waffle. Getting it straight in my head.
> One more thing: the results. To maximise automation, all test suites
> should spit out their results in a common format. You could then
> theoretically attach an uber-test front end that runs all interop
> tests, collates the results and generates the results matrix. This
> may become important as the number of test suites grows large.
> > If I don't hear objections, I'll start the sf registration process in the
> > next day or two. I'd like to use SF because this will allow multiple
> > people to maintain the tests, code, and results. (I'm thinking the beep
> > library authors here). If someone else thinks there is a better place to
> > host, please speak up. (We can point interop.beepcore.org to the sf web page
> > if we want).
> SF is fine with me.
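[Editor's note: Justin's suggestions above (a common machine-readable result
format, mandatory vs. desirable tests, and automatic generation of the
results matrix) could be sketched along the following lines. Everything
here is a hypothetical illustration: the one-line-per-test result format,
the function names, and the interoperability rule (two implementations
interoperate when they pass the same non-empty set of mandatory tests)
are assumptions for the sketch, not anything the list has agreed on.]

```python
# Hypothetical sketch of a common test-result format and collation step.
# Assumed result format, one line per test: "<test-name> <rank> <outcome>"
# where rank is "mandatory" or "desirable" and outcome is "pass" or "fail".

MANDATORY = "mandatory"

def parse_results(lines):
    """Parse result lines like 'test-framing mandatory pass' into a
    dict of {test_name: (rank, passed)}."""
    results = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, rank, outcome = line.split()
        results[name] = (rank, outcome == "pass")
    return results

def interoperate(a, b):
    """Assumed rule: two implementations interoperate if they pass the
    same non-empty set of mandatory tests (partially complete
    implementations can still interoperate with each other)."""
    mandatory_a = {t for t, (r, ok) in a.items() if r == MANDATORY and ok}
    mandatory_b = {t for t, (r, ok) in b.items() if r == MANDATORY and ok}
    return mandatory_a == mandatory_b and bool(mandatory_a)

def matrix(impls):
    """Collate per-implementation results into the pairwise grid that
    the web page's results table would be generated from."""
    names = sorted(impls)
    return {(x, y): interoperate(impls[x], impls[y])
            for x in names for y in names}

if __name__ == "__main__":
    beepcore = parse_results(["test-framing mandatory pass",
                              "test-xml-lang desirable fail"])
    other = parse_results(["test-framing mandatory pass",
                           "test-xml-lang desirable pass"])
    grid = matrix({"beepcore-c": beepcore, "other-lib": other})
    # Both pass the mandatory framing test, so they interoperate even
    # though only one supports the desirable xml:lang feature.
    print(grid[("beepcore-c", "other-lib")])  # -> True
```

An uber-test front end along these lines would only need each suite to
emit the common line format; the collation and matrix generation then
stay implementation-independent.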

Gabe Wachob                       gwachob@wachob.com
Personal                       http://www.wachob.com
Founder, WiredObjects    http://www.wiredobjects.com