Velocity Reviews - Computer Hardware Reviews

My Library TaskSpeed tests updated

 
 
David Mark
 
      02-22-2010
Garrett Smith wrote:
> Richard Cornford wrote:
>> Scott Sauyet wrote:
>>> On Feb 17, 11:04 am, Richard Cornford wrote:
>>>> On Feb 16, 8:57 pm, Scott Sauyet wrote:
>>>
>>>>> I think that testing the selector engine is part of testing
>>>>> the library.
>>>>
>>>> Obviously it is, if the 'library' has a selector engine, but that
>>>> is a separate activity from testing the library's ability to
>>>> carry out tasks as real world tasks don't necessitate any
>>>> selector engine.
>>>

>
> Couldn't agree more with that.
>
> A hand rolled QuerySelector is too much work. It is just not worth it.


I agree with that too, but you have to give the people what they want.
A record number of hits today (and it is only half over) confirms that.

>
> IE botches attributes so badly that trying to get the workarounds
> correct would end up contradicting a good number of libraries out there.


Who cares about contradicting them? They contradict each other already!
Though some have copied each other, making for a cross-library comedy
of errors.

http://www.cinsoft.net/slickspeed.html

>
> Many developers don't know the difference between attributes and
> properties.


Without naming names, John Resig.

> Many of the libraries, not just jq, have attribute selectors
> that behave *incorrectly* (checked, selected, etc).


Do they ever. Non-attribute queries too. They foul up everything they
touch. As the browsers have converged over the last five years or so,
the weenies have written maddeningly inconsistent query engines on top
of them, making it appear that cross-browser scripting is still hell on
earth (and, in a way, it is). They are defeating their own stated
purpose (to make cross-browser scripting fun and easy!)

>
> For an example of that, just try a page with an input. jQuery.com
> homepage will do fine:
> http://docs.jquery.com/Main_Page
>
> (function(){
>   var inp = jQuery('input[value]')[1];
>   inp.removeAttribute("value");
>   // The same input should not be matched at this point,
>   // because its value attribute was removed.
>   alert(jQuery('input[value]')[1] == inp);
> })();
>
> Results:
> IE: "true"
> FF: "false"
>
> As a bookmarklet:
> javascript:(function(){var inp =
> jQuery('input[value]')[1];inp.removeAttribute("value");alert(jQuery('input[value]')[1]
> == inp);})();


That road has been plowed:-

http://www.cinsoft.net/queries.html

This is the gold standard:-

http://www.cinsoft.net/attributes.html

...and virtually all of it has been manifested in My Library.
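The discrepancy in the quoted jQuery example comes down to attribute presence versus property reflection. A minimal sketch of a presence check that sidesteps IE's confusion (`hasAttr` is a hypothetical name; the older-IE branch relies on `getAttributeNode(...).specified`):

```javascript
// Hypothetical attribute-presence check. Modern browsers expose
// hasAttribute; older IE needs getAttributeNode plus the .specified
// flag, because it reflects properties back as attributes.
function hasAttr(el, name) {
  if (el.hasAttribute) {
    return el.hasAttribute(name);
  }
  var node = el.getAttributeNode && el.getAttributeNode(name);
  return !!(node && node.specified);
}
```

With such a check, removing the value attribute would make `hasAttr(inp, "value")` report false consistently, which is what the quoted test expects of `input[value]`.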

>
> By failing to make corrections for IE's attribute bugs, the query
> selector fails with wrong results in IE.


And _tons_ of others. If you don't constantly "upgrade" these things,
they go from some wrong answers to virtually all wrong answers. Try the
SlickSpeed tests in IE5.5 (still in use in Windows 2000, but that's
beside the point) or anything the developers either ignore or haven't
heard of (e.g. Blackberries, PS3, Opera 8, Opera 9, etc.)

>
> Workarounds for IE are possible, but again, the amount of benefit from
> being able to arbitrarily use `input[value]` and get a correct,
> consistent, specified (by the Selectors API draft) result, is not worth
> the effort in added code and complexity.


Pity none of them read that draft before dumping a QSA layer on to their
already inconsistent DOM (and occasionally XPath) layers. I didn't read
it either, but I had a feeling the browsers were using XPath behind the
scenes (and sure enough, the behavior confirmed that when I got around
to testing QSA). It's insane, as QSA is relatively new and has its own
cross-browser quirks.

>
> Instead, where needed, the program could use `input.value` or
> `input.defaultValue` to read values and other DOM methods to find the
> input, e.g. document.getElementById, document.getElementsByName, etc.


Yes. And JFTR, defaultValue is what reflects the attribute value. All
of the others are reading the value as they think it "makes more sense",
despite the fact that it doesn't match up with XPath or QSA (for obvious
reasons). That dog won't hunt in XML documents either, but they all go
to great lengths to "support" XML (with bizarre inferences to
discriminate XHR results for example). So it's a God-awful mess any way
you slice it.
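To make the attribute/property distinction concrete for inputs: `defaultValue` reflects the value attribute, while `value` reflects the current state. A minimal illustration (the plain object is a stand-in for a real input element, not part of any library discussed here):

```javascript
// defaultValue mirrors the value *attribute* from the markup;
// value mirrors the live *property*, which the user can change.
function readValueAttribute(input) {
  return input.defaultValue;
}

// Stand-in for an <input value="initial"> the user has typed into
// (illustration only, not a real DOM node):
var input = { defaultValue: "initial", value: "typed by user" };
```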

>
> [...]
>
>>>> Granted there are cases like the use of - addEventListener
>>>> - where positive verification becomes a lot more difficult,
>>>> but as it is the existing tests aren't actually verifying
>>>> that listeners were added.
>>>
>>> Are there any good techniques you know of that would make it
>>> straightforward to actually test this from within the
>>> browser's script engine? It would be great to be able
>>> to test this.

>>

>
> The only way of testing if an event will fire is to subscribe to it and
> then fire the event.


Depends on the context. My Library can detect supported events using a
technique that was first reported here years ago. Sure, users can
disable the effectiveness of some events (e.g. contextmenu). But unless
you design an app that lives or dies with context clicks, it doesn't matter.
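The detection technique alluded to here is presumably along these lines: check for the corresponding `on*` property on a freshly created element, with a `setAttribute` fallback for older IE. A sketch, with `createEl` as a stand-in (an assumption) for `document.createElement` so the logic is visible outside a browser:

```javascript
// Sketch of event-support detection via the "on*" property check.
function isEventSupported(eventName, createEl) {
  var el = createEl("div");
  var prop = "on" + eventName;
  var supported = prop in el;
  if (!supported && el.setAttribute) {
    // Older IE only exposes the property once the attribute is set.
    el.setAttribute(prop, "return;");
    supported = typeof el[prop] === "function";
  }
  return supported;
}
```

Note this answers "does the browser know this event at all?", not "will my particular handler fire?", which is the distinction being drawn in the thread.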

>
> Programmatically dispatching the event as a feature test is inherently
> flawed because the important questions cannot be answered from the
> outcome; e.g. will the div's onfocusin fire when the input is focused?


Right, it's much worse than my method.

>
>> I don't know of anything simple. I think that the designers of -
>> addEventListener - fell down badly here. It would have been so easy
>> for that method to return a boolean; true for success; then if you
>> attempted to add a non-supported listener it could return false from
>> which you would know that your listener was going to be ineffective.
>> Still, that is what happens when the people designing the specs have
>> negligible practical experience in the field.
>>

>
> That design requires a definition for "non-supported listener."


Well, that shouldn't take more than a sentence.
 
Scott Sauyet
 
      02-23-2010
Richard Cornford wrote:
> Scott Sauyet wrote:
>>On Feb 17, 11:04 am, Richard Cornford wrote:
>>> On Feb 16, 8:57 pm, Scott Sauyet wrote:

>
>>>> I think that testing the selector engine is part of testing
>>>> the library.

>
>>> Obviously it is, if the 'library' has a selector engine, but that
>>> is a separate activity from testing the library's ability to
>>> carry out tasks as real world tasks don't necessitate any
>>> selector engine.

>
>> Perhaps it's only because the test framework was built testing against
>> libraries that had both DOM manipulation and selector engines,

>
> Haven't we already agreed that the test framework was adapted directly
> from one that was designed to test selector engines, and so must have
> been for libraries with selector engines?


The test framework was so adapted. I'm not sure why that implies that
the libraries to be tested have to be the same ones that were being
tested for selector speed. In actual fact, of course, all the
libraries that I've seen tested with this do have selector engines as
well as DOM manipulation tools.

>> but these seem a natural fit.

>
> ?


DOM Manipulation tools and selector engines. Obviously you can run
the former against the results of the latter, but more generally, when
you need to manipulate the DOM, you need some way to select the
nodes on which you work. You could use some of the host collections,
getElementById, getElementsByTagName, or some manual walking of the
DOM tree starting with the document node, but somehow you need to do
this. If you try to provide any generic tools to do this, you might
well start down a path that leads to CSS Selector engines. Of course
this is not a requirement, but it's a real possibility; to me it seems
a good fit.


>> I don't believe this was meant to be a DOM manipulation test
>> in particular.

>
> The name "taskspeed" implies pretty much that; that we are testing
> actual tasks.


Do you not think that attaching event listeners or selecting elements
can count as tasks?



>> My understanding (and I was not involved in any of the
>> original design, so take this with a grain of salt) is
>> that this was meant to be a more general test of how the
>> libraries were used, which involved DOM manipulation and
>> selector- based querying.

>
> Some of the libraries do their DOM manipulation via selector based
> querying. For them there is no alternative. But DOM manipulation tasks
> do not necessitate the use of selector engines (else DOM manipulation
> did not happen prior to about 2006). It is disingenuous to predicate DOM
> manipulation tasks on selector engine use. It makes much more sense to
> see how each library competes doing realistic tasks in whatever way best
> suits them, be it with selector engines or not.


I believe that was adequately answered when I pointed out that the
specification does not actually require a selector engine.


>> If it seemed at all feasible, the framework would
>> probably have included event handler manipulation tests,
>> as well.

>
> So all those - addEventListener - and - attachEvent - calls are not
> event handler manipulations then?


Just barely, I would contend. Nothing in the test verifies that the
event handler actually is attached.



>> If the libraries had all offered classical OO infrastructures
>> the way MooTools and Prototype do, that would probably also
>> be tested.

>
> Then that would have been another mistake as "offering classical OO" is
> not necessary for any real world tasks either.


I choose not to use these classical OO simulators, and I don't feel
I'm missing anything, although I'm mostly a Java programmer who is
perfectly comfortable in the classical OO world. The point is that
the tests were written around the libraries.


>> Why the scare quotes around "library"? Is there a better
>> term -- "toolkit"? -- that describes the systems being tested?

>
> The "scare quotes" are an expression of 'so called'. "Library" is a
> perfectly acceptable name for these things, but tends to get used in
> these contexts with connotations that exclude many of the things that
> could also reasonably be considered to be libraries.


Yes, there are many other ways to organize code, but I think these
tests were designed for the general-purpose libraries. Competent JS
folks could also write a "library" that passed the tests and did so
efficiently, perhaps even supplying a useful API in the process, but
which does not try to handle the more general problems that the
libraries being tested do. That might be an interesting exercise, but
is not relevant to these tests.


> [ .. Interesting discussion on libraries and code reuse deleted
> as I have nothing to add ... ]
>


>>> (Remember that common hardware and browser performance was
>>> not sufficient for any sort of selector engine even to look
>>> like a viable idea before about the middle of 2005, but
>>> (even quite extreme) DOM manipulation was long established
>>> by that time.)

>
>> Really? Very interesting. I didn't realize that it was a
>> system performance issue. I just thought it was a new way
>> of doing things that people started trying around then.

>
> The thing with trying something before it is viable is that when you
> find out that it is not viable you are not then going to waste time
> promoting the idea.


I don't know how much more viable it was then. I remember writing my
own API in, I think, 2004 that worked something like this:

var a = Finder.byTag(document, "div"),
    b = Finder.filterByClass(a, "navigation"),
    c = Finder.byTag(b, "a"),
    d = Finder.byTagAndClass(document, "li", "special"),
    e = Finder.byTag(d, "a"),
    f = Finder.subtract(c, e);

It was plenty fast enough for my own use, but it was rather verbose to
use. I wish I had thought of how much cleaner this API would have
been:

var f = selector("div.navigation a:not(li.special a)");

Perhaps one general-purpose enough to handle all the possible CSS
selectors would not have been viable then, but I think for what I was
using it for, the tools were already in place.
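Two of the Finder-style helpers described above might have looked something like this (the names follow the post; the bodies are assumptions, written over plain arrays of element-like objects so no DOM is required to see how the API composes):

```javascript
// Assumed reconstructions of two Finder-style helpers.
var Finder = {
  // Keep only nodes whose className contains the given class token.
  filterByClass: function (nodes, className) {
    var result = [];
    for (var i = 0; i < nodes.length; i++) {
      var cls = " " + (nodes[i].className || "") + " ";
      if (cls.indexOf(" " + className + " ") > -1) {
        result.push(nodes[i]);
      }
    }
    return result;
  },
  // Set difference: every node in a that does not appear in b.
  subtract: function (a, b) {
    var result = [];
    for (var i = 0; i < a.length; i++) {
      var found = false;
      for (var j = 0; j < b.length; j++) {
        if (a[i] === b[j]) { found = true; break; }
      }
      if (!found) result.push(a[i]);
    }
    return result;
  }
};
```

Chained calls like those in the post then compose these helpers by hand, which is exactly the verbosity a selector string would later compress.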


> Recall, though, that in the early days of JQuery a great deal of work
> went into making its selector engine faster. That would not have been
> necessary if it had not been on the very edge of being viable at the
> time.


I remember jQuery doing that once the SlickSpeed tests were released.
Did it happen earlier too?


>>> The 'pure DOM' tests, as a baseline for comparison, don't
>>> necessarily need a selector engine to perform any given
>>> task (beyond the fact that the tasks themselves have been
>>> designed around a notion of 'selectors'). So making selector
>>> engine testing part of the 'task' tests acts to impose
>>> arbitrary restrictions on the possible code used,

>
>> Absolutely. A pure selector engine would also not be testable,

>
> Why not? Given a selector and a document all you have to do is verify
> that the correct number of nodes were created and that they all are the
> expected nodes.


Okay, perhaps "would not be easily testable". Maybe this sounds
simpler to you than it does to me, but especially if there is no one
fixed test document, this sounds to me to be much the same as writing
a general-purpose selector engine.


>> nor would a drag-and-drop toolkit.

>
> Automatically testing a system that relies on human interaction is
> inherently problematic.
>
>> We are restricted to systems that can manipulate the DOM
>> and find the size of certain collections of elements.

>
> But why find the size of collections of elements? That is not a task
> that is common in browser scripting tasks. Even if you need to iterate
> over some collection of elements with something like a - for - loop you
> don't care how large that collection is, only that you can read whatever
> size it is in order to constrain the loop.


Absolutely it would be best if the test infrastructure independently
verified the results. I'm still not convinced that it would be an
easy task without either writing a general-purpose selector engine, or
restricting the test documents to a fairly simple set.


>>> biases the results,

>
>> In what way?

>
> It forces the 'pure DOM' code to do things that are not necessary for
> real-world tasks, thus constraining the potential implementations of the
> tests to code that is needlessly inefficient in how it addresses the
> tasks. Thus the 'libraries' never get compared with against what real
> DOM scripting is capable of, in which case why bother with making the
> comparison at all?


Are you saying that it unnecessarily restricts the set of libraries
that can be tested or that the time spent in the selectors used to
feed back the "results" to the test infrastructure would significantly
skew the timing?


>>> and ultimately negates the significance of the entire
>>> exercise.

>
>> I just don't see it. There is clearly much room for
>> improvement, but I think the tests as they stand have
>> significant value.

>
> In making comparisons between the libraries, at doing selector engine
> based tasks (that is, forcing everyone else to play the game JQuery's
> way) they may have some value. But there is no real comparison against a
> 'baseline' unless the baseline is free to do whatever needs doing by any
> means available and where the tasks being performed are realistically
> related to the sorts of things that actually need doing, as opposed to
> being tied up with arbitrary element counting.


So if the infrastructure was expanded to somehow verify the results
rather than ask for a count back, would this solve the majority of the
problems you see?


>>>> Although this is not the same as the SlickSpeed
>>>> selectors test,

>
>>> Comparing the selector engines in libraries that have selector
>>> engines seems like a fairly reasonable thing to do. Suggesting
>>> that a selector engine is an inevitable prerequisite for
>>> carrying out DOM manipulation tasks is self evident BS.

>
>> Note that these results don't require that the library actually
>> use a CSS-style selector engine, only that it can for instance
>> find the number of elements of a certain type, the set of which
>> is often most easily described via a CSS selector.

>
> So why is the element retrieval for the 'pure DOM' code done with a
> simplified selector engine that receives CSS selector strings as its
> argument?


I would assume that it's because the implementor found it easiest to
do so this way. Note that he's commented out the QSA code, but it was
probably an artifact of his testing with QSA, in which case it's
easier to have a function that responds to the same input as the
native QSA. Surely he could have written something like this
(untested, and changed only minimally) instead:

getSimple: document.createElement("p").querySelectorAll && false ?
    function(tag, className){
        return this.querySelectorAll((tag || "*") +
            (className ? "." + className : ""));
    } :
    function(tag, className){
        for(var
            result = [],
            list = this.getElementsByTagName(tag || "*"),
            length = list.length,
            i = 0,
            j = 0,
            node;
            i < length; ++i
        ){
            node = list[i];
            if(className &&
                node.className &&
                node.className.indexOf(className) > -1)
                result[j++] = node
            ;
        }
        return result;
    }

then used code like this:

return utility.getSimple.call(body, "ul", "fromcode").length;

instead of this:

return utility.getSimple.call(body, "ul.fromcode").length;

Because that's all this trivial selector engine does, "tag.class".



>> When the "table" function is defined to return "the length of
>> the query 'tr td'," we can interpret that as counting the results
>> of running the selector "tr td" in the context of the document
>> if we have a selector engine, but as "the number of distinct TD
>> elements in the document which descend from TR
>> elements" if not.

>
> We can also observe that in formally valid HTML TD elements are required
> to descend from TR elements and so that the set we are after is actually
> nothing more than all the TD elements in the document, and so wonder why
> the code used in the 'pure DOM' is:-
>
>| tr = body.getElementsByTagName("tr");
>| i = tr.length;
>| for(var total = 0; i;)
>| total += tr[--i].getElementsByTagName("td").length
>| ;
>| return total;
>
> (code that will produce a seriously faulty result if there were nested
> tables in the document as some TD would end up being counted twice.)
>
> - instead of just:-
>
> return body.getElementsByTagName("td").length;


Although we have a test document at hand, and the BODY would be part
of a formally valid document if properly paired with a valid HEAD, I
don't think we would want to assume that this is the only document to
be tested. Or should our test infrastructure require a formally valid
document? I've worked in environments where parts of the document are
out of my control and not valid; I'd like my tools to be able to run
in such an environment.


> - or better yet, counting the number of TDs in the document before
> adding another 80 (so when the document is smaller and so faster to
> search) and then returning that number plus 80 for the number of TDs
> added gets the job done. I.E.:-
>
> ...
> var total = body.getElementsByTagName("td").length;
> ... //loop in which elements are added
> return total + 80;
>
> And then, when you start down that path, you know the document and so
> you know it started with 168 TDs and so adding 80 results in 248, so
> just return that number instead. It is completely reasonable for DOM
> scripts to be written for the context in which they are used, and so for
> them to employ information about that context which is gathered at the
> point of writing the code.


This would count as cheating in my book.


> This comes down to a question of verification; is this code here to
> verify the document structure after the modifications, or to announce
> how many TDs there are in the DOM? If it is for verification then that
> should not be being done inside the test function, and it should not be
> being done differently for each 'library', because where it is done
> impacts on the performance that is supposed to be the subject of the
> tests, and how it is done impacts on its reliability.


While I agree in the abstract, I'm not willing to write that sort of
verification, unless it was agreed to restrict the framework to a
single, well-known test document. As I've argued earlier, I think
that such an agreement could lead to serious cheating.


>> Being able to find such elements has been an
>> important part of most of the DOM manipulation
>> I've done.

>
> Finding elements is an important aspect of DOM scripting, but how often
> do you actually care about how many you have found (at least beyond the
> question of were any found at all)?


The counting done here is just a poor-man's attempt at verification.


> [ ... ] For the TD example above, all the
> verification code has to do is get a count of the number of TDs in the
> DOM before it is modified, run the test, and then count the number
> afterward in order to verify that 80 were added. Even that is more than
> the current 'verification' attempts. To mirror the current set-up all
> you would have to do is have some external code count some collection of
> elements from the document's DOM after the test has been timed. A simple
> selector engine (which is passed the document to search as an argument)
> would be fine for that, each DOM would be subject to the same test code
> and its performance would not matter as it would not be part of the
> operation being timed.


But this doesn't verify that each of the newly added TDs has content
"first" or that these new ones were added at the beginning of the TR,
both requirements listed in the spec.
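The before/after counting described in the quoted passage is easy to sketch; as noted, though, it verifies only a count, not content or position. `countFn` and `timedTask` are hypothetical stand-ins (in a browser, `countFn` might wrap `body.getElementsByTagName("td").length`):

```javascript
// Sketch of the before/after counting scheme: count outside the timed
// region, run the task, count again, check the delta.
function verifyAddsN(countFn, timedTask, n) {
  var before = countFn();
  timedTask(); // only this call would be timed by the harness
  var after = countFn();
  return after - before === n;
}
```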

>
>> Another file could easily be substituted, and it might
>> well be worthwhile doing. Adding this sort of analysis
>> would make it much more time-consuming to test against
>> a different document.

>
> Why does how long the test takes to run matter? Is this a short
> attention span thing; worried that people will get bored waiting? That
> isn't a good enough reason to compromise a test system.


Sorry, I misspoke. It's not the time to actually run the test that
I'm worried about, but the time to do the analysis of the document in
order to write the code to verify the results.

> [ ... ]
>>>> Make all the libraries report their results, and note
>>>> if there is any disagreement.

>
>>> But reporting result is not part of any genuinely
>>> representative task, and so it should not be timed along
>>> with any given task. The task itself should be timed in
>>> isolation, and any verification employed separately. [ ... ]

>
>> I think this critique is valid only if you assume that the
>> infrastructure is designed only to test DOM Manipulation. I
>> don't buy that assumption.

>
> The infrastructure should be designed only to test DOM manipulation.


I don't see why. I believe this is the crux of our disagreement.
There are many tasks for which I use Javascript in a browser:
selecting and manipulating elements, performing calculations,
verifying form data, making server requests, loading documents into
frames, keeping track of timers. Why should the test framework test
only DOM manipulation?


> [ ... ]




>> In another thread [1], I discuss an updated version of
>> slickspeed, which counts repeated tests over a 250ms span
>> to more accurately time the selectors.

>
> Way too short. If a browser's clock is +/-56 milliseconds that is more
> than 20% of your total timing. Even if it is the observably common +/-16
> milliseconds then that is 5%. I would want to see this sort of testing
> loop pushed up to over 2 seconds.


Perhaps. It's not available as a parameter to set, but the 250
milliseconds is in only one location in the script. In my testing,
there were inconsistent results if I tried below 100 ms. But by 150,
it was quite consistent. I went up to 250 just to add some margin of
safety. When I've tried with times as high as 10 seconds, I have not
had substantially different results in any browser I've tested (a
relatively limited set, mind you, not going back before IE6, and only
quite recent versions of most of the other modern popular browsers.)
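The repeat-until-elapsed approach under discussion can be sketched as follows (the 250 ms figure is from the post; the clock and function names are assumptions, not the actual slickspeed code):

```javascript
// Sketch of the repeat-until-elapsed timing loop: run the test function
// until at least minDuration milliseconds have passed, then report the
// average time per call.
function timePerCall(fn, minDuration) {
  var start = new Date().getTime();
  var iterations = 0;
  var elapsed = 0;
  do {
    fn();
    iterations++;
    elapsed = new Date().getTime() - start;
  } while (elapsed < minDuration);
  return elapsed / iterations; // average milliseconds per call
}
```

Richard's suggestion of a 2-second-plus window amounts to passing a larger `minDuration`, which shrinks the share of any +/-16 ms clock granularity in the total.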



>>> but for task testing randomly generating the document acted
>>> upon would be totally the wrong approach. If you did that you
>>> would bias against the baseline pure DOM tests as then they
>>> would have to handle issues arising from the general case,
>>> which are not issues inherent in DOM scripting because
>>> websites are not randomly generated.

>
>> I was not expecting entirely random documents. Instead, I
>> would expect to generate one in which the supplied tests
>> generally have meaningful results. So for this test

>
>> "attr" : function(){
>> // find all ul elements in the page.
>> // generate an array of their id's
>> // return the length of that array
>> },

>
> That is not a hugely realistic test in itself. What exactly would anyone
> do with an array of element IDs? If you were going to use them to look
> up the elements in the DOM why not collect the array of elements and
> save yourself the trouble later?


Actually, I use ids a fair deal to relate different parts of the DOM
together through events. Granted I don't often use them in an array,
but it's easy enough to imagine a realistic case for it:

// myArray contains ["section1", "section2", "section3"]
for (var i = 0, len = myArray.length; i < len; i++) {
  // IIFE gives each click handler its own elt (var is function-scoped,
  // so without it every handler would see the last tab).
  (function (elt) {
    var links = API.getByClass("linkTo-" + myArray[i]);
    for (var j = 0, len2 = links.length; j < len2; j++) {
      API.register("click", links[j], function (evt) {
        API.showTab(elt);
        return false;
      });
    }
    API.createTab(elt);
  })(API.getById(myArray[i]));
}

where API.getById, API.getByClass, API.register, API.createTab, and
API.showTab are defined as you might expect, and links that I want to
open up a new tab have the class "linkTo-" followed by the id of the
element I want in the tab.


>> I might want to randomly determine the level of nesting at
>> which ULs appear, randomly determine how many are included
>> in the document, and perhaps randomly choose whether some
>> of them do not actually have ids. There would probably be
>> some small chance that there were no ULs at all.

>
> Judging whether that is a realistic variance to impose on the document
> would depend on why you needed this information in the first place.


It's a test infrastructure. If we try to tie it too closely to
particular real-world examples, I'd be afraid of limiting its
flexibility. If we can determine that there really are no real-world
uses of something under test, then we should remove that test. But if
there is at least reason to imagine that the technique could be
usable, then there is no reason to discard it.


> It is realistic to propose that in real-world web pages a server side
> script may be generating something like a tree structure made up of
> nested ULs and that some of its nodes would have IDs where others would
> not. But now, given server side scripting, we have the possibility of
> the server knowing the list of IDs and directly writing it into the
> document somewhere so that it did not need looking up from the DOM with
> client-side scripts, and if the reason for collecting the IDs was to
> send them back to the server we might also wonder whether the server
> could not keep the information in its session and never necessitate
> client-side script doing anything.


Of course we could. But I often do things client-side to offload some
of the processing and storage that would otherwise have to be done
server-side.


>> [ ... ] I definitely wouldn't try to build entirely random
>> documents, only documents for which the results of the tests
>> should be meaningful. The reason I said I probably wouldn't
>> do this is that, while it is by no means impossible, it is also
>> a far from trivial exercise.

>
> There is certainly much room for improvement in these testing frameworks
> before moving to that point.


Is this something you would be willing to help implement? Your
critique here is very valuable, but some specific code suggestions
would be even more helpful.


>>> In contrast, it is an inherent problem in general purpose
>>> library code that they must address (or attempt to address)
>>> all the issues that occur in a wide range of context (at
>>> minimum, all the common contexts). There are inevitably
>>> overheads in doing this, with those overheads increasing
>>> as the number of contexts accommodated increases.

>
>> Yes, this is true. But it is precisely these general purpose
>> libraries that are under comparison in these tests.

>
> Not exclusively if you want to compare them with a 'pure DOM' baseline.
>
>> Being able to compare their performance and the code each
>> one uses are the only reason these tests exist.

>
> So the reason for having a 'pure DOM' baseline is to be able to compare
> their performance/code with what could be achieved without the overheads
> imposed by the need to be general.


Yes, and ideally also to have an implementation so transparent that
there is no doubt that its results are correct. I don't think this
implementation reaches that standard, but that should be another goal.


>> [ ... ] I see a fair bit of what could
>> reasonably be considered optimising for the test, and
>> I only really looked at jQuery's, YUI's, and My Library's
>> test code. I wouldn't be surprised to find more in
>> the others.

>
> But that is in the implementation for the test functions, not the
> libraries themselves. I don't see any reason why the test functions
> cannot be written to exploit the strengths of the individual libraries.


Well, some of the issues were with caching outside the test loop.
This would clearly be mitigated if the framework ran the loops instead
of the test code, but those clearly optimize in a manner counter to
the spirit of the tests. Similarly, there are tests that don't
attach event listeners to the particular items in question but just to
a single parent node. This definitely violates the guidelines.


> [ ... ]
> The biggest point for improvement would be in the specification for the
> tasks. They should be more realistic, more specific (and more accurate),
> and probably agreed by all interested parties (as realistic and
> implementable) and then given to the representatives of each
> library/version to implement as best they can. That way nobody can be
> cheated and the results would be the best representation possible of
> what can be achieved by each library.


And I think there's another factor which may be hard to integrate.
The tests are not designed only to show what's achievable for
performance but to show how the library works when it's used as it's
designed to be used. If you have a wrapper for getElementById that
you expect users to use all the time, it's not right to have test code
which bypasses it to gain speed. It's hard to enforce such a rule,
but it still should be stated explicitly so that those who don't comply
can be called out for it.


> Granted, the re-working of the specs for the tasks would not be easy, as
> it would have to address questions such as whether saying an event
> listener should be attached to all of a particular set of elements left
> room for attaching to a common ancestor and delegating the event
> handling. This is one of the reasons that specifying realistic tasks
> should be the approach taken as for a realistic task the question is
> whether delegation is viable, and if it is, and a library can do it,
> there is no reason why it should not.


Again, is this something that you personally would be willing to help
code?

Thank you for your incisive posts on this subject.

-- Scott
 