Matt Kruse wrote:
> (literally - and no, I didn't design them) which executed in 2 seconds
> takes such little time that it becomes completely negligible.
This assumes you have a fast computer. I suppose it all depends on your
audience, but the average computer user these days does not have
anything better than a 300 MHz PC with 64 MB of memory. Many people
have much slower machines yet.
> I'm not in the crowd of people who optimizes code so that, when executed in
> a loop 10,000 times, it performs 1 second faster. I think that kind of
> optimization is a huge waste of time, and more of a "fun" exercise in
programming rather than a practical one.
Hmmm... maybe you should. By always thinking about efficiency,
optimization is rarely necessary. Of course, 1 second faster seems
silly, but what if it is a 1 second improvement from something that takes
1.1 seconds? If you are improving from 2 seconds to 1 second, the
improvement is probably not good enough.
The "fun" exercises you are talking about change the speed by orders of
magnitude (not just linear changes). O(n log n) is MUCH faster than
O(n^2), for instance. In the example of 1.1 seconds versus 0.1 seconds
for 10,000 items, what happens if you increase to 20,000 items? You are
likely looking at 2.2 vs 0.2 seconds. We are not talking about being 1
second slower, we are talking about taking eleven times as long! And
what about the memory considerations? Memory optimizations are often as
important as speed. What happens if you are using up all of your
system's memory?
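To make the order-of-growth point concrete, here is a sketch (my own
illustration, not code from the thread, and the function names are mine):
detecting duplicates in an array by comparing every pair is O(n^2), while
one pass using an object as a lookup table is O(n). At 10,000 items that
difference dwarfs any constant-factor tweak.

```javascript
// O(n^2): compare every pair -- fine for tiny arrays, hopeless at 10,000 items
function hasDuplicatesQuadratic(items) {
  for (var i = 0; i < items.length; i++) {
    for (var j = i + 1; j < items.length; j++) {
      if (items[i] === items[j]) return true;
    }
  }
  return false;
}

// O(n): one pass, with an object used as a lookup table
function hasDuplicatesLinear(items) {
  var seen = {};
  for (var i = 0; i < items.length; i++) {
    var key = "k" + items[i]; // prefix avoids clashes with Object.prototype names
    if (seen[key]) return true;
    seen[key] = true;
  }
  return false;
}
```

Both return the same answers; only the growth rate differs as n climbs.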
If you really think that speed optimizations are a huge waste of time,
then you have never written an application of any substance. I would
recommend _ALWAYS_ thinking about the correct algorithm to use for a
given situation. It will become so "second nature" that it will not
take any extra time to implement it. The result: efficiency every time.
I am not talking about theory here... I am not talking about "fun"
exercises. I am talking about real, out-the-door products used by real
customers. Efficiency always matters.
Ok, I will get off my high horse... It is just that I get sick of having
to find efficiency problems in code after the fact (from me, or from
other [developers]).
Have a good day,
"Brian Genisio" <(E-Mail Removed)> wrote:
> This assumes you have a fast computer. I suppose it all depends on your
> audience, but the average computer user these days does not have
> anything better than a 300 MHz PC with 64 MB of memory. Many people
> have much slower machines yet.
Even still, a difference of 5k in library size is negligible. I have some
200 MHz machines here I can test with; I should run some comparisons,
just to see for sure what the difference is.
> By always thinking about efficiency,
> optimization is rarely necessary.
I agree, and I do consider efficiency (caching objects and references to
objects, etc). My point is, when a script runs fine and there are no
complaints, spending an hour to squeeze out .0001s better speed is not
time [well spent].
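For what it's worth, the kind of reference caching being described looks
something like this (a generic sketch of the technique; the names are
mine, not from anyone's library):

```javascript
// Uncached: every iteration repeats the same property lookups
function sumUncached(data) {
  var total = 0;
  for (var i = 0; i < data.values.length; i++) {
    total += data.values[i];
  }
  return total;
}

// Cached: hoist the array reference and its length out of the loop,
// so each iteration does only the work that actually varies
function sumCached(data) {
  var values = data.values; // cache the object reference once
  var total = 0;
  for (var i = 0, n = values.length; i < n; i++) {
    total += values[i];
  }
  return total;
}
```

Same result either way; the cached form just avoids re-walking the
property chain on every pass.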
> Of course, 1 second faster seems
> silly, but what if it is a 1 second improvement from something that takes
> 1.1 seconds?
Over 10,000 iterations?
That's a speed increase for a single iteration (typical) from .00011
seconds to .00001 seconds, which is completely unrecognizable. If the
code is actually going to be executed 10,000 times, then that's a
different story (and probably a design flaw).
I doubt anything is going to actually be executed 10,000 times in
[typical client-side scripting, anyway].
The speed tweaks I see usually have to run thousands of times in a loop
before the difference in execution time is even measurable. If it's only
going to run once or twice, and you need to run it 10,000 times to see a
speed increase, then it's not a practical exercise.
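A minimal timing harness of the sort implied here (my sketch, using the
Date object available in any browser of the era; `timeIt` is my name for
it) makes the point: a single call is usually too fast to measure at all.

```javascript
// Run a function repeatedly and report total elapsed milliseconds.
// One call is normally below the clock's resolution, which is exactly
// why 10,000 iterations are needed before any difference shows up.
function timeIt(fn, iterations) {
  var start = new Date().getTime();
  for (var i = 0; i < iterations; i++) {
    fn();
  }
  return new Date().getTime() - start;
}

var noop = function () {};
var elapsed = timeIt(noop, 10000);
```

If `elapsed` is a handful of milliseconds over 10,000 calls, a single
call is in the microsecond range, far below anything a user could notice.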
> If you really think that speed optimizations are a huge waste of time,
> then you have never written an application of any substance.
I never said the former, and I certainly have done the latter!
My belief is that speed optimizations are not the best use of a developer's
time if the speed increase is so small as to not even be noticed, and if
a block of code needs to be executed 10,000 times in order to notice a 1
second speed increase, then the time spent optimizing could be better spent
somewhere else.
> Ok, I will get off my high horse... It is just that I get sick of having
> to find efficiency problems in code after the fact (from me, or from
> other [developers]).
I'd much rather deal with slightly inefficient code than code that isn't
commented and is poorly designed to begin with. I'll gladly sacrifice .1
seconds of speed in exchange for code clarity.
>>If you really think that speed optimizations are a huge waste of time,
>>then you have never written an application of any substance.
> I never said the former, and I certainly have done the latter!
> My belief is that speed optimizations are not the best use of a developer's
> time if the speed increase is so small so as to not even be noticed, and if
> a block of code needs to be executed 10,000 times in order to notice a 1
> second speed increase, then the time spent optimizing could be better spent
> somewhere else.
Ok, I thought you were talking globally, as opposed to locally (to
[JavaScript]).
Of course, in other languages, this type of thing happens all the time.
For instance, the web browser will do this often. Imagine a page with
three frames (4 DOM models in all), and each page is somewhat complex.
Since all attributes are nodes, the node tree can easily make it to
10,000 items. The tree algorithms must be efficient in this matter.
Database searches... could be billions of times.
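As a rough sketch of why tree size matters (using plain objects shaped
like DOM nodes so it runs anywhere; this is my illustration, not actual
browser internals): even just counting the nodes has to visit every one
of those 10,000 items.

```javascript
// Count every node in a tree by walking childNodes recursively.
// Works on any object exposing a childNodes array (the DOM's shape).
// The work is O(n) in the node count -- anything the browser layers
// on top of a walk like this only gets more expensive.
function countNodes(node) {
  var count = 1;
  var children = node.childNodes || [];
  for (var i = 0; i < children.length; i++) {
    count += countNodes(children[i]);
  }
  return count;
}
```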
Also, when I say 10,000 items, that doesn't mean the code is run 10,000
times. It means a function being executed on 10,000 items. Though, when
you run an algorithm on 10,000 items, it is rare to have a worst case
that touches fewer than all 10,000 of them. (There are not many
constant-time, O(1), algorithms on n items out there.)
In my original post, I was saying that if you make something faster,
from 1.1 seconds to 1.0 seconds, this is not a meaningful speed
increase. You are likely only changing a constant in the Big O
characterization. If,
instead, you can bring something from 1.1 seconds to 0.1 seconds... now
we're talking. This is likely orders of magnitude faster. More than
just a constant change.
"Dr John Stockton" <(E-Mail Removed)> wrote:
> Those on dial-up or radio links will prefer not to receive an
> unnecessary 5K of code.
They may prefer not to receive an unnecessary 5k of site-wide CSS rules,
too, but do you recommend against using global CSS files? Would you rather
have a separate CSS file for every page on your site, containing only the
definitions required by that page?
Hell, there may be HTML pages with 1k of whitespace! Should everyone start
compressing all of their HTML source so as to not send unnecessary
[bytes]?
It's kind of a ridiculous argument you're trying to make, isn't it?
Matt Kruse wrote:
>> Programming and not programming are mutually exclusive. To program
>> you must understand, for example, boolean logic. These aren't cruel
>> impositions intended to keep the uninitiated from scripting web
>> browsers; they are just the obvious fundamental requirements for the
>> activity. [Such components cannot be] used without writing at least
>> some additional code to control them.
> You don't have to know how a car works to drive one, do you?
You cannot drive a car and know nothing about how it works, such as the
fact that it consumes fuel, oil, water, etc, as it operates. Where
understanding how they actually work becomes most valuable is when they
stop working properly.
> You don't have to understand the fundamentals of electronics or
> operating systems to use a computer, do you?
I have met one or two people who use computers without any understanding
of how they work. They seem to operate on the basis of inventing their
own superstitions about what the computer is doing; it isn't an approach
that allows them to be very productive.
> I do not think that it's unreasonable to be a web [developer without]
> understanding everything needed to make it work.
A web developer should understand the issues surrounding the use of
scripts, but if they want to write scripts themselves then they are
trying to be a programmer and should expect to have to acquire suitable
understanding.
> [If a library can] sufficiently hide enough from you, then you just
> need to deal with an interface, not with the implementation.
That would depend a lot on the interface. It is certainly possible for
an HTML author to create a suitable HTML structure, give it an ID and
leave everything else to a script, including the degradation (as that
would be just not acting, leaving the HTML unmodified in the page). Such
an approach would suit an author writing scripts to be used by HTML and
server script writing colleagues who needed to do as little work as
possible to deploy them.
On the other hand, a library providing an interface as an API (or
structures) needs considerably more understanding to be usefully
[employed].
> appropriate to use it, and how to degrade gracefully in case users
> don't have it enabled - but NOT understand it enough to implement a
> popup div which is positioned correctly in all browsers (even old
> ones) and interacts with the user.
How would it be possible for a developer to not know enough to be able
to position a DIV and also know enough to be able to respond usefully
when a browser was not going to be able to position a DIV?
> There's no reason those details
> can't be hidden from the person implementing the library.
They could be, a script can be inherently cleanly degrading (by
manipulating structures defined in the HTML and not acting on browsers
that cannot support it), and a library could be written to flag its
inability to act usefully (or be queried on the subject), though that
still leaves the person employing such a library with the problem of
doing something useful in response.
> For example, if a user wants to have an expandable tree structure,
> they can use mine and give their <ul> structure a certain class, and
> instantly have a tree implemented. They don't need to know how it
> solves their problem in an elegant and robust way.
Because that script is based on CSS and manipulating HTML defined
structures it is relatively robust. It is the type of script that is
easy to employ without much understanding of its mechanism. It is also
the type of script that is easy to cleanly degrade, because the list is
in the HTML and the script could detect browser support for the required
features and not act whenever they are not available. It does not [do
so], however: I would not describe your implementation as cleanly degrading
because its response to some unsupporting environment may be to error
out (lacking much in the way of feature detecting), fortunately before
it has done anything to the HTML it is acting on so the page would
remain usable. The worst it will do is show the user an error message
(generally not considered a good thing in itself).
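A cleanly degrading initializer of the sort described might look like
this sketch (function and class names are mine, and the class-name check
is simplified): it feature-detects before touching anything, and on an
unsupporting browser it simply leaves the plain nested list usable.

```javascript
// Enhance ULs carrying a given class, but only when the environment
// supports what we need; otherwise do nothing at all, leaving the
// HTML unmodified (that is the clean degradation).
function initTrees(doc, className) {
  // Feature-detect before acting
  if (!doc || !doc.getElementsByTagName) return false;
  var lists = doc.getElementsByTagName("ul");
  var enhanced = 0;
  for (var i = 0; i < lists.length; i++) {
    if (lists[i].className === className) { // simplified class test
      // ... expand/collapse handlers would be attached here ...
      lists[i].className += " tree-enabled";
      enhanced++;
    }
  }
  return enhanced > 0;
}
```

On a browser without `getElementsByTagName` the function returns false
and the page is exactly as the HTML author wrote it.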
> Do you make these functions available anywhere?
> ... . But, if there was a single function which
> gave the position of an element, for example, and it worked in every
> browser that could possibly be tested, that would be a very valuable
> thing to share.
Some browsers do not make any element positioning information available
(except maybe the old Netscape 4 info for A, IMG and layers), so a good
element position interface would also have to be able to signal its
inability to provide useful information.
But you still would not want a general method because a general method
might have to take into account possibilities such as an element being
contained within a scrolling DIV that was scrolled to some extent at the
time. That is a lot of extra work in doing the calculations, but would
not apply in most situations. A range of methods would be better, so the
one best suited to the situation of its use could be used.
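For illustration, one simple method from such a range, walking
offsetParent and allowing for scrolled containers (a hedged sketch of
the general technique; real browsers of the period have many more quirks
than this handles, and the function name is mine):

```javascript
// Walk the offsetParent chain, accumulating offsets and subtracting
// any scrolling of intermediate containers. Returns null when the
// environment exposes no positioning information at all, so callers
// can degrade cleanly instead of erroring.
function getElementPosition(el) {
  if (!el || typeof el.offsetLeft !== "number") return null;
  var x = el.offsetLeft, y = el.offsetTop;
  var parent = el.offsetParent;
  while (parent) {
    x += parent.offsetLeft - (parent.scrollLeft || 0);
    y += parent.offsetTop - (parent.scrollTop || 0);
    parent = parent.offsetParent;
  }
  return { x: x, y: y };
}
```

The scroll subtraction is exactly the extra work that a simpler method
could skip when no scrolling container is involved.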
> If you have any ideas about how the users of this group could
> assemble such a collection of developer tools, I'd like to hear it.
When contributors to this group provide detailed explanations, or
examples of cross browser scripts they often feature components that
could be usefully employed in broader contexts. Any sufficiently regular
reader of the group will be exposed to pretty much everything they are
likely to need (and then there are the archives).
>> [A date picker is a] more complex task to implement in a truly
>> general way; suddenly you need to accommodate any possible
>> presentation, any date range, interact with arbitrary form controls
>> and combinations of forms, and deal with any possible HTML structures
>> and content and so on, leaving any general solution bloated with code
>> needed to handle the possibilities, most of which will not apply to
>> any actual application.
> Requirements change.
> Why re-code, when you could have handled the
> general cases from the beginning?
Requirements can change, but they may not, so equally: why code for the
generality when you have a specification to work from?
But in practice a changed requirement would only necessitate changing
parts of a script, probably just replacing a couple of functions (and
maybe just swapping them for others that already exist, maybe with a
little modification to suit).
> Adding an additional 5k to a library to solve a number of
> general cases is a _GOOD THING_. IMO.
As I said before, you have a theoretical 80k maximum window in which to
serve a page, preferably nearer 40. Every chunk taken needlessly by
script eats away at the user's willingness to wait. 5k may not sound
like much, even if all of the potential for changes in requirements can
be accommodated in it, and it may not swing the balance in itself, but
it could be 5k better spent.
>>> If they built it from scratch to have the same functionality
>>> as you would propose, they may spend 50 or more hours of
>>> coding and testing, and spend a large amount of money to
>>> get exactly the same result
>>50 hours? Not for someone who knew what they were doing.
> Unless you have implemented a generalized popup date-picker (if you
> have, where is it?), I don't think you understand.
What is this about? I explain to you why I don't think libraries are
suited to browser scripting and you ask me where you can find libraries
that I have written. I explain to you why I don't think broad
generalised scripts are suited to browser scripting, but apparently I
cannot "understand" unless I have spent my time doing something that my
experience tells me is a fundamentally flawed approach.
OK, if you wanted a script that could interact with all of the various
types and combinations of form control to which a date selection
mechanism could be applied then maybe it would take 50 hours (there are
a lot of possibilities to cover). But in reality reasonable site design
would use a consistent style of form control (or control combination)
for the entering of dates wherever it was required (it would be bad UI
design to do otherwise), making accommodating all of the possible
permutations pointless and certainly reducing the task to considerably
less than 50 hours.
Brian Genisio wrote:
> I know this is a long and boring example, but I am trying to
> illustrate a practice that happens in out-the-door products on a
> regular basis... Solving one problem, but creating a smaller,
> unrelated problem that is manageable.
It is reasonable to be pragmatic, but I can't see needlessly introducing
[a known problem as pragmatic; it can only] look that way if you adopt a
position of not caring.
> Ideally, there should be a final solution to A that will be a perfect
> solution. This solution may be unrealistic in budget/schedule, and
> concessions are made for the A"xC' solution.
Your example is rather frightening. I would not be happy to categorise
it as a solution at all, at least without the qualification "temporary".
Schedule constraints may necessitate it, but budget considerations are
never aided by an increased maintenance burden (that becomes a bit open
ended), and you know full well that problem D will manifest itself at
the worst moment possible.
Richard Cornford wrote:
> Brian Genisio wrote:
>>I know this is a long and boring example, but I am trying to
>>illustrate a practice that happens in out-the-door products on a
>>regular basis... Solving one problem, but creating a smaller,
>>unrelated problem that is manageable.
> It is reasonable to be pragmatic, but I can't see needlessly introducing
> [a known problem as pragmatic; it can only] look that way if you adopt
> a position of not caring.
[This is] the general practice of software development. It is real easy
[to avoid such compromises] in a trivial exercise, in a small, discrete
environment.
How about when your development solution spans multiple operating
systems and multiple languages?
>>Ideally, there should be a final solution to A that will be a perfect
>>solution. This solution may be unrealistic in budget/schedule, and
>>concessions are made for the A"xC' solution.
> Your example is rather frightening. I would not be happy to categorise
> it as a solution at all, at least without the qualification "temporary".
> Schedule constraints may necessitate it, but budget considerations are
> never aided by an increased maintenance burden (that becomes a bit open
> ended), and you know full well that problem D will manifest itself at
> the worst moment possible.
I once worked on a system that integrated four operating systems on over
6 computers, and ran software in about 12 different programming
languages and communicated over 6 communication standards to come up
with a solution that worked well.
When it finally worked, there was one glaring problem... it was
extremely complex. The solution solved the problem, and did it well,
but was difficult to debug and had a steep learning curve. I am
convinced that this solution was as good as anyone could have come up
with. This is an example of a real-world software project with thousands
of requirements, and it met every one. Is this not a solution?
Dr John Stockton wrote:
JRS: In article <(E-Mail Removed)>, seen in
at Tue, 20 Apr 2004 13:33:50 :
>"Dr John Stockton" <(E-Mail Removed)> wrote:
>> Those on dial-up or radio links will prefer not to receive an
>> unnecessary 5K of code.
>They may prefer not to receive an unnecessary 5k of site-wide CSS rules,
>too, but do you recommend against using global CSS files? Would you rather
>have a separate CSS file for every page on your site, containing only the
>definitions required by that page?
If the nature of the site was such that I was likely to visit only one
page, then as a user I would naturally prefer only the definitions
needed on that page. But if I was likely to visit many pages, so that
definitions were multiply used, I would prefer collected definitions.
>Hell, there may be HTML pages with 1k of whitespace! Should everyone start
>compressing all of their HTML source so as to not send unnecessary
>[bytes]?
Yes, they should, at least for popular professional sites; and comment
should also be removed. It is in the interests of their readers, after
all. That assumes, of course, that the HTML is intended to be read only
by browsers and not by people.
There are two common classes of unnecessary whitespace: spaces at the
ends of lines, and indentation. The former can very easily be removed
automatically; removing the latter needs an understanding of <pre>, but
is equally trivial if it is known to be absent.
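A sketch of such an automatic stripper (my own illustration, intended as
a server-side build step rather than something shipped to browsers). It
leaves <pre> content alone, which is the one understanding required:

```javascript
// Strip trailing spaces and leading indentation from HTML source,
// leaving the contents of <pre> blocks untouched. Splitting on a
// capturing group keeps the <pre> sections in the result array so
// they can be passed through unmodified.
function stripWhitespace(html) {
  var parts = html.split(/(<pre[\s\S]*?<\/pre>)/gi);
  for (var i = 0; i < parts.length; i++) {
    if (!/^<pre/i.test(parts[i])) {
      parts[i] = parts[i]
        .replace(/[ \t]+$/gm, "")   // spaces at the ends of lines
        .replace(/^[ \t]+/gm, "");  // indentation
    }
  }
  return parts.join("");
}
```

This is a naive pass (it does not handle attributes on <pre> splitting
oddly, or whitespace-sensitive inline contexts), but it shows how cheap
the easy cases are.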
However, recognise that size reduction is most important for those on
slow links, which are likely to have hardware compression which will be
effective on leading whitespace. Code bloat is far more important.
>It's kind of a ridiculous argument you're trying to make, isn't it?
You need to be sensible about it. A large all-purpose routine of which
only a small part is likely to be used within a site is not sensible; it
is merely showing-off on the part of its author.
Authors should recognise that there is a difference in the use of code
[libraries between compiled applications and the Web. A desktop]
programmer can freely use many library units, because the units stay on
the local machine and only the needed parts go into the distributed EXE
(but needs to think a bit more when writing DLLs). But a Web author
with many library files available should be selective about which parts
he puts in Web pages or include files, and how they should be
distributed among those.
Full optimisation is impractical; but a little thought on such matters
should enable avoidance of full pessimisation.
H'mmm - there's another reason for removing redundancy; it diminishes
the load on the server. In particular, it diminishes the download for
an individual author. Large authors will pay directly for the amount of
service provided; small authors may have a fixed allowance, so that more
compact pages means more readers.
>"Brian Genisio" <(E-Mail Removed)> wrote:
>> library form is compiled on the first pass of the JS interpreter, if it
>> is executed or not. If you are only using one function from that
>> library, there is a lot of extra processing to include code that is not
>> being used.
>and no, I didn't design them) which executed in 2 seconds. Computers are
>[fast].
That's fast; the initial hit time for a current project of mine, which
involves a 200k JS load, is around 14 seconds on a 2 GHz P4 not under
load. It's not all compilation, but it's a significant part of it.
On Tue, 20 Apr 2004 08:37:13 -0500, "Matt Kruse"
<(E-Mail Removed)> wrote
>I doubt anything is going to actually be executed 10,000 times in
You've never done something onmousemove, or driven CSS properties from
JS? Those are processed a lot. But yes, there's often no point. Still,
you're talking about seconds; even 1/10th of a second is often too long
in UIs, and users notice it.
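One standard way to keep an onmousemove handler from eating all that
time is to throttle it (a generic sketch of the technique, not from
anyone's library; the names are mine):

```javascript
// Wrap an expensive handler so it runs at most once per `interval` ms.
// mousemove can fire dozens of times a second; the work done inside
// the handler is what the user actually feels.
function throttle(fn, interval) {
  var last = 0;
  return function (evt) {
    var now = new Date().getTime();
    if (now - last >= interval) {
      last = now;
      fn(evt);
    }
  };
}
```

Usage: `el.onmousemove = throttle(expensiveHandler, 100);` runs the
expensive work at most ten times a second, however fast the events fire.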
>I'd much rather deal with slightly inefficient code than code that isn't
>commented and is poorly designed to begin with. I'll gladly sacrifice .1
>seconds of speed in exchange for code clarity.
I'd be amazed if your users will; a second is an age.