Velocity Reviews

Velocity Reviews (http://www.velocityreviews.com/forums/index.php)
-   Javascript (http://www.velocityreviews.com/forums/f68-javascript.html)
-   -   Monoliths vs. "Microlibraries" (http://www.velocityreviews.com/forums/t942569-monoliths-vs-microlibraries.html)

David Mark 05-14-2011 06:41 PM

Monoliths vs. "Microlibraries"
 
After the recent spectacular failures of such "mature" monolithic
libraries as Prototype, Dojo and other collections of general-purpose,
"cross-browser" scripts, the pendulum of public opinion has swung back
to the idea that the "optimal" pattern for browser scripting requires
lots of small scripts working together (e.g. "microlibraries").

AIUI, much time was spent at the recent JS conference "debating" which
side was "right" in this non-argument.

This is a non-argument because there is no formal definition of a JS
"library" (let alone a "microlibrary"). But let's take a look at the
two sides to the "debate".

On one side, you've got a bunch of "Angry Nerds" who have spent years
*trying* to come up with cross-browser solutions for common browser
scripting tasks. Rather than publishing functions, they have mashed
their output together into "libraries" or "frameworks" or
"toolkits" (or whatever). Though many of these blobs claim to be
modular, they have traditionally been hamstrung by interdependencies.
Dojo is the perfect example as the base requirements are massive, even
for the simplest enhancement or application. One interdependency that
stood out to me was that their (laughably inept and wildly inaccurate)
query engine is *required* by their XHR "module". Similar giant piles
of incompetently written Javascript (e.g. Ext JS) present the same
problem.

As we've seen, such garbage dumps have required an outrageous amount
of maintenance over the years (requiring Web developers to constantly
download, test and deploy new versions). Yet they still fail
virtually every time they are presented with an environment unknown to
their authors at the time the scripts were published. It's not
surprising as none of these things are cross-browser. They are multi-
browser (due to browser sniffing and other similarly bad inferences),
which implies that they can only be expected to work in environments
where they have been *demonstrated* to work. This range excludes new
browsers and historically many older environments (e.g. IE < 8 or
compatibility mode) are out of reach as well (likely due to
inexperienced Ajax mavens who weren't around for IE5/6/7 and "don't
care" about compatibility mode).

So that "side" is simply clinging to what they have (which has gone
from mostly to completely worthless of late). Best not to follow
failures. Successful cross-browser scripts solve problems for
specific *contexts*; they cannot be described in such concise terms as
"new wave Javascript" (whatever that is).

What of these "pioneers" who are now trying to shout down the monolith
marketers, having seen some sort of light indicating they should be
using only "microlibraries"? The only atom of reality in
their faith (religion is turned to when understanding is lacking) is
that browser scripts should be as small as possible for the *context*
they are written for. They seem to long for a utopia where they can
download lots of very small scripts, mash them together and create
robust applications that run "anywhere". This was the thinking back
around the turn of the century when sites like Dynamic Drive were
popular among Web developers. That movement has long since run its
course. It took a decade and produced virtually nothing of value.
Dynamic Drive begat Prototype, jQuery, Dojo, etc., projects which
failed to further understanding or innovate (in fact, they were
defined by backwards thinking). They've spent most of the last few
years on ludicrous UI layers built on top of their rickety, outdated
(and often inappropriate for JS) library designs. Now some long to go
back and start the futile cycle anew. :)

I saw a post (a Tweet IIRC) recently that opined that a library that
does not "work cross-browser" is "broken", not a "microlibrary". This
is presumably from the monolithic side, implying that scripts that
don't measure up to their ideals of "cross-browser" functionality are
simply wrong. This serves to illustrate the general confusion that
surrounds this non-debate. There are so many things wrong with this
"argument" that it's hard to choose where to begin.

Again, what's a library? Virtually any script can be called a
library. It generally implies "other people's code" (which highlights
that JS developers are big on abdication of responsibility).

Scripts don't "work cross-browser". They are either designed in cross-
browser fashion or they aren't. The confusion is between cross-
browser and multi-browser scripts. You can prove that a script works
in multi-browser fashion by testing it in the environments where it is
expected to work. Authors of such scripts have always been vehemently
opposed to testing in other environments (or considering the impact of
future environments) because there was a good possibility that their
scripts would simply fall apart (i.e. crash rather than bailing out,
possibly leaving an unusable document behind) in such unknown
scenarios. That's how virtually every library/framework published in
the last ten years has been designed (yet many of them claim to be
cross-browser).

No cross-browser script can be expected to work everywhere. Cross-
browser scripts are designed to work in environments that feature all
of the required host objects and methods. They are designed to
gracefully bow out in lacking environments, leaving the document in
the same state as it would be with scripting turned off. The decision
of whether to carry on or bail out is based on feature detection and
testing (not browser sniffing or multi-browser object inferences).
The definition of "work" for cross-browser scripts is that they
function properly in capable environments and leave the rest of them
alone.
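
The distinction can be sketched in code. This is a hypothetical
illustration (the function names and the injected doc parameter are
mine, chosen so the sketch can be exercised with a stand-in document):
instead of sniffing navigator.userAgent, a cross-browser script tests
the exact methods it is about to call and leaves the document alone
when they are missing.

```javascript
// Hypothetical sketch: test the exact feature you intend to use.
// The document is injected so the check can be exercised with a
// stand-in object; in a browser you would pass the host document.
function canAddListener(doc) {
    return !!(doc && typeof doc.addEventListener == 'function');
}

// A cross-browser enhancement checks first; in a lacking
// environment it bows out, leaving the document as it would be
// with scripting turned off.
function enhance(doc, type, fn) {
    if (canAddListener(doc)) {
        doc.addEventListener(type, fn, false);
        return true;
    }
    return false; // degrade gracefully
}
```

Note that nothing here asks *which* browser is running; the test
passes or fails on the feature itself, so an unknown future
environment is handled the same way as a known one.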

Reality mandates that authors of browser scripts have at least some
knowledge of the history of browsers in order to explain to clients
exactly where these "degradation points" occur (e.g. what happens to
IE 7 users?). Combined with a (sometimes fuzzy) view of which
browsers are actually in use by the target audience (e.g. public Web,
corporate Intranet, etc.), a determination can be made as to an
appropriate cross-browser design.

So the fact that a script does not work in all environments (often
hailed as "all browsers") does not mean that it is not cross-browser.
That determination can only be made by reading the code (abstraction
vs. observation). Of course, most JS developers are not big on
actually reading code (they just want to download it and watch it go).

Consider this dubious function (which is shockingly similar to
innumerable methods found in yesterday's "popular" frameworks):-

function getAttribute(el, n) {
    return el.getAttribute(n);
}

Note that there are no comments to define the context of this
function, so it must be expected to work for all cases and
environments. You can find one of these gems in virtually every
"major" query engine, indicating that they have no shot at working
properly in a significant percentage of browsers in use *today* (e.g.
IE 7, IE 8 compat mode, etc.). This is ironic as the biggest claim to
fame for these things is that they get the IE monkey off your
back. :)

So let's give the function a context. This will be for an application
that needs to work in most modern browsers, but is explicitly allowed
to degrade in IE < 8 (and compatibility mode). Perhaps the owners are
okay with IE 7 users having a less dynamic experience or they might
use Conditional Comments to include lesser script(s) for those users
or they may simply present them with a static page. That's a decision
that must be made jointly between the developer and the client.

So what's missing from the code, rendering it less than cross-
browser? The feature testing. As library authors have just
*recently* figured out, the getAttribute method has been Broken as
Designed (BAD) in IE since 1999 (and remains so today in compatibility
mode). An example of feature testing that can identify such troubled
environments (among several others) can be found on this test page:-

http://www.cinsoft.net/attributes.html

One particular test result (call it t) is the indicator we are after
for this context. With this result, we can decide the fight or flight
question:-

if (t) {
    var getAttribute = function(el, n) {
        ...
    };
}

There it is. A (dubious example of a) *cross-browser* design, which
is appropriate for the stated context. In theory it should work in
browsers that feature a *working* getAttribute method for elements and
it should degrade (gracefully) in everything else. Of course, it is
only as good as its feature tests, which should be as simple and
direct as possible (i.e. test exactly what you are going to do with
the required objects and methods and nothing else).
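
As a purely hypothetical illustration (these are not the actual tests
from the linked page, and the injected doc parameter is mine so the
sketch can run against a stand-in document), a direct test of
getAttribute would do exactly what the calling code intends to do and
check the result:

```javascript
// Hypothetical direct feature test for getAttribute. Broken
// implementations (e.g. IE < 8 and compatibility mode) reflect
// properties instead of attributes, so reading back href yields a
// resolved absolute URL rather than the string that was set.
function testGetAttribute(doc) {
    var el;
    if (doc && doc.createElement) {
        el = doc.createElement('a');
        if (el && el.setAttribute && el.getAttribute) {
            el.setAttribute('href', 'test.html');
            return el.getAttribute('href') === 'test.html';
        }
    }
    return false; // lacking environment: flight, not fight
}
```

The boolean result plays the role of t in the pattern shown above.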

If an application requires just this one function, then its
"gateway" (a test before proceeding) would look like this:-

if (getAttribute) {
    ...
}

Note that the existence of the function itself is the indicator.
That's the *only* reliable way to couple applications, libraries, add-
ons, etc. You detect features of scripts in the exact same way as you
detect features of user agents. If one is missing, you don't fiddle
with the document at all. We've been over this before and it should
be intuitively obvious that any other scheme will be less direct and
prone to compatibility problems as pieces are swapped out or
upgraded. The specific combination of required features and test
results that determine the existence (or lack thereof) of a function
are abstracted by each piece, with none privy to the inner workings of
the others. Predictably, most libraries have missed the boat on this
and have started defining less specific, extraneous flags to give
hints about which functions might work in the current environment.
Also predictably, the track record for plug-ins working from one
version to the next is appalling (lending perceived ammunition to the
"microlibrary" faction).
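
The coupling described above can be sketched as follows (a minimal
illustration with invented names; the only point is that the existence
of the function is itself the flag):

```javascript
// Sketch of coupling by function existence. A piece defines its
// function only when its own feature test passes; consumers test
// for the function itself, never for browser names or version
// flags exported by some other piece.
var featureTestPassed = true; // stand-in for a real feature test

var getAttr; // stays undefined when the test fails
if (featureTestPassed) {
    getAttr = function(el, n) {
        return el.getAttribute(n);
    };
}

// Gateway for an application requiring just this one function
function app(el) {
    if (getAttr) {
        return getAttr(el, 'title');
    }
    return null; // don't fiddle with the document at all
}
```

Because the gateway tests only for the function it needs, the piece
defining getAttr can be swapped out or upgraded without the
application knowing anything about its inner workings.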

In summary, it is ridiculous to argue in general about the perfect
browser scripting design as appropriate designs are always married to
specific contexts. But regardless of context, the discipline of cross-
browser scripting remains the same (as it has for many years). So it
is better to understand the discipline than to choose sides in a war
of buzzwords. You just can't advance without a clearly-defined battle
plan.

Furthermore, unlike other types of programming (where aspiring browser
scripting luminaries usually come from), general-purpose libraries and
frameworks will *never* work for cross-browser scripting. Doesn't
matter how many over-complicated script loaders get written or how
many conferences are called to discuss the "problem"; the concept just
doesn't fit. It never has and it never will.

Note that this does not mean "write everything from scratch". That
line is simply a badge of inexperience. You write (or borrow)
functions for specific contexts. Eventually you will end up with
several renditions of the same function, each appropriate for a
specific context. You group these functions together to create
context-specific enhancements and applications. How can the whole
world share such a repository and leverage it to move browser
scripting forward in giant leaps? I don't know the answer to that,
but I do know that the answer will never be found unless the loud
people start asking the right questions.

So (dammit), if you want to have any shot of competing in this
particular arena, you are just going to have to bite the bullet and
learn browser scripting. It's not enough to master the JS language
(though few seem inclined to bother even with that step); browser
scripting is a discipline, and one that cannot be mastered without
understanding its basic concepts (e.g. cross-browser vs. multi-
browser). The whole "argument" of "which size library is best" is
devoid of any such concepts; it's just more confused blithering (and
haven't we had enough of that over the last ten years or so?)

Erwin Moller 05-16-2011 09:33 AM

Re: Monoliths vs. "Microlibraries"
 
On 5/14/2011 8:41 PM, David Mark wrote:
> Monoliths vs. "Microlibraries"
>


<snip>

A good, interesting read.
Thank you.

But I don't think many people will like the message:

> So (dammit), if you want to have any shot of competing in this
> particular arena, you are just going to have to bite the bullet and
> learn browser scripting. It's not enough to master the JS language
> (though few seem inclined to bother even with that step); browser
> scripting is a discipline, and one that cannot be mastered without
> understanding its basic concepts (e.g. cross-browser vs. multi-
> browser).


And there is the main problem: Most people who do clientside scripting
are incompetent and don't want to put the time/effort into it that is
needed to get the job done really decently. That includes me, but at
least I am aware of it. ;-)
(Another problem: many clients don't have the first clue what
webdevelopers are talking about when it comes to JavaScript/DOM.)

When libs like jQuery come along that seem to solve all your problems,
web developers are eager to embrace them.
Quality is often hard to find in this fast twitter world.

Regards,
Erwin Moller

--
"That which can be asserted without evidence, can be dismissed without
evidence."
-- Christopher Hitchens

Gregor Kofler 05-16-2011 11:05 AM

Re: Monoliths vs. "Microlibraries"
 
On 2011-05-16 11:33, Erwin Moller wrote:
> On 5/14/2011 8:41 PM, David Mark wrote:
>> Monoliths vs. "Microlibraries"
>>

>
> <snip>
>
> A good interesting read.
> Thank you.
>
> But I don't think many people will like the message:
>
>> So (dammit), if you want to have any shot of competing in this
>> particular arena, you are just going to have to bite the bullet and
>> learn browser scripting. It's not enough to master the JS language
>> (though few seem inclined to bother even with that step); browser
>> scripting is a discipline, and one that cannot be mastered without
>> understanding its basic concepts (e.g. cross-browser vs. multi-
>> browser).

>
> And there is the main problem: Most people who do clientside scripting
> are incompetent and don't want to put the time/effort into it that is
> needed to get the job done really decently. That includes me, but at
> least I am aware of it. ;-)
> (Another problem: many clients don't have the first clue what
> webdevelopers are talking about when it comes to JavaScript/DOM.)


...and don't want to pay for a professional job, since everything *must*
be a cinch with all those libraries floating around for free. (Besides,
all the money has already been spent on the graphics designer.)

Gregor

--
http://vxweb.net

MC 05-16-2011 01:17 PM

Re: Monoliths vs. "Microlibraries"
 
Nice post. It would be more interesting if you posted with less language
like 'garbage dump'. It implies a bias that might undermine your
credibility with the reader.

I was also unaware there was a JS conference. I will be looking online for
details.

MC



S.T. 05-16-2011 10:16 PM

Re: Monoliths vs. "Microlibraries"
 
On 5/16/2011 2:33 AM, Erwin Moller wrote:
> On 5/14/2011 8:41 PM, David Mark wrote:


>> So (dammit), if you want to have any shot of competing in this
>> particular arena, you are just going to have to bite the bullet and
>> learn browser scripting. It's not enough to master the JS language
>> (though few seem inclined to bother even with that step); browser
>> scripting is a discipline, and one that cannot be mastered without
>> understanding its basic concepts (e.g. cross-browser vs. multi-
>> browser).

>
> And there is the main problem: Most people who do clientside scripting
> are incompetent and don't want to put the time/effort into it that is
> needed to get the job done really decently. That includes me, but at
> least I am aware of it. ;-)
> (Another problem: many clients don't have the first clue what
> webdevelopers are talking about when it comes to JavaScript/DOM.)


In fairness, most designers feel the client-side scripters are
incompetent when they try their hand at the creative (artwork / copy /
layout, etc.). Usually the technical ones just start squawking about
Jakob Nielsen articles and the importance of fluid layouts, which is a
code phrase for "this site is gonna look like Ms. Everitt's 8th grade
computer science class final project".

In a perfect world you wouldn't have creative handling creative AND
technical nor technicians handling technical AND creative. But building
sites is rarely perfect. The copy and data a website offers visitors may
exist for a long time but the layout, artwork and UI used in presenting
that copy and data is ever evolving and rarely lives beyond a few years
before it's scrapped and a fresh design replaces it. That short life
cycle, along with budgets and time constraints mean it's rarely
economically feasible to design a "perfect" site. Like it or not people
are going to be handling duties outside their field of expertise,
including the DOM-clueless messing about with the DOM.

> When libs like JQuery come along that seem to solve all your problems,
> web developers are eager to embrace them.
> Quality is often hard to find in this fast twitter world.


The libs like jQuery substantially closed the gap for the creative
forced to handle the technical outside his/her comfort zone. Without
needing to learn much (jQuery, in particular, has a *very* shallow
learning curve), and through the (ill-advised) use of bulky and suspect
add-ons, a JS-illiterate could suddenly create a UI experience that
rivaled, and often exceeded, that of the technical experts, all the
while remaining effectively JS-illiterate. There is a price to pay for
remaining completely ignorant (code bloat, inefficient coding styles,
etc.), but even those real downsides are rarely visible to the vast
majority of clients and site visitors. A basic knowledge of the DOM/JS
and jQuery is a much more powerful combination (and much preferred), but
not strictly necessary to get going.

The flipside of the equation, the technician forced to handle some
creative outside his/her comfort zone, has no such tool to close that
gap. Without focusing serious time and effort to improve these skills
their artwork looks just as amateur, layouts just as arbitrary and copy
just as awkward as in the past. Seems to annoy some folks around here.


RobG 05-18-2011 01:54 AM

Re: Monoliths vs. "Microlibraries"
 
On May 17, 8:16 am, "S.T." <a...@anon.com> wrote:
> On 5/16/2011 2:33 AM, Erwin Moller wrote:
>
> > On 5/14/2011 8:41 PM, David Mark wrote:
> >> So (dammit), if you want to have any shot of competing in this
> >> particular arena, you are just going to have to bite the bullet and
> >> learn browser scripting. It's not enough to master the JS language
> >> (though few seem inclined to bother even with that step); browser
> >> scripting is a discipline, and one that cannot be mastered without
> >> understanding its basic concepts (e.g. cross-browser vs. multi-
> >> browser).

[...]
> > When libs like JQuery come along that seem to solve all your problems,
> > web developers are eager to embrace them.
> > Quality is often hard to find in this fast twitter world.

>
> The libs like jQuery substantially closed the gap for the creative
> forced to handle the technical outside his/her comfort zone. Without
> needing to learn much (jQuery, in particular, has a *very* shallow
> learning curve) through the (ill-advised) use of bulky and suspect
> add-ons a JS-illiterate could suddenly create a UI experience that
> rivaled, and often exceeded, that of the technical experts all the while
> remaining effectively JS-illiterate. There is a price to pay to
> remaining completely ignorant (code bloat, inefficient coding styles,
> etc) but even those real downsides are rarely visible by the vast
> majority of clients and site visitors. A basic knowledge of the DOM/JS
> and jQuery is a much more powerful combination (and much preferred), but
> not strictly necessary to get going.


Never mind the quality, feel the width. People also made web sites
with authoring tools such as Dreamweaver and FrontPage, but I don't
see them used so much anymore. There is a real problem with the
"creative" types getting involved in UI design - it is a very
functional part of an application and, while it might need to have a
particular look, it should behave as near as is practical to what the
user expects (i.e. the native browser UI should be left alone).

But most of all, it should *work* without annoyances like slide or
fade effects that are primarily used so the author can show off just
how clever they are (despite having absolutely no idea how to achieve
the effect themselves). And as has often been pointed out here, those
who are clueless about scripting are the least qualified to choose
which scripts are appropriate for their requirements.

Often blobs are used because authors not only don't know javascript,
but they don't know HTML or CSS either. Using some nifty plugin to get
striped table rows and hover effects seems really easy compared to
having to learn the appropriate CSS properties and values and deciding
on a graceful degradation scheme. Heaven forbid that they learn HTML
layout rules.

Far easier to use the blob du jour and when it goes to **** blame the
browser for not supporting the blob.

But those visitors are likely using browsers that the designer just
doesn't care about, or are edge cases, or just don't happen often
enough in the "real world". C'est la vie.


--
Rob

