Velocity Reviews - Computer Hardware Reviews

Velocity Reviews > Newsgroups > Programming > Perl > Perl Misc > Standards in Artificial Intelligence

Standards in Artificial Intelligence

John J. Trammell
On 10 Sep 2003 10:22:11 -0800, Arthur T. Murray mega-crossposted:
> A webpage of proposed Standards in Artificial Intelligence is at
> -- updated today.

A killfile for excessive crossposters and other undesirables is
at /home/trammell/.slrn/score -- updated about 15 seconds ago.

Arthur T. Murray
A webpage of proposed Standards in Artificial Intelligence is at -- updated today.
David B. Held
"Arthur T. Murray" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> A webpage of proposed Standards in Artificial Intelligence
> is at --
> updated today.

Besides the fact that this has nothing to do with C++,
you should stop posting your notices here because you are a crank.
You claim to have a "theory of mind", but fail to recognize
two important criteria for a successful theory: explanation
and prediction. That is, a good theory should *explain
observed phenomena*, and *predict non-trivial
phenomena*. From what I have skimmed of your "theory",
it does neither (though I suppose you think that it does
well by way of explanation).

In one section, you define a core set of concepts (like
'true', 'false', etc.), and give them numerical indexes.
Then you invite programmers to add to this core by using
indexes above a suitable threshold, as if we were defining
ports on a server. When I saw this, and many other things
on your site, I laughed. This is such a naive and simplistic
view of intelligence that you surely cannot expect
to be taken seriously.

I dare say one of the most advanced AI projects in
existence is Cog. The philosophy behind Cog is that
an AI needs a body. You say more or less the same
thing. However, the second part of the philosophy behind
Cog is that a simple working robot is infinitely better
than an imaginary non-working robot. That's the part
you've missed. Cog is designed by some of the field's
brightest engineers, and funded by one of the last
strongholds of AI research. And as far as success
goes, Cog is a child among children. You expect to
create a fully developed adult intelligence from scratch,
entirely in software, using nothing more than the
volunteer labor of gullible programmers and your own
musings. This is pure comedy.

At one point, you address programmers who might
have access to a 64-bit architecture. Pardon me, but
given things like the "Hard Problem of Consciousness",
the size of some programmer's hardware is completely
irrelevant. These kinds of musings are forgivable when
coming from an idealistic young high school student
who is just learning about AI for the first time. But the
prolific nature of the work implies that you have been
at this for quite some time.

Until such time as you can A) show that your theory
predicts an intelligence phenomenon that is both novel
and later confirmed by experiment or observation of
neurological patients, or B) produce an artifact that is
at least as intelligent as current projects, I must conclude
that your "fibre theory" is just so much wishful rambling.

The level of detail you provide clearly shows that you
have no real understanding of what it takes to build a
successful AI, let alone something that can even
compete with the state of the art. The parts that you
think are detailed, such as your cute ASCII diagrams,
gloss over circuits that researchers have spent their
entire lives studying, which you leave as "an exercise
for the programmer". This is not only ludicrous, but
insulting to the work being done by legitimate
researchers, not to mention it insults the intelligence
of anyone expected to buy your "theory".

Like many cranks and crackpots, you recognize that
you need to insert a few scholarly references here and
there to add an air of legitimacy to your flights of fancy.
However, a close inspection of your links shows that
you almost certainly have not read and understood
most of them; otherwise A) you would provide links *into* the
sites, rather than *to* the sites (proper bibliographies
don't say: "Joe mentioned this in the book he published
in '92" and leave it at that), and B) you wouldn't focus
on the irrelevant details you do.

A simple comparison of your model with something
a little more respectable, such as the ACT-R program
at Carnegie Mellon, shows stark contrasts. Whereas
your "model" is a big set of ASCII diagrams and some
aimless wanderings on whatever pops into your head
when you're at the keyboard, the "models" link (note
the plural) on the ACT-R page takes you to what...?
To a bibliography of papers, each of which addresses
some REAL PROBLEM and proposes a DETAILED
MODEL to explain the brain's solution for it. Your
model doesn't address any real problems, because
it's too vague to actually be realized.

And that brings us to the final point. Your model has
components, but the components are at the wrong
level of detail. You recognize the obvious fact that
the sensory modalities must be handled by
specialized hardware, but then you seem to think that
the rest of the brain is a "tabula rasa". To see why
that is utterly wrong, you should take a look at Pinker's
latest text by the same name (The Blank Slate).
The reason the ACT-R model is a *collection* of
models, rather than a single model, is very simple.
All of the best research indicates that the brain is
not a general-purpose computer, but rather a
collection of special-purpose devices, each of which
by itself probably cannot be called "intelligent".

Thus, to understand human cognition, it is necessary
to understand the processes whereby the brain
solves a *PARTICULAR* problem, and not how it
might operate on a global scale. The point being
that the byzantine nature of the brain might not make
analysis on a global scale a useful or fruitful avenue
of research. And indeed, trying to read someone's
mind by looking at an MRI or EEG is like trying to
predict the stock market by looking at the
arrangement of rocks on the beach.

Until you can provide a single model of the precision
and quality of current cognitive science models, for
a concrete problem which can be tested and
measured, I must conclude that you are a crackpot
of the highest order. Don't waste further bandwidth
in this newsgroup or others with your announcements
until you revise your model to something that can be
taken seriously (read: explains observed phenomena
and makes novel predictions).


09-17-2003, (E-Mail Removed) (Arthur T. Murray) writes:

> A webpage of proposed Standards in Artificial Intelligence is at
> -- updated today.

How about using a mailing list where everyone interested in your
website can subscribe and is informed about your frequent updates?

If everybody posted their update notifications through Usenet, the news
servers would immediately break down from overload. So please be
polite and use the appropriate channels to communicate with the
readers of your website.

