Rob Slade, doting grandpa of Ryan and Trevor 10-01-2004 05:07 PM

REVIEW: "Biometrics for Network Security", Paul Reid
 
BKBIOMNS.RVW 20040527

"Biometrics for Network Security", Paul Reid, 2004, 0-13-101549-4,
U$44.99/C$67.99
%A Paul Reid
%C One Lake St., Upper Saddle River, NJ 07458
%D 2004
%G 0-13-101549-4
%I Prentice Hall
%O U$44.99/C$67.99 +1-201-236-7139 fax: +1-201-236-7131
%O http://www.amazon.com/exec/obidos/AS...bsladesinterne
http://www.amazon.co.uk/exec/obidos/...bsladesinte-21
%O http://www.amazon.ca/exec/obidos/ASI...bsladesin03-20
%P 252 p.
%T "Biometrics for Network Security"

In the preface, Reid presents biometrics as the cure for all network
security ills. Given his employment with a company that sells
biometric systems, this enthusiasm is understandable, if not totally
compelling.

Part one deals with introduction and background. Chapter one is the
introduction--mostly to the book. The definition of biometrics itself
is very terse. Authentication technologies are promised in chapter
two--which starts out by repeating the all-too-common error of
confusing authentication with identification. Reid then pooh-poohs
passwords and tokens and praises biometrics as strong authentication,
without dealing with the fact that a biometric is the ultimate static
password, or addressing the technologies (and associated error rates)
needed to make biometrics a viable authentication factor. Privacy is
confused with intellectual property, access control, and improper
employee monitoring in chapter three.

Part two lists biometric technologies. Chapter four is a disorganized
amalgam of factors generally involved in biometric use and
applications. Fingerprint features are reviewed in chapter five with
incomprehensible explanations and unclear illustrations. Attacks
against fingerprint technologies and systems are raised--but are
usually dismissed in a fairly cavalier manner. Similar examinations
are made of face (chapter six), voice (seven), and iris (eight)
systems.

Part three looks at implementing the technologies for network
applications. Chapter nine compares the four biometrics from part
two, in general terms, and states measures that are rather at odds
with other biometric literature. Reid makes a big deal out of simple
error rate metrics in chapter ten. Most of chapter eleven talks about
hardening biometric devices and hardware. Unconvincing fictional
"straw man" case studies and some general project planning topics are
in chapter twelve, with more of the same in thirteen and fourteen.

Part five, which is only chapter fifteen, casts a rosy-spectacled look
at the future when all of security will be made perfect through the
use of biometrics--essentially returning us to the preface.

Basically, this appears to be a promotional pamphlet padded out to
book length: it isn't even as good as Richards' article in the
"Information Security Management Handbook" (cf. BKINSCMH.RVW). The
material will not help you with a realistic assessment of what
biometrics can (and cannot) do, or how to implement it. The
"Biometrics" text by Woodward, Orlans and Higgins (cf. BKBIOMTC.RVW)
is far superior.

copyright Robert M. Slade, 2004 BKBIOMNS.RVW 20040527

--
======================
rslade@vcn.bc.ca slade@victoria.tc.ca rslade@sun.soci.niu.edu
============= for back issues:
[Base URL] site http://victoria.tc.ca/techrev/
or mirror http://sun.soci.niu.edu/~rslade/
CISSP refs: [Base URL]mnbksccd.htm
Security Dict.: [Base URL]secgloss.htm
Book reviews: [Base URL]mnbk.htm
Review mailing list: send mail to techbooks-subscribe@egroups.com
or techbooks-subscribe@topica.com


Bruce Barnett 10-02-2004 03:30 AM

Re: REVIEW: "Biometrics for Network Security", Paul Reid
 
rslade@sprint.ca (Rob Slade, doting grandpa of Ryan and Trevor) writes:

> BKBIOMNS.RVW 20040527
>
> "Biometrics for Network Security", Paul Reid, 2004, 0-13-101549-4,



How does he prevent replay attacks?

Some use smartcard technology with match-on-card software.

--
Sending unsolicited commercial e-mail to this account incurs a fee of
$500 per message, and acknowledges the legality of this contract.

Richard S. Westmoreland 10-04-2004 01:52 PM

Re: REVIEW: "Biometrics for Network Security", Paul Reid
 
"Bruce Barnett" <spamhater103+U041001232907@grymoire.com> wrote in message
news:cjl7cl$mbc$1$208.20.133.66@netheaven.com...
> rslade@sprint.ca (Rob Slade, doting grandpa of Ryan and Trevor) writes:
>
> > BKBIOMNS.RVW 20040527
> >
> > "Biometrics for Network Security", Paul Reid, 2004, 0-13-101549-4,

>
>
> How does he prevent replay attacks?
>
> Some use smartcard technology with match-on-card software.


I suppose one method of securing the biometric authentication against replay
attacks is to build one-time session IDs into the biometric reader itself.
A person puts their thumb on the reader, which then generates an ID that is
used to encrypt the biometric data (and the ID itself). The data is
decrypted at the server along with the ID (using the server side's expected
ID), the ID is matched up in the database to confirm validity of the
biometric data. Then the biometric is matched up, and the person is
authenticated.

That should prevent any kind of replay attack, and streamline the process
without the need of an additional smart card.
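
In rough Python, the idea might look like the sketch below. This is only an
illustration of a nonce-per-attempt scheme, and it assumes a few things not
stated above: the server issues the nonce, the reader holds a device key
shared with the server, and an HMAC stands in for the encryption of the
template. It isn't any product's actual protocol.

import hmac, hashlib, secrets

DEVICE_KEY = secrets.token_bytes(32)   # provisioned into the reader, known to the server
issued_nonces = set()                  # server-side: nonces issued but not yet used

def server_issue_nonce() -> bytes:
    nonce = secrets.token_bytes(16)
    issued_nonces.add(nonce)
    return nonce

def reader_submit(template: bytes, nonce: bytes):
    # The reader binds the fresh capture to this one-time nonce.
    mac = hmac.new(DEVICE_KEY, nonce + template, hashlib.sha256).digest()
    return nonce, template, mac

def server_verify(nonce: bytes, template: bytes, mac: bytes) -> bool:
    if nonce not in issued_nonces:     # unknown or already-consumed nonce: reject
        return False
    issued_nonces.discard(nonce)       # each nonce is good exactly once
    expected = hmac.new(DEVICE_KEY, nonce + template, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)   # then match `template` against enrolment

n = server_issue_nonce()
msg = reader_submit(b"fake-template-bytes", n)
print(server_verify(*msg))   # True  - first presentation accepted
print(server_verify(*msg))   # False - identical message replayed, nonce already spent

Since the server accepts each nonce exactly once, capturing and resending the
message gains an attacker nothing -- assuming the reader itself is honest,
which is a separate question.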

>
> --
> Sending unsolicited commercial e-mail to this account incurs a fee of
> $500 per message, and acknowledges the legality of this contract.


Ever made any money from this? ;-)

--
Richard S. Westmoreland
http://www.antisource.com



Bruce Barnett 10-04-2004 08:48 PM

Re: REVIEW: "Biometrics for Network Security", Paul Reid
 
"Richard S. Westmoreland" <richardsw@suscom.net> writes:

I was asking about the author's opinion, because this should be an
indication of his bias and thoroughness on the topic. I'm not a
biometric expert, but biometrics can't solve every problem in
isolation. An unbiased writer would cover these issues. But the
world is filled with people who think their technology will solve
every problem in the world.

> That should prevent any kind of replay attack, and streamline the process
> without the need of an additional smart card.


Well, how does one know the reader is trusted? I can walk up to a
Trojan'ed reader, and it can capture my thumbprint and replay it at a
later date.


>The data is
>decrypted at the server along with the ID (using the server side's expected
>ID), the ID is matched up in the database to confirm validity of the
>biometric data.



This also requires the reader to be connected to the server in order
to be authenticated. If the network is down, or disconnected, the
person cannot be authenticated. So that's two potential problems.


I'm not trying to pick a fight. I was interested in the book, and I
wanted to see how well he covers the issues. For instance, biometrics
is just one of three factors that can be used for authentication
(something you know, something you have, and something you are). And if
only biometrics is used, then this isn't always adequate. Bruce
Schneier made some good comments about the problems of using
biometrics for authentication.

Two of the points he covers (my web proxy is down. Otherwise I'd give
you a reference) are:

Biometrics is PUBLIC information
Biometrics cannot be changed.

Once the fingerprint template is captured, it can be replayed. It's
not secret information. You can't revoke it and re-issue it to the end
user.

Smartcards aren't the best solution to every problem, because they
cost more than thumbs. (:-)

But when combined with biometrics, they provide stronger authentication.

The way I understand it, you can do biometrics/smartcards in at least
three general categories.

1) The template is stored on the server.

Advantage: No smartcard or token is needed
Problem: Replay attacks, and inability to authenticate if disconnected

2) Template-On-Card

Advantage: The template is fetched from the card, not the
server. So the authentication can be done
off-line.

Problem: a smartcard is needed with enough memory to store the
template. Also, there is a danger of a replay attack.

3) Match-on-Card - The algorithm to match the template is on the card,
as well as the template. Once this is done, the data in the card
can be unlocked, and the private key on the card can be used to
authenticate the individual. Usually the card will lock itself up
if too many bad attempts are made.

Problem: Getting the algorithm to work on a smartcard (cpu,
code size, etc.). Some companies tell me they do it, or are
planning to do it.

Advantage: Strong authentication, and inability to replay the
authentication sequence because the private key isn't known or
revealed - ever.
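
For what it's worth, here is a toy sketch (in Python, not anything that
would actually fit on a card) of the match-on-card idea in (3). The lockout
counter, the exact-byte "matcher" and the Ed25519 key are all illustrative
stand-ins; real cards use a fuzzy matcher and whatever crypto the card
supports.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class MatchOnCard:
    MAX_ATTEMPTS = 3

    def __init__(self, enrolled_template: bytes):
        self._template = enrolled_template        # enrolled template never leaves the card
        self._key = Ed25519PrivateKey.generate()  # private key generated on-card, never exported
        self._attempts_left = self.MAX_ATTEMPTS

    def public_key(self):
        return self._key.public_key()             # this is what the relying party registers

    def sign_challenge(self, live_sample: bytes, challenge: bytes) -> bytes:
        if self._attempts_left == 0:
            raise RuntimeError("card locked after too many bad attempts")
        # stand-in for the on-card fuzzy matcher; real cards compare features, not raw bytes
        if live_sample != self._template:
            self._attempts_left -= 1
            raise ValueError("biometric mismatch")
        self._attempts_left = self.MAX_ATTEMPTS
        return self._key.sign(challenge)          # key use only reachable after a match

card = MatchOnCard(enrolled_template=b"enrolled-feature-bytes")
signature = card.sign_challenge(b"enrolled-feature-bytes", b"host-random-challenge")

The point is just that the template and the private key never leave the card,
and the key only becomes usable after a successful on-card match.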

Smartcards also have problems. The software I am using doesn't
authenticate the reader. So the PIN can be stolen, and if the card is
then stolen, you are out of luck.

Another approach is the Sony Puppy - which as I understand it combines
a smartcard and thumbprint reader into one device. You take it with
you to authenticate yourself.


This Match-on-Card is what I believe the US Government wants to use
with their Common Access Card. It only makes sense.

Are you telling me that these issues aren't covered in the book you
reviewed? Oh well.



>> --
>> Sending unsolicited commercial e-mail to this account incurs a fee of
>> $500 per message, and acknowledges the legality of this contract.

>
> Ever made any money from this? ;-)


Well, I feel better. Others have made money, with the right legal
threats. It also shuts up the dimwit harvesters when I point out that
each of my e-mail addresses is unique, and ALWAYS tagged with this
message. The flames to my ISP quickly die when they realize I did not
grant them permission to harvest my address, and that I didn't
"opt-in".


--
Sending unsolicited commercial e-mail to this account incurs a fee of
$500 per message, and acknowledges the legality of this contract.

Richard S. Westmoreland 10-04-2004 09:06 PM

Re: REVIEW: "Biometrics for Network Security", Paul Reid
 

"Bruce Barnett" <spamhater103+U041004162047@grymoire.com> wrote in message
news:cjscvg$qqm$0$208.20.133.66@netheaven.com...
> "Richard S. Westmoreland" <richardsw@suscom.net> writes:
>
> I was asking about the author's opinion, because this should be an
> indication of his bias and thoroughness on the topic. I'm not a
> biometric expert, but biometrics can't solve every problem in
> isolation. An unbiased writer would cover these issues. But the
> world is filled with people who think their technology will solve
> every problem in the world.


Sorry I was going off on a tangent - I don't care so much about the book
itself, thought I'd hop into a conversation about biometrics.

>
> > That should prevent any kind of replay attack, and streamline the process
> > without the need of an additional smart card.

>
> Well, how does one know the reader is trusted? I can walk up to a
> Trojan'ed reader, and it can capture my thumbprint and replay it at a
> later date.


I considered this. The reader could run some kind of CRC or hash over its
internal circuitry/firmware, and the server could match that value against a
known-good hash to confirm the reader is untainted.
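
Something like the following sketch, maybe -- purely illustrative, with a
shared reader key and a firmware hash standing in for whatever integrity
value the reader would really report. (A fully compromised reader could of
course lie about its measurement unless the measurement is rooted in
hardware.)

import hmac, hashlib, secrets

KNOWN_GOOD_HASH = hashlib.sha256(b"...vendor firmware image bytes...").digest()
READER_KEY = secrets.token_bytes(32)   # provisioned into the reader, known to the server

def reader_attest(firmware: bytes, challenge: bytes) -> bytes:
    # the reader measures its own firmware and binds the measurement to the server's challenge
    measurement = hashlib.sha256(firmware).digest()
    return hmac.new(READER_KEY, challenge + measurement, hashlib.sha256).digest()

def server_check(response: bytes, challenge: bytes) -> bool:
    expected = hmac.new(READER_KEY, challenge + KNOWN_GOOD_HASH, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

c = secrets.token_bytes(16)
print(server_check(reader_attest(b"...vendor firmware image bytes...", c), c))   # True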

>
>
> >The data is
> >decrypted at the server along with the ID (using the server side's expected
> >ID), the ID is matched up in the database to confirm validity of the
> >biometric data.

>
>
> This also requires the reader to be connected to the server in order
> to be authenticated. If the network is down, or disconnected, the
> person cannot be authenticated. So that's two potential problems.


Server or desktop/laptop - can be connected to either. If I have an RSA
SecureID, and the server is down, I'm not getting on then either. I thought
the point was authentication to the *network*? No network, then I sit and
wait until it's fixed.

Rick



Bruce Barnett 10-05-2004 12:43 AM

Re: REVIEW: "Biometrics for Network Security", Paul Reid
 
"Richard S. Westmoreland" <richardsw@suscom.net> writes:

> Sorry I was going off on a tangent - I don't care so much about the book
> itself, thought I'd hop into a conversation about biometrics.


No problem.

>> Well, how does one know the reader is trusted? I can walk up to a
>> Trojan'ed reader, and it can capture my thumbprint and replay it at a
>> later date.

>
> I considered this. The reader could run some kind of CRC or hash over its
> internal circuitry/firmware, and the server could match that value against a
> known-good hash to confirm the reader is untainted.


There are still two potential problems that ideally should be addressed.

1) The reader has been compromised. Any fingerprint it sees is
stored in a hidden location.

2) The user uses the wrong thumbprint reader. Or some other
thumbprint reader at another location. The data is captured.


Both the end user and the remote system have to consider these risks.

For instance, suppose the reader was attached to the local
host/controller by a USB cable. Some evil person might insert a USB
sniffer in the cable, unknown to the remote system.

While some locations are going to have high physical
security, not all locations will. So it's a potential problem.

I mentioned the Sony Puppy,

http://bssc.sel.sony.com/Professiona.../products.html

Because this is a way for a remote system to confirm that the local
system does have the token. Only with the token can the local system
generate the suitable credentials. The local system cannot replay the
data, because it doesn't have the private key stored inside the puppy.

And stealing the token won't help, because only the fingerprint will
unlock the key/credentials.

>> This also requires the reader to be connected to the server in order
>> to be authenticated. If the network is down, or disconnected, the
>> person cannot be authenticated. So that's two potential problems.

>
> Server or desktop/laptop - can be connected to either. If I have an RSA
> SecureID, and the server is down, I'm not getting on then either.


Well, a smartcard can be used without a central server. I've been
using the open source musclecard applet to do so. The java code in the
card generates a key pair, and exports the public key. The public key
can be stored in a local machine's cache/storage (especially if the
user is a frequent user).

The host generates a random challenge, and asks the card to encrypt it
with the private key. The card does so, and the host verifies the ID
and grants access. (Once the PIN is verified).

This has a man-in-the-middle risk, by the way.
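
Roughly, the host side of that exchange looks like the following sketch
(Python, with an Ed25519 key pair standing in for whatever the card actually
uses, the card simulated in software, and PIN handling left out):

import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

card_key = Ed25519PrivateKey.generate()      # lives on the card, never exported
cached_public_key = card_key.public_key()    # host caches this when the user registers

def host_authenticate() -> bool:
    challenge = secrets.token_bytes(32)      # fresh random challenge per attempt
    signature = card_key.sign(challenge)     # in reality performed inside the card
    try:
        cached_public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

print(host_authenticate())   # True for the genuine card; an old recorded response won't verify

Because the challenge is freshly random each time, a recorded response can't
be replayed later; the man-in-the-middle exposure remains because nothing
here binds the exchange to a particular channel.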


>I thought
> the point was authentication to the *network*?


I see the need for both local authentication and remote/network
authentication.

In large scale systems, with millions of users (the CAC card has 5
million cards issued), there is an advantage for allowing the local
system to authenticate a user, especially in remote locations
throughout the world, during war time, etc.


> No network, then I sit and
> wait until it's fixed.


There are critical situations where waiting is not suitable. Medical,
homeland security, first response teams, military, etc.



Cheers.

--
Sending unsolicited commercial e-mail to this account incurs a fee of
$500 per message, and acknowledges the legality of this contract.

Anne & Lynn Wheeler 10-05-2004 01:51 AM

Re: REVIEW: "Biometrics for Network Security", Paul Reid
 
Bruce Barnett <spamhater103+U041004200405@grymoire.com> writes:
> Well, a smartcard can be used without a central server. I've been
> using the open source musclecard applet to do so. The java code in
> the card generates a key pair, and exports the public key. The
> public key can be stored in a local machine's cache/storage
> (especially if the user is a frequent user).
>
> The host generates a random challenge, and asks the card to encrypt
> it with the private key. The card does so, and the host verifies the
> ID and grants access. (Once the PIN is verified).


in general private key encryption ... as in some form of digital
signature ... whether of a challenge or some other form of data
.... tends to either be "something you know" or "something you have"
authentication, aka from 3-factor authentication

* something you know
* something you have
* something you are

the corresponding public key is registered with the relying party
(central authority, your local pc, etc) and the key-owner keeps the
private key in an encrypted software file or in a hardware token.

if the convention has the key-owner keeping the private key in an
encrypted file (say like some of the pkcs12 or other browser
conventions) ... then the relying party when it sees a valid digital
signature ... can assume that the key-owner had supplied the correct
pin to decrypt the software file in order that the digital signature
be performed.

the private key can be kept in a hardware token, and when a relying
party sees a valid digital signature, they can assume "something you
have" authentication on behalf of the key owner.

there are some hardware tokens that are constructed so that the
private key operations (encryption and/or digital signature) are only
performed when the correct PIN and/or biometric is presented
.... i.e. two factor authentication

* something you have
* something you know (pin) or something you are (biometric)

it is possible to construct a hardware token where three factor
authentication might be assumed ... where both a PIN and the correct
biometric are required for the token to do its job. then the relying
party might presume three factor authentication

* something you know (pin/password)
* something you have (hardware token)
* something you are (biometric)

in this case, the relying party (central authority, your local pc,
kerberos and/or radius service, etc) could reasonably expect to have

1) the public key registered,
2) the integrity characteristics of the public key registered,
3) the hardware integrity characteristics of the hardware token registered
4) the operational integrity characteristics of the hardware token
registered

so that when the relying party sees a digital signature for
verification, it has some reasonable level of assurance as to what the
verification of such a digital signature might mean (and how much it
might trust such a digital signature as having any meaning).
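
a small sketch of what such a certificate-less registration record might
look like at the relying party ... the field names and assurance attributes
are purely illustrative; the point is that what a valid digital signature
implies depends entirely on what was registered and certified up front, not
on the signature itself:

from dataclasses import dataclass

@dataclass
class RegisteredKey:
    public_key_pem: str               # 1) the public key itself
    key_generated_on_token: bool      # 2) integrity characteristics of the key
    token_tamper_resistant: bool      # 3) hardware integrity of the token
    token_requires_pin: bool          # 4) operational characteristics of the token
    token_requires_biometric: bool

def inferred_factors(rec: RegisteredKey) -> list:
    # a valid signature only proves use of the private key; anything beyond that
    # depends on what was certified and registered up front
    factors = []
    if rec.key_generated_on_token and rec.token_tamper_resistant:
        factors.append("something you have")
    if rec.token_requires_pin:
        factors.append("something you know")
    if rec.token_requires_biometric:
        factors.append("something you are")
    return factors

rec = RegisteredKey("-----BEGIN PUBLIC KEY-----...", True, True, True, False)
print(inferred_factors(rec))   # ['something you have', 'something you know']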

for a relying party to get a digital signature and be able to verify
that the digital signature is correct .... w/o additional information
the relying party has absolutely no idea as to the degree or level of
trust/assurance such a digital signature means.

somewhat orthogonal, but something that has frequently and thoroughly
obfuscated the issue of how much trust/assurance a relying-party might
place in a digital signature, is the matter of digital certificates.

digital certificates were originally invented for the early '80s
offline email environment. the recipient (aka relying party) gets a
piece of email and has no way of proving who the sender was. so the
idea was to have the sender digitally sign the email. if the sender
and recipient were known to each other and/or had previous
interaction, the recipient could have the sender's public key on file
for validating the digital signature.
http://www.garlic.com/~lynn/subpubkey.html#certless

however, there was a theoretical offline email environment from the
early '80s where the sender and the recipient had absolutely no prior
interactions and the desire was to have the email be processed w/o
resorting to any additional interactions. this led to the idea of 3rd
party certification authorities who would certify as to the senders
identity. the sender could create a message, digitally sign it and send
off the message, the digital signature and the 3rd party certified
credential (digital certificate). the recipient eventually downloads
the email, hangs up, and has absolutely no recourse to any additional
information (other than what is contained in the email).

by the early '90s, this had evolved into the x.509 identity (digital)
certificate. however, during the mid-90s, this became severely
deprecated because of the realization about the enormous liability
and privacy issues with arbitrarily spewing x.509 identity
certificates all over the world. there was some work on something
called an abbreviated relying-party-only digital certificate ... that
basically contained only a public key and some form of account
number. random past relying-party-only posts:
http://www.garlic.com/~lynn/subpubkey.html#rpo

the relying party would use the account to look-up in some sort of
repository the actual information about the sender ... as opposed to
having the liability and privacy issues of having the sender's
information actually resident in the certificate. however, in the PGP
model as well as all of the existing password-based authentication
schemes ... it was possible to show that whatever repository contains
information about the sender ... could also contain the actual
sender's public key. In the widely deployed password-based schemes
like RADIUS, Kerberos, PAM, etc ... just substitute the registration
of a password and permissions with the registration of a public key
and permissions. So it was trivial to show that for all of the
relying-party-only certificate scenarios the actual certificate
was redundant and superfluous.

of course, the other issue is that the original design point for digital
certificates .... the early '80s offline email paradigm where a sender
and a recipient had absolutely no prior interaction and the
recipient had absolutely no other recourse for obtaining information
about the sender .... had pretty much started to disappear by the
early 90s.

and of course, the issue of certificates being redundant and superfluous
and possibly representing severe liability and privacy issues ... the
certificates didn't actually contribute to telling the recipient
(or relying party) to what degree they could actually trust
a possible digital signature aka the issue of what the relying party
can infer from validating a digital signature .... does it represent
anything from three factor authentication

* something you know
* something you have
* something you are

and say if it might actually be associated with something you have
hardware token .... what level of assurance is associated with a
specific hardware token.

the environment in which a possible digital signature (or other private key
operation) is performed .... like the biometric sensor interface and/or
hardware token pin/password interface ... is also of some possible
concern to a recipient or relying party. one of the scenarios for
FINREAD terminal is to possibly have the terminal also digitally sign
transactions .... so the relying party has some additional idea
about the level of trust that they can place in what they have received.
(not only was a certified FINREAD terminal used, but the transaction
carries the digital signature of the FINREAD terminal).

misc. past FINREAD terminal posts
http://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
http://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
http://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
http://www.garlic.com/~lynn/aepay11.htm#54 FINREAD was. Authentication white paper
http://www.garlic.com/~lynn/aepay11.htm#55 FINREAD ... and as an aside
http://www.garlic.com/~lynn/aepay11.htm#56 FINREAD was. Authentication white paper
http://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
http://www.garlic.com/~lynn/aadsm11.htm#4 AW: Digital signatures as proof
http://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
http://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
http://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative to PKI?
http://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
http://www.garlic.com/~lynn/aadsm14.htm#32 An attack on paypal
http://www.garlic.com/~lynn/aadsm14.htm#35 The real problem that https has conspicuously failed to fix
http://www.garlic.com/~lynn/aadsm15.htm#38 FAQ: e-Signatures and Payments
http://www.garlic.com/~lynn/aadsm15.htm#40 FAQ: e-Signatures and Payments
http://www.garlic.com/~lynn/aadsm16.htm#9 example: secure computing kernel needed
http://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
http://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
http://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
http://www.garlic.com/~lynn/aadsm18.htm#32 EMV cards as identity cards
http://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
http://www.garlic.com/~lynn/2001g.html#60 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
http://www.garlic.com/~lynn/2001i.html#26 No Trusted Viewer possible?
http://www.garlic.com/~lynn/2001k.html#0 Are client certificates really secure?
http://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security requested
http://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security requested
http://www.garlic.com/~lynn/2002f.html#46 Security Issues of using Internet Banking
http://www.garlic.com/~lynn/2002f.html#55 Security Issues of using Internet Banking
http://www.garlic.com/~lynn/2002g.html#69 Digital signature
http://www.garlic.com/~lynn/2002m.html#38 Convenient and secure eCommerce using POWF
http://www.garlic.com/~lynn/2002n.html#13 Help! Good protocol for national ID card?
http://www.garlic.com/~lynn/2002n.html#26 Help! Good protocol for national ID card?
http://www.garlic.com/~lynn/2002o.html#67 smartcard+fingerprint
http://www.garlic.com/~lynn/2003h.html#25 HELP, Vulnerability in Debit PIN Encryption security, possibly
http://www.garlic.com/~lynn/2003h.html#29 application of unique signature
http://www.garlic.com/~lynn/2003j.html#25 Idea for secure login
http://www.garlic.com/~lynn/2003m.html#51 public key vs passwd authentication?
http://www.garlic.com/~lynn/2003o.html#29 Biometric cards will not stop identity fraud
http://www.garlic.com/~lynn/2003o.html#44 Biometrics
http://www.garlic.com/~lynn/2004.html#29 passwords
http://www.garlic.com/~lynn/2004i.html#24 New Method for Authenticated Public Key Exchange without Digital Certificates
http://www.garlic.com/~lynn/2004i.html#27 New Method for Authenticated Public Key Exchange without Digital Certificates
http://www.garlic.com/~lynn/2004j.html#1 New Method for Authenticated Public Key Exchange without Digital Certificates

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Anne & Lynn Wheeler 10-06-2004 04:33 PM

Re: REVIEW: "Biometrics for Network Security", Paul Reid
 

followup
http://www.garlic.com/~lynn/2004l.html#4

note that while three factor authentication

* something you know
* something you have
* something you are

allows pin/passwords as "something you know" authentication, there can
be a big difference between "something you know" as a "shared secret"
and "something you know" as a "non-shared secret".

for instance the current payment card scenario effectively has account
numbers as shared-secrets ... since gaining knowledge of the account
number can enable fraudulent transactions. harvesting of merchant
transaction files can result in account/identity theft impact because
of the ability to use the account numbers for fraudulent transactions.
some related discussion of security proportional to risk
http://www.garlic.com/~lynn/2001h.html#61
misc. past postings about secrets and account numbers
http://www.garlic.com/~lynn/subpubkey.html#secrets

where there is a big focus on protecting all occurrences of the account number
because of its shared-secret vulnerability. an alternative solution is
x9.59
http://www.garlic.com/~lynn/index.html#x959

where financial transactions are digitally signed and there is a
business rule that account numbers used in x9.59 transactions can't be
used in non-authenticated transactions. as a result, just knowing an
account number used in an x9.59 transaction doesn't enable fraudulent
transactions (or account/identity theft) and therefore such account
numbers no longer need to be considered a shared-secret.
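
a trivial sketch of that business rule ... the flag, the account number and
the checks are all made up for illustration:

x959_only_accounts = {"4000111122223333"}   # accounts flagged: authenticated transactions only

def accept_transaction(account: str, amount: int, signature_verified: bool) -> bool:
    if account in x959_only_accounts and not signature_verified:
        return False    # business rule: no verified digital signature, no transaction
    return True         # remaining checks (funds, limits, ...) would go here

print(accept_transaction("4000111122223333", 100, signature_verified=False))  # False
print(accept_transaction("4000111122223333", 100, signature_verified=True))   # True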

one of the requirements of shared-secret based infrastructure (in
addition to the need to protect the shared-secret)
is frequently to require a unique shared-secret for different security
domains .... aka ... the password on file with your local garage ISP
should be different than passwords used for personal banking or for
your job. The issue is that for different security domains ... they may
have different levels of protection for shared-secrets. there may also
be instances where one security domain may be at odds with some other
security domain.

In effect, anything that is on file in a security domain ... and just
requires reproducing the same value for authentication can be
considered a shared secret. shared-secret passwords frequently also
have guidelines regarding frequent changes of the shared-secret.
some past references to password changing rules:
http://www.garlic.com/~lynn/2001d.html#52
http://www.garlic.com/~lynn/2001d.html#53
http://www.garlic.com/~lynn/2001d.html#62

A "something you have" hardware token can also implement "something
you know" two-factor authentication, where the "something you know" is
a non-shared-secret. The hardware token contains the secret and is
certified to require the correct secret entered for correct operation.
Since the secret isn't shared ... and/or on file with some security
domain, it is a non-shared-secret ... rather than a shared-secret.

A relying party needs some proof (possibly at registration) that
authentication information (like a digital signature) is uniquely
associated with a specific hardware token and furthermore needs
certified proof that a particular hardware token only operates in a
specific way when the correct password has been entered .... to
establish trust for the relying party that two-factor authentication
is actually taking place. In the digital signature scenario, based on
certification of the hardware token, the relying party, when it validates
a correct digital signature, can then infer two-factor authentication:

* something you know (password entered into hardware token)
* something you have (hardware token generated digital signature)

In a traditional shared-secret scenario, if a shared-secret has been
compromised (say merchant transaction file has been harvested), new
shared-secrets can be issued. Typically, there are much fewer
vulnerabilities and different threat models for non-shared-secret
based infrastructures compared to shared-secret based infrastructures
(in part because of the possible greater proliferation of locations of
shared-secrets).

It turns out that "something you are" biometrics can also be
implemented as either a shared-secret infrastructure or a
non-shared-secret infrastructure. Biometrics typically is implemented
as some sort of mathematical value that represents some biometric
reading. In a shared-secret scenario, this biometric mathematical
value is on file someplace, in much the same manner that a password
might be on file. The person is expected to reproduce the biometric
value (in much the same way they might be expected to reproduce the
correct password). Depending on the integrity of the environment that
is used to convert the biometric reading to a mathematical value
.... and the integrity of the environment that communicates the
biometric value, a biometric shared-secret infrastructure may be prone
to the same vulnerabilities as shared-secret password systems ... aka
somebody harvests the biometric value(s) and is able to inject such
values into the authentication infrastructure to spoof an individual.

Many shared-secret biometric infrastructures with distributed sensors
that might not always be under armed guards ... frequently go to a
great deal of trouble specifying protection mechanisms for personal
biometric information. One of the issues with respect to shared-secret
biometric infrastructures compared to shared-secret password
infrastructures is that it is a lot easier to replace a password than,
say, an iris or a thumb.

There are also hardware tokens that implement non-shared-secret
biometrics, in much the same way that non-shared-secret passwords are
implemented. Rather than having the biometric values on file at some
repository, the biometric value is contained in a personal hardware
token. The personal hardware token is certified as performing in a
specific manner only when the correct biometric value is entered.
Given adequate assurance about the operation of a specific hardware
token, a relying party may then infer from something like validating
a digital signature that two-factor authentication has taken place,
i.e.

* something you have (hardware token that uniquely generates signature)
* something you are (hardware token requiring biometric value)

biometric values are actually more complex than simple passwords,
tending to have very fuzzy matches. for instance an 8-character
password either matches or doesn't match. A biometric value is more
likely to only approximately match a previously stored value.

Some biometric systems are frequently designed with hard-coded fuzzy
match threshold values .... say like a 50 percent match value. These
systems frequently talk about false positives (where a 50 percent
match requirement results in authenticating the wrong person) or false
negatives (where a 50 percent match requirement results in rejecting
the correct person). Many of these systems tend to try and adjust
their fuzzy match value settings in order to minimize both the false
positives and false negatives.

in value-based systems, hard-coded fuzzy match values may represent a
problem. an example is a transaction system that supports both $10
transactions and million dollar transactions. In general, a risk
manager may want to have a higher match requirement for higher value
transactions.
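
a toy illustration of such a value-dependent match threshold ... the score
scale and the breakpoints are invented:

def required_threshold(amount_usd: float) -> float:
    if amount_usd < 100:
        return 0.50      # low value: tolerate more false accepts to cut false rejects
    if amount_usd < 10_000:
        return 0.80
    return 0.95          # high value: demand a much closer match

def authorize(match_score: float, amount_usd: float) -> bool:
    return match_score >= required_threshold(amount_usd)

print(authorize(0.72, 50))         # True  - good enough for a $50 transaction
print(authorize(0.72, 1_000_000))  # False - not good enough for a million dollars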

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Vin McLellan 10-07-2004 03:40 AM

Re: REVIEW: "Biometrics for Network Security", Paul Reid
 
Bruce Barnett <spamhater103+U041004162047@grymoire.com> wrote:

>>>This also requires the reader to be connected to the server in
>>>order to be authenticated. If the network is down, or disconnected,
>>>the person cannot be authenticated. So that's two potential
>>>problems.


Richard S. Westmoreland <richardsw@suscom.net> replied:

>>Server or desktop/laptop - can be connected to either. If I have
>>an RSA SecureID, and the server is down, I'm not getting on then
>>either. I thought the point was authentication to the *network*?
>>No network, then I sit and wait until it's fixed.


Actually, at least so far as RSA's SecurID is concerned, this is no longer true.

Working with Microsoft, RSA developed a new SecurID for Windows
(SID4Win) infrastructure that not only simplifies the user experience
by replacing the traditional Windows logon password with a SecurID,
but also requires (and keeps an audit record of) two-factor authentication
not only at the network perimeter, as is traditional, but also
wherever corporate data is stored.

RSA Authentication Agents are now installed *inside* the network, on
the Windows network domain and on networked XP desktops (even when they
are temporarily disconnected from the network), and *outside* the
perimeter, on remote company laptops -- and it works even when they,
too, are temporarily disconnected from the Internet.

(See RSA's data sheet, white paper, and webcast on SID4Win at:
<http://www.rsasecurity.com/node.asp?id=1173>.)

As Bruce noted in his second message:

> I see the need for both local authentication and remote/network
> authentication.
>
>In large scale systems, with millions of users (the CAC card has 5
>million cards issued), there is an advantage for allowing the local
>system to authenticate a user, especially in remote locations
>throughout the world, during war time, etc.


Actually, I think the demand is more prosaic. If we extend strong
authentication and audit logs to the desktop (let alone to mobile
corporate laptops), they can't functionally depend on omnipresent
network access. Networks simply aren't yet dependable enough that we
can let network unavailability stop all work. Having two loosely-
coupled systems works to our advantage in many environments.

With RSA's SID4Win infrastructure, PC users temporarily disconnected
from the network can still use their SecurIDs for local access, and
laptop users remain free to roam. A corporate laptop user can use his
SecurID (and memorized PIN) to access his mobile machine in a plane at
35,000 feet, even without a network connection, because the RSA
Authentication Manager (what RSA used to call the ACE/Server) stores a
secure cache of future SecurID token-codes -- for a variable number of
days (pre-set, per user or group, by the network admin) -- on the
laptop.

When the PC or laptop again connects with the enterprise network and
the RSA Authentication Manager, the laptop's Authentication Agent
forwards an audit log to the RSA Authentication Manager about what has
happened in the interim... and replenishes its stored cache of SecurID
token-codes for future off-line access.
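
To make the general idea concrete (this is NOT RSA's actual mechanism or
protocol, just a generic illustration): the server pre-computes the one-time
codes it will expect over the next few days and hands them to the local
agent, which can then check codes while offline and replenish its cache when
it reconnects. A crude sketch:

import hmac, hashlib

SEED = b"per-token secret held by the authentication server"   # illustrative only

def code_for_interval(interval: int) -> str:
    digest = hmac.new(SEED, interval.to_bytes(8, "big"), hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

# server side: provision the codes for the next 96 intervals to the offline agent
offline_cache = {i: code_for_interval(i) for i in range(1000, 1000 + 96)}

# laptop side, while disconnected: check the code submitted for the current interval
def offline_check(interval: int, submitted: str) -> bool:
    return offline_cache.get(interval) == submitted

print(offline_check(1005, code_for_interval(1005)))   # True
print(offline_check(1005, "123456"))                  # False (almost certainly)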

With the new IT security requirements associated with various external
regulatory regimes (Basel II, Sarbanes-Oxley, HIPAA, GLBA, etc.), many
enterprises find it necessary to have audit records and strong access
controls on all repositories of proprietary or confidential corporate
data. As necessary, these AAA systems can be buttressed by crypto,
SSO, and federated access controls, but the operational baseline for
strong authentication is no longer limited by network access, nor
limited to perimeter access controls.

This is new, but since RSA is providing this capability at no
additional charge to all of the thousands of sites which use the
current version of the ACE/Server (aka, the RSA Authentication
Manager), it will probably be widely adopted fairly quickly.

Back to biometrics:

Perhaps the most successful use of biometrics I have seen is the use
of a local biometrics check to block fraudulent multiple claims in
benefits payments in the social services. In that mode, the benefits
recipient is first registered on a secure terminal, under supervision.
A central record is made of the recipient's ID and biometric (a
fingerprint, as I recall) and his approved benefits, and a check is
made to make sure that recipient is making no other claims on the
system under another identity.

To pick up his benefits payment, the recipient shows up at an office
where he goes through a quick process which merely matches his
biometric to the card he carries. My recollection is that such a
system -- in use in several states including Connecticut -- cut
benefits fraud very effectively and paid for itself very quickly.

A targeted function; a problem resolved.

Suerte,

_Vin

PS. Hi Rob!

