Coding skills

 
 
Kelsey Bjarnason
 
      02-20-2008
[snips]

On Sun, 17 Feb 2008 21:22:40 +0000, Ark Khasin wrote:

>> Their conclusion? Something's broken. Proper conclusion: if you
>> invalidate the contract against which the code was designed, things are
>> going to break.


> I've been around for some time, and I have yet to see a single case where
> a customer (external or internal) knows upfront what requirements she
> actually has.
> It is a sign of maturity of the programmer to anticipate (and suggest!)
> changes in specifications, - and write code with guards against
> attempted violations.


Sure. Now let me ask you this: how often do *you* write your code to,
oh, cache all the local OS files so that if something goes wrong, you can
recover?

Right, you don't. Part of the contract the software is designed to is
that it has a functioning OS. It's not the application's job to ensure
this.

>> Coding skills? Who cares, as long as things work? It takes a really
>> unusual environment to even be able to recognize coding skills, let
>> alone determine the relative level of skills involved.


> If things /really/ work, the skills have been adequate (by definition).
> Not every parking garage has to be an architectural marvel. None should
> collapse under its weight.
> Coding skills stand out immediately in e.g. code reviews.


This assumes the reviewers know what they're looking for, and at, which
is not always the case.

Take the case in question: every predictable failure - server outages,
for example - is dealt with in the code, which is *very* robust about
such things.

So what caused our failure? Was it a server outage? No. It was someone
breaking the contract. The code is designed a particular way, based on a
particular flow of data through the system. That flow was tampered with,
with undesirable results. Yet normal operation - "normal" including
all failure modes that the environment can actually expect to encounter
in usage - causes no such problem.

So, is this a case of a bad design, bad code? No; it's a case of someone
violating the contract to which the code is written. So what'll a code
review show? Oh, yes, the code does not deal with the case of someone
with admin level access injecting invalid data into the system. Well, of
course not; it wasn't designed to, as in normal operation it does not
*get* invalid data, and in every error mode which can be predicted, it
simply gets no data at all.

You cannot prevent all failures; you can only prevent the ones which are
predictable. Nor can you even *detect* all failures, only the ones which
stand out in some algorithmically detectable way - but even there, the
effort to do so may simply not be worth it: if it requires some thumb-
fingered twit to manually mess things up in the first place, do you spend
an extra three weeks writing code to deal with that, or do you just tell
said thumb-fingered twit "don't do that"?

 
 
 
 
 
Ark Khasin
 
      02-22-2008
Kelsey Bjarnason wrote:
> [snips]
>> It is a sign of maturity of the programmer to anticipate (and suggest!)

>> changes in specifications, - and write code with guards against
>> attempted violations.

>
> Sure. Now let me ask you this: how often do *you* write your code to,
> oh, cache all the local OS files so that if something goes wrong, you can
> recover?
>
> Right, you don't. Part of the contract the software is designed to is
> that it has a functioning OS. It's not the application's job to ensure
> this.

Disputable. Haven't you seen application-level workarounds for known OS
err... quirks? Including inspection of OS version etc.?
If you /assume/ a functioning OS, the OS is included in your product V&V.
It's trivially a quality issue. A quality level acceptable for a word
processor is different from that of an aircraft engine controller. But
you knew that.
>
>>> Coding skills? Who cares, as long as things work? It takes a really
>>> unusual environment to even be able to recognize coding skills, let
>>> alone determine the relative level of skills involved.

>
>> If things /really/ work, the skills have been adequate (by definition).
>> Not every parking garage has to be an architectural marvel. None should
>> collapse under its weight.
>> Coding skills stand out immediately in e.g. code reviews.

>
> This assumes the reviewers know what they're looking for, and at, which
> is not always the case.

We aren't talking just pro forma reviews, are we?
>
> Take the case in question: every predictable failure - server outages,
> for example - is dealt with in the code, which is *very* robust about
> such things.
>
> So what caused our failure? Was it a server outage? No. It was someone
> breaking the contract. The code is designed a particular way, based on a
> particular flow of data through the system. That flow was tampered with,
> with undesirable results. Yet in normal operation - "normal" including
> all failure modes that the environment can actually expect to encounter
> in usage - causes no such problem.
>
> So, is this a case of a bad design, bad code? No; it's a case of someone
> violating the contract to which the code is written. So what'll a code
> review show? Oh, yes, the code does not deal with the case of someone
> with admin level access injecting invalid data into the system. Well, of
> course not; it wasn't designed to, as in normal operation it does not
> *get* invalid data, and in every error mode which can be predicted, it
> simply gets no data at all.

I would suggest separating the discussion of bad designs vs. bad code.
I've seen horrific code implementing a sensible design (the thing
magically works but isn't maintainable) and elegant code implementing an
idiotic design (the thing doesn't work, but programmers' artifacts are
salvaged for the next spin of the design).
>
> You cannot prevent all failures; you can only prevent the ones which are
> predictable. Nor can you even *detect* all failures, only the ones which
> stand out in some algorithmically detectable way - but even there, the
> effort to do so may simply not be worth it: if it requires some thumb-
> fingered twit to manually mess things up in the first place, do you spend
> an extra three weeks writing code to deal with that, or do you just tell
> said thumb-fingered twit "don't do that"?
>

Think of a deliberate attack on your system. How clever must an attacker
be to break it? How protective should you be? Is the corresponding
decision-making a part of the design process? Is it a part of the
`contract', whatever the term means?
There is a practitioner's technique - FMEDA - not ideal but sufficiently
practical, which includes brainstorming on what - at all - can possibly
go wrong. Then you classify all failures as detected and undetected in
your design. You do it consciously. If your work affects lives or
well-being of people (e.g., managing medical records or warehousing of
warheads), you additionally classify failures as dangerous or not. You
then explicitly accept a tolerable risk level of each class of failures
as your design input.
If that becomes part of the `contract' and is accepted by your customer,
then indeed breaking it is no fault of yours.

--
Ark
 
 
 
 
 
Kelsey Bjarnason
 
      02-24-2008
[snips]

On Fri, 22 Feb 2008 13:21:03 +0000, Ark Khasin wrote:

>> Sure. Now let me ask you this: how often do *you* write your code to,
>> oh, cache all the local OS files so that if something goes wrong, you
>> can recover?
>>
>> Right, you don't. Part of the contract the software is designed to is
>> that it has a functioning OS. It's not the application's job to ensure
>> this.


> Disputable. Haven't you seen application-level workarounds for known OS
> err... quirks?


Sure. Now, in Windows terms, delete the registry and see how well things
work. Oh, wait, they don't. Thus an application is obligated to cache
the entire registry and be able to reinstall it as needed? Which also
means every application runs at highest privilege levels?

No, that's silly. The application's job is to _do its job_, not to
babysit the OS.


> Think of a deliberate attack on your system. How clever must an attacker
> be to break it?


In our case, very. He has to be inside the LAN. Not just the normal
office LAN, either, but inside a private sub-LAN to which only two
machines have access; one is not on the main LAN at all, the other is
well secured.

> How protective should you be?


Against admins who have and need access to do their jobs? The very
people who are paid to administer those systems? Protecting yourself
from the very people paid to work on the systems seems a bit silly.

> Is the corresponding
> decision-making a part of the design process? Is it a part of the
> `contract', whatever the term means?


You're unfamiliar with the concept? I'll explain it.

A program is expected to do certain things, certain ways. In order to do
that, it needs certain basic guarantees: a machine to run on, with a
working OS, power, whatever usernames and passwords and the like are
required, the ability to connect to other machines it needs to
communicate with and so forth.

It also needs to know other conditions it will deal with. If it is a
world-facing program, it needs to know that, as the security issues are
different from those applying to a machine which is locked in a vault
with no network access.

It also needs to know other issues, such as "the power here goes down for
four hours every Thursday night" or "sometimes the AC fails and when it
does, the UPS goes into thermal shutdown, taking the system with it."

It also needs to know what it is expected to do, and how.

Collectively, these are the "contract". The program honours its side of
the contract by doing what it's supposed to do, correctly, while coping
with the known failure conditions, as well as any unknown but predictable
failure conditions. It should be able to, say, cope with an unexpected
power failure, if the nature of the application is such that a power
failure would have unacceptable consequences.

The flip side of that is that the program needs the contract to be
honoured by the environment in which it works. Thus the occasional
unscheduled power outage, or the network going down now and then, or a
server it needs to talk to failing to respond, these are predictable
failures.

Some bonehead coming along and injecting bogus data into the primary
database, outside the program's control, is *not* such a condition. The
only ones who have the access to do that also, in theory, have the
knowledge not to. It is an unrealistic demand to have the program guard
itself against that in general, even more so when it is nigh-on
impossible to detect the problem except when it comes to the final
output, many steps later - many programs later.

> There is a practitioner's technique - FMEDA - not ideal but sufficiently
> practical, which includes brainstorming on what - at all - can possibly
> go wrong. Then you classify all failures as detected and undetected in
> your design. You do it consciously. If your work affects lives or
> well-being of people (e.g., managing medical records or warehousing of
> warheads), you additionally classify failures as dangerous or not. You
> then explicitly accept a tolerable risk level of each class of failures
> as your design input.
> If that becomes part of the `contract' and is accepted by your customer,
> then indeed breaking it is no fault of yours.


Exactly. So let's take an example.

Several years ago, I read a book about Three Mile Island. According to
it (I don't know the veracity of what it said, I merely use it here as an
example) the designs of the plant included valve sensors to determine
whether the valves were open or closed. The designs were such that a
valve would only report as closed (or open) if it actually *was* closed
(or open).

One can envision any of a number of ways to achieve this, not least of
which is simply having a portion of the valve make or break an electrical
contact as it moves up to open or down to close. If it ain't closed,
there's no circuit on the "closed" side, so the indicator circuit doesn't
operate.

Instead, according to the book, it was actually built a little
differently, with the indicators tied to the power side of the valves.
Thus when power was applied, the indicator reported "closed", when there
was no power, it reported "open".

One slight difference in operation: in the first case, the indicators
would never report "closed" unless they were; a stuck valve, for example,
would result in no contact being made, no "closed" report being issued.
In the latter case, however, a stuck valve still had power, it just
wasn't actually closed - but because of the power, it would report
"closed".

Suppose you're writing the software for such a system. You know that if
temperatures go above a certain point, you need to close some valves,
open others. You know - as part of the contract - that when a valve says
"closed" it really is closed. So you check the state of the valves,
close the ones which need closing, check to ensure they are, in fact,
closed, and all is good.

Except it ain't, because the valve monitors are lying to you; they're
telling you valves are closed when they're not.

So, do we blame the code? Or do we blame the thumb-fingered idiot who
installed the wrong sort of sensor? The code lived up to its side of the
contract, the other side didn't. The contract was violated.

You can put the best programmer in the world on a project and still get
bogus results, if the contract to which he designed the program is
violated. This doesn't make him a bad programmer, it means he wrote the
program to work in one set of conditions, a set of conditions which
changed.

Perhaps the simplest example of that is hiring someone to write a Windows
program then complaining when it doesn't work in Linux, or on a Mac.
Well, of course not; the contract the program was written to said
"Windows" not "An arbitrarily changing operating system". Is it the
code's fault it doesn't work? No. It's the fault of a violated contract.

 
 
mirzamisamhusain@gmail.com
 
      02-27-2008
On Feb 18, 1:34 pm, Flash Gordon <(E-Mail Removed)> wrote:
> Malcolm McLean wrote, On 17/02/08 22:45:
>
> > "Flash Gordon" <(E-Mail Removed)> wrote in message
> >news:(E-Mail Removed)-gordon.me.uk...
> >> Malcolm McLean wrote, On 17/02/08 18:08:

>
> >>> "Flash Gordon" <(E-Mail Removed)> wrote in message
> >>>news:(E-Mail Removed)-gordon.me.uk...
> >>>> Malcolm McLean wrote, On 17/02/08 10:32:

>
> >>>> <snip>

>
> >>>>> any other group of people they employ. This is human nature.
> >>>>> Programmers are especially vulnerable because there is no
> >>>>> professional body.

>
> >>>> Apart from the BCS in the country in which you live if you want a
> >>>> professional body dedicated to IT or the IET if you want a larger
> >>>> body which does not specialise in IT but does accept it as a
> >>>> discipline. Of course, if you don't count bodies that can grant
> >>>> Chartered Engineer status...

>
> >>>> I will admit that the BCS is fairly new, after all it was only
> >>>> established in 1957...

>
> >>> I'm not a member of the British Computer Society, and this is typical.

>
> >> You said there was no professional body, you were wrong. There are two
> >> in the UK alone, one dedicated to just IT professionals. Just because
> >> you and lots of others are not members does not stop them from
> >> existing, especially as lots of people *are* members of one or both of
> >> them.

>
> > The British Computer Society is not a professional body.

>
> I suggest you sue the BCS for misleading advertising then, since under
> "About us" on their home page they say, "BCS is the leading professional
> body for those working in IT."
>
> > That is to say,
> > it does not control conditions for entry into a profession known as
> > "computer programming", and it has no powers to prevent anyone from
> > practising as a computer programmer.

>
> If you are going to use your own definitions for terms then you need to
> state what definition you are using, otherwise no one will know what you
> mean. So in this case what you intended is true, but what you actually
> stated is clearly false. Tell me, have you redefined all the terms used
> in your field of work as well, such as molecule?
> --
> Flash Gordon


I want to know how I can create a program in C. I am unable to
understand C programming, and I have joined a computer institute, so I
want to ask questions about it.
Thanks,
MIRZA MISAM HUSAIN <(E-Mail Removed)>
 
 
santosh
 
      02-27-2008
(E-Mail Removed) wrote:

<snip>

> I want to know how I can create a program in C. I am unable to
> understand C programming, and I have joined a computer institute, so I
> want to ask questions about it.
> Thanks,
> MIRZA MISAM HUSAIN <(E-Mail Removed)>


Start with this tutorial. If you have any difficulty ask here, but
provide details.

<http://www.eskimo.com/~scs/cclass/cclass.html>

Also see the comp.lang.c FAQ, but it will only make sense after you have
learned the basics from the above tutorial.

<http://www.c-faq.com/>

 
 
Flash Gordon
 
      03-01-2008
santosh wrote, On 27/02/08 07:32:
> (E-Mail Removed) wrote:
>
> <snip>
>
>> I want to know how I can create a program in C. I am unable to
>> understand C programming, and I have joined a computer institute, so I
>> want to ask questions about it.
>> Thanks,
>> MIRZA MISAM HUSAIN <(E-Mail Removed)>

>
> Start with this tutorial. If you have any difficulty ask here, but
> provide details.
>
> <http://www.eskimo.com/~scs/cclass/cclass.html>


That is a good tutorial.

> Also see the comp.lang.c FAQ, but it will only make sense after you have
> learned the basics from the above tutorial.
>
> <http://www.c-faq.com/>


That is good reference material.

However, in my opinion you should have a good text book/reference as
well as a good tutorial. K&R2 is good for this (the bibliography in the
FAQ tells you what K&R2 is).
--
Flash Gordon
 
 
Keith Thompson
 
      03-01-2008
Flash Gordon <(E-Mail Removed)> writes:
> santosh wrote, On 27/02/08 07:32:

[...]
>> Start with this tutorial. If you have any difficulty ask here, but
>> provide details.
>>
>> <http://www.eskimo.com/~scs/cclass/cclass.html>

>
> That is a good tutorial.


I haven't looked at it much, but I have little doubt that you're
correct.

>> Also see the comp.lang.c FAQ, but it will only make sense after you have
>> learned the basics from the above tutorial.
>>
>> <http://www.c-faq.com/>

>
> That is good reference material.


Agreed, but it's not a general reference for the C language (and it's
not intended to be one). Its intent is to clear up points of
confusion, not to teach or fully describe the language. It's
extremely useful *after* you've started to learn the language.

> However, in my opinion you should have a good text book/reference as
> well as a good tutorial. K&R2 is good for this (the bibliography in
> the FAQ tells you what K&R2 is).


Yes, K&R2 is excellent -- but I'd say it's primarily a tutorial,
though it also has a reference section at the back. In that sense,
K&R2 and <http://www.eskimo.com/~scs/cclass/cclass.html> probably
serve much the same purpose.

H&S5 (Harbison & Steele, 5th edition) is a good reference.

The definitive reference, of course, is the language standard, but
it's emphatically not a tutorial.

--
Keith Thompson (The_Other_Keith) <(E-Mail Removed)>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
 
 
 
 