C unit testing and regression testing

 
 
Ian Collins, 08-09-2013
Malcolm McLean wrote:
> On Thursday, August 8, 2013 6:29:02 PM UTC+1, James Kuyper wrote:
>> On 08/08/2013 12:46 PM, James Harris wrote:
>>
>>> I am looking for an approach in preference to a package or a framework.
>>> Simple and adequate is better than comprehensive and complex.

>>
>> For me, "simple" and "adequate" would be inherently conflicting goals,
>> while "comprehensive" would be implied by "adequate" - but it all
>> depends upon what you mean by those terms.
>>

> Adequate means that a bug might slip through, but the software is likely still
> to be usable. Comprehensive would mean that, as far as humanly possible,
> all bugs are caught, regardless of expense. Comprehensive only equates to
> adequate if any bug at all renders the software unacceptable, which might
> be the case in a life support system (but only if the chance of a software
> failure is of the same or greater order of magnitude as the chance of a
> hardware component failing).
> The real secret is to write code so that it is easily unit-testable.


The best way to do that is to write the unit tests before the code.

> That
> means with as few dependencies and as simple an interface as possible.


Which tends to be a consequence of writing the unit tests before the code.

Ian Collins, 08-09-2013
Malcolm McLean wrote:
> On Thursday, August 8, 2013 9:33:15 PM UTC+1, James Kuyper wrote:
>> On 08/08/2013 02:49 PM, Malcolm McLean wrote:
>>
>> For my group, "adequate" testing has been officially defined as testing
>> that each branch of the code gets exercised by at least one of the test
>> cases, and that it has been confirmed that the expected test results
>> have been achieved for each of those cases, and that the test plan has
>> been designed to make sure that test cases corresponding to different
>> branches have distinguishable expected results (I'm amazed at how often
>> people writing test plans forget that last issue).
>>

> Your group might have officially defined the term "adequate testing". But
> most of us don't work for your group, and are unlikely to adopt the
> definition.
>
> Coverage is a reasonable criterion, however. It doesn't prove a program
> is correct, because you've got a combinatorial problem with branch
> points. But it will catch most bugs.
> However a lot of code has branch points for memory allocation failures
> which are extremely unlikely to happen. You could argue that if it's worth
> handling the allocation failure, it's also worth testing it. But it does
> increase the difficulty of testing considerably - either you've got to
> alter the source code, or you need a special malloc package, or you need
> to fiddle with a debugger.


Or you use a testing framework that can mock malloc and friends.
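
A minimal sketch of one way to do that, assuming the code under test can be
rebuilt with a macro seam redirecting malloc to a test version (the names
here are illustrative, not from any particular framework):

#include <stdlib.h>

/* When positive, counts down and makes that allocation fail. */
static int fail_countdown = -1;

void *test_malloc(size_t size)
{
    if (fail_countdown > 0 && --fail_countdown == 0)
        return NULL;            /* simulate allocation failure */
    return malloc(size);
}

/* When compiling the code under test (not this file), redirect the
   calls, e.g. with -Dmalloc=test_malloc on the test build. */

A test then sets fail_countdown to n before calling the function under
test, and checks that the failure of the nth allocation is handled
gracefully.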

> You can make a strong case that not going to
> the trouble is "adequate".


Not really: if the code isn't covered by a test, it shouldn't be there.
One of my teams used to pay the testers who did the product acceptance
testing in beer if they found bugs in our code. Most "bugs" turned out
to be ambiguities in the specification.

James Harris, 08-09-2013

"James Kuyper" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> On 08/08/2013 12:46 PM, James Harris wrote:
>> What do you guys use for testing C code? From web searches there seem to
>> be
>> common ways for some languages but for C there doesn't seem to be any
>> particular consensus.

>
> I'm surprised to hear that there's a consensus in any language. I'd
> expect any "one size fits all" approach to fail drastically. I realize
> that this is not what you were looking to discuss, but if you would,
> could you at least summarize what those "common ways for some
> languages" are?


I mentioned common ways for some languages. For Java the biggie seems to be
JUnit. For Python there's unittest, possibly because it is built in. Also,
Nose and Cucumber seem popular.

James


 
James Harris, 08-09-2013

"James Kuyper" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> On 08/08/2013 05:55 PM, Malcolm McLean wrote:
>> On Thursday, August 8, 2013 9:33:15 PM UTC+1, James Kuyper wrote:
>>> On 08/08/2013 02:49 PM, Malcolm McLean wrote:
>>>
>>> For my group, "adequate" testing has been officially defined as testing
>>> that each branch of the code gets exercised by at least one of the test
>>> cases, and that it has been confirmed that the expected test results
>>> have been achieved for each of those cases, and that the test plan has
>>> been designed to make sure that test cases corresponding to different
>>> branches have distinguishable expected results (I'm amazed at how often
>>> people writing test plans forget that last issue).
>>>

>> Your group might have officially defined the term "adequate testing". But
>> most of us don't work for your group, and are unlikely to adopt the
>> definition.

>
> I made no claim to the contrary. As I said, it's a definition "for my
> group", and I brought it up as an example of how "it all depends upon
> what you mean by those terms." The terms that James Harris used:
> "simple", "adequate", "comprehensive", and "complex", are all judgement
> calls - they will inherently be judged differently by different people
> in different contexts.


To me, "adequate" means something allowing the job to be done, albeit
without nice-to-have extras. In this context, "comprehensive" means complex
and all encompassing. I was thinking of a testing approach providing
comprehensive facilities, not of being able to carry out a comprehensive set
of tests. Anything that allows a comprehensive set of tests is, er,
adequate!

James


 
James Harris, 08-09-2013

"Les Cargill" <(E-Mail Removed)> wrote in message
news:ku0lti$2cm$(E-Mail Removed)...
> James Harris wrote:
>> What do you guys use for testing C code?

>
> 1) More 'C' code.
> 2) External script based drivers.
>
>> From web searches there seem to be
>> common ways for some languages but for C there doesn't seem to be any
>> particular consensus.
>>

>
> I don't find COTS test frameworks valuable.
>
>> I am looking for an approach in preference to a package or a framework.
>> Simple and adequate is better than comprehensive and complex.
>>
>> If I have to use a package (which is not cross-platform) this is to run
>> on
>> Linux. But as I say an approach would be best - something I can write
>> myself
>> without too much hassle. At the moment I am using ad-hoc code but there
>> has
>> to be a better way.
>>

>
> Why must there be a better way?


For a number of reasons. Ad-hoc code is bespoke, unfamiliar and irregular.
More structured approaches, on the other hand, are easier to understand,
modify and develop. Also, most activities which share a common thread can
be made more regular, and the common parts abstracted out to make further
similar work shorter and easier.

James


 
James Harris, 08-09-2013

"Jorgen Grahn" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> On Thu, 2013-08-08, James Harris wrote:
>> What do you guys use for testing C code? From web searches there seem to
>> be
>> common ways for some languages but for C there doesn't seem to be any
>> particular consensus.
>>
>> I am looking for an approach in preference to a package or a framework.
>> Simple and adequate is better than comprehensive and complex.

>
> Here's what I use.
>
> 1. A script which goes through the symbols in an object file or archive,
> finds the C++ functions named testSomething() and generates code to
> call all of these and print "OK" or "FAIL" as needed.
>
> 2. A C++ header file which defines the exception which causes the
> "FAIL". Also a bunch of template functions named assert_eq(a, b)
> etc for generating them.


When such an assert_eq discovers a mismatch, how informative is it about
the cause? I see it has no parameter for descriptive text.

That gives me an idea. I wonder if C's macros could be a boon here in that
they could be supplied with any needed parameters to generate good testing
code. Possibly something like the following

EXPECT(function(parms), return_type, expected_result, "text describing the test")
EXPECT(function(parms), return_type, expected_result, "text describing the test")

Then a whole series of such EXPECT calls could carry out the simpler types
of test. For any that fail, the EXPECT call could state what was expected,
what was received, and produce a relevant message identifying the test
which failed, such as

Expected 4, got 5: checking the number of zero elements in the array

where the text at the end comes from the last macro argument.

Of course, the test program could write the number of successes and failures
at the end.
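
As a rough sketch, such a macro might look like the following, with one
extra argument for a printf format, since C cannot deduce how to print the
value (C11's _Generic could hide that). This is illustrative only, and
count_zeros below is a hypothetical function under test:

#include <stdio.h>

static int tests_run, tests_failed;

/* Evaluate EXPR once, compare with EXPECTED using ==, and report a
   failure in the "Expected X, got Y: description" style. */
#define EXPECT(EXPR, TYPE, FMT, EXPECTED, DESC)                    \
    do {                                                           \
        TYPE result_ = (EXPR);                                     \
        tests_run++;                                               \
        if (result_ != (EXPECTED)) {                               \
            tests_failed++;                                        \
            printf("Expected " FMT ", got " FMT ": %s\n",          \
                   (TYPE)(EXPECTED), result_, DESC);               \
        }                                                          \
    } while (0)

A call such as

EXPECT(count_zeros(arr, 8), int, "%d", 4,
       "checking the number of zero elements in the array");

would print the example line above on failure, and main() can report
tests_run and tests_failed at the end.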


> 3. And then I write tests. One function is one test case. No support
> for setup/teardown functions and stuff; the user will have to take
> care of that if needed.
>
> Doesn't really force the user to learn C++ ... and I suppose something
> similar can be done in C. Longjmp instead of exceptions?


It's a good idea, but I'm not sure that longjmp would be possible without
modifying the code under test.
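
For what it's worth, the jump need not involve the code under test at all:
setjmp sits in the test runner and longjmp in the assertion macro, both of
which belong to the harness. A minimal sketch, with illustrative names:

#include <setjmp.h>
#include <stdio.h>
#include <string.h>

static jmp_buf test_fail_jmp;

/* Used inside test functions only; the code under test is untouched. */
#define TEST_ASSERT(cond)                                            \
    do {                                                             \
        if (!(cond)) {                                               \
            printf("FAIL: %s (%s:%d)\n", #cond, __FILE__, __LINE__); \
            longjmp(test_fail_jmp, 1);                               \
        }                                                            \
    } while (0)

static void test_strlen(void) { TEST_ASSERT(strlen("abc") == 3); }

int main(void)
{
    void (*tests[])(void) = { test_strlen };
    for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++)
        if (setjmp(test_fail_jmp) == 0) {
            tests[i]();
            puts("OK");
        }   /* after a longjmp we land here and move to the next test */
    return 0;
}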

James


 
James Harris, 08-09-2013

"Jorgen Grahn" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> On Thu, 2013-08-08, <william@wilbur.25thandClement.com> wrote:
>> James Harris <(E-Mail Removed)> wrote:
>>> What do you guys use for testing C code? From web searches there seem to
>>> be common ways for some languages but for C there doesn't seem to be any
>>> particular consensus.

> ...
>>> My main focus is on how best to do unit testing and regression testing
>>> so
>>> any comments on those would be appreciated.

>>
>> Shell scripting!
>>
>> It takes thoughtful API preparation, but that's sort of a bonus. And it's
>> exceedingly portable, future-proof, and has no library dependencies.
>>
>> Basically, each unit file has a small section at the bottom with a
>> preprocessor-guarded main() section. I then write code to parse
>> command-line
>> options which forward to the relevant functions and translate the output.

>
> So you have a test.c, and you can build and run two tests as e.g.
>
> ./test t4711 t9112
>
> right? Or perhaps you mix the functionality, the tests and the
> command-line parser in the same file.


It's an interesting approach to test whole programs. Some time ago I wrote a
tester that could be controlled by a grid where each row was a test and the
columns represented 1) the command, 2) what would be passed to the program
under test via its stdin, 3) what to look for coming out of the program on
its stdout and stderr, and 4) what return code to expect.

That approach did have some advantages. It could test command line programs
written in any language. The language did not matter as all it cared about
was their inputs and outputs. Further, because the tests could be seen and
edited in grid form (e.g. via a spreadsheet), it was very easy to see which
tests were present and which were missing. There was also a script interface
to the same tester.
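
As a rough illustration of how one row of such a grid can be driven, the
sketch below runs a command with POSIX popen, captures its stdout and
compares it with the expected text. It is a sketch only: stdin, stderr,
the return code and the grid reader itself are omitted.

#include <stdio.h>
#include <string.h>

/* Run one test row: execute CMD and compare its stdout with EXPECTED. */
static int run_row(const char *cmd, const char *expected)
{
    char output[4096];
    size_t n;
    FILE *p = popen(cmd, "r");

    if (p == NULL)
        return 1;
    n = fread(output, 1, sizeof output - 1, p);
    output[n] = '\0';
    pclose(p);

    if (strcmp(output, expected) != 0) {
        printf("FAIL %s: expected \"%s\", got \"%s\"\n",
               cmd, expected, output);
        return 1;
    }
    printf("OK %s\n", cmd);
    return 0;
}

int main(void)
{
    /* one row of the grid: command and expected stdout */
    return run_row("echo hello", "hello\n");
}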

However, that's got to be an unusual approach, hasn't it? I imagine it's more
normal to compile test code with the program being tested and interact with
it at a lower level. That would certainly be more flexible.

James


 
Ian Collins, 08-09-2013
James Harris wrote:
> "Jorgen Grahn" <(E-Mail Removed)> wrote in message
> news:(E-Mail Removed)...
>> On Thu, 2013-08-08, James Harris wrote:
>>> What do you guys use for testing C code? From web searches there seem to
>>> be
>>> common ways for some languages but for C there doesn't seem to be any
>>> particular consensus.
>>>
>>> I am looking for an approach in preference to a package or a framework.
>>> Simple and adequate is better than comprehensive and complex.

>>
>> Here's what I use.
>>
>> 1. A script which goes through the symbols in an object file or archive,
>> finds the C++ functions named testSomething() and generates code to
>> call all of these and print "OK" or "FAIL" as needed.
>>
>> 2. A C++ header file which defines the exception which causes the
>> "FAIL". Also a bunch of template functions named assert_eq(a, b)
>> etc for generating them.

>
> When such an assert_eq discovers a mismatch, how informative is it about the
> cause? I see it has no parameter for descriptive text.


If it's anything like common unit test frameworks, it would output
something like "failure in test whatever at line bla, got this, expected
that" which is all you really need.

> That gives me an idea. I wonder if C's macros could be a boon here in that
> they could be supplied with any needed parameters to generate good testing
> code. Possibly something like the following
>
> EXPECT(function(parms), return_type, expected_result, "text describing the test")
> EXPECT(function(parms), return_type, expected_result, "text describing the test")


There is a common tool called "expect" (http://expect.sourceforge.net)
which is frequently used for acceptance testing.

I use something similar with my unit test framework. For example, if I
want to test that write gets called with the expected file descriptor and
data size, but don't care about the data, I would write something like:

write::expect( 42, test::Ignore, size );

functionUnderTest();

CPPUNIT_ASSERT( write::called );

The harness maps mocked functions to objects, so they can have state and
perform generic actions such as checking parameter values.
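
In plain C, without such a harness, the same shape can be approximated
with a link seam: the test build links its own write() which records its
arguments for the test to inspect. A sketch only; overriding a libc
function this way is platform-dependent, and GNU ld's --wrap=write is a
more controlled variant:

#include <sys/types.h>

/* State recorded by the mock, inspected by the test afterwards. */
static int    write_called;
static int    write_fd;
static size_t write_size;

/* Test-build stand-in for write(), linked in place of the real one. */
ssize_t write(int fd, const void *buf, size_t count)
{
    (void)buf;                 /* the data is deliberately ignored */
    write_called = 1;
    write_fd = fd;
    write_size = count;
    return (ssize_t)count;     /* pretend the write succeeded */
}

The test then calls functionUnderTest() and asserts write_called,
write_fd == 42 and so on, much like the expectation above.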

> Then a whole series of such EXPECT calls could carry out the simpler types
> of test. For any that fail, the EXPECT call could state what was expected,
> what was received, and produce a relevant message identifying the test
> which failed, such as
>
> Expected 4, got 5: checking the number of zero elements in the array
>
> where the text at the end comes from the last macro argument.
>
> Of course, the test program could write the number of successes and failures
> at the end.


That's normal behaviour for a test harness.

>> 3. And then I write tests. One function is one test case. No support
>> for setup/teardown functions and stuff; the user will have to take
>> care of that if needed.
>>
>> Doesn't really force the user to learn C++ ... and I suppose something
>> similar can be done in C. Longjmp instead of exceptions?

>
> It's a good idea, but I'm not sure that longjmp would be possible without
> modifying the code under test.


It's way easier with exceptions, another reason for using a C++ harness.

Jorgen Grahn, 08-09-2013
On Fri, 2013-08-09, James Harris wrote:
>
> "Jorgen Grahn" <(E-Mail Removed)> wrote in message
> news:(E-Mail Removed)...
>> On Thu, 2013-08-08, <william@wilbur.25thandClement.com> wrote:
>>> James Harris <(E-Mail Removed)> wrote:
>>>> What do you guys use for testing C code? From web searches there seem to
>>>> be common ways for some languages but for C there doesn't seem to be any
>>>> particular consensus.

>> ...
>>>> My main focus is on how best to do unit testing and regression testing
>>>> so
>>>> any comments on those would be appreciated.
>>>
>>> Shell scripting!
>>>
>>> It takes thoughtful API preparation, but that's sort of a bonus. And it's
>>> exceedingly portable, future-proof, and has no library dependencies.
>>>
>>> Basically, each unit file has a small section at the bottom with a
>>> preprocessor-guarded main() section. I then write code to parse
>>> command-line
>>> options which forward to the relevant functions and translate the output.

>>
>> So you have a test.c, and you can build and run two tests as e.g.
>>
>> ./test t4711 t9112
>>
>> right? Or perhaps you mix the functionality, the tests and the
>> command-line parser in the same file.

>
> It's an interesting approach to test whole programs. Some time ago I wrote a
> tester that could be controlled by a grid where each row was a test and the
> columns represented 1) the command, 2) what would be passed to the program
> under test via its stdin, 3) what to look for coming out of the program on
> its stdout and stderr, and 4) what return code to expect.
>
> That approach did have some advantages. It could test command line programs
> written in any language. The language did not matter as all it cared about
> was their inputs and outputs. Further, because the tests could be seen and
> edited in grid form (e.g. via a spreadsheet), it was very easy to see which
> tests were present and which were missing. There was also a script interface
> to the same tester.


I see it as one of the benefits of designing your programs as
non-interactive command-line things, just like you say.

Although I don't think I would use a spreadsheet -- too much
duplication of test data between different test cases. This is one
case where I might use shell scripts to implement the tests.

> However, that's got to be an unusual approach, hasn't it? I imagine it's more
> normal to compile test code with the program being tested and interact with
> it at a lower level. That would certainly be more flexible.


It's two different things. The latter is unit test. The former is
testing the system itself[1], for a case where the system is unusually
well-suited to system test (no GUI to click around in and so on).

You might want to do both. Personally I enjoy testing the system as a
whole more: it's more obviously useful, and it helps me stay focused
on the externally visible behavior rather than internals.

At any rate, I don't trust a bunch of unit tests to show that the
system works as intended.

/Jorgen

[1] Or a subsystem, because maybe the whole system isn't just a
single command-line tool but several, or several connected by
shell scripts, or ...

Malcolm McLean, 08-09-2013
On Friday, August 9, 2013 7:08:33 AM UTC+1, Ian Collins wrote:
> Malcolm McLean wrote:
>
> > However a lot of code has branch points for memory allocation failures
> > which are extremely unlikely to happen. You could argue that if it's worth
> > handling the allocation failure, it's also worth testing it. But it does
> > increase the difficulty of testing considerably - either you've got to
> > alter the source code, or you need a special malloc package, or you need
> > to fiddle with a debugger.

>
> Or you use a testing framework that can mock malloc and friends.
>
> > You can make a strong case that not going to the trouble is "adequate".

>
> Not really, if the code isn't covered by a test, it shouldn't be there.
>

You can make that argument.
But you've got to balance the costs and difficulty of the test against
the benefits. I don't have such a testing framework. That's not to say
I might not use one if I could find a good one. But I don't have one at
the moment. I have written malloc wrappers which fail on 10% or so of
allocation requests, but have rarely used that technique. It's too
burdensome for the benefit, for the code that I write.
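
Such a wrapper is only a few lines; a sketch of the 10%-or-so variant,
using rand for brevity (seeding with srand keeps runs reproducible):

#include <stdlib.h>

/* malloc wrapper that fails roughly one request in ten. */
void *flaky_malloc(size_t size)
{
    if (rand() % 10 == 0)
        return NULL;
    return malloc(size);
}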
>
> One of my teams used to pay the testers who did the product acceptance
> testing in beer if they found bugs in our code. Most "bugs" turned out
> to be ambiguities in the specification.
>

Knuth offered a reward for each bug found in TeX, doubling every year
from $2.56; he eventually had to cap it at $327.68 rather than let the
exponential growth continue.
Beer is a better idea. But sometimes an ambiguous specification is better
than one written in strained language designed to avoid any possibility
of misinterpretation. If it's hard to read and to understand, it costs more.
Most programs and even functions need to do a job which can be expressed
in plain informal English, and the details don't matter. For example I
might want an embedded, low strength chess-playing algorithm. That can
be fulfilled by someone nabbing one from the web, with the right licence.
It's just adding unnecessary expense to be more specific.

 