Failing unittest Test cases

 
 
Scott David Daniels
 
      01-09-2006
There has been a bit of discussion about a way of providing test cases
in a test suite that _should_ work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that _should_ work but doesn't. The
discussion got around to means of indicating such tests (because the
effort of creating a test should be captured) without disturbing the
development flow.

The following code demonstrates a decorator that might be used to
aid this process. Any comments, additions, deletions?

from unittest import TestCase


class BrokenTest(TestCase.failureException):
    def __repr__(self):
        return '%s: %s: %s works now' % (
            (self.__class__.__name__,) + self.args)


def broken_test_XXX(reason, *exceptions):
    '''Indicates unsuccessful test cases that should succeed.
    If an exception kills the test, add exception type(s) in args'''
    def wrapper(test_method):
        def replacement(*args, **kwargs):
            try:
                test_method(*args, **kwargs)
            except exceptions + (TestCase.failureException,):
                pass
            else:
                raise BrokenTest(test_method.__name__, reason)
        replacement.__doc__ = test_method.__doc__
        replacement.__name__ = 'XXX_' + test_method.__name__
        replacement.todo = reason
        return replacement
    return wrapper


You'd use it like:

class MyTestCase(unittest.TestCase):
    def test_one(self): ...
    def test_two(self): ...

    @broken_test_XXX("The thrumble doesn't yet gsnort")
    def test_three(self): ...

    @broken_test_XXX("Using list as dictionary", TypeError)
    def test_four(self): ...

It would also point out when the test started succeeding.

--Scott David Daniels
 
 
 
 
 
Eric
 
      01-09-2006
On 9 January 2006, Scott David Daniels wrote:
> There has been a bit of discussion about a way of providing test cases
> in a test suite that _should_ work but don't. One of the rules has been
> the test suite should be runnable and silent at every checkin. Recently
> there was a checkin of a test that _should_ work but doesn't. The
> discussion got around to means of indicating such tests (because the
> effort of creating a test should be captured) without disturbing the
> development flow.
>
> The following code demonstrates a decorator that might be used to
> aid this process. Any comments, additions, deletions?


Interesting idea. I have been prepending 'f' to my test functions that
don't yet work, so they simply don't run at all. Then when I have time
to add new functionality, I grep for 'ftest' in the test suite.

- Eric
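
A minimal sketch of that renaming convention (the test class, method names,
and file paths here are made up for illustration):

import unittest

class WidgetTest(unittest.TestCase):
    def test_existing_behaviour(self):
        # Runs as usual: the name matches unittest's default 'test' prefix.
        self.assertEqual(4, 2 + 2)

    def ftest_planned_behaviour(self):
        # The leading 'f' keeps the default loader from collecting this
        # case, so it stays silent until it is renamed back to 'test_...'.
        self.fail('not implemented yet')

Something along the lines of grep -n ftest test_*.py then lists the
postponed cases, as Eric describes.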
 
 
 
 
 
Frank Niessink
 
      01-10-2006
Scott David Daniels wrote:
> There has been a bit of discussion about a way of providing test cases
> in a test suite that _should_ work but don't. One of the rules has been
> the test suite should be runnable and silent at every checkin. Recently
> there was a checkin of a test that _should_ work but doesn't. The
> discussion got around to means of indicating such tests (because the
> effort of creating a test should be captured) without disturbing the
> development flow.


There is just one situation that I can think of where I would use this,
and that is the case where some underlying library has a bug. I would
add a test that succeeds when the bug is present and fails when the bug
is not present, i.e. it is repaired. That way you get a notification
automatically when a new version of the library no longer contains the
bug, so you know you can remove your workarounds for that bug. However,
I've never used a decorator or anything special for that because I never
felt the need for it; a regular test case like this also works for me:

class SomeThirdPartyLibraryTest(unittest.TestCase):
    def testThirdPartyLibraryCannotComputeSquareOfZero(self):
        self.assertEqual(-1, tplibrary.square(0),
            'They finally fixed that bug in tplibrary.square')

Doesn't it defy the purpose of unit tests to give them an easy switch so
that programmers can turn them off whenever they want to?

Cheers, Frank
 
 
Paul Rubin
 
      01-10-2006
Scott David Daniels <(E-Mail Removed)> writes:
> Recently there was a checkin of a test that _should_ work but
> doesn't. The discussion got around to means of indicating such
> tests (because the effort of creating a test should be captured)
> without disturbing the development flow.


Do you mean "shouldn't work but does"? Anyway I don't understand
the question. What's wrong with using assertRaises if you want to
check that a test raises a particular exception?
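
(For reference, assertRaises covers the case where raising the exception is
the desired behaviour; a minimal sketch, with square as a made-up stand-in
function rather than anything from the thread:)

import unittest

def square(x):
    return x * x

class SquareTest(unittest.TestCase):
    def test_square_rejects_strings(self):
        # Passes only if calling square('three') raises TypeError.
        self.assertRaises(TypeError, square, 'three')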
 
 
Peter Otten
 
      01-10-2006
Scott David Daniels wrote:

> There has been a bit of discussion about a way of providing test cases
> in a test suite that should work but don't. One of the rules has been
> the test suite should be runnable and silent at every checkin. Recently
> there was a checkin of a test that should work but doesn't. The
> discussion got around to means of indicating such tests (because the
> effort of creating a test should be captured) without disturbing the
> development flow.
>
> The following code demonstrates a decorator that might be used to
> aid this process. Any comments, additions, deletions?


Marking a unittest as "should fail" in the test suite seems just wrong to
me, whatever the implementation details may be. If at all, I would apply a
"I know these tests to fail, don't bother me with the messages for now"
filter further down the chain, in the TestRunner maybe. Perhaps the code
for platform-specific failures could be generalized?

Peter
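
A rough sketch of what that runner-side filter might look like, assuming
the known failures are listed by test id; KNOWN_BROKEN, FilteringResult and
test_module are invented names, not existing unittest machinery:

import unittest

KNOWN_BROKEN = set([
    'test_module.MyTestCase.test_three',   # hypothetical entry
])

class FilteringResult(unittest.TestResult):
    '''Swallows failures from tests that are known to be broken.'''
    def addFailure(self, test, err):
        if test.id() in KNOWN_BROKEN:
            # Count it as a pass (or collect it for a separate report).
            self.addSuccess(test)
        else:
            unittest.TestResult.addFailure(self, test, err)

suite = unittest.defaultTestLoader.loadTestsFromName('test_module')
result = FilteringResult()
suite.run(result)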

 
 
Fredrik Lundh
 
      01-10-2006
Paul Rubin wrote:

> > Recently there was a checkin of a test that _should_ work but
> > doesn't. The discussion got around to means of indicating such
> > tests (because the effort of creating a test should be captured)
> > without disturbing the development flow.

>
> Do you mean "shouldn't work but does"?


no, he means exactly what he said: support for "expected failures"
makes it possible to add test cases for open bugs to the test suite,
without 1) new bugs getting lost in the noise, and 2) having to
rewrite the test once you've gotten around to fixing the bug.

> Anyway I don't understand the question.


it's a process thing. tests for confirmed bugs should live in the test
suite, not in the bug tracker. as scott wrote, "the effort of creating
a test should be captured".

(it's also one of those things where people who have used this in
real life find it hard to believe that others don't even want to
understand why it's a good thing; similar to indentation-based structure,
static typing, not treating characters as bytes, etc).

</F>



 
 
Duncan Booth
 
      01-10-2006
Scott David Daniels wrote:

> There has been a bit of discussion about a way of providing test cases
> in a test suite that _should_ work but don't. One of the rules has been
> the test suite should be runnable and silent at every checkin. Recently
> there was a checkin of a test that _should_ work but doesn't. The
> discussion got around to means of indicating such tests (because the
> effort of creating a test should be captured) without disturbing the
> development flow.


I like the concept. It would be useful when someone raises an issue which
can be tested for easily but for which the fix is non-trivial (or has side
effects), so the issue gets shelved. With this decorator you can add the
failing unit test, and then six months later, when an apparently unrelated
bug fix actually also fixes the original one, you get told 'The thrumble
doesn't yet gsnort (see issue 1234)' and know you should now go and update
that issue.

It also means you have scope in an open source project to accept an issue
and incorporate a failing unit test for it before there is an acceptable
patch. This shifts the act of accepting a bug from putting it onto some
nebulous list across to actually recognising in the code that there is a
problem. Having a record of the failing issues actually in the code would
also help to tie together bug fixes across different development branches.

Possible enhancements:

add another argument for the associated issue tracker id (I know you could
put it in the string, but a separate argument would encourage the programmer
to realise that every broken test should have an associated tracker entry),
although I suppose, since some unbroken tests will also have associated
issues, this might just be a separate decorator.

add some easyish way to generate a report of broken tests (a rough sketch of
both ideas follows).
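
A rough sketch of both suggestions, layered on top of Scott's decorator;
tracker_issue and broken_test_report are invented names, and the report
simply looks for the 'todo' attribute the decorator already sets:

def tracker_issue(issue_id):
    '''Associate a test (broken or otherwise) with an issue tracker entry.'''
    def wrapper(test_method):
        test_method.issue = issue_id
        return test_method
    return wrapper

def broken_test_report(*test_case_classes):
    '''Crude report: list methods marked by broken_test_XXX.'''
    for klass in test_case_classes:
        for name in dir(klass):
            method = getattr(klass, name)
            todo = getattr(method, 'todo', None)
            if todo is not None:
                issue = getattr(method, 'issue', 'no tracker id')
                print('%s.%s: %s (%s)' % (klass.__name__, name, todo, issue))

Used like:

class MyTestCase(unittest.TestCase):
    @tracker_issue(1234)
    @broken_test_XXX("The thrumble doesn't yet gsnort")
    def test_three(self): ...

broken_test_report(MyTestCase)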
 
 
Paul Rubin
 
      01-10-2006
"Fredrik Lundh" <(E-Mail Removed)> writes:
> no, he means exactly what he said: support for "expected failures"
> makes it possible to add test cases for open bugs to the test suite,
> without 1) new bugs getting lost in the noise, and 2) having to re-
> write the test once you've gotten around to fix the bug.


Oh I see, good idea. But in that case maybe the decorator shouldn't
be attached to the test like that. Rather, the test failures should
be filtered in the test runner as someone suggested, or the filtering
could even be integrated with the bug database somehow.
 
 
Duncan Booth
 
      01-10-2006
Peter Otten wrote:

> Marking a unittest as "should fail" in the test suite seems just wrong
> to me, whatever the implementation details may be. If at all, I would
> apply a "I know these tests to fail, don't bother me with the messages
> for now" filter further down the chain, in the TestRunner maybe.
> Perhaps the code for platform-specific failures could be generalized?


It isn't marking the test as "should fail", it is marking it as "should
pass, but currently doesn't", which is a very different thing.
 
 
Fredrik Lundh
 
      01-10-2006
Paul Rubin wrote:

> > no, he means exactly what he said: support for "expected failures"
> > makes it possible to add test cases for open bugs to the test suite,
> > without 1) new bugs getting lost in the noise, and 2) having to re-
> > write the test once you've gotten around to fix the bug.

>
> Oh I see, good idea. But in that case maybe the decorator shouldn't
> be attached to the test like that. Rather, the test failures should
> be filtered in the test runner as someone suggested, or the filtering
> could even integrated with the bug database somehow.


separate filter lists or connections between the bug database and the
code base introduce unnecessary couplings, and complicate things
for the developers (increased risk for checkin conflicts, mismatch
between the code in a developer's sandbox and the "official" bug status,
etc).

this is Python; annotations belong in the annotated code, not in some
external resource.

</F>



 
 
 
 