Complex testbench design strategy

 
 
KJ
11-17-2008
On Nov 16, 6:30 am, Eli Bendersky <(E-Mail Removed)> wrote:
> Hello,
>
> The testbench is for a complex multi-functional FPGA. This FPGA has
> several almost unrelated functions, which it makes sense to check
> separately. So I'm in doubt as to how to structure my testbench. As I
> see it, there are two options:
>
> 1) One big testbench for everything, and some way to sequence the
> tests of the various functions
> 2) Separate testbenches, one for each function
>
> Each approach has its pros and cons. Option (2) sounds the most
> logical, at least at first, but when I begin pursuing it, some
> problems emerge. For example, too much duplication between testbenches
> - in all of them I have to instantiate the large top-level entity of
> the FPGA, write the compilation scripts for all the files, and so on.


I would have a separate testbench for each of these individual
functions; however, they would not instantiate the top-level FPGA at
all. Each function presumably has its own 'top' level that implements
that function, and the testbench for that function should put that
entity through the wringer, making sure that it works properly.
Having a single instantiation of the top level of the entire design
to test some individual sub-function tends to dramatically slow down
simulation, which produces the following rather severe consequences:

- Given an arbitrary period of wall-clock time, less testing of a
particular function can be performed.
- Less testing implies less coverage of oddball conditions.
- When you have to change the function to fix a problem that didn't
show up until the higher-level testing was performed, regression
testing will also be hampered by the above two points when trying to
verify that fixing one problem didn't create another.

As a basic rule, a testbench should be testing new design content
that occurs roughly at that level, not design content that is buried
way down in the design. At the top level of the FPGA design, the only
new design content is the interconnect to the top-level functions and
the instantiation of all of the proper components.
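
As a rough illustration of such a per-function testbench (the entity,
port names, and widths below are made up, not taken from any real
design), the bench instantiates only the function under test:

library ieee;
use ieee.std_logic_1164.all;

entity tb_my_function is
end entity tb_my_function;

architecture sim of tb_my_function is
  signal clk  : std_logic := '0';
  signal rst  : std_logic := '1';
  signal din  : std_logic_vector(7 downto 0) := (others => '0');
  signal dout : std_logic_vector(7 downto 0);
begin
  -- free-running clock for the bench
  clk <= not clk after 5 ns;

  -- instantiate only the sub-function under test, not the FPGA top level
  dut : entity work.my_function
    port map (
      clk  => clk,
      rst  => rst,
      din  => din,
      dout => dout);

  stim : process
  begin
    wait for 20 ns;
    rst <= '0';
    -- drive din through the function's operating modes here and
    -- assert on dout against an expected-response model
    wait;
  end process stim;

end architecture sim;

Because only that one function is elaborated, such a bench typically
simulates far faster than dragging the entire FPGA along.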

Also, I wouldn't necessarily stop at the FPGA top level either. The
FPGA exists on a PCBA with interconnect to other parts (all of which
can be modelled), and the PCBA exists in some final system that
simply needs power and some user input/output (which again can be
modelled). All of this may seem to be somewhat of a side issue, but
it's not. Take, for example, the design of a DDR controller. That
controller should have its own testbench that rather extensively
tests all of the operational modes. At the top level of the FPGA,
though, you would simply instantiate it. Exhaustively testing again
at that level would be pretty much wasted time. A better test would
be to model the PCBA (which would instantiate a memory model from the
vendor) and walk a one across the address and data buses, since that
effectively verifies the interconnect, which, at the FPGA top level
and the PCBA level, is all the new design content that exists with
regard to the DDR controller.
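
A walking-ones pass can be as small as the sketch below; the signal
names, widths, and reset polarity are assumptions, and in a real
board-level bench these would be the nets tied to the FPGA pins and
to the vendor memory model:

-- walking-ones stimulus sketch (names and widths are assumptions)
walk_ones : process
begin
  wait until rst_n = '1';
  for i in addr'range loop
    addr    <= (others => '0');
    addr(i) <= '1';              -- walk a single '1' across the address bus
    wait until rising_edge(clk);
    -- a monitor on the far side of the interconnect checks that exactly
    -- bit i is high, which catches swapped, open, or shorted nets
  end loop;
  for i in data'range loop
    data    <= (others => '0');
    data(i) <= '1';              -- repeat across the data bus
    wait until rising_edge(clk);
  end loop;
  report "walking-ones pass complete" severity note;
  wait;
end process walk_ones;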

As another example, let's say your system processes images and
produces JPEG output, and you're writing all the code yourself. You
would probably want extensive low-level testing of the basic
sub-functions, like:
- 1-D DCT transform
- 2-D DCT transform
- Huffman encoding
- Quantizer

Another level of the design would tie these pieces together (along
with all of the other pieces necessary to make a full JPEG encoder)
and test that the whole thing compresses images properly. At that
level, you'd probably also be extensively varying the image input,
the Q tables, the Huffman tables, and the flow control in and out,
and running lots of images through to convince yourself that
everything is working properly. But you would be wasting time (and
wouldn't really be able) to vary all of the parameters of those
lower-level DCT functions, since they would likely be parameterized
for things like input/output data width and size; in the JPEG encoder
it doesn't matter if there is a bug that only affects an 11x11
element 2-D DCT, since you're using it in an 8x8 environment. The
lower-level testbench is the only thing that could uncover the bug in
the 11x11 case, but if you limit yourself to testbenches that operate
only at some higher level, you will be completely blind and think
that your DCT is just fine, until you try to use it with some
customer who wants an 11x11 DCT for whatever reason.
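
To make that concrete, here is a hypothetical declaration of such a
parameterized DCT (every name and width is invented): the JPEG-level
bench only ever sees the size generic fixed at 8, so only the DCT's
own testbench can sweep it and expose an 11x11-only bug.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- hypothetical parameterized 2-D DCT entity; the JPEG encoder always
-- instantiates it with BLOCK_SIZE => 8
entity dct_2d is
  generic (
    BLOCK_SIZE : positive := 8;
    DATA_WIDTH : positive := 12
  );
  port (
    clk       : in  std_logic;
    sample_in : in  signed(DATA_WIDTH - 1 downto 0);
    sample_en : in  std_logic;
    coeff_out : out signed(DATA_WIDTH + 3 downto 0);
    coeff_en  : out std_logic
  );
end entity dct_2d;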

Similarly, re-testing at the next higher level the same things that
you already vary at the 'image compression' level would be mostly
wasted time that could have been better spent somewhere else.
Integration often does produce conditions that were not considered at
the lower functional testbench level, but to catch those, what you
need is something that predicts the correct response given the
testbench input and asserts on any differences.

Testing needs to occur at various levels, not just one (even if that
one level is able to effectively disable all the functions other than
the one you are currently interested in). Ideally this testing occurs
at every level where significant new design content is being created.
At a level where 'insignificant' (but still important) new content is
created (like the FPGA top level, which simply instantiates and
interconnects), you can generally get more bang for the buck by going
up yet another level (to the PCBA).

Kevin Jennings
 
Eli Bendersky
11-17-2008
On Nov 17, 4:44 pm, KJ <(E-Mail Removed)> wrote:
> <snip>
> As a basic rule, a testbench should be testing new design content
> that occurs roughly at that level, not design content that is buried
> way down in the design.
> <snip>

While I agree that this approach is the most suitable, and your JPEG
example is right to the point, it must be stressed that there are some
problems with it.

First of all, the "details devil" is sometimes in the interface
between modules and not inside the modules themselves. Besides, you
can't always know where a module's boundary ends, especially if it's
buried deep in the design.

Second, the guys doing the verification are often not the ones who
wrote the synthesizable code, and hence have little idea of the
modules inside and their interfaces.

Third, the modules inside can change, or their interfaces can change
due to redesign or refactoring, so the testbench is likely to change
more often. At the top level, system requirements are usually more
static.

Eli



 
 
 
 
 
KJ
11-17-2008
On Nov 17, 11:45 am, Eli Bendersky <(E-Mail Removed)> wrote:
> First of all, the "details devil" is sometimes in the interface
> between modules and not inside the modules themselves.


I disagree; the only 'between modules' construct is the port map to
the modules. The only types of errors that can occur here are:
- Forgot to connect something
- Connected multiple outputs together
- Connected signal 'xyz' to port 'abc' when it should have been
connected to port 'def'

While all three of the above do occur, they are easily found and
fixed early on. Almost any testbench and/or synthesis result will
flag them.

If instead by the "details devil" between modules you mean connecting
incompatible interfaces that you thought were compatible (for
example, port 'abc' is an active-high output but it is used by some
other module as an active-low input), then I would suggest that
you're not viewing the interface connection logic in the proper
light. That connection logic is itself its own module and logic set.
While the logic-inversion example is of course trivial and likely not
a good candidate for its own entity/architecture, a more complicated
case requiring more logic (say, an 8-bit to 32-bit conversion) is a
further argument that if any interface conversion logic is required,
it should be seen as its own design and tested as such before
integration into the bigger design.

While I wouldn't bother with a testbench (or even a standalone
design) for an interface conversion that consisted of a simple logic
inversion, I would for something like a bus-size converter.
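
As a concrete sketch, a minimal 8-bit to 32-bit repack might look
like the following; the names, the valid handshake, and the byte
ordering are all assumptions, but the point is that this is a
standalone entity that can get its own testbench before integration:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity width_conv_8to32 is
  port (
    clk        : in  std_logic;
    rst        : in  std_logic;
    din        : in  std_logic_vector(7 downto 0);
    din_valid  : in  std_logic;
    dout       : out std_logic_vector(31 downto 0);
    dout_valid : out std_logic
  );
end entity width_conv_8to32;

architecture rtl of width_conv_8to32 is
  signal shreg : std_logic_vector(31 downto 0) := (others => '0');
  signal count : unsigned(1 downto 0)          := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      dout_valid <= '0';
      if rst = '1' then
        count <= (others => '0');
      elsif din_valid = '1' then
        -- collect four bytes, least significant byte first
        case to_integer(count) is
          when 0      => shreg(7 downto 0)   <= din;
          when 1      => shreg(15 downto 8)  <= din;
          when 2      => shreg(23 downto 16) <= din;
          when others => shreg(31 downto 24) <= din;
        end case;
        if count = 3 then
          count      <= (others => '0');
          dout_valid <= '1';   -- full 32-bit word available on dout
        else
          count <= count + 1;
        end if;
      end if;
    end if;
  end process;

  dout <= shreg;
end architecture rtl;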

> Besides, you
> can't always know where a module's boundary ends, especially if it's
> buried deep in the design.
>


Nor does it matter where a module boundary ends. Either a design
performs the required function given a particular stimulus or it does
not. When it doesn't, the most likely design change required is
something that changes the logic, but finding just which line of code
needs changing requires debug regardless of methodology. Except for
the port-mapping errors mentioned above, which are almost always
fixed very early on, any other change would be in 'some' module's RTL
code.

> Second, the guys doing the verification are often not the ones who
> wrote the synthesizable code, and hence have little idea of the
> modules inside and their interfaces.
>


The line between modules that are to be tested by some separate
verification guys and modules that are not is an arbitrary line that
you yourself define. Part of what goes into the decision about where
you draw the line between which interfaces get rigorous testing and
which do not is time (= money). The higher you go up before testing,
the less control you get in testing and verification.

Also, it is not unreasonable for the verification guys to be adding
non-synthesizable test/verification code right into the design itself.
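
For example (a sketch only; the signal names and the checked
condition are invented, and the exact pragma spelling varies by
synthesis tool), a monitor process can sit right next to the RTL it
watches:

-- synthesis translate_off
-- hypothetical in-design monitor: flags a protocol violation the
-- moment it happens, right next to the signals involved
monitor : process (clk)
begin
  if rising_edge(clk) then
    assert not (req = '1' and busy = '1')
      report "request issued while the interface was busy"
      severity error;
  end if;
end process monitor;
-- synthesis translate_on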

> Third, the modules inside can change, or their interfaces can change
> due to redesign or refactoring, so the testbench is likely to change
> more often.


The testbenches would be similarly refactored. Assuming a working
design/testbench pair before refactoring, there should also be a
working set of design/testbench pairs afterward. A designer may very
well create a small module that can be tested at some higher level
without creating undue risk to the overall design, but that should be
more the exception than the rule.

I understand the likely concern regarding the cost and effort of
maintaining and migrating testbenches along with the designs, but
moving all testing and verification up to the top level, just because
the interfaces up there don't change much, is a methodology that is
likely to severely limit the quality of the design verification. It
will also tend to inhibit your ability to adequately regression-test
design changes that address problems that have been found, to verify
that something else didn't break.

Those lower-level testbenches are somewhat analogous to incoming
receiving/inspection at a manufacturing plant. No manufacturer would
start building product without performing their own inspection and
verification of incoming parts, to see that they are within spec,
before building them into their product. Similarly, adequate
testbenches should exist for modules before they are integrated into
a top-level design. If those testbenches are not adequate, beef them
up. If they are, then there is not much point in going hog wild
testing the same things at a higher level.

Kevin Jennings
 
 
Mike Treseler
11-17-2008
KJ wrote:

> The line between modules that are to be tested by some separate
> verification guys and modules that are not is an arbitrary line that
> you yourself define.


Indeed. There are significant FPGA-based projects
where the total number of design and verification
engineers is one. Collecting and maintaining all module tests for
automated regression can be an efficient and effective strategy.

> Also, it is not unreasonable for the verification guys to be adding
> non-synthesizable test/verification code right into the design itself.


I agree. If source code is untouchable, there is a problem
with the version control process.

> Those lower-level testbenches are somewhat analogous to incoming
> receiving/inspection at a manufacturing plant. No manufacturer would
> start building product without performing their own inspection and
> verification of incoming parts, to see that they are within spec,
> before building them into their product. Similarly, adequate
> testbenches should exist for modules before they are integrated into
> a top-level design. If those testbenches are not adequate, beef them
> up. If they are, then there is not much point in going hog wild
> testing the same things at a higher level.


Good point.
I wouldn't use anyone's IP core without
a testbench, including my own.

-- Mike Treseler
 
 
Andy
11-18-2008
On Nov 17, 2:41 pm, Mike Treseler <(E-Mail Removed)> wrote:
> <snip>
> Good point.
> I wouldn't use anyone's IP core without
> a testbench, including my own.

I agree that it is a good thing to be able to test submodules
individually. However, in some designs the "system" can be greatly
simplified, and the same system-level interface (re)used to test the
individual modules, which can reduce the effort required to create
the module-level testbenches in the first place.

If the RTL includes generics that can "disable" individual modules,
and those generics are passed up to the top of the DUT (defaulted to
enable everything), then a testbench can instantiate the DUT with
generics that disable all but the module under test, yet re-use the
same system-level interface routines to do so, while avoiding the
wall-clock time penalty of simulating a far more complex system than
the one you are interested in.
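
A minimal sketch of that scheme, with hypothetical generic and entity
names chosen only to illustrate the idea:

library ieee;
use ieee.std_logic_1164.all;

entity fpga_top is
  generic (
    -- defaults enable everything for the real build and system-level sims
    EN_DDR_CTRL : boolean := true;
    EN_JPEG     : boolean := true;
    EN_UART     : boolean := true
  );
  port (
    clk : in std_logic;
    rst : in std_logic
    -- the real pin list would go here
  );
end entity fpga_top;

architecture rtl of fpga_top is
begin
  g_jpeg : if EN_JPEG generate
    -- jpeg_i : entity work.jpeg_encoder port map (clk => clk, rst => rst, ...);
  end generate g_jpeg;

  -- same pattern for the DDR controller, UART, and so on
end architecture rtl;

A JPEG-only testbench would then instantiate the same fpga_top with
generic map (EN_DDR_CTRL => false, EN_UART => false) and drive it
through the unchanged system-level interface routines, so most of the
system is never elaborated or simulated.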

Of course, every project is different, and in some the overhead of
bringing up the system-level interface is too great for this to be
practical.

Andy
 
 
Mike Treseler
11-18-2008
Andy wrote:

> I agree that it is a good thing to be able to test submodules
> individually. However, in some designs the "system" can be greatly
> simplified, and the same system-level interface (re)used to test the
> individual modules, which can reduce the effort required to create
> the module-level testbenches in the first place.


I wasn't going to rock the BFM boat, but I also use generics,
along with shell scripts, to make a regression.
I list command options at the top of the testbench
for interactive use, something like this:

-- force an error 0:
-- vsim -Gwire_g=stuck_hi -c test_uart -do "run -all; exit"
-- change template:
-- vsim -Gtemplate_g=s_rst -c test_uart -do "run -all; exit"
-- slow baud:
-- vsim -Gtb_tics_g=42 -c test_uart -do "run -all; exit"
-- verify strobe calibration:
-- vsim -Gtb_tics_g=42 test_uart
-- then "do uart.do"

... and later code up a script to play with coverage.
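
For reference, the -G overrides above line up with top-level generics
on the testbench entity; the declaration below is only a guess at how
that might look (the types and default values are assumptions, not
Mike's actual code):

-- how the "vsim -G..." overrides might map onto testbench generics
entity test_uart is
  generic (
    wire_g     : string   := "ok";     -- e.g. "stuck_hi" to force error 0
    template_g : string   := "a_rst";  -- e.g. "s_rst" to change template
    tb_tics_g  : positive := 4         -- e.g. 42 for the slow-baud case
  );
end entity test_uart;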

-- Mike Treseler
 