Velocity Reviews - Computer Hardware Reviews

Velocity Reviews > Newsgroups > Programming > VHDL > timing verification


timing verification

 
 
alb
 
      10-17-2013
Hi Andy,

On 16/10/2013 15:52, Andy wrote:
> I have used "wrappers" around internal components/entities that
> include VHDL assertions (or could use OVL, PSA, etc.) to verify
> certain aspects of the interface.
>
> I am planning on adding OSVVM coverage model(s) to the wrapper so
> that I can capture/monitor how well the interface is exercised from
> the system level interfaces. Wrappers could even be nested.
>
> You can also "modify" the interface via the wrapper for easier system
> verification. For example, your entity may have a data output with a
> corresponding valid output that signifies when the data output is
> valid and safe to use. The wrapper can drive 'X's on the data port
> when the valid port is not asserted, ensuring that the consumer does
> not use the data except when it is valid.


So that's the place where you can actually perform protocol checks! But then
how do you collect this information, or steer the testbench based on these
checks? Other than 'severity' levels to either break or continue the
simulation, is there a way to access this information from the TB?

AFAIK you cannot access signals through hierarchies of entities unless your
wrapper has out ports which provide a connection to the upper levels.
This would mean that the top entity wrapper would be populated with a
bunch of ports related to the internal interfaces' wrappers... Am I missing
something?

But I guess I got your point: since it is for verification purposes only,
you can have multiple drivers and enforce the requirement, forcing the
consumer to break 'early' in the verification process.
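A minimal sketch of the X-driving wrapper Andy describes might look like this (the entity name, ports, and widths are purely illustrative, not from any real design):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Illustrative wrapper architecture for a hypothetical entity
-- "producer" (ports: clk, data_out, valid_out). The real RTL is
-- re-instantiated inside; only the data port is overridden.
architecture wrapper of producer is
  signal rtl_data  : std_logic_vector(7 downto 0);
  signal rtl_valid : std_logic;
begin
  -- Instantiate the RTL architecture explicitly.
  u_rtl : entity work.producer(rtl)
    port map (clk => clk, data_out => rtl_data, valid_out => rtl_valid);

  valid_out <= rtl_valid;

  -- Drive 'X's whenever valid is deasserted, so a consumer that
  -- samples data outside the valid window fails fast in simulation.
  data_out <= rtl_data when rtl_valid = '1' else (others => 'X');
end architecture wrapper;
```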

>
> I have seen so many cases where the data consumer violated the
> protocol, but still worked only because the data producer kept the
> data valid for longer than indicated. One small change to the
> producer (maybe an optimization), and suddenly the consumer will quit
> working, even though the producer is still following the protocol.


Let alone when the producer is out of spec (say, a too-early
integration) and you need to tighten those requirements to make it
work. While the producer gets fixed according to the new requirements,
you can still proceed with the verification of the consumer.

>
> My wrappers take the form of an additional architecture for the
> same entity it wraps (the wrapper re-instantiates the
> component/entity within itself). These can be inserted either using
> configurations (if you use components), or by making sure that your
> simulation compile scripts compile the wrapper architecture AFTER the
> RTL architecture, if you use entity instantiations.


I use vmk to generate Makefiles for compilation. It resolves the
dependencies automatically by analyzing the code structure, and since the
wrapper 'depends' on the rtl architecture the condition is met.

I should say that I'd like to start using configurations more regularly,
but I've never had a compelling reason to yet.

> In the latter
> case, the system RTL should not specify an architecture for those
> entities, but the wrapper's instantiation of the entity (itself)
> should specify the RTL architecture. That way, if no wrapper
> architecture is compiled (or is compiled after the RTL architecture),
> no wrapper is used (as might be the case for other test cases or for
> synthesis.)
>
> I keep the wrapper architectures in separate files in the testbench
> source folder(s).


The only limitation I see is that you can only use this approach during
pre-synth simulation. For post-synth simulation I get a flat netlist
where the wrappers no longer exist. Do you need to change/adapt
your testbenches to accommodate this? Am I missing something else?
 
 
 
 
 
alb
 
      10-17-2013
Hi Hans,

On 16/10/2013 16:41, HT-Lab wrote:
[]
>> Ok, I just had a look at 'endpoints' in PSL, but how do you pass that
>> information to your running vhdl testbench???

>
> Very simple, as soon as you define an endpoint you get an extra boolean
> signal (endp below) in your design. Example:
>
> architecture rtl of test_tb is
>
> -- psl default clock is rising_edge(clk);
> -- psl sequence seq is {a;b};
> -- psl endpoint endp is {seq};
>
> begin
>   process
>   begin
>     if endp = TRUE then
>       -- change testbench behaviour
>     end if;
>   end process;
> end architecture rtl;


Ok, does it mean that 'a' and 'b' assertions are visible at the top
level even if they are defined in the inner interfaces and/or elements
of your logic?

> As soon as "a" is asserted followed by "b" in the next clock cycle the
> endpoint endp becomes true. You can treat endp like any other signal,
> drag in the waveform window etc.


Uhm, I might then have misunderstood what Jim Lewis referred to in the
link I provided earlier in this thread. As I understand it, there's
currently no way to access information gathered by PSL (unless, I
presume, it's on the same hierarchical level). Is this correct?
 
 
 
 
 
Andy
 
      10-17-2013
Al,

For interaction with the rest of the testbench from within the wrapper, you have a few different options that don't require adding signals to the DUT hierarchy and port maps.

You can use global signals or shared variables (including method calls on those shared variables) declared in a simulation package. These are often good for maintaining a global pass/fail status and/or error count.
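A sketch of what such a simulation package could look like (all names are made up), using a VHDL protected type so that concurrent updates from several wrappers stay consistent:

```vhdl
-- Hypothetical simulation-support package: a shared error counter
-- that wrappers increment and the testbench reads at end of test.
package sim_status_pkg is
  type status_t is protected
    procedure log_error;
    impure function error_count return natural;
  end protected status_t;
end package sim_status_pkg;

package body sim_status_pkg is
  type status_t is protected body
    variable errors : natural := 0;

    procedure log_error is
    begin
      errors := errors + 1;
    end procedure log_error;

    impure function error_count return natural is
    begin
      return errors;
    end function error_count;
  end protected body status_t;
end package body sim_status_pkg;

-- Elsewhere, a single shared variable visible to wrappers and the TB:
-- shared variable status : work.sim_status_pkg.status_t;
```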

In VHDL-2008, you can also access signals hierarchically without going through formal ports.
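In VHDL-2008 this is done with external names; a sketch (the hierarchical path and signal names are made up):

```vhdl
-- Hypothetical VHDL-2008 external name: read a signal buried in the
-- DUT hierarchy without routing it through ports. The absolute path
-- below is purely illustrative.
alias fifo_ovf is
  << signal .tb_top.u_dut.u_fifo.overflow : std_logic >>;

monitor : process
begin
  wait until fifo_ovf = '1';
  report "FIFO overflow seen inside the DUT" severity warning;
end process monitor;
```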

I have also simply used assert/report statements to put messages in the simulation log that can be post-processed.

I have steadily moved away from using configurations. The cost of maintaining them and the component declarations they rely upon is too high. The wrapper architectures are one way I have avoided using configurations. From the wrapper, you can also modify or set the RTL architecture's generics to alter behavior for some tests. The wrapper architecture can also access the 'instance_name or 'path_name attributes to find out "where it is" and alter its behavior based on its location. So there isn't much left for which you really need configurations.
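For example, overriding a generic from the wrapper could look like this (the entity, generic, and port names are illustrative):

```vhdl
-- Sketch: the wrapper instantiates the RTL architecture explicitly
-- and overrides a generic, e.g. shrinking a timeout so a test that
-- exercises the timeout path runs quickly. Names are made up.
u_rtl : entity work.uart_rx(rtl)
  generic map (
    TIMEOUT_CYCLES => 16  -- the RTL default might be much larger
  )
  port map (
    clk   => clk,
    rx    => rx,
    data  => data,
    valid => valid
  );
```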

I don't use vmkr, so I don't know how it might work with wrapper architectures. I used to use make to reduce compile time for incremental changes, but that does not seem to be as big an issue as it used to be. There is something to be said for a script that compiles the simulation (or the DUT) from scratch, the same way every time, regardless of what has or has not changed.

Wrapper architectures are not compatible with gate level simulations (at least not wrappers for entities within the DUT).

After synthesis optimizations, retiming, etc. specific internal interface features may not even exist at the gate level.

However, a wrapper can instantiate the gate level model in place of the RTL.

Wrappers can also create or instantiate a different model of an RTL entity for improving the simulation performance, or providing internal stimulus, etc.

Andy
 
 
alb
 
      10-18-2013
Hi Andy,

On 17/10/2013 19:45, Andy wrote:
> Al,
>
> For interaction with the rest of the testbench from within the
> wrapper, you have a few different options that don't require adding
> signals to the DUT hierarchy and port maps.
>
> You can use global signals or shared variables (including method
> calls on those shared variables) declared in a simulation package.
> These are often good for maintaining a global pass/fail status and/or
> error count.


My experience with global signals is quite bad in general; even when
writing software, every time I had to deal with a large number of global
variables I ended up regretting that choice...

I'm not familiar with protected types, but I guess that at least they
provide some sort of encapsulation with their methods and their private
data structures. In this case the wrappers might update coverage (write
access to the data structures) while the TB steers its course accordingly
(read access to the data structures). Keeping the data structures separate
for each interface (or wrapper) might facilitate the effort.

>
> In VHDL-2008, you can also hierarchically access signals without
> going through formal ports to get them.


Ok, this is something I did not know; I should keep reading about the
main differences between 2008 and previous versions of the standard.

> I have also simply used assert/report statements to put messages in
> the simulation log that can be post-processed.


Yep, that's something that already gives you more observability.

> I don't use vmkr, so I don't know how it might work with wrapper
> architectures.


FYI I'm using vmk, not vmkr. I tried to use the latter but I had
problems compiling it.

> I used to use make to reduce compile time for
> incremental changes, but that does not seem to be as big an issue as
> it used to be.


I agree, but I'm kind of used to incremental compilation and do not see
any pitfall in it; though it is possible that my understanding of the
process is somewhat limited.

> There is something to be said for a script that
> compiles the simulation (or the DUT) from scratch, the same way every
> time, regardless of what has or has not changed..


The compilation order has to be taken care of anyhow, and this is
something that so far the tools have asked the users to do (AFAIK). If I
have to insert a new entity in my code I simply run vmk first:

> vmk -o Makefile *.vhd


and then 'make'. The dependencies are found automatically and I do not
need to know where to put my new file in the list of files to be
compiled. Moreover, if I need to add a component that has several other
components in its hierarchy, the hassle grows if everything must be
handled manually, but not if there's a tool handy.

What is the benefit of running the simulation from scratch the same way
every time?

> Wrapper architectures are not compatible with gate level simulations
> (at least not wrappers for entities within the DUT).
>
> After synthesis optimizations, retiming, etc. specific internal
> interface features may not even exist at the gate level.
>
> However, a wrapper can instantiate the gate level model in place of
> the RTL..


Uhm, that's interesting indeed. Meaning that for the integration of
several IPs you may still use wrappers and benefit from their
advantages. The simulation would still be a functional one, but some of
the elements might be gate level models.

You could in principle have behavioral models (not even synthesizable
ones), just to proceed with the functional verification and get the
simulation framework in place before the rtl model is ready. Using the
rtl libraries instead of the behavioral ones would be sufficient to switch.

> Wrappers can also create or instantiate a different model of an RTL
> entity for improving the simulation performance, or providing
> internal stimulus, etc.


I guess at this point I have no more excuses for not using wrappers!
 
 
HT-Lab
 
      10-18-2013
Hi Alb,

On 17/10/2013 15:30, alb wrote:
> Hi Hans,
>
> On 16/10/2013 16:41, HT-Lab wrote:
> []
>>> Ok, I just had a look at 'endpoints' in PSL, but how do you pass that
>>> information to your running vhdl testbench???

>>
>> Very simple, as soon as you define an endpoint you get an extra boolean
>> signal (endp below) in your design. Example:
>>
>> architecture rtl of test_tb is
>>
>> -- psl default clock is rising_edge(clk);
>> -- psl sequence seq is {a;b};
>> -- psl endpoint endp is {seq};
>>
>> begin
>> process
>> begin
>> if endp=TRUE then
>> -- change testbench behaviour
>> end if;
>> end process;
>> end architecture rtl;

>
> Ok, does it mean that 'a' and 'b' assertions are visible at the top
> level even if they are defined in the inner interfaces and/or elements
> of your logic?


Sorry, I should have mentioned that the above was just a bit of pseudo
code; a and b are std_logic signals. Also (before anybody corrects me)
the curly braces on {seq} are redundant, as seq is already defined as a
sequence.

>
>> As soon as "a" is asserted followed by "b" in the next clock cycle the
>> endpoint endp becomes true. You can treat endp like any other signal,
>> drag in the waveform window etc.

>
> Uhm I might then have misunderstood what Jim Lewis referred to in the
> link I provided earlier in this thread. As I understand it, there's no
> way currently to access information gathered by PSL (I presume unless
> it's on the same hierarchical level), is this correct?


You are correct that endpoints are only valid at the current level.
However, as an endpoint is a signal, you should be able to reference it
at a different level using VHDL-2008 hierarchical references or a
vendor's own solution like SignalSpy in Modelsim.
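For the vendor-specific route, ModelSim's SignalSpy can mirror a buried signal onto a testbench-local one; a sketch (the entity name and hierarchical paths are made up):

```vhdl
library modelsim_lib;
use modelsim_lib.util.all;

-- Sketch: mirror a PSL endpoint (or any signal) from deep in the
-- hierarchy onto a testbench-local signal, so the TB can react to it.
architecture spy_example of tb is
  signal endp_local : boolean;  -- local mirror of the buried endpoint
begin
  spy_proc : process
  begin
    -- init_signal_spy(source_path, destination_path, verbose)
    init_signal_spy("/tb/u_dut/u_if/endp", "/tb/endp_local", 1);
    wait;  -- one-shot: the mirroring persists for the whole run
  end process spy_proc;
end architecture spy_example;
```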

Regards,
Hans.
www.ht-lab.com

>


 
 
Andy
 
      10-18-2013
Al,

For simulation or synthesis, you want to ensure your tool is running from only your files (from the repository), not from some version already sitting in a library inside the tool somewhere.

Especially for synthesis, I have seen compilation order (or skipping re-compilation of previously compiled unchanged files) affect optimizations. Now I always synthesize from scratch ("Run->Resynthesize All" or "project -run synthesis -clean").

Andy
 
 
Mike Treseler
 
      10-20-2013
On Monday, October 14, 2013 1:54:43 PM UTC-7, alb wrote:
> I'm at the point where I might need to verify timing relationships
> between two or more signals/variables in my code.
>
> I've read PSL assertions can serve the need, but what if I need this
> task to be done by yesterday and I do not know PSL at all? Any
> suggestion on investing some of my time in learning PSL?


Timing relationships are best covered by static timing analysis.

-- Mike Treseler
 
 
alb
 
      10-21-2013
Hi Mike,

On 20/10/2013 07:49, Mike Treseler wrote:
>> I'm at the point where I might need to verify timing relationships
>> between two or more signals/variables in my code.

[]
> Timing relationships are best covered by static timing analysis.


How would you use static timing analysis to verify that a read operation
has happened 'n clocks' after a write operation? Or that a sequence of
events has happened?

I've always used static timing analysis to verify that propagation
delays were smaller than my clock rate (corrected for setup/hold time),
but nothing more than that.

 
 
Mike Treseler
 
      10-28-2013
On Monday, October 21, 2013 7:57:21 AM UTC-7, alb wrote:

> How would you use static timing analysis to verify that a read operation
> has happened 'n clocks' after a write operation? Or that a sequence of
> events has happened?


Static timing covers setup and hold timing relationships for all registers
in the programmable device and external registers driving or
reading the programmable device ports.

'n clocks' type verifications for a synchronous design that meets Fmax static timing can be covered by a synchronous testbench that meets the setup and hold pin requirements of the programmable device design.

> I've always used static timing analysis to verify that propagation
> delays were smaller than my clock rate (corrected for setup/hold time),
> but nothing more than that.


Yes, device Fmax is the basic constraint and the easiest to use,
but IO constraints are most critical for the testbench and on the real bench.

-- Mike Treseler

 
 
alb
 
      10-28-2013
Hi Mike,

On 28/10/2013 20:36, Mike Treseler wrote:
[]
>> How would you use static timing analysis to verify that a read
>> operation has happened 'n clocks' after a write operation? Or that
>> a sequence of events has happened?

>
> Static timing covers setup and hold timing relationships for all
> registers in the programmable device and external registers driving
> or reading the programmable device ports.
>
> 'n clocks' type verifications for a synchronous design that meets
> Fmax static timing, can be covered by a synchronous testbench that
> meets the setup and hold pin requirements of the programmable device
> design.


This is what I typically do myself as well, but that wouldn't be
possible if instead of 'pin requirements' we were talking about the
'interface requirements' of an internal interface of your design.

I guess the suggestion of using assertions through 'wrappers' may solve
the issue.

>> I've always used static timing analysis to verify that propagation
>> delays were smaller than my clock rate (corrected for setup/hold
>> time), but nothing more than that.

>
> Yes, device Fmax is the basic constraint and the easiest to use, but
> IO constraints are most critical for the testbench and on the real
> bench.


I often deal with asynchronous interfaces, therefore I typically
synchronize them before use and do not constrain them at all. Actually,
is there any good guide on how to constrain I/O?
 
 
 
 