Velocity Reviews - Computer Hardware Reviews

Velocity Reviews > Newsgroups > Programming > VHDL > abstracting the client/server protocol

abstracting the client/server protocol

 
 
alb
 
      08-19-2013
Hi everyone,

I'm trying to implement a testbench with a client/server abstraction
[1], but I have a little problem implementing a serial protocol with bit
stuffing.

On my 'client' side I'd like to write/read to the serial link, but
/someone/ should take care of the bit stuffing. Considering that on my
client I may need to do some write/read ops interleaved with other
types of activities (say, on different interfaces), I presume the
bit-stuffing mechanism should be handled entirely on the server side.

The way I split the code is the following: a test case file
(testcase.vhd) which instantiates the harness and issues all the
commands necessary to cover the intended functions; a harness file
(harness.vhd) which contains the DUT instantiation and the server
instantiation (with signal mapping and so on) plus a clock process; and
a server file (server.vhd) with the translation between the 'virtual
interface' used in the test case file and the physical interface of the DUT.

In testcase.vhd I have one unique process which performs the necessary ops:

main: process
begin
  init;  -- set default state on all DUT input signals
  serial_set(..., ..., to_srv, fr_srv);
  serial_get(..., ..., to_srv, fr_srv);
  ...
end process;

I presume the server is the only place where I can perform the
necessary operations *with* bit stuffing, and that it should also take
care of bit stuffing when there's no transaction on the serial
interface (serial_set/serial_get).

Am I too far off track?
What about having a testcase where the bit stuffing is either wrong or
absent? Should I use a separate server for that?

Thanks for any suggestion/comment,

Al

[1] see 'Writing Testbenches: functional verification of HDL Models' -
2nd Edition, Janick Bergeron

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
 
 
Andy
 
      08-20-2013
This is the essence of transaction based modelling.

Simply being able to abstract the transaction data from the bit stuffing is very useful, especially if you want to layer constrained random transaction generation on top of that.

If the interface supports it, you can also layer message-level transactions on top of byte-level transactions. If you do this, then any arbitration between multiple clients needs to occur at a point where messages from different clients do not get inter-mingled.

The transaction data can include information about error injection (incorrectly stuffing the bits) as needed.

You can also use a single bidirectional record port with resolved element types (including your own resolved types, since this is not synthesizable) for communications in both directions with the server, simplifying the send/receive subprogram calls.

Andy
 
 
alb
 
      08-20-2013
Hi Andy,

On 20/08/2013 16:05, Andy wrote:
[]
> Simply being able to abstract the transaction data from the bit
> stuffing is very useful, especially if you want to layer constrained
> random transaction generation on top of that.


That is a personal goal, and rest assured I'll post something on the
topic as soon as I get there!

>
> If the interface supports it, you can also layer message-level
> transactions on top of byte-level transactions. If you do this, then
> any arbitration between multiple clients needs to occur at a point
> where messages from different clients do not get inter-mingled.


For the time being my model is extremely simple: one test case -> one
client -> one file. My test cases will build on the first one, which
aims to test the interface.

I've seen a paper [1] presenting a similar approach with cases specified
in different architectures of the same unit. I do not really see the
advantage of doing this, but I do not have a strong opinion either.

Having multiple clients all accessing the server may require an extra
layer to ensure message integrity. But I do not see the point of having
multiple clients interacting unless 'message integrity' is a feature of
the device under test that has to be verified.

> The transaction data can include information about error injection
> (incorrectly stuffing the bits) as needed.


This is certainly part of the verification plan, even though I have no
idea as of now how to implement it. At the moment I have abstract
write/read procedures that initiate the transaction from the client,
and a 'dispatcher' process on the server side that calls the
appropriate subprograms to interact with the DUT.

Bit stuffing, instead, should be something that occurs independently of
the client transmitting anything (I need to stuff a bit when no
transaction is occurring as well). This means a concurrent process
should monitor the 'wire' and inject a '1' after every N consecutive
'0's. While I could potentially disrupt the bit stuffing during a
transaction (with a special flag in the record port?), I still cannot
see how I could do this when no transaction is occurring...
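A rough sketch of the kind of concurrent process I have in mind (all
the names here -- serial_tx, next_bit, N_STUFF -- are hypothetical, and
the dispatcher is assumed to hold next_bit steady for one extra cycle
whenever a stuffed bit is injected):

<code>
-- Hypothetical stand-alone stuffing process: after N_STUFF consecutive
-- '0's on the line a '1' is injected, whether or not a transaction is
-- in progress. next_bit is whatever the dispatcher wants to send
-- (idle level or payload); serial_tx is the physical serial output.
stuffer : process (clk)
  variable zero_cnt : natural range 0 to N_STUFF := 0;
begin
  if rising_edge(clk) then
    if zero_cnt = N_STUFF then
      serial_tx <= '1';          -- inject the stuffed bit
      zero_cnt  := 0;
    else
      serial_tx <= next_bit;     -- forward the dispatcher's bit
      if next_bit = '0' then
        zero_cnt := zero_cnt + 1;
      else
        zero_cnt := 0;
      end if;
    end if;
  end if;
end process;
</code>

With this split, serial_set/serial_get would only ever deal with
un-stuffed bits, and an 'error' variant could simply suppress or
duplicate the injection.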

>
> You can also use a single bidirectional record port with resolved
> element types (including your own resolved types, since this is not
> synthesizable) for communications in both directions with the server,
> simplifying the send/receive subprogram calls.


Well, indeed I'm not fond of having two separate record ports to handle
the two directions. In the end the two records are quite similar, and
the main difference is simply the 'direction' (to or from the server).

Currently this is what I have:

<code>
type kind is (RD, WR);

type to_srv_ctrl is record        -- abstract register to server
  it : integer range 0 to 20;     -- interface type
  ad : integer range 0 to 65535;  -- address
  dt : integer range 0 to 65535;  -- data
  op : kind;                      -- read/write operation
  go : boolean;                   -- initiate operation
end record;

type fr_srv_ctrl is record        -- abstract register from server
  ad : integer range 0 to 65535;  -- address
  dt : integer;                   -- return data from server
  op : kind;                      -- read/write operation
  go : boolean;                   -- data ready
end record;
</code>

With the interface type 'it' I select among a heterogeneous set of
interfaces, and the 'go' flag is used to initiate the transaction. In
principle I should be able to merge most of the elements.

It is not too much overhead in the subprogram call interface (only two
extra signals), but it is annoying to have two structures that are so
similar.

While these record types are extremely generic and can suit several
types of transactions, they make it hard to add 'ad hoc features' to a
transaction that are 'it'-specific (like the bit-stuffing error
injection).
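One way out I'm considering (the err field and the err_kind type are
invented for illustration): keep the record generic but add a single
'error knob' that each interface handler interprets in its own way:

<code>
-- Hypothetical extension of the to-server record with an error knob.
type err_kind is (NONE, STUFF_MISSING, STUFF_EXTRA, BAD_CRC);

type to_srv_ctrl is record
  it  : integer range 0 to 20;     -- interface type
  ad  : integer range 0 to 65535;  -- address
  dt  : integer range 0 to 65535;  -- data
  op  : kind;                      -- read/write operation
  go  : boolean;                   -- initiate operation
  err : err_kind;                  -- error to inject (NONE = clean op)
end record;
</code>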

Any suggestion/comments?

Al

[1] Accelerating Verification Through
Pre-Use of System-Level, Transaction Based Testbench Components
 
 
Andy
 
      08-21-2013
Al,

The referenced paper is from Synthworks, which offers an EXCELLENT testbench & verification course. I have taken it, and I strongly recommend it. The course also covers the OSVVM library for constrained random stimulus generation and functional coverage models. Jim Lewis also provides a lot of useful packages and example code in addition to OSVVM for his students.

One thing to consider about test cases is whether or not there may be interaction between test case scenarios. Simultaneous testing of multiple test cases helps verify expected or expose unexpected interactions.

If you only have one or a few simulator licenses available, simultaneous testing is more efficient, since it often takes less time than two separate tests run sequentially. Unless you go to great lengths to configure empty architectures for unused RTL entities in each specific test, simulation time correlates with the number of clock cycles pretty well, no matter how many things the RTL model is doing in each of those clock cycles.

On the other hand, if you have access to a sim farm with lots of licenses, running separate test cases on separate machines in parallel is faster.

Depending on where your "test case code" is (at the top level architecture, or in a test controller entity at a lower level), using separate architectures for one common entity may or may not make that much difference.

Creating resolved types for integers and booleans is pretty simple: pick a value that is "off" (does not interfere with the results), and drive that value (e.g. 0 or false) when you want to use it as an input. The resolution function can assert a warning (or worse) if more than one driver value is non-off (contention should not occur), and then it can simply return any non-off value in the received array.
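In VHDL-2008 terms, such a resolved integer might look like this (a
sketch only; the package and names are invented):

<code>
package resolved_int_pkg is
  function resolve_int (s : integer_vector) return integer;
  subtype rint is resolve_int integer;  -- resolved integer, "off" = 0
end package resolved_int_pkg;

package body resolved_int_pkg is
  function resolve_int (s : integer_vector) return integer is
    variable result : integer := 0;     -- 0 is the "off" value
  begin
    for i in s'range loop
      if s(i) /= 0 then
        -- contention should not occur: warn if two drivers are non-off
        assert result = 0
          report "resolve_int: more than one non-off driver"
          severity warning;
        result := s(i);                 -- return any non-off value
      end if;
    end loop;
    return result;
  end function resolve_int;
end package body resolved_int_pkg;
</code>

Signals of subtype rint can then be driven from both the client and the
server, each side driving 0 while it is only "listening".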

If I understand your situation, the BFM that turns transactions into bits on its "physical" interface is cranking out some bits whether or not a "real transaction" is in progress. You could have a separate interface for an error injection transaction that simply tells the BFM to create specific errors while not generating bits related to real transactions. You can also add argument(s) to your real transaction procedures (defaulted to off) that cause the transaction to be executed with errors injected.
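For example, a transaction procedure could take a defaulted error
argument like this (the signature and the err_kind type are invented
for illustration):

<code>
-- Hypothetical client-side procedure: callers that don't care about
-- error injection simply omit the last argument.
procedure serial_set (
  constant addr   : in  integer;
  constant data   : in  integer;
  signal   to_srv : out to_srv_ctrl;
  signal   fr_srv : in  fr_srv_ctrl;
  constant err    : in  err_kind := NONE);  -- defaulted to "off"
</code>

A call like serial_set(16#10#, 16#FF#, to_srv, fr_srv, STUFF_MISSING)
would then run the same transaction with a stuffing error injected.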

Andy
 
 
alb
 
      08-26-2013
Hi Andy, sorry for the late reply, I was offline for a few days...

On 21/08/2013 16:24, Andy wrote:
[]
> The referenced paper is from Synthworks, which offers an EXCELLENT
> testbench & verification course. I have taken it, and I strongly
> recommend it.


I'm trying to build up a crew of people interested in the course here at
Cern, in order to get an on-site course, and maybe with a 'discount'
considering that we are part of an international organization.

>
> One thing to consider about test cases is whether or not there may be
> interaction between test case scenarios. Simultaneous testing of
> multiple test cases helps verify expected or expose unexpected
> interactions.


I see your point. As far as the serial protocol in the OP is concerned,
each transaction is atomic and there's no 'memory' in the system to
induce any interaction. OTOH there are other interfaces which are
'active' simultaneously and may have an impact when stimulated
concurrently.

If I need simultaneous testing then I would need to rework the test
case a little. Since I currently have one sequence of transactions in
one process, I may think about using two processes which run
concurrently and see what the interaction is.

>
> If you only have one or a few simulator licenses available,
> simultaneous testing is more efficient, since it often takes less
> time than two separate tests run sequentially. Unless you go to
> great lenghts to configure empty architectures for unused RTL
> entities in each specific test, simulation time correlates with the
> number of clock cycles pretty well, no matter how many things the RTL
> model is doing in each of those clock cycles.


I agree, and moreover I do not have a set of configurations to play
with, therefore I would need to manually remove/add components along
the way (too painful).

>
> On the other hand, if you have access to a sim farm with lots of
> licenses, running separate test cases on separate machines in
> parallel is faster.


At Cern we have several licenses and I'm planning to run simulations in
batch mode through our batch system. Unfortunately I haven't spent
enough time on this part yet, and so far I'm running simulations one
after the other on a single-license machine.

>
> Depending on where your "test case code" is (at the top level
> architecture, or in a test controller entity at a lower level), using
> separate architectures for one common entity may or may not make that
> much difference.


As of now my 'test case' is at the top level. The harness instantiates
most of the common parts (DUT, BFM, clock generation) and the 'test
case' instantiates the harness.

>
> Creating resolved types for integers and booleans is pretty simple:
> pick a value that is "off" (does not interfere with the results), and
> drive that value (e.g. 0 or false) when you want to use it as an
> input. The resolution function can assert a warning (or worse) if
> more than one driver value is non-off (contention should not occur),
> and then it can simply return any non-off value in the received
> array.


I guess I didn't get it. What is 'worse than a warning'? If I get an
error how can the simulation continue?

>
> If I understand your situation, the BFM that turns transactions into
> bits on its "physical" interface is cranking out some bits whether or
> not a "real transaction" is in progress.


that is correct, because of the 'bit-stuffing' property of the protocol.

> You could have a separate
> interface for an error injection transaction that simply tells the
> BFM to create specific errors while not generating bits related to
> real transactions.


Meaning the BFM should 'listen' continuously to the interface to check
whether errors need to occur or not. This will require a concurrent
interface to the BFM which adds errors on top of the one which creates
'good' transactions.

> You can also add argument(s) to your real
> transaction procedures (defaulted to off) that cause the transaction
> to be executed with errors injected.


That is indeed yet another type of error, which acts at the level of
the packet format (wrong CRC, ...). I guess with the combination of the
two I can expose many unwanted 'features' of the code. The main
question here would be: for how long should I simulate? I guess here
I'm stuck with building a coverage model (out of a non-existent
requirements document!).
 
 
Andy
 
      08-26-2013
Al,

WRT "warnings or worse", most simulators can:

1) print a message and keep going (e.g. note or warning)
2) stop (a breakpoint from which you can continue, e.g. error)
3) quit (cannot continue, e.g. failure)

based on the severity level (NOTE, WARNING, ERROR, etc) of the assert or report statement. Most simulators allow changing the specified behavior for each severity level as well, but that becomes a global behavior change for all assertions/reports of that severity level.

Also, assuming you have a package for your resolved types and their resolution functions, you can use a constant in that package for the severity used by the assertions in your resolution functions. While debugging your testbench you can set the constant to WARNING or ERROR, but later on, when running regression simulations "for the record", you can change that constant to FAILURE so that there is no possibility of continuing on from a resolution function problem in a for-the-record simulation run.
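A sketch of that constant (package and names invented; note that
changing it means recompiling the package):

<code>
package resolution_pkg is
  -- WARNING or ERROR while debugging the testbench;
  -- FAILURE for "for the record" regression runs.
  constant RESOLUTION_SEVERITY : severity_level := warning;
end package resolution_pkg;

-- then, inside a resolution function:
--   assert result = 0
--     report "more than one non-off driver"
--     severity RESOLUTION_SEVERITY;
</code>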

Andy
 