structured VHDL

 
 
alb
07-12-2013
On 12/07/2013 04:53, KJ wrote:
[]
>>> - Ian states the obvious...that there is combinatorial logic and
>>> registers

>>
>>
>> true. But the separation might be at the gate level or higher, and
>> this is where Ian's style makes a difference

>
> What difference do you think it makes? The 'style' it is written in
> will make no difference. Synthesis will take the description and
> turn it into logic that is implemented in 'gates' (or connections to
> LUTs) as well as flip flops (or memory). The tools really do not need
> any help from the user in figuring out which is which.


Certainly the tool is smart enough to figure that out, but writing code
at a higher level does pay off. A higher level of abstraction serves to
describe the problem more clearly (I'm thinking of 'literate
programming') for other people to read and maintain.

The methods reported by the authors of ET4351, as well as Mike's and
Ian's examples, all have one intent IMO: reduce the number of concurrent
statements which need to interact. This intent is beneficial since it
fosters separation between blocks, reducing the number of signals my
head has to keep track of when I try to write/read/debug some code.

>> If you think about 'structured programming' as in a paradigm where
>> every computable function can be expressed as a combination of only
>> three control structures, then Ian's proposal pretty much follows the
>> same line. Whether it is overplayed or not I do not know.
>>

>
> That does not imply that the 'best' way to describe the code is in
> that form.


Agreed; in the end 'best' is what fits the needs case by case, and here
is where the designer's experience matters.

> The soft metric for 'best' here should be maintainability. On the
> assumption that one can write code that meets the function and
> performance requirements in many different ways, then one looks
> towards source code maintainability which would be the ability for
> somebody (not necessarily the original designer) to get up to speed
> on the design. Clumping all of the unrelated code physically
> together because it describes the transfer function is rather
> pointless. Clumping parts of code together that are inter-related
> makes sense.


I cannot agree more, on both maintainability and clumping. But having
the 'best' subdivision between functional blocks in a design is another
aspect, one that sits on top of the style you use. If your design
doesn't have any partitioning it can easily turn into a mess; if it is
fragmented into micro-blocks it does not make sense either.

Grouping signals that belong together into records is certainly
beneficial, and it still does not prevent the use of the proposed
approach.

>
>>
>>> - The paper takes non-issues and presents 'solutions' and ignores
>>> actual issues as well as the new issues that get created by the
>>> proposed 'solution'

>>
>>
>> Could you be more specific on what kind of 'non-issues' and 'actual
>> issues' you are referring to? Since I consider the talk rather to
>> the point I may have missed/misunderstood an essential part of it.
>>
>>

>
> I'll just nitpick on a few points in no particular priority, just the
> order that they are in the presentation.


I'll nitpick along...

> - Other than the last bullet about auto-generated code, nothing on
> the slide titled 'Traditional VHDL design methodology' is worth the
> time it took to type. It's all wrong. Maybe some people have done what
> he says here, but that doesn't make it 'Traditional'.


I am certainly not in the position to say that I've seen *a lot* of code
in my life, but most of the code I have seen was pretty close to the
description they provided (otherwise I wouldn't have needed to start
this thread in the first place). Does this qualify it as 'traditional'?
Maybe not, but neither does it qualify it as wrong.

[]
> - The next slide 'Traditional ad-hoc design style' is similarly
> biased and worthless. Taking the statement 'No unified signal naming
> convention' as one example. The upcoming 'unified' naming convention
> will not add anything useful (more later).


I find their proposed 'unified' naming convention meaningless, but I
must say that too often I've seen stunning negligence in the choice of
names. IMO names should convey purpose/function rather than type
and/or direction.

I remember we once had two sets of tx/rx cables going in and out of a
series of avionics boxes, and people were convinced that labeling
tx_out/rx_in on one end and tx_in/rx_out on the other would be
sufficient... we spent a few hours before finding the right
combination (ouch!).

> The statement 'Coding is done at low RTL level' is laughable. Again,
> some may code this way, but let's not elevate those people to be
> considered the traditionalists.


It is laughable because 'level' is already part of the RTL acronym.

My first supervisor used to design logic with a schematic entry tool,
with gates and flops. I was asked to learn VHDL though, so I learned
the syntax (or at least part of it), but the mindset when designing was
still gates and flops. Most of the people I know are still not far from
there... I am slowly departing from that paradigm.

> - Slide 'Unified signal naming example'. Only the convention for
> 'types' has any value, and that is because the type name is
> frequently used throughout the code and it can sometimes be of value
> to easily tell that xyz is a data type.


Indeed I ignored this slide entirely and only retained the suggestion
to name a type as such (which is what I normally do in C). Since these
guys are strongly linked to ESA, with a lot of bureaucratic nonsense
written down on piles of documents, I guess this is some sort of
remnant of their complex yet fruitful partnership.

[]
> - Slide 'The abstracted view in VHDL : the two-process scheme' and its successors describing the
> method do not justify how productivity would increase with any of the schemes. Examples:
> * The collection of signals and/or ports into records groups together things that are logically
> completely unrelated other than by the fact that they are pieces of an entity/architecture.


Couldn't agree more.

> As an example, consider an entity that receives input data, processes it, and then outputs
> the data. Further the input data interface has some implementation protocol, the output
> has a different protocol, both protocols imposed by external forces (i.e. you can't change
> them, maybe they are external device I/O). The natural collection from my perspective is
> input interface signals, output interface signals and processing algorithm signals. The
> input and output interfaces likely have nothing at all to do with each other...so why should
> they be collected into a record together as if they are?


Nobody prevents you from combining the input/output/processing signals
into three separate records; you would still benefit from the fact that
adding a signal to a record definition does not require propagating the
change throughout all the component/instantiation/entity definitions.
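
Something like this is what I have in mind (a minimal sketch; all the
names are made up):

library ieee;
use ieee.std_logic_1164.all;

package if_types is
   type rx_if_t is record          -- input protocol bundle
      data  : std_logic_vector(7 downto 0);
      valid : std_logic;
   end record;
   type tx_if_t is record          -- output protocol bundle
      data   : std_logic_vector(15 downto 0);
      strobe : std_logic;
   end record;
end package if_types;

library ieee;
use ieee.std_logic_1164.all;
use work.if_types.all;

entity processor is
   port (clk : in  std_logic;
         rx  : in  rx_if_t;
         tx  : out tx_if_t);
end entity processor;

-- adding a field to rx_if_t later requires no edits to this port list,
-- nor to any component declaration or instantiation of 'processor'.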

> Think about it. The input
> interface likely has some state machine, the output interface has another, the processing
> algorithm possibly a third. Do you think those three state machines should be all lumped
> together? No? Then what benefit is it to lump them into a record together? (Hint: None)


But you could still have three 'two-process' (or one-process) blocks
exchanging the necessary information via signals. The interface
separation you have at the entity level (with three different records)
is kept throughout the architecture. I understand that what I'm saying
*is not* what they are saying, but I can equally ask why the three
functions should sit in the same entity. After all, if they are only
passing data through a well-defined interface, they could equally sit
in separate entities and keep one record per interface.

> - Slide 'Benefits'...every single supposed benefit except for 'Sequential coding is well known and
> understood' is wrong. How is the 'algorithm' easily extractable as stated? Take the example
> I mentioned earlier. There are at least three 'algorithms' going on: input protocol, output
> protocol and processing algorithm...and yet the proposed method will lump these together into
> one supposed 'algorithm'. What is stated by the author as an 'algorithm' is not really an
> algorithm, all it is is the combinatorial logic...


That is certainly one of the reasons why I would either break your
example into smaller entities, or have three processes running
concurrently and passing data to each other.

> - The other supposed benefits are merely unsubstantiated beliefs...I'll leave it to you to show
> in concrete terms how any of them is generally true. Be specific.


'Uniform coding style simplifies maintenance': the fact that each
entity in your code follows a pattern (see Mike's template) lets you
read and maintain only the part where stuff happens, relying on the
template to do its job together with the synthesis tool.

Knowing 'a priori' that somebody else's code follows a logical
separation like the one proposed does shorten the time I need to spend
to understand how the pieces are glued together.

In your example I assume that data received from the input is 'passed'
to the algorithm, which eventually 'passes' it to the output. The three
functions are concurrent and need some sort of 'handshake' to pass the
data around, maybe a couple of FIFOs monitored for 'full/empty'
conditions... how many things should I know about this code while I
read it? Where should this description go?

With Mike's template something similar can be quite self-explanatory:

procedure update_regs is
begin
   receive_data;
   process_data;
   transmit_data;  -- too bad receive/transmit have different word lengths!
end procedure update_regs;

A simple glance at this part already tells me a lot about what the code
is doing, without the need to read through often out-of-date comments
scattered around.
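
For reference, here is roughly the shape the surrounding process takes
in Mike's template, reconstructed from memory on a made-up 'echo'
entity (so treat it as a sketch, not his actual code):

library ieee;
use ieee.std_logic_1164.all;

entity echo is
   port (clk, rst : in  std_logic;
         rx_bit   : in  std_logic;
         tx_bit   : out std_logic);
end entity echo;

architecture rtl of echo is
begin
   main : process (clk, rst) is
      variable reg : std_logic;      -- all state lives in variables
      procedure init_regs is
      begin
         reg := '0';
      end procedure init_regs;
      procedure update_regs is
      begin
         reg := rx_bit;              -- receive/process/transmit calls go here
      end procedure update_regs;
      procedure update_ports is
      begin
         tx_bit <= reg;              -- copy variables to output ports
      end procedure update_ports;
   begin
      if rst = '1' then
         init_regs;
      elsif rising_edge(clk) then
         update_regs;
      end if;
      update_ports;
   end process main;
end architecture rtl;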

> - Adding a port: While I do agree that the method does make it easier to add and subtract I/O,
> I'll counter with if you'd put more thought into using a consistent interface protocol in
> the first place (example: Avalon or Wishbone) you wouldn't find yourself adding and subtracting
> ports in the first place because you'd get it right (or very nearly right) the first time. So
> this becomes a benefit of small value over a superior method that mostly avoids the issue.


I do wish more of my colleagues understood the benefits of a standard
interface, but unfortunately I've seen many wheels reinvented from
scratch (and only a few of them spinning as they should!).

I would only add that adding/removing I/O on an entity is a matter of
a couple of keystrokes with emacs, so I never found this to be a real
problem.

[]
> - Slide 'Stepping through code during debugging'. This is generally the last thing one needs to
> do unless you make heavy use of variables in which case you're stuck and forced to single step.


AFAIK if variables are used in processes then you can still use 'add
wave' with the process label and * for everything in the process, but
if they are used in subprograms then you need to single-step, since
they are popped off the stack when the procedure exits.
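
Something along these lines in the ModelSim/Questa console (syntax from
memory; the 'main' label and the path are hypothetical):

# add every object in the labelled process, including its variables
add wave /tb/dut/main/*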

I try to design my logic such that it is observable and controllable
(which I normally lose the moment I 'transfer' the firmware onto some
piece of silicon). I try to keep functional blocks as separate as
possible, with clear interfaces (often going through entity boundaries)
that I end up tracing.

[]
> - Slide 'Increasing the abstraction level' is a misnomer. The method described does not increase
> abstraction, it simply collects unrelated signals into a tidy (but more difficult to use) bucket.


How would you increase the level of abstraction instead?
Any pointer to available code would be really useful.

[]
>>> The first two are laughably wrong, Mike is all about synthesizable
>>> code and the supposed 'wrong gate-level structure' is claptrap. The
>>> last point again relates back to the skill of the designer. However,
>>> assuming equally skilled designers, the supposed 'structured'
>>> approach given in the article is even more likely to have 'Problems
>>> to understand the algorithm for less skilled engineers'.

>>
>>
>> I may have completely misunderstood the talk, but I do not read the
>> 'increasing the abstraction level' slide as a critique of Mike's style
>> w.r.t. their style.
>>

>
> It sounds better to say 'higher abstraction level' than 'collecting
> unrelated signals', doesn't it?


I must say that it does sound better! Jokes apart, I believe that style
does play a big role in coding and any attempt to propose one is worth
the effort. If, as you say, it does not help in increasing the level of
abstraction, it does help to keep the 'bucket tidy' instead of messy.

[]
>> The first bullet qualifies the two-process approach as uniform, and I
>> believe rightly so, since all entities look the same and a designer only
>> needs to look at the combinatorial process to understand the algorithm.
>>

>
> Only if you're implementing embarrassingly simple entities I suppose. For anything
> real, you're making things worse (refer to my simple example stated earlier of a
> generic processing module).


Mike's example is not so complex, but not so 'embarrassingly simple'
either, I suppose.

>
>> Looking at their LEON vs ERC32 example, it seems the method claims fewer
>> resources than an ad-hoc one, therefore improving development time.
>>

>
> The most likely reason is more skilled designers rather than style...prove me wrong.


I certainly cannot prove you wrong; I can only say that both are 32-bit
RISC processors based on the SPARC architecture. Temic (now Atmel)
developed the ERC32 and ESTEC (part of ESA) the first versions of the
LEON. A research center versus an established microcontroller
manufacturer... I guess both had quite skilled designers (but this is
no more than a guess).

[]
>> I certainly believe that readable code is easier to maintain, but I
>> doubt that this approach increases the re-usability of the code; at
>> least, I consider this last point no more than the author's personal
>> opinion.
>>

>
> A designer's goal is to implement a specific function and meet a specific level of
> performance. A method that makes traceability of the function specification to the
> source code easier is 'good'. The separation of that description into a collection of
> all combinatorial logic needed to implement the function and a separate clocked process
> does absolutely nothing to define traceability back to the specification. The specification
> will most likely have no mention of combinatorial or storage, that is an implementation
> decision. Therefore the proposed separation does not aid traceability and in fact makes
> it harder to follow.


The separation is still left to the designer to make, by sectioning the
code into structural elements (entity, function, procedure, package),
providing the traceability you referred to.

If your entity is cluttered with small processes which all share
signals, the traceability issue is not solved either.

[]
>> Here I need to urge you to go through the presentation again since I
>> believe, with all due respect, you missed the point. IMO they are *not*
>> proposing their two-processes approach vs Mike's one process approach.
>> They actually use Mike's example to show how increasing the level of
>> abstraction is a good thing.
>>

>
> Nope. Calling it a 'higher level abstraction' doesn't make it so. You've
> simply collected together unrelated signals into various records. Records are
> a good thing; collecting unrelated things into a record...not such a good thing.


Would you qualify Mike's template as a hardware description at a higher
level of abstraction? If yes, then I do not see the *big* difference
w.r.t. Ian's example or the presentation examples.

If not, then I should ask you again to point me to some code example
that you consider written at a higher level of abstraction, because
that is where I aim to go.

[]
> And you of course are entitled to your opinion as well and I mean that with
> absolutely no disrespect or sarcasm or anything else negative.


I'm enjoying the ride.

Al


 
Andy
07-16-2013
On Friday, July 12, 2013 6:47:48 PM UTC-5, alb wrote:
> Why do integers instead show such a performance difference?


SLV/unsigned/signed, etc. are arrays of enumerated values (for each bit). Even if the enumerated value is represented as a byte, each bit is then a byte in memory, and the CPU's built-in arithmetic instructions cannot be used on the vector directly. The best bet for the simulator (rather than the boolean implementations in the reference package bodies) is to convert it to integer(s) where the integer(s) represent(s) the numerical value (assuming there are no meta-values in the vector), and then use the CPU's arithmetic instructions on those integers.

Using integers directly where possible (within the range constraints available on integers) avoids all that conversion and memory usage.

MANY years ago, I replaced a widely-used 5-bit unsigned address bus with an integer in a ~20K gate FPGA design, and that alone improved RTL simulation runtimes by ~2.5X IIRC.
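
As a sketch of that swap (signal names made up; numeric_std assumed
visible, and the assignment sits in a clocked process):

-- 5-bit address two ways:
signal addr_u : unsigned(4 downto 0);    -- typically one byte per bit in simulation
signal addr_i : integer range 0 to 31;   -- one machine integer

-- with the integer, the wrap-around must be explicit:
addr_i <= (addr_i + 1) mod 32;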

>Nevertheless let me say that I value the sensitivity list because I can spot
>immediately what the process depends on without the need to go through it.


If you limit signals to only that information that is communicated between processes, and use local variables for everything else, then your benefit from the explicit sensitivity list is minimal. If only some of many processes share a group of signals, then enclose those few processes, and the declarations for their exclusively shared signals, in a block statement, so that the signals are not only hidden from outside access, but also declared in close proximity to the processes that use them.
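
A sketch of the block idea (names made up; clk, rx_bit and byte_out are
assumed to be declared in the enclosing architecture):

rx_path : block
   -- these signals are invisible outside the block
   signal shift_reg : std_logic_vector(7 downto 0);
begin
   sampler : process (clk) is
   begin
      if rising_edge(clk) then
         shift_reg <= shift_reg(6 downto 0) & rx_bit;
      end if;
   end process sampler;

   framer : process (clk) is
   begin
      if rising_edge(clk) then
         byte_out <= shift_reg;  -- uses the block-local signal
      end if;
   end process framer;
end block rx_path;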

>Why are you saying that using a procedure does not improve testability unless the procedure is externally accessible?


Testability of the architecture that uses subprograms is improved by separately testing each subprogram, without the process/architecture around it hindering access to it. If a subprogram is declared in an architecture or in a process, how would you call the subprogram directly from a testbench to test it? For it to be called by some other architecture (e.g. a testbench) the subprogram must be declared in a package accessible to the testbench.

I'm not trying to say that all subprograms should be declared in packages. There are many reasons to use subprograms, and improved testability is only one of them. Localization of data (variables) and complexity is another. IMHO, since I'm not a big fan of unit-level testing FPGA designs anyway, improved unit-level testability is not as big a reason for using subprograms as is the localization.
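
For example (a made-up parity function; the point is only that putting
it in a package makes it callable from both the design and a testbench):

library ieee;
use ieee.std_logic_1164.all;

package util_pkg is
   function parity (v : std_logic_vector) return std_logic;
end package util_pkg;

package body util_pkg is
   function parity (v : std_logic_vector) return std_logic is
      variable p : std_logic := '0';
   begin
      for i in v'range loop
         p := p xor v(i);     -- XOR-reduce the vector
      end loop;
      return p;
   end function parity;
end package body util_pkg;

-- a testbench can now exercise it directly:
--   assert parity(x"A5") = '0' report "parity failed";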

>Uhm, in Treseler's uart example
>(http://myplace.frontier.com/~miketreseler/uart.vhd) if you look at the
>procedure called 'retime' it seems to me the logic described is sequential
>and not combinatorial. Am I missing your point?


In synthesis, a subprogram cannot span time, and it cannot remember local variables' values from one subprogram call to the next. It can control the order of its accesses/updates of its interface variables, so it could be considered as inferring registers. However, in Treseler's example the previous values of the variables are assigned in a different (in time) call to the procedure. Thus it is also the surrounding process that infers the registers, and the procedure could be considered as simply defining the combinatorial logic (wires in this case) between them.

On the other hand, if the order of the variable assignment statements in that procedure were reversed, only one register would be inferred. So in that sense, the procedure is the one inferring at least two of the registers. Good point! I had not thought of that.

The important thing to remember about variables in clocked processes is that each reference to a variable, relative to its most recent update, determines whether a register is inferred for that reference. Thus a single variable declaration can represent, through multiple references and assignments to it, both a register and combinatorial values. Naturally, if multiple references to the variable return the same previously stored value, only one register is inferred for all of those references.
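
A contrived example (d, q1 and q2 are assumed signals):

process (clk) is
   variable v : std_logic;
begin
   if rising_edge(clk) then
      q1 <= v;   -- reads v BEFORE its update: a register is inferred for v
      v  := d;   -- most recent update of v
      q2 <= v;   -- reads v AFTER its update: sees this cycle's d directly
   end if;
end process;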

Signals can be thought of as inferring registers in the same way (if the value accessed was updated in a previous clock cycle). But since signal value updates are always postponed until the process suspends, a reference to a signal assigned on a clock edge is necessarily the value stored in a previous clock cycle; thus all references to said signal are to the registered value.

In that sense, variables and signals both infer registers for the same reason, but you have more control over that inference with variables. But don't worry; the synthesis tool will generate hardware that behaves the same way the RTL does (on a clock cycle basis), with or without registers as required.

>Does the procedure need to wait for the signal to be updated to execute, or
>will it execute with the not-yet-updated value of the signal?


Since a procedure in synthesis cannot wait (span time/delta), the not-yet-updated value will be used. Similar confusion also arises when a statement following the procedure call that assigned a signal also sees the not-yet-updated value. Signals assigned within the procedure are not automatically updated when the procedure exits, only when the calling process suspends.
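
A sketch of that pitfall (flag is an assumed std_logic signal, count an
assumed unsigned signal with numeric_std visible):

process (clk) is
   procedure set_flag (signal f : out std_logic) is
   begin
      f <= '1';           -- only scheduled; effective when the process suspends
   end procedure set_flag;
begin
   if rising_edge(clk) then
      set_flag(flag);
      if flag = '1' then  -- still sees the OLD value of flag here
         count <= count + 1;
      end if;
   end if;
end process;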

>Another option would be to make the signals globally visible, but I once fell
>into that trap and will never do it again!


Don't confuse accessibility/visibility of signals with driveability of signals from a procedure. The only way a procedure can DRIVE signals not passed to it explicitly is if the procedure is declared in a process, regardless of whether it can "see" the signals. If declared in a process, the procedure can drive any signal that can be driven from the process, whether the signal is passed to the procedure or not.
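
For instance (leds is an assumed std_logic_vector signal declared in the
architecture):

blinker : process (clk) is
   -- declared inside the process, so it may drive 'leds' directly
   procedure toggle_leds is
   begin
      leds <= not leds;   -- no signal parameter needed
   end procedure toggle_leds;
begin
   if rising_edge(clk) then
      toggle_leds;
   end if;
end process blinker;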

There are a couple of options to consider to make testbench procedures more easily reusable.

First, you can declare a procedure, with signal interfaces, in a package, and then within each process that needs to call it, you can overload it with a declaration that omits the signal interfaces, and that procedure simply calls the full version procedure with the signals. Then you can more easily call the short-hand version procedure anywhere in that process, as many times as needed.
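
A sketch of that first option (send_byte, tx and clk are made-up names;
the full version is assumed to live in a package made visible with a
use clause):

-- in the package: full version with explicit signal parameters
procedure send_byte (b          : in  std_logic_vector(7 downto 0);
                     signal tx  : out std_logic;
                     signal clk : in  std_logic);

-- in the calling process: a shorthand overload binds the signals once
stim : process is
   procedure send_byte (b : in std_logic_vector(7 downto 0)) is
   begin
      send_byte(b, tx, clk);  -- resolves to the full package version
   end procedure send_byte;
begin
   send_byte(x"55");
   send_byte(x"AA");
   wait;
end process stim;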

Second, you can declare records for the signal interface, either for separate in and out record ports, or for a combined inout record port, but the inout record type must either be resolved (implement a custom resolution function for it), or it must contain elements of resolved types (e.g. SL/SLV/Unsigned, etc.)
The record makes calling the procedure over and over much simpler.

Also, don't forget; in simulation (testbenches), procedures can have wait statements, and therefore can span time during their execution.

Andy
 