
[OT] Simulation Results Management

 
 
moogyd@yahoo.co.uk
      07-14-2012
Hi,
This is a general question, loosely related to Python since it will be the implementation language.
I would like some suggestions on how to manage the simulation results data from my ASIC design.

For my design:
- I have a number of simulation testcases (TEST_XX_YY_ZZ), and within each of these test cases we have:
  - a number of properties (P_AA_BB_CC)
  - For each property, the following information is given:
    - Property name (P_NAME)
    - Number of times it was checked (within the testcase): N_CHECKED
    - Number of times it failed (within the testcase): N_FAILED
- A simulation runs a testcase with a set of parameters.
  - Simple example: SLOW_CLOCK, FAST_CLOCK, etc.
- For the design, I will run regression every night (at least), so I will have results from multiple timestamps.
We have < 1000 TESTCASES and < 1000 PROPERTIES.

At the moment, I have a script that extracts the property information from the simulation logfile and provides a single PASS/FAIL. All logfiles are stored in a directory structure with timestamps/testnames and other parameters embedded in the paths.

I would like to be able to easily look at (visualize) the data and answer questions such as:
- When did this property last fail, and how many times was it checked?
- Is this property checked in this test case?

Initial question: how do I organize the data within Python?
For a single testcase, I could use a dict: key P_NAME, data N_CHECKED, N_FAILED (roughly as sketched below).
I then have to store multiple instances of the testcase based on date (and simulation parameters).
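Something like this minimal sketch is what I have in mind (the numbers are made up, just to show the shape):

# One testcase: property name -> check/fail counts (made-up numbers).
testcase_results = {
    "P_AA_BB_CC": {"n_checked": 12, "n_failed": 0},
    "P_AA_BB_DD": {"n_checked": 7, "n_failed": 2},
}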

Any comments, suggestions?
Thanks,
Steven

Neal Becker
      07-15-2012
moogyd@yahoo.co.uk wrote:

> At the moment, I have a script that extracts the property information from
> the simulation logfile and provides a single PASS/FAIL. All logfiles are
> stored in a directory structure with timestamps/testnames and other
> parameters embedded in the paths.
> [snip]


One small suggestion:
I used to store test conditions and results in log files, and then write parsers
to read the results. The formats kept changing (add more conditions/results!)
and maintenance was a pain.

Now, in addition to a text log file, I write a file in pickle format containing
a dict of all test conditions and results. Much more convenient.
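
A rough sketch of the idea (the dict contents here are only an example, not my actual format):

import pickle

# Alongside the human-readable log, dump the structured results as a pickle.
results = {
    "conditions": {"clock": "FAST_CLOCK", "corner": "typ"},
    "properties": {"P_AA_BB_CC": {"n_checked": 12, "n_failed": 0}},
}

with open("run_results.pkl", "wb") as f:
    pickle.dump(results, f)

# Later, a post-processing script reads it back with no parsing code at all.
with open("run_results.pkl", "rb") as f:
    loaded = pickle.load(f)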

 
rusi
      07-15-2012
On Jul 14, 10:50 am, moogyd@yahoo.co.uk wrote:
> Initial question: how do I organize the data within Python?
> For a single testcase, I could use a dict: key P_NAME, data N_CHECKED, N_FAILED.
> I then have to store multiple instances of the testcase based on date (and
> simulation parameters).
> [snip]


Not sure if you are asking about:
1. Python data structure organization, or
2. Organization of the data outside Python, for conveniently getting it into
and out of Python.

For 2, if the data is modestly sized and is naturally managed with builtin
Python types -- lists and dictionaries -- YAML is a nice fit. I used PyYAML
some years ago; today I guess JSON, which is similar, is the way to go. A tiny
example is at the end of this post.

For 1, you need to say what your questions/issues are.
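
A tiny example of what I mean for 2, using the stdlib json module (the data is made up):

import json

# Dump a results dict after the run, load it back for analysis.
results = {"TEST_XX_YY_ZZ": {"P_AA_BB_CC": {"n_checked": 12, "n_failed": 0}}}

with open("results.json", "w") as f:
    json.dump(results, f, indent=2)

with open("results.json") as f:
    loaded = json.load(f)

assert loaded == results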
 
moogyd@yahoo.co.uk
      07-15-2012
On Sunday, July 15, 2012 5:25:14 AM UTC+2, rusi wrote:
> Not sure if you are asking about:
> 1. Python data structure organization, or
> 2. Organization of the data outside Python, for conveniently getting it
> into and out of Python.
>
> For 2, if the data is modestly sized and is naturally managed with builtin
> Python types -- lists and dictionaries -- YAML is a nice fit. I used PyYAML
> some years ago; today I guess JSON, which is similar, is the way to go.
>
> For 1, you need to say what your questions/issues are.


Hi Rusi,

For (1), I guess the only question I had was how to handle the regression results. But I think the most logical way of storing this data is as a dict with key = datestamp, and entries being a list of testcases/results, roughly as sketched below.
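
A rough sketch of that layout, with an example query (names and numbers are only illustrative):

# Regression history: datestamp -> run parameters and per-testcase results.
# Each testcase maps property names to their check/fail counts.
regressions = {
    "2012-07-14": {
        "params": {"clock": "SLOW_CLOCK"},
        "testcases": {
            "TEST_XX_YY_ZZ": {
                "P_AA_BB_CC": {"n_checked": 12, "n_failed": 0},
            },
        },
    },
}

# Example query: on which dates did a given property fail?
def failure_dates(regressions, prop):
    return sorted(
        date
        for date, run in regressions.items()
        for results in run["testcases"].values()
        if results.get(prop, {}).get("n_failed", 0) > 0
    )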

For (2), I will look at both of these.

Thanks for the help.

Steven

 
moogyd@yahoo.co.uk
      07-15-2012
On Sunday, July 15, 2012 2:42:39 AM UTC+2, Neal Becker wrote:
> One small suggestion:
> I used to store test conditions and results in log files, and then write
> parsers to read the results. The formats kept changing (add more
> conditions/results!) and maintenance was a pain.
>
> Now, in addition to a text log file, I write a file in pickle format
> containing a dict of all test conditions and results. Much more convenient.


Hi Neal,
We already store the original log files.
Does pickle have any advantages over json/yaml?
Thanks,
Steven
 
Dieter Maurer
      07-15-2012
moogyd@yahoo.co.uk writes:
> ...
> Does pickle have any advantages over json/yaml?


It can store and retrieve almost any Python object with almost no effort.

Up to you whether you see it as an advantage to be able to store
objects rather than (almost) pure data with a rather limited type set.


Of course, "pickle" is a proprietary Python format. Not so easy to
decode it with something else than Python. In addition, when
you store objects, the retrieving application must know the classes
of those objects -- and its knowledge should not be too different
from how those classes looked when the objects have been stored.


I like very much to work with objects (rather than with pure data).
Therefore, I use "pickle" when I know that the storing and retrieving
applications all use Python. I use pure (and restricted) data formats
when non Python applications come into play.
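
For instance (a contrived class, just to show the difference):

import json
import pickle

class PropertyResult:
    """Result of checking one property in one testcase."""
    def __init__(self, name, n_checked, n_failed):
        self.name = name
        self.n_checked = n_checked
        self.n_failed = n_failed

result = PropertyResult("P_AA_BB_CC", 12, 0)

# pickle round-trips the object as it is...
restored = pickle.loads(pickle.dumps(result))
assert restored.n_failed == 0

# ...while json only accepts its limited type set and refuses the object.
try:
    json.dumps(result)
except TypeError:
    print("json cannot serialize arbitrary objects directly")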

 
Neal Becker
      07-15-2012
Dieter Maurer wrote:

> I like very much to work with objects (rather than with pure data).
> Therefore, I use "pickle" when I know that the storing and retrieving
> applications all use Python. I use pure (and restricted) data formats
> when non-Python applications come into play.


Typically what I want to do is post-process (e.g. plot) the results using Python
scripts, so using pickle is great for that.
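
For example, a minimal post-processing sketch (this assumes matplotlib is installed and a pickle layout like the one sketched earlier in the thread):

import pickle
import matplotlib.pyplot as plt

# Load the pickled results written after a simulation run.
with open("run_results.pkl", "rb") as f:
    results = pickle.load(f)

props = sorted(results["properties"])
failures = [results["properties"][p]["n_failed"] for p in props]

# Bar chart of failure counts per property.
plt.bar(range(len(props)), failures)
plt.xticks(range(len(props)), props, rotation=90)
plt.ylabel("N_FAILED")
plt.tight_layout()
plt.show()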

 
rusi
      07-15-2012
On Jul 15, 11:35 am, Dieter Maurer wrote:
> Of course, "pickle" is a proprietary Python format. It is not so easy to
> decode with something other than Python. In addition, when you store objects,
> the retrieving application must know the classes of those objects -- and its
> knowledge should not be too different from how those classes looked when the
> objects were stored.
> [snip]


Pickle -> JSON -> YAML
are roughly in increasing order of human-friendliness and decreasing
order of machine-friendliness (where machine means the Python 'machine').

This means that:
- Pickle is most efficient, YAML least.
- Pickle has come with Python for as far back as I know; the json module
  entered the stdlib in Python 2.6; (py)yaml needs to be installed separately.
- Reading pickled data will spoil your eyes, whereas YAML is pleasant
  to read (just like Python).
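
To see the difference, dump the same small dict in each format (pickle and json are in the stdlib; yaml here means PyYAML, installed separately):

import json
import pickle

import yaml  # PyYAML

data = {"TEST_XX_YY_ZZ": {"P_AA_BB_CC": {"n_checked": 12, "n_failed": 0}}}

print(pickle.dumps(data))          # compact bytes, not meant for human eyes
print(json.dumps(data, indent=2))  # readable, but braces and quotes everywhere
print(yaml.safe_dump(data))        # just names, colons and indentation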

 
moogyd@yahoo.co.uk
      07-17-2012
On Sunday, July 15, 2012 6:20:34 PM UTC+2, rusi wrote:
> Pickle -> JSON -> YAML
> are roughly in increasing order of human-friendliness and decreasing
> order of machine-friendliness (where machine means the Python 'machine').
> [snip]


Hi Everyone,
Thanks for the feedback. For now, I will store the data using pickle.
Steven
 