Re: [sv-ac] Zero/Non-zero delay model (is it same as synchronous vs asynchronous)


Subject: Re: [sv-ac] Zero/Non-zero delay model (is it same as synchronous vs asynchronous)
From: Cindy Eisner (EISNER@il.ibm.com)
Date: Thu Sep 26 2002 - 08:53:51 PDT


all,

>an event-driven simulator with a
>sufficiently fine granularity of time instants at which it
>evaluates the design will FAIL the assertion (i.e., it will
>identify the glitch).
>
>And that's my fundamental concern. If the analysis of the assertions
>within the design is contingent upon evaluating the design at discrete
>points determined by the explicit delays specified, then we are going
>to run into a mismatch in results.

i believe that both r38 and r55 were trying to address similar concerns.
comments? are there other requirements that address this as well?

cindy.

Cindy Eisner
Formal Methods Group            Tel: +972-4-8296-266
IBM Haifa Research Laboratory   Fax: +972-4-8296-114
Haifa 31905, Israel             e-mail: eisner@il.ibm.com

"Rajeev Ranjan" <rajeev@realintent.com>@eda.org on 20/09/2002 01:25:10

Sent by: owner-sv-ac@eda.org

To: <sv-ac@eda.org>
cc:
Subject: [sv-ac] Zero/Non-zero delay model (is it same as synchronous vs
       asynchronous)

Hello folks --

I guess we can take a short break from the issue of "to mandate or not
to mandate" the check names. I wanted to bring up another
controversial issue: synchronous vs. asynchronous semantics and
support.

As I recall, in our discussions over the phone and via email, we
still did not get on the same page about what we mean when we use the
terms "synchronous" and "asynchronous". I will refer to the mails
sent earlier by Adam Krolnik and Ambar Sarkar.

Let me begin with elaborating on what my basic concern is:

Consider the following piece of Verilog code:

reg a, b, d;
wire c;

assign c = a ^ b;    // combinational XOR
always @(d) begin
  a <= #10 d;        // non-blocking assignment, 10-unit delay
  b <= #20 d;        // non-blocking assignment, 20-unit delay
end

Assume for a moment that the signal "d" changes every 100 time units
(e.g., with respect to a clock) and has been properly initialized. Now
consider the assertion stating that "c" is always "0". Any
verification technology that builds a zero-delay model of the design
will come back with a "YES"/"PASS" answer for that assertion. However,
the signal "c", if implemented in hardware with the given delay
specification, will have a glitch. And an event-driven simulator with
a sufficiently fine granularity of time instants at which it
evaluates the design will FAIL the assertion (i.e., it will identify
the glitch).
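The glitch can be sketched with a toy event-driven simulation of the
fragment above. Python is used purely for illustration here; the
scheduler, signal names, and the assumption that "d" starts at 0 and
toggles every 100 time units are mine, not any tool's model:

```python
# Toy event-driven simulation of:
#   assign c = a ^ b;  always @(d) begin a <= #10 d; b <= #20 d; end
# Assumes d is initialized to 0 and toggles every 100 time units.
import heapq

def simulate(end_time=400):
    sig = {"a": 0, "b": 0, "d": 0}
    events = []            # min-heap of (time, signal, value)
    glitches = []          # times at which c = a ^ b becomes nonzero

    # Schedule d's toggles: 0 at t=0, 1 at t=100, 0 at t=200, ...
    for t in range(0, end_time, 100):
        heapq.heappush(events, (t, "d", (t // 100) % 2))

    while events:
        t, name, value = heapq.heappop(events)
        sig[name] = value
        if name == "d":    # the always @(d) block fires
            heapq.heappush(events, (t + 10, "a", value))
            heapq.heappush(events, (t + 20, "b", value))
        c = sig["a"] ^ sig["b"]      # assign c = a ^ b;
        if c:                        # the assertion "c == 0" fails here
            glitches.append(t)
    return glitches

print(simulate())   # glitch windows open at t = 110, 210, 310
```

Between each pair of delayed updates (t+10 and t+20), "a" has already
taken d's new value while "b" still holds the old one, so "c" is 1 for
10 time units after every change of "d".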

And that's my fundamental concern. If the analysis of the assertions
within the design is contingent upon evaluating the design at discrete
points determined by the explicit delays specified, then we are going
to run into a mismatch in results.

I think either Erich or Ambar pointed out that by providing a
sufficiently fine-grained clock one could expect even a zero-delay
model analysis to produce the same result. That is not true. What
needs to happen is that the computation model must take the
individual gate delays into account. And now we are getting into the
territory of higher-complexity analysis than the formal analysis of
zero-delay models. I don't intend to go into the details of the
relative analysis complexity of a "timed automaton" vs. a "regular
DFA". It suffices to say that formal/semi-formal tools need to build
an FSM model, and their capacity for dealing with models with delays
will be highly limited.
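For contrast, here is what a zero-delay abstraction of the same
fragment sees (again a hand-written Python sketch, not any tool's
model): the #10 and #20 delays collapse into a single evaluation step,
so "a" and "b" can never be observed to differ:

```python
# Zero-delay view of:  assign c = a ^ b;  a <= #10 d;  b <= #20 d;
# All non-blocking assignments settle within the same step.
def zero_delay_trace(steps=4):
    trace = []
    for t in range(steps):
        d = t % 2              # d toggles once per evaluation step
        a, b = d, d            # the #10 and #20 delays collapse away
        trace.append(a ^ b)    # c as the zero-delay model observes it
    return trace

print(zero_delay_trace())      # [0, 0, 0, 0] -- "c == 0" passes
```

The zero-delay model reports PASS on the very assertion the
event-driven simulation above fails, which is exactly the mismatch at
issue.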

To some extent (or to a large extent, depending upon the optimization
put into the simulation tools), similar observations apply when
comparing a simulation tool leveraging cycle-based semantics with one
that is completely event-driven.

So what's the bottom line?

I will re-emphasize a point that Adam had made in one of his emails --
we as a committee should create a set of guidelines on using the
assertion constructs -- the scenarios where they work best, scenarios
which could lead to very poor performance, perhaps a difference in
analysis results between tools, etc.

And I would propose that one of the items in the guidelines should be:
assertions should be written to check functionality that is
independent of the gate delays in the RTL.

Why is it important/necessary? I could write a whole sermon here, but
the summary is: we are in the very early phase of assertion-based
verification methodology, and we must do our best to help users avoid
potential pitfalls and situations where their performance
expectations could take a major hit.

I would seek feedback from end users/managers like Adam Krolnik and
Harry Foster, who have had extensive experience with assertion
methodology, as to a) whether the "delay independent" restriction
could limit any of their assertions, and b) if not, what
training/guidelines they used to ensure that designers/assertion
writers do not make such mistakes.

Thanks for your attention...

-rajeev

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Rajeev K. Ranjan                                 Tel:    (408) 982-5418
Director, R&D                                     Fax:    (408) 982-5443
Real Intent
3910 Freedom Circle, Suite 102A       rajeev@realintent.com
Santa Clara, CA  95054                       http://www.realintent.com
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



This archive was generated by hypermail 2b28 : Thu Sep 26 2002 - 08:52:34 PDT