Thanks Brian, I agree with pretty much everything you've said below. One niggle: the overhead may not be quite so great, as you are only carrying one "resolution bit" per integer, rather than having to resolve each bit individually as for an unsigned vector (even though an unsigned pretty much always ends up resolved as a whole - I wonder if any simulators are smart enough to optimise that case?).

With regard to "Adding out-of-band signalling like this to every integer would add a lot of cost and complexity and execution time penalty to a simulator": I'd propose (if I were proposing this extension, which I'm not :) a separate type, so the overhead is not carried by normal ints (much like std_ulogic vs std_logic).

Cheers,
Martin

-----Original Message-----
From: owner-vhdl-200x@eda.org [mailto:owner-vhdl-200x@eda.org] On Behalf Of Brian Drummond
Sent: 14 October 2014 14:00
To: vhdl-200x@eda.org
Subject: Re: resolved integers (was RE: [vhdl-200x] Update to proposal for arbitrary integers)

On Tue, 2014-10-14 at 12:09 +0000, Martin.J Thompson wrote:
> From: owner-vhdl-200x@eda.org [mailto:owner-vhdl-200x@eda.org] On
> Behalf Of David Bishop
>
> A "NaN" is a very specific value for a floating point number: an
> exponent of all "1"s (which on its own means infinity) and a fraction
> which starts with a "1". Since all of the bit patterns of an integer
> are valid, I don't know how you would do an invalid number. You can't
> just pick one, because somebody will need to use it.
>
> I agree - if you want an "undriven" value, then it has to be separate.
> The simulator could track it in whatever softwarey way suits, say via
> an internal variable with a bit for each resolved integer object which
> needs to be tracked. In synthesis, a resolved integer could carry an
> extra bit to signify its "drivenness".

We need to think carefully before we add this. Adding out-of-band signalling like this to every integer would add a lot of cost and complexity and execution time penalty to a simulator.
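A separate resolved type of the kind Martin describes (by analogy with std_ulogic vs std_logic) might look roughly like the sketch below. This is purely illustrative: the names maybe_int, resolve_int and resolved_integer are invented for this example and are not part of any existing package or proposal, and the conflict behaviour (report an error, since there is no in-band 'X') is just one possible choice.

```vhdl
-- Illustrative sketch only: maybe_int, resolve_int and resolved_integer
-- are invented names, not part of any standard or proposed package.
package resolved_int_pkg is

  -- Every bit pattern of an integer is a valid value, so "undriven"
  -- has to be carried out of band, e.g. in a record field.
  type maybe_int is record
    driven : boolean;
    value  : integer;
  end record;

  type maybe_int_vector is array (natural range <>) of maybe_int;

  function resolve_int (drivers : maybe_int_vector) return maybe_int;

  -- One resolution per integer object, not one per bit as for a
  -- resolved std_logic_vector.
  subtype resolved_integer is resolve_int maybe_int;

end package resolved_int_pkg;

package body resolved_int_pkg is

  function resolve_int (drivers : maybe_int_vector) return maybe_int is
    variable result : maybe_int := (driven => false, value => 0);
  begin
    for i in drivers'range loop
      if drivers(i).driven then
        -- There is no in-band 'X' value to assign on a conflict,
        -- so flag it instead.
        assert not (result.driven and result.value /= drivers(i).value)
          report "resolve_int: conflicting drivers" severity error;
        result := drivers(i);
      end if;
    end loop;
    return result;
  end function resolve_int;

end package body resolved_int_pkg;
```

A signal declared as `signal s : resolved_integer;` could then legally have multiple drivers, with undriven sources contributing `(driven => false, value => 0)`.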
Resolved integers can be modelled using the existing numeric_std "unsigned" or "signed" types ... so why not use them? Presumably because of the speed penalty, which arises - at least in part - from the out-of-band signalling required for resolution and "undriven" values. Presumably we would also need some expression of 'X' (unknown or conflicted resolution), and possibly don't-cares.

I'm not doubting that this proposal adds some value. But does it add enough to justify a new level of complexity somewhere between full numeric_std and the simplicity, and thus performance, of a native 32-bit or 64-bit integer? Or would it end up costing enough in performance that people would be reluctant to use it? I use integers where I don't need this added functionality, and numeric_std where I do.

I support the "universal_integer" extension because it allows for restricted-range derived types such as the existing 32-bit signed integer, and presumably gives a tool permission to add e.g. 32-bit unsigned integers (as opposed to the current 31-bit natural!) and 64-bit signed or unsigned types with efficient implementations. While some tools may choose to support 128-bit or arbitrary-width arithmetic, I wouldn't expect to see that on a widespread basis soon.

I support the "modular types" extension, allowing boolean and shift operators on a type which is no longer strictly integer (but which is efficiently implemented using native instructions on typical processors, and easily converted to/from integer).

But I'm cautious about this proposal, because it cannot be so efficiently implemented (unlike FP NaN, which uses in-band values).

- Brian

--
This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.

Received on Tue Oct 14 07:04:53 2014
This archive was generated by hypermail 2.1.8 : Tue Oct 14 2014 - 07:05:25 PDT