On Wed, 2014-10-15 at 10:00 +1300, David Koontz wrote:
> On 15 Oct 2014, at 3:04 am, Martin.J Thompson <Martin.J.Thompson@trw.com> wrote:
>
> > Adding out-of-band signalling like this to every integer would add a lot of cost and complexity and execution time penalty to a simulator.
> >
> > I'd propose (if I were proposing this extension, which I'm not :) a separate type, so the overhead is not carried by normal ints (much like std_ulogic vs std_logic).
>
> And I have no interest in such a type personally, either. I found the implementation details and implications of interest. Nice example...
>
> And a record type subtype would work EXCEPT for the pesky problem that an element_constraint doesn't
> embrace a range_constraint for an integer type element. (And perhaps should for appropriate numerical types;
> it'd be interesting to read how this came about. The expansion of subtype to reach into records appears to
> have been focused on resolution, see 6.3).

This would count as the sort of thing I meant about underlying language features.

One approach might be a generic package declaring that record, instantiated with two parameters: the ends of the subtype range. The operators and functions could check the ranges against the instantiated parameters...

Problem is, the proper place for a range check is assignment, rather than e.g. the return value from an operator. In Ada, this would be handled by making the record a controlled type, and overriding "Initialize" and "Adjust" to perform the range checks; but adding controlled types just for this would be an even bigger can o'worms.

So there would be an asymmetry with proper integer arithmetic: perhaps that is to be expected.

Integer arithmetic: no computation can overflow; all internal values are of Universal Integer type; only when the final result is assigned will the overflow or range error occur.
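For concreteness, the generic-package idea might look something like this in VHDL-2008. A rough, untested sketch; the package name, generics and the single operator are mine, and note the range check lands inside the operator rather than at assignment, which is exactly the asymmetry discussed above:

```vhdl
package ranged_iv is
  generic (low, high : integer);  -- the ends of the subtype range

  type iv_integer is
    record
      int : integer;
      iv  : boolean;  -- invalid or undriven
    end record;

  function "+" (l, r : iv_integer) return iv_integer;
end package ranged_iv;

package body ranged_iv is
  function "+" (l, r : iv_integer) return iv_integer is
    variable result : iv_integer;
  begin
    -- Caveat: if l.int + r.int exceeds integer'high the simulator
    -- errors out here, before we ever reach our own check.
    result.int := l.int + r.int;
    -- Range check in the operator, not at assignment; propagate iv.
    result.iv := l.iv or r.iv
                 or (result.int < low) or (result.int > high);
    return result;
  end function;
end package body ranged_iv;
```

which an implementation would instantiate per subtype, e.g.

    package byte_iv is new work.ranged_iv generic map (low => -128, high => 127);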
(Unless I am confusing with Ada again; even if so, this is the intent for the Universal Integer proposal.)

IV_Integer arithmetic: overflows occur within expressions, because that's all the overloaded operators can do. Perhaps this is to be expected, as there is no coherent concept of Universal_IV_Integer and so no mechanism for deferring the range checks.

One (ugly?) way round this would be to allow mixed IV_Integer and Integer arithmetic (overloading operators) with Integer results, requiring a constructor

    function val (int : integer) return iv_integer;

to create an IV_Integer (presumably setting Invalid according to the range check).

> There are two classes of overhead compared to integer types. Multiple element handling, and, in the case of the model,
> functions calling functions, which are generally anathema to performance and could be subsumed by an implementation
> directly implementing a package body.

If it can be implemented purely as a library, and that library standardised, then the way is open for implementations to do their own thing (inlining those functions for speed) within the library.

> And ya, it would be slower in all cases than integers; the idea is to optimize that difference in performance.
>
> A simple model:
>
>     package invalid_undriven_integer is
>         type iv_integer is
>             record
>                 int : integer;
>                 iv  : boolean;  -- invalid or undriven
>             end record;

Alternatively:

    type iv_type is (invalid, undriven, constraint_error, valid);

    type iv_integer is
        record
            int : integer;
            iv  : iv_type;
        end record;

but the example gets the idea across.

> ghdl -a iv_integer.vhdl

(and I see generic package support being added to ghdl right now!)

- Brian

Received on Tue Oct 14 14:52:35 2014
This archive was generated by hypermail 2.1.8 : Tue Oct 14 2014 - 14:52:51 PDT