Require 64-bit Integers

Proposal Details

  • Who Updates:
  • Date Proposed:
  • Date Last Updated:
  • Priority:
  • Complexity:
  • Focus:

Language Change Specification Link

LCS-2016-026

LCS-2016-026 formerly referenced the LongIntegers proposal, but the text of the LCS is based on this proposal.

Related Issues

LongIntegers proposes a new 64-bit integer type that will be separate from the 32-bit type.

ArbitraryIntegers proposes arbitrary-length integers.

Current Situation

Currently the LRM specifies a minimum of 32 bits for integer implementations. While this may have been adequate 10 years ago, this minimum requirement is proving inadequate today.

Requirement

We propose increasing the minimum to 64 bits.

Implementation details

Code Examples

Use Cases

-- DanielKho - 2015-01-04

The implementation of physical types has mostly been limited by the 32-bit range of integers. If we can just increase the minimum range to 64 bits, many more physical types can be expressed with higher precision and resolution.

As an example, the LRM defines the type "time" (std/standard) as:

    type TIME is range -2147483647 to 2147483647
        units
            fs;
            ps  = 1000 fs;
            ns  = 1000 ps;
            us  = 1000 ns;
            ms  = 1000 us;
            sec = 1000 ms;
            min = 60 sec;
            hr  = 60 min;
        end units;

However, this 32-bit range is not sufficient to store values with femtosecond resolution up to an hour. It makes a lot of sense that all the secondary units defined should be representable within the specified range, but currently this is not the case. I view this as perhaps an oversight in the LRM; could someone enlighten me otherwise?
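For illustration, a hypothetical 64-bit TIME declaration might look like the following. The bounds are my own sketch, assuming a symmetric 64-bit range in the LRM's style; this is not proposed LRM text:

    type TIME is range -9223372036854775807 to 9223372036854775807
        units
            fs;
            ps  = 1000 fs;
            ns  = 1000 ps;
            us  = 1000 ns;
            ms  = 1000 us;
            sec = 1000 ms;
            min = 60 sec;
            hr  = 60 min;
        end units;

One hour is 3600 * 10**15 = 3.6 * 10**18 fs, far beyond the current 32-bit bound of 2147483647 fs (about 2.1 us), but comfortably within the 64-bit bound of 9223372036854775807 fs (about 2.56 hours), so every secondary unit up to hr would be representable at femtosecond resolution.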

There's an article published by Aldec that explains this in detail: https://www.aldec.com/en/support/resources/documentation/articles/1165

Arguments FOR

  1. Modern computers have no problem executing 64-bit operations, so there would not be much of a performance slowdown.
  2. Users who really want 32 bits can still choose older versions of VHDL.
  3. People would welcome increased precision, resolution, or range when doing arithmetic, at the cost of slightly slower simulation/synthesis.

Arguments AGAINST

  1. Increases the simulation time.
  2. Increases the design size of integer-based implementations.

General Comments

-- DavidKoontz - 2015-01-03

The only predefined physical type's range is separable from the integer range (16.3 Package standard: "type TIME is range implementation_defined"), which also points out that the range of integer types is implementation defined. Where are all the implementations with 64-bit integer or real ranges? There are downsides to increasing the base type range (univ@ in CONLAN, known as universal integer in VHDL; see the CONLAN Report, 3.2 pscl Model of Computation, page 13, Springer-Verlag 1983, Lecture Notes in Computer Science 151), shared with the proposals for arbitrary precision: index ranges and range constraints are also tied to the integer range, affecting the size and performance of models. Note that IEEE Std 1800-2012, 6.11 Integer data types, shows SystemVerilog supporting integers of several ranges, which raises the question of whether a new, wider-range integer type might be viable instead. Note also that the VHDL standard embodies a fundamental difference in philosophy, derived from CONLAN.

In any event because the range is implementation defined (and separable from simulation Time) the question remains, where is the demand for standardization?

-- DanielKho - 2015-01-04

Yes, while the range is implementation defined, it must include a minimum of 32 bits. I was thinking of increasing this minimum to a wider range, say 64 bits.

Section 5.2.3.2 Predefined integer types states:

The only predefined integer type is the type INTEGER. The range of INTEGER is implementation dependent, but it is guaranteed to include the range -2147483647 to +2147483647. It is defined with an ascending range.

There are upsides and downsides to doing this, but perhaps the upsides now outweigh the downsides (in terms of simulation time), given that computers and compilers are much faster than they were 20 years ago. Are there any side effects other than slowing down existing tools?

-- DanielKho - 2015-01-04

Just noticed there is another proposal on this topic (see LongIntegers). While this proposal suggests having only a single integer type with a minimum range of 64 bits, LongIntegers suggests creating a separate 64-bit integer type, independent of its 32-bit counterpart. After giving it some thought, I concur that LongIntegers may be a better solution. Now that we have generic types, we can defer declaring a specific type until a later stage, or until we move up the design hierarchy. With this in mind, it would be easy for designers to choose whether they want a 32-bit or 64-bit integer type, and to switch between the two without much hassle:

entity design is generic(type t);
    port(...);
end entity design;
...
u0: entity lib.design(rtl) generic map(t => integer64)    -- or integer (32-bit)
    port map(...);

Also, there's another proposal, ArbitraryIntegers, which we can consider. After reading through that proposal, I feel it solves all the issues we are seeing with the existing integer type, and is a more robust solution than this proposal.

-- RyanHinton: I happen to be partial to the ArbitraryIntegers proposal myself, but I want to disagree with one of your statements. You claim that we can switch between integer types without much hassle, and you give the example of using a type generic on an entity. In your example, objects of type t don't have any arithmetic operations defined. They can be supplied, but that is more work: a significant hassle. The added complexity also requires more complex coding practices, which gets me in trouble with my coworkers when it appears to provide no benefit (not to mention when I have to figure out what I did a few months ago). Furthermore, these more complex coding structures won't be in legacy code, so I have to rewrite old code or live with its limitations, and they usually carry performance impacts as well. So I disagree that adding and using a new type has negligible impact on coding efficiency, readability, and performance. -- RyanHinton - 2016-12-13
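To make the point about missing operators concrete, here is a minimal VHDL-2008 sketch (the entity name and ports are hypothetical): each operation the architecture uses must be declared as a subprogram generic, with the "is <>" default allowing a matching visible operator to be picked up at instantiation:

entity adder is
    generic(type t;
            function "+"(l, r : t) return t is <>);  -- operator must be passed in alongside the type
    port(a, b : in t;
         y : out t);
end entity adder;

architecture rtl of adder is
begin
    y <= a + b;  -- legal only because "+" is visible via the generic
end architecture rtl;

Every additional operation the design needs ("-", "<", resize, and so on) requires its own subprogram generic, which is the "significant hassle" being described.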

Supporters

-- PatrickLehmann - 2016-02-11

Add your signature here to indicate your support for the proposal

Topic revision: r7 - 2020-02-17 - 15:34:30 - JimLewis
 