- Who Updates: CliffordWalinsky
- Date Proposed: March 7, 2013
- Date Last Updated: RyanHinton - 2016-12-13
- Priority:
- Complexity:
- Focus: General Language

Provide a representation of 64-bit integers.

Had a link to LCS-2016-026, but the text of this LCS actually implements the ExtendedIntegers proposal. Replace this text with an LCS link if one is written.

The range of integers supported by most VHDL implementations is currently limited to 32-bit representations. Modelers often need to exceed this range and must resort to awkward workarounds. One common workaround stores values in 64-bit IEEE floating point, which can represent integers of at most 53 bits exactly; it suffers from poor performance (floating-point versus integer arithmetic) and higher complexity. Another workaround, splitting a value across the fields of a record, also suffers from higher complexity.
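As a sketch of the record workaround (names are illustrative, not from any library), a wide value can be split into limbs small enough that intermediate sums still fit in a 32-bit integer, at the cost of manual carry handling in every operation:

```vhdl
-- Hypothetical record workaround: hold a 60-bit unsigned value as two
-- 30-bit limbs, because a 32-bit integer cannot represent 0 to 2**32 - 1,
-- and 30-bit limbs keep intermediate sums within integer'high.
type wide_uint is record
  hi : natural range 0 to 2**30 - 1;  -- bits 59 downto 30
  lo : natural range 0 to 2**30 - 1;  -- bits 29 downto 0
end record;

-- Even simple addition must propagate the carry between limbs by hand.
function "+"(a, b : wide_uint) return wide_uint is
  constant LIMB : natural := 2**30;
  variable lsum : natural;  -- at most 2*(LIMB - 1), fits in 31 bits
  variable r    : wide_uint;
begin
  lsum := a.lo + b.lo;
  r.lo := lsum mod LIMB;
  r.hi := (a.hi + b.hi + lsum / LIMB) mod LIMB;  -- wraps on overflow
  return r;
end function;
```

Every other operator (subtraction, multiplication, comparison) needs similar hand-written limb management, which is the complexity cost referred to above.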

Any approach to increasing the representational capacity of integers must be concerned with performance. Integers are one of the central object types of VHDL. Most current workstation processors support 64-bit integer arithmetic natively, so 64-bit integer instructions execute as fast as 32-bit ones. Compilers for other languages (such as C and C++) nevertheless continue to differentiate between 32-bit and 64-bit integers, both because 64-bit objects occupy twice the storage (a memory-traffic cost that approaches 2X) and because of the need to communicate with legacy libraries.

Currently, type integer is defined in package std.standard as:

type integer is range _implementation_defined_;

Implementors must ensure that integers have a representation consisting of at least 32 bits. To date, all implementations of the integer type consist of exactly 32 bits.
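The guarantee is checkable in portable code; for example, this assertion (illustrative only) must pass in every conforming implementation:

```vhdl
-- The LRM requires integer to cover at least -2147483647 to +2147483647.
-- Portable code may rely on that range and nothing more.
assert integer'high >= 2147483647 and integer'low <= -2147483647
  report "integer range is narrower than the LRM minimum"
  severity failure;
```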

We propose to expand the standard library with a definition of a new 64-bit integer type:

type long_integer is range _implementation_defined_;

It is important that these two integer types have type declarations, rather than subtype declarations: overload resolution for subprogram calls is based on the base types in the signatures. For example, if sub1 and sub2 are subtypes of integer, the following procedure declarations in a package declaration would be ambiguous:

procedure proc(x : sub1);
procedure proc(x : sub2);

However, integer and long_integer are different base types, so that if subint is a subtype of integer, and sublongint is a subtype of long_integer, the following procedure declarations are unambiguous:

procedure proc(x : subint);
procedure proc(x : sublongint);

Machine code generated for the sublongint version of proc would be expected to have different performance characteristics compared to the subint version of proc.
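Put together (and assuming the proposed long_integer type exists in std.standard), such a package might look like the following sketch:

```vhdl
-- Sketch: long_integer is the type proposed here, not yet in std.standard.
package wide_ops is
  subtype subint     is integer      range 0 to 1000;
  subtype sublongint is long_integer range 0 to 10_000_000_000;

  -- Legal overloads: the formals have different base types.
  procedure proc(x : subint);
  procedure proc(x : sublongint);
end package wide_ops;
```

Note that a call with a bare literal, such as proc(5), would still require qualification (for example proc(integer'(5))), because a universal-integer literal converts implicitly to either base type; calls with objects of either subtype resolve directly.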

ArbitraryIntegers suggests changing the definition of `integer` to have an arbitrarily large range, similar to the Python and Ruby programming languages. If that proposal is accepted, then long integers are probably not necessary. Alternatively, `long_integer` could be defined as a subtype of an arbitrary-length integer type.

ExtendedIntegers suggests increasing the minimum range of `integer` to 64 bits.

C and C++ implementations commonly follow a programming model known as LP64, in which objects declared long have the same size as pointers/addresses, facilitating assignment compatibility between pointers and longs. The same model will serve hardware modelers, who will be able to convert pointers into hardware memory into long_integers, perform arithmetic on the long_integers, and then use the results as addresses into hardware memory. Note that, because VHDL is strongly typed, it is not possible to cast an access type's value into a long_integer -- the language remains strongly typed.
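Under the proposal, address arithmetic on a large modeled memory could then stay entirely in the integer domain. A sketch (long_integer and the bounds chosen here are hypothetical):

```vhdl
-- Sketch: index a large modeled memory with the proposed long_integer.
-- With a 32-bit integer, byte addresses beyond 2 GiB are unreachable.
subtype byte_addr is long_integer range 0 to 2**40 - 1;  -- 1 TiB space

-- Address of element i in a table of 8-byte entries: ordinary integer
-- arithmetic, with no floating point, record carries, or casts.
function element_addr(base : byte_addr; i : long_integer)
  return byte_addr is
begin
  return base + i * 8;
end function;
```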

Because of these memory effects, a program using 32-bit integers often performs significantly differently from the identical program converted to 64-bit integer operations. Making this differentiation apparent, by defining a new long_integer type, provides the following benefits:

- Performance of existing designs will not degrade.
- When choosing between defining objects as integers or long_integers, designers will be aware of the tradeoff between the increased expressiveness of long_integers and the compactness and improved performance of integers.
- Synthesis tools will generate wider hardware for operations involving long_integers than for those involving integers, and designers will be aware of this expansion.

Forcing designers to choose their own implementation of wider integers places undue constraints on them.

-- TristanGingold - 2014-10-13

Complain to your simulator vendor if it doesn't support 64-bit integers.

I think there is no need to have Long_Integer in std.standard; you can declare your own integer type (with your own range) and that would be better from an engineering point of view.

-- RyanHinton: Since the current limitation is explicitly allowed in the LRM, I need an LRM change to write portable code beyond those limits.

Complaining to my simulator vendor won't help if I want to synthesize the code. I use `integer` and `real` math heavily in calculating constants and initial values in synthesizable code. The existing LRM takes ownership of this issue by guaranteeing a supported range for integral types. I agree that some tool implementors have interpreted the minimum guarantee as an absolute maximum. But if I write code that needs a 64-bit `integer`, I'll have to fight with the vendor of every single tool in my toolchain, and they can very reasonably argue, as Cliff does on this page, that there are good technical reasons to **not** provide more than 32 bits for `integer`.
-- RyanHinton - 2016-12-13

*Add your signature here to indicate your support for the proposal*

Topic revision: r11 - 2020-02-17 - 15:34:35 - JimLewis

Copyright © 2008-2020 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.

Ideas, requests, problems regarding TWiki? Send feedback
