# Arbitrary Length Integers

## Proposal Details

• Who Updates: Martin Thompson
• Date Proposed: 8 March 2013
• Date Last Updated: 10 October 2014
• Priority:
• Complexity:
• Focus:

### Current Situation

VHDL currently mandates support only for integers with slightly less range than that of a 32-bit two's-complement number (-2**31+1 to 2**31-1).

Users often need access to larger ranges than this. Even lack of access to the full 32-bit range can be a problem for certain code.

### Requirements

Create a completely unconstrained integer type with all the capabilities of the current INTEGER type (use in ranges, loop indices, ability to index arrays, etc.). Users are still free to define constrained subtypes.

The LRM definition of INTEGER changes from an "at least" guarantee to an explicitly constrained subtype of an unconstrained integer type (UNIVERSAL_INTEGER). Keeping the commonly-assumed semantics allows existing code to operate unchanged.

Allow logical and shift operations on all integers. The following operators shall be implemented: not, and, or, xor, nand, nor, xnor, srl, sll, sra, sla. See IntegerOperators for details. Rotate operators (ror and rol) cannot be implemented, as an unconstrained integer has no "natural length" of bit representation for the low bits to rotate into the high bits of.

The operators will have their conventional mathematical meanings (in particular note that "sla" will fill the lsb with 0). LSBs "dropped off" the right hand side by a right shift are discarded.
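Python's arbitrary-precision integers already implement these semantics, so they can serve as a reference model (operator spellings differ: `<<` fills the lsb with 0 like the proposed `sla`/`sll`, and `>>` is sign-extending like `sra`):

```python
# Arithmetic shifts on arbitrary-precision integers (Python reference model).
# Left shift fills the lsb with 0, matching the proposed "sla"/"sll".
assert 5 << 2 == 20
assert -5 << 1 == -10

# Right shift discards the lsbs "dropped off" the right-hand side;
# for negative values the sign bit is replicated ("sra"), i.e. floor division.
assert 5 >> 1 == 2     # lsb of 5 (binary 101) is discarded
assert -5 >> 1 == -3   # floor(-5 / 2)
```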

As with the current mathematical operators, "It is an error if the execution of such an operation (in particular, an implicit conversion) cannot deliver the correct result (that is, if the value corresponding to the mathematical result is not a value of the integer type)."

## Implementation details

Tools are free to support this in any way - a variety of so-called "bignum" libraries are available.

Vendors should use their implementation to differentiate themselves. For example, to improve performance, one could use "ordinary" machine-native arithmetic until the range of a specific value had exceeded this, then switch to the bignum library.
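As an illustration only (the class and its fields are invented for this sketch), the promotion idea can be modelled as a wrapper that tracks whether a value still fits in a machine word; a real implementation would dispatch to native arithmetic on the fast path and a compiled bignum library on the slow path:

```python
# Hypothetical sketch of the "native until overflow, then bignum" strategy.
# Here both paths use Python ints; only the representation flag is modelled.

INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

class HybridInt:
    def __init__(self, value):
        self.value = value
        # Track which representation a real implementation would use.
        self.is_native = INT64_MIN <= value <= INT64_MAX

    def __add__(self, other):
        # A real tool would detect overflow here (e.g. via a CPU flag)
        # and only then promote the result to a bignum representation.
        return HybridInt(self.value + other.value)

a = HybridInt(2**62)
b = a + a                 # exceeds the 64-bit range: promoted to bignum
assert b.value == 2**63
assert a.is_native and not b.is_native
```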

Logical and shift operations will operate on a two's-complement bit-pattern representation of the integers.
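Treating each integer as a conceptually infinite two's-complement bit pattern gives well-defined results even for negative operands; Python's integers use the same model:

```python
# Logical operations on the (conceptually infinite) two's-complement pattern.
# -1 is ...1111, so AND-ing with it returns the other operand unchanged.
assert (-1 & 0b1010) == 0b1010

# "not" flips every bit of the pattern: not x == -x - 1.
assert ~5 == -6

# -6 is ...11010; OR with 1 sets the lsb, giving ...11011 == -5.
assert (-6 | 1) == -5
```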

### Candidate: expose UNIVERSAL_INTEGER

Currently, all integer literals are of an anonymous predefined type called UNIVERSAL_INTEGER in the standard. One approach to arbitrary-length integers is to expose this type and specify that it is completely unconstrained. RyanHinton suggests we make INTEGER a (constrained) subtype of UNIVERSAL_INTEGER. (Anyone see any side-effects of doing this?) Specifically, INTEGER would be a subtype with the current range guaranteed in the standard, -2**31+1 to 2**31-1.

```vhdl
-- implicit type UNIVERSAL_INTEGER
subtype INTEGER is UNIVERSAL_INTEGER range -2**31 to 2**31-1;
subtype INTEGER is range -2**31 to 2**31-1; -- more compact
-- note both of these include the full two's-complement range
```

The LONG_INTEGER type proposed in LongIntegers can also be implemented as a subtype of UNIVERSAL_INTEGER.

```vhdl
subtype LONG_INTEGER is range -2**63 to 2**63-1;
```

Two additional attributes need to be defined, UNIVERSAL_INTEGER'HIGH and UNIVERSAL_INTEGER'LOW; they would effectively map to plus and minus infinity. They can be used in comparisons, assigned to objects, and used in some ranges. However, they cannot be used to define the size of, or index into, arrays. They cannot be used as the left-most constraint of a loop, nor in logical or shift operations.

#### For example

```vhdl
for i in UNIVERSAL_INTEGER'low to 0  -- error
for i in 0 to UNIVERSAL_INTEGER'high -- OK - presumably some other way to break out of the loop
variable i : UNIVERSAL_INTEGER := UNIVERSAL_INTEGER'low;
if i = UNIVERSAL_INTEGER'low then    -- OK
some_array(i) := 1;                  -- runtime failure, can't index like this

variable B : UNIVERSAL_INTEGER range 0 to 2**64;       -- off-by-one ? is "-1" missing ?
type big_array_type is array (0 to 2**40) of some_other_type;
```

## Related Issues

LongIntegers suggests adding a new type with exactly 64-bit range. This would be an improvement over the current situation where we are limited to 32-bit integers, but it is not as powerful or general as arbitrary-ranged integers.

## Use Cases

• Cryptography
• Error detection and correction
• Addressing large (>2GB) memories - not out of the question to want to address such a memory these days.
• Event counters in long simulations.
• Representing floating-point numbers within a "significand, exponent" record - for significands beyond 31 bits (e.g. IEEE 754 double, Intel 80-bit extended)
• Represent physical types with higher precision and range (e.g. frequency type up to a few terahertz with precision of 1-Hz)

## Arguments FOR

Insulating users from machine-specific details is a good thing.

Allowing users to use wider ranges (especially 64 bits and beyond).

Encryption applications would benefit from using bignums rather than having to work up their own libraries.

Python (the widely used high-level language) also did this - http://www.python.org/dev/peps/pep-0237/ states (in summary):

> Many programs find a need to deal with larger numbers after the fact, and changing the algorithms later is bothersome...
>
> Having the machine word size exposed to the language hinders portability...
>
> ...the general desire to hide unnecessary details from the ... user when they are irrelevant for most applications

If we choose a constrained range, it will look too small in a surprisingly short time (the "640K is enough for anyone" syndrome).

-- JanDecaluwe - 2014-11-02 Some non-supporters ask what is wrong with numeric_std. The answer is that signed/unsigned types fail to behave like integers, in annoying and unproductive ways. They originate from the time when exposing the low-level computer architecture in the language was useful. Many people have become so used to them that they don't realize that there is a superior alternative: plain old integers, as we all learn in primary school. See also my essay These Int's Are Made for Counting.

-- DanielKho - 2015-01-04 Solves existing problems dealing with physical types and integer arithmetic (multiplication, division).

## Arguments AGAINST

Performance - although compared to writing a bignum library in VHDL, or using one of the other hacks currently employed (wide signed vectors, reals, ...), a compiled bignum library is probably still a win. The impact is likely to be very minor for integers that fall within the current range of INTEGER anyway.

The comparison with Python is irrelevant: VHDL is a hardware description language, while Python is a scripting language. A hardware description necessarily deals with hardware limitations.

-- MartinThompson - 2014-10-10 - with respect, it is relevant. An increasing amount of the job is verification, which does not have to deal with hardware limitations and should not be restricted as if it did. The comparison with Python is very valid as there are currently two projects which are built on the premise that Python makes an excellent verification language (MyHDL and cocotb). -- YannGuidon - 2014-10-24 I agree, VHDL is now more than just a hardware description language.

-- LievenLemiengre - 2015-04-30

Making all integer types subtypes of universal_integer has consequences for overloading. Currently this works fine:

```vhdl
package p is
  type myInt is range 0 to 10;
  type yourInt is range 0 to 10;

  procedure p (arg : myInt);
  procedure p (arg : yourInt);
end;
```

But if myInt and yourInt were subtypes of universal_integer, this overloading would be illegal, because overload resolution works on base types. There is code in the wild that defines its own integer types, and we would break that code. This probably isn't a big issue, but it's something we should be aware of.

Additionally, the std library could contain INTEGER_x and NATURAL_x definitions for x in the set (8, 16, 32, 64, 128), so that users have access to "standard types" such as users of C99 are accustomed to.

It would be nice if the current integer were a subtype of a more general type. That way you can avoid breaking old code that depends on the size. -- JimLewis - 2013-04-29

It wouldn't seem that code depending on an 'implementation defined' size is a portable VHDL description to begin with. Backward compatibility with a tool dependency sounds like a tool vendor issue, as in the second paragraph of Arguments Against. - DavidKoontz - 2013-07-14

[Main.RyanHinton - 2013-06-18] Do we need to make C99 programmers comfortable in VHDL? In other words, is there a concrete reason to add INTEGER_x types to the language, or can we put them in a (non-IEEE) convenience package?

[Main.CliffordWalinsky - 2014-02-27] Yes, code that depends on a manageable size for INTEGER is probably not "nice" code, but it exists, and probably in large quantities. Requiring that all of this legacy code be changed is not realistic. In some companies, even asking for a new compile-time switch could require approval at the highest levels of the organization; changing source code may be close to impossible.

-- YannGuidon - 2014-10-24 Let me state it differently: do we really need to make C99 programmers UNcomfortable in VHDL?

-- JanDecaluwe - 2014-11-02 My suggestion is to implement this similar to vector types: allow unconstrained types (e.g. UNIVERSAL_INTEGER) in interfaces, but require finite constraints at elaboration time. This would provide the modeling advantages of arbitrary length integers while keeping the advantage of a compiled language: the elaborator/compiler could perform optimizations based on the constraints. This should address any performance concerns.

-- JanDecaluwe - 2014-11-02 I see references to synthesis from non-supporters. Although synthesis is an important application, VHDL is not defined as a synthesis language but as a modeling & simulation language. Synthesis concerns should not be taken into account to decide on language features; they become relevant later when defining synthesis standards and (especially) restrictions.

-- DanielKho - 2015-01-04 I agree with Jim's and Ryan's suggestions of making the current integer a constrained subtype of universal_integer. @Tristan, yes, arbitrary integers can't be synthesized, but at a higher level of the design hierarchy the user should constrain the integer to a specific range; that would make it synthesizable. As Jan mentioned, we can specify unconstrained integers in interfaces (or ports), which will later, at a higher level of the design, be constrained by the designer prior to elaboration.

-- ErnstChristen - 2015-01-27 - Based on some of the comments here and in related proposals (e.g. Long Integers, Modular Integer Types) and discussions (e.g. long physical types), it would be worthwhile to distinguish between abstract types and "hardware" types. Some of the arguments made in these discussions apply to one but not the other. It has been observed that the HW types SIGNED and UNSIGNED are available, but often have poor performance. I think that this discussion should focus on abstract types, in which case synthesis issues are not as relevant as they are for HW types. The most blatant issue I have seen is the inability to work with TIME'POS, e.g. assign it to an integer, once time exceeds the integer range.

## Supporters

-- MartinThompson - 2013-03-08 -- RyanHinton - 2013-04-29 -- JimLewis - 2013-04-29 -- TrondDanielsen - 2013-11-20 -- YannGuidon - 2014-10-24 -- JanDecaluwe - 2014-11-02 -- DanielKho - 2015-01-04 -- JonasBaggett - 2016-10-16

## Non-Supporters

-- CliffordWalinsky - 2014-02-27

From the standpoint of performance, integers are a basic data type in the implementation of VHDL, used in all array indexing operations. Array indexing is performed in almost all VHDL models of any size, given the centrality of std_logic_vectors. This proposal would, in many cases, cause index computations to become slower, because all integer values would need to be checked to determine whether they represent arbitrary-length integers. Consequently, a design without any arbitrary-width integers could simulate significantly slower than before the introduction of this concept. It's possible that vendors could devise dataflow optimizations to infer limits on integer expressions. But, in general, this is an unsolvable problem. There may be integer-valued indexing expressions in designs that cannot be inferred to have limits, and that take up large quantities of compute time (perhaps because they reside within nested loops). Users encountering this situation may be left very frustrated, particularly if they have no interest in using arbitrary-length integers.

-- PeterFlake - 2014-08-07

Is there any reason why arithmetic operators cannot be added to bit_vector to provide integers of arbitrary but fixed size, as in the numeric_bit package, built into the implementation for efficiency?

-- TristanGingold - 2014-10-13

I think this proposal is confused.

Arbitrary-precision numbers cannot be synthesized (as they need unbounded resources), so I don't think they are useful for VHDL.

Very large numbers are already supported via numeric_bit or numeric_std. What is wrong with them?

If you need large counters or a large memory, you can define your own 64-bit type. If it isn't supported by your simulator, complain to your vendor. I think that 64-bit integers should fill your needs.

-- MartinThompson - 2014-10-17 There is a need for synthesisable integers larger than 64 bits, with constrained ranges (e.g. 0 to 2**1024-1 for encryption). Vectors are slow to simulate and use more memory, as they carry around the metavalue baggage for every bit.
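An illustrative sketch (Python, with invented helper names) of the overhead being argued about here: a metavalue-carrying vector must store and walk one enumerated element per bit on every operation, whereas an integer addition is a single bignum operation on machine words:

```python
# Illustrative only: why per-bit metavalue vectors cost more than plain integers.

def vector_add(a_bits, b_bits):
    """Ripple-carry add on lists of '0'/'1' characters, lsb first."""
    if 'X' in a_bits or 'X' in b_bits:      # metavalue handling on every operation
        return ['X'] * max(len(a_bits), len(b_bits))
    carry, out = 0, []
    for i in range(max(len(a_bits), len(b_bits))):
        s = carry
        s += 1 if i < len(a_bits) and a_bits[i] == '1' else 0
        s += 1 if i < len(b_bits) and b_bits[i] == '1' else 0
        out.append('1' if s % 2 else '0')
        carry = s // 2
    if carry:
        out.append('1')
    return out

# 1024-bit operands: the vector version walks 1024 elements per add.
a = ['1'] * 1024                      # 2**1024 - 1 as an lsb-first vector
total = vector_add(a, ['1'])          # add 1: the carry ripples through every bit
assert total == ['0'] * 1024 + ['1']

# The integer version is one expression, with no per-bit bookkeeping.
assert (2**1024 - 1) + 1 == 2**1024
```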

-- TristanGingold - 2014-10-24 I don't see any reason why numeric_bit should be slow. I think it would be much better to improve numeric_bit/numeric_std (and to spend effort to improve their speed) than to add new features.

-- DavidKoontz - 2014-11-09

There are practical limits to the size of adding or multiplying operators' operands, based on performance and testability. For instance, you will likely not find record of anyone building a parallel multiplier larger than 112 bits. We build hardware with larger operands by doing the equivalent of bignum: chaining smaller operations together. There isn't a single commercially offered CPU that can perform greater-than-128-bit integer operations without chaining operations. While arbitrary-length integers have the saving grace of allowing a compact expression (a property already exhibited by array types with element types representing 'bits'), building that 1024-bit adder or multiplier is not practicable.

There are also issues with the index size: the index or range constraint size is the universal integer size. Think of all that simulation time spent doing constraint checking, an issue avoided by using array types with today's universal integer. Even changing universal integer to 64 bits would slow down simulation measurably due to cache sizes, not to mention seriously limiting performance on 32-bit CPUs. The constraints placed on integer range today are carefully thought out. Arbitrary-sized integers appear to require a new language definition, which may not include range constraint checking at run time, or may (like bignum) base the size on a unit size, all of a sudden less arbitrary, and implementable as a package today without breaking the underpinnings of VHDL.

Why not just change the bounds of integers like this: "However, an implementation shall allow the declaration of any integer type whose range is wholly contained within the bounds -2**512 and +2**512-1 inclusive." A googol is approximately 2**333, so I would think that this new definition should be enough for VHDL-2017. People will still have their old tools if they happen to have written code that is sensitive to the VHDL-87 defined range. Performance impact could be minimized by the simulator making use of the defined range of the object to choose the appropriate math routines. -- KevinJennings - 2016-10-14
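The googol comparison above checks out and is easy to verify with arbitrary-precision arithmetic (a googol sits between 2**332 and 2**333, well inside the proposed 2**512 bound):

```python
# A googol is 10**100; it falls between 2**332 and 2**333.
googol = 10**100
assert 2**332 < googol < 2**333

# The proposed bound comfortably contains it.
assert googol < 2**512 - 1
```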

Topic revision: r30 - 2020-02-17 - 15:34:25 - JimLewis

Copyright © 2008-2024 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.