Enhanced integers

Proposal Details

  • Who Updates: JonasBaggett
  • Date Proposed: 2016-07-30
  • Date Last Updated: 2016-10-18
  • Priority:
  • Complexity: Medium, I think.
  • Focus: Synthesis; also simulation to a lesser extent

Remark: This proposal is being split into the logical representation access proposal, the auto-ranged integers proposal (to be written), the type mechanism extension proposal (to be written) and the properties proposal (to be written). See also the new ideas section.

Current Situation

Currently the best option for integer arithmetic in synthesizable code is to work with the signed/unsigned types from numeric_std, which forces the user to do low-level programming all the time (such as size calculations and vector resizing). It is true that low-level programming is unavoidable when interfacing with an external entity, but as soon as you go inside your design, the need to specify the physical representation of the signals decreases.

Secondly, the numeric_std signed/unsigned types are actually defined as arrays of std_logic, a nine-value enumeration type, with arithmetic and binary operations added, which leads to weirdness and performance issues. It would be more natural both for the user and the compiler (by compiler I mean simulator or synthesizer) to use real integers, so both the user and the compiler become more efficient. Unfortunately, with current VHDL, integer types have too many limitations for synthesizable code.


I have written some code here, example_1.vhd, that is valid with current VHDL but helps introduce the point I would like to make.

Since Top interfaces with an external entity, it is absolutely necessary to do low-level programming and specify the length of its port signals. But for the signals inside the design (here: AE, BE, CE, DE), I don't see, in this simple case, a need for that much specification.

So basically, here are the (slightly simplified) specifications that are to be met by the compiler:

  • A and B have 8 bits; C has 24
  • Convert A and B into natural integers and assign them to AE and BE respectively
  • Assign AE + BE to DE (and no overflow allowed)
  • Assign DE * BE to CE (and no overflow allowed)
  • Convert CE into a std_logic_vector and assign it to C (and no overflow allowed).
As long as these specifications are met, we don't care about the bit size of AE, BE, CE and DE. We are actually giving the compiler the constraints that are really necessary for us, and leaving it the freedom of physical implementation on what we don't specify, because it doesn't really matter to us. More freedom means more ability for the compiler to produce optimized code. Of course, there are cases where the compiler needs more information and must complain if it is lacking. For example, if a signal is an accumulator, we need to specify its range or size because the compiler cannot guess how much the accumulator is supposed to grow; it is of course the user who has to find out what his needs are and specify the range or size accordingly.

Now the user can concentrate more on his design than on questions like "how many bits are needed after multiplying N1 bits by N2 bits and subtracting a number of N3 bits?", which could be calculated by the compiler. Letting the compiler find the range and size when they are obvious saves development time and reduces bugs due to incorrect size calculations, which lead either to overflows that still need to be caught (most of the time by analyzing the waves), or to poorly optimized code if more bits are used than needed.
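The kind of size calculation being delegated to the compiler can be modeled in a few lines of Python (a sketch, not VHDL; the function name is mine and not part of the proposal):

```python
def unsigned_bits(high: int) -> int:
    """Minimal number of bits needed to hold the range 0 .. high."""
    return max(high.bit_length(), 1)

# Worst case by size alone: multiplying N1 bits by N2 bits needs N1 + N2 bits.
# Worst case by *range* can be tighter: 0..9 times 0..9 only reaches 81,
# which fits in 7 bits instead of 4 + 4 = 8.
naive = unsigned_bits(9) + unsigned_bits(9)   # size-based bound: 8 bits
exact = unsigned_bits(9 * 9)                  # range-based bound: 7 bits
print(naive, exact)
```

This is exactly the kind of bookkeeping that is error prone by hand and trivial for a tool.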

The goal of this proposal is to add new integer types, the binary integer types, which inherit all the properties of standard integer types, including having a range, but have a few more extensions to make them suitable for synthesis and to ease conversions from/to bit vectors.


  • The above principle about range and size calculations will apply to these new types, unless otherwise specified.
  • Some extensions need to be added to the type declaration mechanism of VHDL (see remarks on implementation details).

  • We need to be able to specify integers whose maximal range isn't limited, as suggested in ArbitraryIntegers.

  • With integer types, a constraint error is raised in case of an overflow. But it is not yet possible to use modulo integers. So a first enhancement would be to allow creating modulo integer types (as already suggested with Modular Integer Types, but I will present a different way).
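The intended difference between the two behaviors can be sketched in Python (the class names are illustrative; the proposal itself would express this through VHDL type declarations):

```python
class ModuloNatural:
    """Model of a modulo integer type: results wrap instead of raising."""
    def __init__(self, modulus: int, value: int = 0):
        self.modulus = modulus
        self.value = value % modulus
    def __add__(self, other: int) -> "ModuloNatural":
        return ModuloNatural(self.modulus, self.value + other)

class RangedNatural:
    """Model of a plain ranged integer: overflow raises, as in VHDL."""
    def __init__(self, high: int, value: int = 0):
        if not 0 <= value <= high:
            raise OverflowError(f"{value} outside 0 to {high}")
        self.high, self.value = high, value
    def __add__(self, other: int) -> "RangedNatural":
        return RangedNatural(self.high, self.value + other)

counter = ModuloNatural(16, 15) + 1   # wraps around to 0
print(counter.value)
```

Declaring which behavior a signal has makes unwanted overflows detectable instead of silently wrapping.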

  • Binary operations should be allowed on the binary integer types, which should answer the IntegerOperators proposal. Not only is using conversion functions to do binary operations painful, but by nature, binary integers should have an accessible binary representation. Therefore I suggest adding a 'bits attribute on binary integers that returns a bit_vector representing their bits. This attribute will be read/write and will enable binary operations. The binary representation will be standard 2's complement. The user may want a different binary representation than the default one, so I think we should make this attribute overridable, allowing the user to create his own functions/procedures for the 'bits attribute. Therefore I suggest the use of attribute getters/setters that follow the same principle as Python properties. The Extended User-Defined Attributes proposal seems to address this issue.
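Since the proposal explicitly takes Python properties as its model for attribute getters/setters, here is a small Python sketch of a 'bits view that defaults to 2's complement and could be overridden in a subclass (the class and attribute names are illustrative, not proposed VHDL syntax):

```python
class BinaryInteger:
    """A value with a read/write bit-pattern view, 2's complement by default.

    A subclass could override the `bits` property to use, say, one's
    complement, mirroring the proposed user-defined attribute getters/setters.
    """
    def __init__(self, size: int, value: int = 0):
        self.size = size
        self.value = value

    @property
    def bits(self) -> str:
        # Getter: encode the value on `size` bits, 2's complement.
        return format(self.value & ((1 << self.size) - 1), f"0{self.size}b")

    @bits.setter
    def bits(self, pattern: str) -> None:
        # Setter: decode a bit pattern back to an integer, sign-extending.
        raw = int(pattern, 2)
        if raw >= 1 << (self.size - 1):
            raw -= 1 << self.size
        self.value = raw

x = BinaryInteger(size=4, value=-3)
print(x.bits)       # the 4-bit 2's complement pattern of -3
x.bits = "0111"
print(x.value)      # the value decoded from the pattern
```

The point is that reading and writing the representation goes through replaceable getter/setter code, exactly what the 'bits attribute is meant to allow.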

    With the suggested implementation for binary operations on the new integer types (see below), the following proposals are required (at least partially) :

Implementation details

So here is a suggested implementation of integer types that can meet the proposal goals while, AFAIK, still keeping backward compatibility.

I will start with the type definitions here: types_definitions.vhd. Be aware that the Signed and Unsigned types used in the examples below are not those from numeric_std but those defined in the preceding source file. I decided to use the same names, as I believe they perfectly describe the wanted behavior.

Currently (2016-08-02), the Extended User-Defined Attributes proposal is incomplete and no implementation is suggested. Therefore I also suggest an implementation for user-defined attribute getters/setters. Here is an example of a one's complement signed type that requires overriding the 'bits attribute: bits_attribute.vhd.


Some extensions were needed to the type declaration mechanism of VHDL :

  1. It was necessary to borrow the derived types feature from Ada in order to create types that are incompatible without an explicit cast (see https://en.wikibooks.org/wiki/Ada_Programming/Type_System#Derived_types).
  2. Since new properties (or attributes) need to be set on integer types, a mechanism was created to allow that, as shown above.
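The Ada-style derived-type idea, types that share operations but are incompatible without an explicit cast, can be loosely approximated in Python (an analogy only: Ada enforces this at compile time, while here a runtime check stands in for the compiler; all names are mine):

```python
class Unsigned(int):
    """Derived from the integer idea: same values, distinct type identity."""

class Signed(int):
    """Another derived type; mixing it with Unsigned requires an explicit cast."""

def add_unsigned(a: Unsigned, b: Unsigned) -> Unsigned:
    # The strict type check plays the role of the compiler rejecting
    # mixed operands of distinct derived types.
    if not (type(a) is Unsigned and type(b) is Unsigned):
        raise TypeError("explicit cast required between derived types")
    return Unsigned(int(a) + int(b))

u = add_unsigned(Unsigned(3), Unsigned(4))
print(int(u))
s = Signed(1)
v = add_unsigned(Unsigned(s), Unsigned(4))   # explicit cast, as in Ada
print(int(v))
```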

Code Examples

Here is some VHDL code using these types: example_2.vhd

Compiler range and size calculation rules

The first thing to be aware of is that every binary integer signal has a range, as binary integers inherit from the standard integer type, but they also have an extra size property. Both can be defined by the user or calculated by the compiler. The general rule is that the range must fit into the size of a binary integer, but its size may be larger than needed for the range. So don't forget that they are two different things, although related. Working with the range means working with the standard integer part of the binary integer, which is what we are trying to do whenever possible, while working with its size means working with its bits part, that is, working at the lower level, which we are trying to avoid as much as possible. Working with ranges instead allows the compiler to make further optimizations (like using system-size integers whenever possible), as the size is not constrained by the user. Lastly, the calculated range of a signal will always be its minimal possible range.

Let me take the second example to show the compiler's calculation rules; I will explain step by step how the compiler should find out the range and size of the signals. After each step, if needed, I will describe the general rule to be applied. Some rules will have a label because I will refer to them later:

Analyzing the top level

First the compiler has to start from the top level of the design. Here AT, BT and CT don't have a size defined by the user, although AT has a range. To calculate their range and size, the compiler has to analyze the expressions assigned to these signals.

There are two specifications to be met in the calculation of AT's size:

  1. AT's size must be A's length, as A is directly assigned to its bits.
  2. AT's size must be enough for its range. In this example, it means that if A's length were lower than 4 bits, an error would be raised because the range requirement (0 to 9) could not be fulfilled.
(a) This leads to the general rule for calculating the size of a range-defined signal whose bits are assigned from a bit vector signal/expression:
  1. Its size must be the same as the size of the assigned bit vector signal/expression.
  2. Its size must be enough to fit its range.
Concerning BT, as B is assigned directly to its bits too, its size will be the same as B's length, and its range will be from 0 to 2**B'Length - 1.

(b) This leads to the general rule for calculating the range and size of a signal whose bits are assigned from a bit vector signal/expression:

  1. Its size must be the same as the size of the assigned bit vector signal/expression.
  2. Its range will completely fill its size.
Besides explicit size specification, rule (b) is the only case where the range is calculated from the size. In all other cases, when range and size aren't explicitly specified, the calculations will be made for the range, and then the range will be one of the several parameters needed to calculate the size (see rule (c) below).
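Rules (a) and (b) can be restated as small Python functions, as a model of the proposed compiler behavior (the function names and error wording are illustrative):

```python
def rule_a(vector_length: int, range_high: int) -> int:
    """Rule (a): size of a range-defined signal assigned from a bit vector.

    The size equals the assigned vector's length, and must still fit the range.
    """
    needed = range_high.bit_length()
    if vector_length < needed:
        raise ValueError(f"range 0 to {range_high} needs {needed} bits, "
                         f"but the assigned vector has only {vector_length}")
    return vector_length

def rule_b(vector_length: int) -> tuple[int, int]:
    """Rule (b): size and range of a signal with no declared range.

    The range completely fills the size.
    """
    return vector_length, 2 ** vector_length - 1

print(rule_a(8, 9))   # AT: size 8, range 0 to 9 fits in 4 of those bits
print(rule_b(8))      # BT: size 8, range 0 to 255
```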

Concerning CT, the CE out port from the Example_2 entity is assigned to it, so its range will be the same as CE's. Therefore it cannot be calculated at that moment. It is true that its bits are assigned to C. However, the compiler won't look at C's length in order to calculate CT's range, as it will always first calculate the range of the right operand of an assignment and then use it to define the range of the left operand (the signal that is about to be assigned). Nevertheless, as CT's bits are assigned to C, if CT's range is bigger than the range allowed by C's length, an error will be raised.

(c) Here are the rules of size calculation, shown more clearly, for when we are not in the special cases handled by rule (a) or (b):

  1. First the range of the signal must be calculated.
  2. If the bits of the signal are used (directly or in an expression) to assign a (bit vector) signal, then the size must be such that there won't be a size mismatch with the assigned (bit vector) signal. But the range must still fit in the size. Conflicts are possible here if the signal's bits are also used to assign other signals. Let's say that in our case we also had D <= CT'Bits, where D is a Bit_Vector with a bigger size than C. Then an error would be raised, telling that the size of CT is required to be both the length of C and the length of D, which is impossible. The user could fix it by resizing before assigning CT'Bits to D.
  3. If it would be a better fit for the compiler, it is allowed to choose a bigger size, as long as that doesn't conflict with the preceding points.

Analyzing Example_2_RTL

Here both AE and BE will inherit their range from AT and BT respectively, as the latter are assigned to the former. But as their size is allowed to be bigger than needed for their range, AE and BE won't inherit the size of AT and BT. Indeed, knowing the range of AE and BE, the compiler is free to use more bits than needed if that leads to more optimizations.

Then we have the assignment to XE:

XE := AE * AE * BE;

where the minimal sizes of AE and BE are both 4 bits, but I will show that using the minimal sizes for the calculations leads to less optimization than using the ranges. Naively we could think that XE's minimal size will be 2 * AE's minimal size + BE's minimal size, which equals 12 bits, but as AE's range is required to be from 0 to 9, an optimization can be made: the maximum result will be 9 * 9 * 15, which is 1215, so 11 bits will be enough for XE's size.
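The arithmetic behind this step can be reproduced mechanically (a Python sketch of the range-based versus size-based bounds; the function name is mine):

```python
def natural_product_high(*highs: int) -> int:
    """Upper bound of a product of natural ranges 0 .. high_i."""
    result = 1
    for h in highs:
        result *= h
    return result

naive_bits = 4 + 4 + 4                                    # size-based bound: 12 bits
range_bits = natural_product_high(9, 9, 15).bit_length()  # 9 * 9 * 15 = 1215 -> 11 bits
print(naive_bits, range_bits)
```

One bit saved on a single multiply is modest, but range bounds compose, so the savings accumulate through a datapath.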

Then we have the accumulator signal. As its range is given, its size can be calculated following rule (c). If no size or range specification were given by the user, an error would be raised, as it would be impossible for the compiler to calculate the range because of the dependency loop.

This leads to the following rule: when there is a dependency loop, either direct as in our case, or indirect (e.g. A depending on B, B depending on C, C depending on A), information about range or size must be provided so that the compiler can break the dependency loop. In the indirect case shown above, it won't necessarily be needed to specify the range/size of all of A, B and C: specifying the range/size of one of them should be enough to break the loop and give an order of calculation. There may be some tricky corner cases with dependency loops, but they should always result in the compiler asking for more information so that it can break the loop.
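One way a compiler could realize this rule is a topological ordering in which user-specified signals count as already known; this is a sketch under my own assumptions, not something the proposal prescribes:

```python
def calculation_order(deps: dict[str, set[str]], known: set[str]) -> list[str]:
    """Order signals so each is calculated after its dependencies.

    `known` holds signals whose range/size the user specified; they break
    loops. Raises if a loop remains unbroken, mirroring the proposed
    compiler error that asks the user for more information.
    """
    order = []
    pending = {s: d - known for s, d in deps.items() if s not in known}
    while pending:
        ready = [s for s, d in pending.items() if not d]
        if not ready:
            raise ValueError(f"dependency loop among {sorted(pending)}: "
                             "specify a range or size to break it")
        for s in ready:
            order.append(s)
            del pending[s]
            for d in pending.values():
                d.discard(s)
    return order

# A depends on B, B on C, C on A: specifying B alone breaks the loop.
deps = {"A": {"B"}, "B": {"C"}, "C": {"A"}}
print(calculation_order(deps, known={"B"}))
```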

Then we have the YE1 and YE2 assignments. As they are modulo numbers, their range/size must be explicitly defined. The result of the expression assigned to them will be taken modulo their range.

Finally, we have the assignment to CE (we suppose here that no division by zero can occur):
CE <= (Unsigned (YE2) * BE) / (Unsigned (YE1) * AE) + Accumulator;

CE's range will be calculated from the ranges of YE2, BE, YE1, AE and Accumulator. Its exact range will be from 0 to YE2'High * BE'High + Accumulator'High, which becomes 0 to 45,945. So its size must be at least 16 bits.
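The division step relies on the denominator range excluding zero; the upper bound then comes from the largest numerator over the smallest denominator. A Python sketch (the concrete ranges of YE1, YE2, AE and BE come from the attached example and are not restated here; the function name is mine):

```python
def natural_quotient_high(num_high: int, den_low: int) -> int:
    """Upper bound of n / d over n in 0 .. num_high and d >= den_low >= 1."""
    if den_low < 1:
        raise ValueError("denominator range must exclude zero")
    return num_high // den_low

# With the smallest possible denominator being 1, the quotient's upper
# bound is just the numerator's upper bound, to which Accumulator'High
# is then added. The stated result, 0 to 45,945, indeed needs 16 bits:
print((45_945).bit_length())
```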

Back to top level

As CE is assigned to CT, CT inherits its range. As its bits will be directly assigned to C, rule (c) applies for its size calculation.

Open questions

  • With numeric_std signed/unsigned signals, it is easy for the user to see if he forgot to initialize a signal, because its value is set to uninitialized. Is there a way to mitigate the lack of this feature for binary integer signals?
  • For "signal A : Unsigned; -- Size and range set by the compiler to the minimum possible", on what basis will the compiler be able to determine a minimum possible size/range? After all, the minimum size/range based on just this declaration would be a null range. If you want to say that it will be determined by the context of how it is being used (Simple case: Z <= 2+3; would need three bits) then you'll need to spell that behavior out in more detail here. Remember that operators such as + can be overridden. I'm not necessarily opposed, but you haven't provided sufficient detail in this proposal -- KevinJennings - 2016-10-13
    JB: Yes, the minimum possible size/range will be determined by the context. This needs more clarification; thanks for pointing that out to me, I am going to address it soon. JonasBaggett - 2016-10-14
    JB: Addressed now by a new chapter (Compiler range and size calculation rules), but as I was thinking about it, I found some simpler and better rules: now the calculated signal range will always be the minimum possible, but its size can be any size as long as the range fits in it. I have also removed the code line you are referring to, as enough explanations were already given in the new chapter. JonasBaggett - 2016-10-14
    JB: I can't see when operator overloading could be problematic; do you have a possible case in mind? Correct me if I am wrong, but inside an operator overloading function, we can call the function that is about to be overloaded, right? JonasBaggett - 2016-10-14

New ideas

  • As the binary integers are basically ranged integers, it doesn't make sense anymore to me that they are of type signed or unsigned. They should be of type universal_integer or one of its subtypes (universal_natural or universal_positive). Integers whose range includes negative numbers could be encoded as 2's complement (unless the user overrides the methods for setting/getting the bits attribute), just like numeric_std signed vectors. Integers whose range doesn't include negative numbers could be encoded just like numeric_std unsigned vectors. Modulo integers will be of type Modulo_Integer or Modulo_Natural. Without an explicit cast, they will be incompatible with the universal integer type and its subtypes.
  • It now seems a better approach to me to split the proposal into several ones:
    • Synthesizable integers: this proposal will only describe the rules for range calculations (as described here, but without the rules about size calculations) and the modulo integers.
    • Derived types: this proposal will be a requirement for the synthesizable integers proposal. See the first point of this section for more information.
    • Allow access to the physical representation of objects: this proposal will be a recommended dependency of the synthesizable integers proposal. It would address the issues reported by IntegerOperators and ease connections with external entities. The main idea is that we want to use integers, enumerations, arrays, records, etc. as ideal objects. But these objects are to be physically implemented in bits. When possible, we want to use them as they were intended to be, but sometimes we need access to the physical representation. It would be like allowing two possible views of an object, and letting the user use whichever one is relevant in each case. Here is a picture to better explain the idea:

      A represents the ideal object, while B represents the real object that also has a physical representation. Currently in VHDL we can only access A, and are forced to do conversions to access the part of B outside A. It's a bit of a pity, because in reality we are dealing with B and not A, although we want to access only the A part of B whenever possible. Therefore it makes sense to me to consider every object as a B object with two possible views: the A part, which will be accessible as usual, and the part of B outside A, which will be accessible with the following attributes: bits, size and encoding. The latter will allow defining a predefined encoding for the object.
    • Properties: the previous proposal will have a recommended dependency on this one, because it would allow user-defined encoding.

Related Issues

Arguments FOR

Compared to the signed/unsigned types in numeric_std, I see the following advantages :

  • More high-level programming when possible, so the user can be more focused on his design. KJ: Because you say so? I don't see this. -- KevinJennings - 2016-10-13
    JB: The last but one paragraph of the introduction expands on that. I mean the user can be more focused on what his design has to achieve instead of doing vector size calculations, which can be error prone. Is it now clear for you, or could you explain your point further? JonasBaggett - 2016-10-14
  • Less error prone, because the compiler now does a lot of the size calculations that previously had to be done by the user. KJ: Your examples show doing double the work, not only specifying the range but also specifying the size. Your examples seem to show the declarations to be more error prone, not less. -- KevinJennings - 2016-10-13
    JB: I don't get your point; my examples show that it is possible to specify either the range, or the size, or even both if for some reason that makes sense for the user. I see no doubling of work here. And don't forget that most of the time a size or range specification won't be needed. JonasBaggett - 2016-10-14
    JB: Now it should be clearer with the modifications I made. JonasBaggett - 2016-10-14
  • Leads to a lot more readable code KJ: Because you say so? I don't see this. -- KevinJennings - 2016-10-13
    Ok. If you compare the vector size specification of a numeric_std type with the way of specifying the size of the proposed binary type, right, there isn't really a difference in readability. But the whole point is that most of the time a size specification is not needed. Did you also analyze example_2.vhd? If the types from numeric_std were used instead, there would be a lot of calls to resize functions, making the code less readable, especially in the Example_2_RTL architecture. Here, except for the conversions from/to the Bit_Vector at the top level (and some explicit casts that could be avoided with operator overloading, as already mentioned in the code comments), every calculation is just like mathematics. So I don't get your point here. JonasBaggett - 2016-10-14
  • It is now possible to define whether a binary integer is modulo or not, so it is easier to catch unwanted overflows. KJ: Do you mean this when compared to using signed/unsigned? VHDL integers already catch overflows quite nicely. -- KevinJennings - 2016-10-13
    JB: Yes, I mean this compared to using signed/unsigned from numeric_std, as standard integers are of very limited use in synthesizable code. By the way, when in my examples I use signed or unsigned types, they aren't those from numeric_std but those derived from the binary integer type as defined here: types_definitions.vhd. I added a sentence in the implementation details to make this clearer. JonasBaggett - 2016-10-14
  • Easier for the compiler to do optimizations KJ: Because you say so? This may be true, but is this just a hope or has some compiler person vetted it and how do these optimizations compare to simply redefining the bounds of integer from the +/- 2^31 (or thereabouts) to +/- googol (10^100, 333 bits)? If googol is not enough for the purists, how about Googolplex? Simply redefining the integer bounds would be more straightforward, probably meets most people's needs and is not creating new keywords. I haven't come up with the scenario where using googol as the bounds would break existing code either whereas this proposal, as written, would. -- KevinJennings - 2016-10-13
    JB: I mean that, from a compiler's point of view, numeric_std types are arrays of characters, and without cheating a little, arithmetic on them will be slow to calculate, as the IEEE arithmetic functions have to be called. But it would make sense to me if most of the available compilers cheat and treat signals of these types as normal integers while still handling the corner cases where some of their bits have special values like 'U'. Or is it more complex than that? JonasBaggett - 2016-10-14
  • Should address a good part of the issues reported by ArbitraryIntegers, LongIntegers, Modular Integer Types, IntegerOperators and ImplicitConversionNumeric.
It is also easy to extend to fixed-point and floating-point numbers.

Arguments AGAINST

General Comments

Thanks to PatrickLehmann for the inspiring discussions we had.

Not necessarily against this proposal, but it needs more work and there isn't much time left to finish it off for the next revision of the standard. -- KevinJennings - 2016-10-13

Thanks for spending time reviewing my proposal, and for the suggested improvements. But for some of your comments, I need some more clarification so that I can know what in my proposal needs either improvement or clarification. -- JonasBaggett - 2016-10-14


-- JonasBaggett - 2016-08-01

Add your signature here to indicate your support for the proposal

Topic attachments

  • bits_attribute.vhd - 2.6 K - 2016-08-03 - JonasBaggett
  • example_1.vhd - 1.0 K - 2016-07-30 - JonasBaggett - Example 1
  • example_2.vhd - 2.5 K - 2016-10-14 - JonasBaggett
  • types_definitions.vhd - 1.5 K - 2016-10-13 - JonasBaggett
