
My demo app consists of four composable functions:

- **ByteToDecimal()**
- **Byte()**
- **Bit()**
- **Decimal()**

**ByteToDecimal()** is the top-level composable which I call from **setContent()** in MainActivity.kt. (I start with “Empty Compose Activity” in Android Studio.) **ByteToDecimal()** calls **Byte()**, which calls **Bit()** in a loop eight times to generate a “toggleable” button for each bit. It then calls **Decimal()** to generate a text composable which displays the current decimal value of the byte.

```kotlin
/** Rick Regan, https://www.exploringbinary.com/ */

val byte = arrayOf(
    mutableStateOf(false), mutableStateOf(false), mutableStateOf(false), mutableStateOf(false),
    mutableStateOf(false), mutableStateOf(false), mutableStateOf(false), mutableStateOf(false)
)

var byteValue by mutableStateOf(0)

fun bitFlip(bitPosition: Int) {
    var bit by byte[bitPosition]
    bit = !bit
    val bitValue = 2.0.pow(bitPosition.toDouble()).toInt()
    byteValue += if (bit) bitValue else -bitValue
}
```

The app represents a byte as eight individually mutable Booleans, a change to any of which will cause the corresponding button to be recomposed (redrawn). It represents the decimal value as a mutable integer, a change to which will recompose the text displaying the decimal value. The changes to this mutable state are initiated in the UI (button presses in **Bit()**) but made here in the function **bitFlip()** through a callback.

The app’s state lives outside of the composable functions so that it can survive configuration changes.

```kotlin
@Composable
fun ByteToDecimal() {
    Column(
        horizontalAlignment = Alignment.CenterHorizontally,
        modifier = Modifier.fillMaxWidth()
    ) {
        Byte(byte, ::bitFlip)
        Spacer(modifier = Modifier.padding(top = 20.dp))
        Decimal("$byteValue")
    }
}
```

**Byte()** creates eight buttons by calling the **Bit()** composable in a loop. It gives each button its bit position, the value of its mutable Boolean representation, and a reference to the **bitFlip()** function that reacts to the bit being toggled.

```kotlin
@Composable
fun Byte(
    byte: Array<MutableState<Boolean>>,
    bitFlip: (bitPosition: Int) -> Unit
) {
    Row {
        for (bitPosition in 7 downTo 0) {
            Bit(bitPosition, byte[bitPosition].value, bitFlip)
        }
    }
}
```

**Bit()** displays a button with its current state (“0” or “1”), its place value equivalent in decimal above it, and its bit position below it. The buttons are styled to make it easier to differentiate between a “0” and a “1”.

```kotlin
@Composable
fun Bit(
    bitPosition: Int,
    bit: Boolean,
    bitFlip: (bitPosition: Int) -> Unit
) {
    Column(
        horizontalAlignment = Alignment.CenterHorizontally
    ) {
        Text(
            text = 2.0.pow(bitPosition.toDouble()).toInt().toString()
        )
        val colors = if (bit) {
            ButtonDefaults.buttonColors(
                backgroundColor = Color.White,
                contentColor = Color.Blue
            )
        } else {
            ButtonDefaults.buttonColors(
                backgroundColor = Color(240, 240, 240),
                contentColor = Color.Blue
            )
        }
        Button(
            onClick = { bitFlip(bitPosition) },
            modifier = Modifier
                .padding(2.dp)
                .size(45.dp),
            colors = colors,
            border = BorderStroke(1.dp, color = Color.Blue)
        ) {
            Text(
                text = if (bit) "1" else "0",
                modifier = if (!bit) Modifier.alpha(0.4f) else Modifier
            )
        }
        Text(
            text = bitPosition.toString(),
            color = Color.Gray
        )
    }
}
```

**Decimal()** simply displays the decimal representation of the byte.

```kotlin
@Composable
fun Decimal(decimal: String) {
    Text(
        text = decimal,
        Modifier
            .width(70.dp)
            .border(BorderStroke(1.dp, color = Color.Black))
            .padding(4.dp),
        textAlign = TextAlign.Center,
        color = Color(0, 180, 0),
        fontSize = 5.em
    )
}
```

- I tried to demonstrate Compose concepts like mutable state, state hoisting, and unidirectional data flow. I also tried to follow all Kotlin and Compose coding conventions as I understand them at this stage.
- I would have liked to have used property delegate syntax in declaring **byte**, but I don’t know if that’s possible in an array. (I do `var bit by byte[bitPosition]` in the code individually for each bit.)
- I could have made an array of *bitValue*s instead of computing the powers of two each time, but I wanted to keep it simple.
- I probably could have made **Bit()** and **Decimal()** more stateless by hoisting the styling information and hardcoded button labels (if those are considered state), but I’m not sure yet where that line is.
- I use the expression `if (!bit) Modifier.alpha(0.4f) else Modifier`, but the `else Modifier` part seems a little clunky. I was looking for something like `Modifier.Unspecified`, similar to `Color.Unspecified`.
- I had originally coded **bitFlip()** as a lambda, but it was causing unnecessary (but ultimately harmless) recomposes. I’m investigating this on the #compose channel on Slack (login required).
- My call to **Decimal()** passes the string representation of *byteValue*, which triggers recomposition as I want. However, I was expecting to have to pass *byteValue* itself, since it is the mutable state.
- I did not include all the imports, but Android Studio should generate most of them (if you need them let me know and I’ll post them in the comments).
- Compose is in *alpha*, so the API may change. This article is based on alpha 11, using Android Studio 2020.3.1 Canary 5.

Feedback welcome!


For my initial exploration of double rounding errors in floating-point conversions I generated examples by hand, setting the required bit patterns indirectly to produce long decimal strings. It occurred to me that some of these long decimal strings could be rounded to much shorter ones — like 0.1000000000000000055511151231257827021181583404541015625 rounds to 0.1 — and map to the same double, in our case the double representation of a float midpoint. From there, it’s only a matter of checking if the double rounding error still occurs, since the decimal number could have shifted to the midpoint or to the other side of it.
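For instance, the long and short strings in that example convert to the identical double, which is all the shortening step relies on; a quick sketch:

```kotlin
fun main() {
    // The long decimal string is the exact value of a double; "0.1" is its
    // 1-digit rounding. Both convert to the same double, so both behave
    // identically in a decimal -> double -> float conversion.
    val longString = "0.1000000000000000055511151231257827021181583404541015625"
    val shortString = "0.1"
    check(longString.toDouble() == shortString.toDouble())
}
```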

Starting with a randomly selected double rounding error bit pattern (which I call a “template” in the code) I fill in the “don’t care” bits with a random value and then randomly shift those example bits left or right a random number of binary places to create a decimal number that will experience a double rounding error upon conversion from decimal to double to float. I do these calculations using BigDecimal operations on powers of two, meaning I create a decimal number indirectly by thinking of its construction in binary. Then, I successively round this long, exact decimal representation to 17 digits, 16 digits, 15 digits, etc., stopping when the double rounding error no longer occurs. (In many cases the 17-digit representation does not even cause an error.)

Here is the Kotlin code I wrote:

```kotlin
/* Rick Regan, https://www.exploringbinary.com/ */
import java.math.BigDecimal
import java.math.MathContext
import java.math.RoundingMode
import kotlin.random.Random

fun findDoubleRoundingExamples(maxLength: Int) {
    val random = Random(1)
    for (i in 1..1_000_000) {
        val (decimal, numSigDigits) = generateDoubleRoundingExample(random)
        if (numSigDigits <= maxLength) { // Only interested in numbers this short or shorter
            println("$numSigDigits digits: $decimal")
        }
    }
}

fun generateDoubleRoundingExample(random: Random): Pair<String, Int> {
    val decimalExact = generateExactDoubleRoundingExample(random)
    var numSigDigits = decimalExact.precision()

    // Try to make the exact number shorter (often will get at least a 17-digit number)
    var decimalPrev = decimalExact.toString()
    for (i in 17 downTo 1) {
        val decimal = decimalExact.round(MathContext(i, RoundingMode.HALF_UP)).toString()
        if (decimal.toFloat() == decimal.toDouble().toFloat())
            break // No double rounding error at this length
        decimalPrev = decimal
        numSigDigits = i
    }
    return Pair(decimalPrev, numSigDigits)
}

fun generateExactDoubleRoundingExample(random: Random): BigDecimal {
    /* Double rounding bit pattern templates (pick one randomly)

                                   1 2-23 (any value)       24 25 26-53                        54 55
       up 1:   18014401730707455 : 1 0000000000000000000000 1  0  1111111111111111111111111111 1  1
       up 2:   18014401730707454 : 1 0000000000000000000000 1  0  1111111111111111111111111111 1  0
       down 1: 18014399583223809 : 1 0000000000000000000000 0  1  0000000000000000000000000000 0  1
       down 2: 18014399583223810 : 1 0000000000000000000000 0  1  0000000000000000000000000000 1  0
    */
    val templateBits = when (random.nextInt(1, 5)) {
        1 -> BigDecimal(18014401730707455)
        2 -> BigDecimal(18014401730707454)
        3 -> BigDecimal(18014399583223809)
        else -> BigDecimal(18014399583223810)
    }

    /* Generate a random 22-bit integer to fill out bits 2-23 (22-bit numbers lie between
       2^21 = 2,097,152 and 2^22 - 1 = 4,194,304 - 1). Shift the number left 32 bits
       (2^32 = 4,294,967,296) so it will start at bit 2 when added in to the template */
    val pow2To21 = 2_097_152
    val pow2To22 = 4_194_304
    val pow2To32 = BigDecimal(4_294_967_296)
    val exampleBits = templateBits + BigDecimal(random.nextInt(pow2To21, pow2To22)) * pow2To32

    /* The max binary exponent for a float is 127, and the min (normal) exponent is -126.
       We have a 55-bit integer, so normalized its binary exponent starts out as 54. This
       means we can only shift the point to the right a max of 73 more bits: 127 - 54 (add 1
       for non-inclusive endpoint to get 74), but we can shift the point to the left as much
       as 180 bits: 126 + 54 (add 1 for non-inclusive endpoint to get 181). */
    return if (random.nextBoolean()) // Randomly pick a left or right shift
        exampleBits * BigDecimal(2).pow(random.nextInt(0, 74))
    else
        exampleBits.divide(BigDecimal(2).pow(random.nextInt(1, 181)))
        // Kotlin quirk: Must use divide() since / operator does integer division
}

fun main() {
    findDoubleRoundingExamples(12) // Look for examples of 12 digits or fewer (change as desired)
}
```

- I classify the bit patterns into four types (I describe them as three types and a subcase in my article): *up pattern 1*, *up pattern 2*, *down pattern 1*, and *down pattern 2*.
- I start rounding at 17 digits because I view that as the start of the “short” examples.
- The rounding loop is overkill sometimes; a string of 0s that leads to short strings could be removed in one swoop. (I wanted to keep it simple.)
- I test for double rounding errors by doing the decimal to float and decimal to double to float conversions. In an alternative version of the code (not shown) I just checked the bit patterns of each rounded number to see if they matched an error pattern. I did this by scaling the BigDecimal representation of the string to an integer, converting that to a BigInteger, converting that to a binary string, and then testing the bits to see if they matched one of the patterns. That is more code, but it is independent of any particular language’s decimal to floating-point conversion routine (which may have errors).
- Only normal (not subnormal) numbers are generated and tested.
- Kotlin allows you to use operator symbols for BigDecimal operations; for example, `*` instead of multiply() and `/` instead of divide(). However, oddly, `/` ends up doing integer arithmetic, so I had to use divide().
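The conversion-based double rounding test described above boils down to a one-line comparison; a minimal sketch (the function name is mine):

```kotlin
// Returns true if converting the decimal string to float through an
// intermediate double gives a different float than converting it directly
fun hasDoubleRoundingError(decimal: String): Boolean =
    decimal.toFloat() != decimal.toDouble().toFloat()

fun main() {
    check(hasDoubleRoundingError("6.0581410114E-33")) // an 11-digit example found below
    check(!hasDoubleRoundingError("0.1"))             // 0.1 converts the same either way
}
```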

Here are some examples the program found, from the initially generated decimal number to the shortest rounding:

**17 digits**: 16581582576129408000 => **1.6581582576129408E+19**

**17 digits**: 3929563.874999999767169356346130371093750 => **3929563.8749999998** (this looks like it could round up to 10 digits but it doesn’t)

**13 digits**: 585276137701600012475039744 => **5.852761377016E+26**

**13 digits**: 150821866599299996733401494192128 => **1.508218665993E+32**

**11 digits**: 6.058141011400000204336628621036399613317319814496643298255323900016867931117800261830346267299951534823776455596089363098144531250E-33 => **6.0581410114E-33**

**10 digits**: 5169850375000000058598302970544128 => **5.169850375E+33**

**10 digits**: 9347089477999999790045239508467712 => **9.347089478E+33**

The initially generated decimal string is not guaranteed to round to a shorter string that causes a double rounding error. In my tests, on average, about 45% round to at least a 17-digit example. If you break it out by pattern type, about 38% of *up pattern 1*, about 62% of *up pattern 2*, about 28% of *down pattern 1*, and about 62% of *down pattern 2* round to at least a 17-digit example.

During the sequence of roundings, an initial *up pattern 1* can end with a shorter string that exhibits *up pattern 2* (and vice versa). Similarly, an initial *down pattern 1* can end up with a shorter string that exhibits *down pattern 2* (and vice versa).

I was surprised at first to see *up pattern 1*s converting to *up pattern 2*s and *down pattern 1*s converting to *down pattern 2*s. But after looking at examples I saw that this happened only for roundings to integers — integers with their last 1 bit at bit 54. For example, in an *up pattern 1* to *up pattern 2* scenario, 28808324560453631 rounded to 28808324560453630, the latter changing bit 55 from 1 to 0 (and keeping bit 54 1). In a *down pattern 1* to *down pattern 2* scenario, 16301684200308736.5 rounded to 16301684200308737, with the former having bit 54 = 0 and bit 55 = 1, and the latter having bit 54 = 1 (and no bit 55).

The easiest way for me to approach Per’s question was to search for examples, rather than try to find a way to construct them. As such, I wrote a simple Kotlin^{1} program to generate decimal strings and check them. I tested all float-range (including subnormal) decimal numbers of 9 digits or fewer, and tens of billions of random 10- to 17-digit float-range (normal only) numbers. I found example 7- to 17-digit numbers that, when converted to float through a double, suffer a double rounding error.

Here are examples I found:

Decimal | # | To Float | To Double | To Double to Float
---|---|---|---|---
1.3006255030632019 | 17 | 0x1.4cf5cap0 | 0x1.4cf5cbp0 | 0x1.4cf5ccp0
6.467822313308716 | 16 | 0x1.9df0cep2 | 0x1.9df0cdp2 | 0x1.9df0ccp2
0.0691026858985424 | 15 | 0x1.1b0b6ap-4 | 0x1.1b0b6bp-4 | 0x1.1b0b6cp-4
0.025306879542768 | 14 | 0x1.9ea0bep-6 | 0x1.9ea0bfp-6 | 0x1.9ea0c0p-6
4.456769842065e-9 | 13 | 0x1.324452p-28 | 0x1.324453p-28 | 0x1.324454p-28
5.79090352403e-4 | 12 | 0x1.2f9c32p-11 | 0x1.2f9c31p-11 | 0x1.2f9c30p-11
3.0128387285e-10 | 11 | 0x1.4b43dep-32 | 0x1.4b43dfp-32 | 0x1.4b43e0p-32
7.582917533e-5 | 10 | 0x1.3e0cf6p-14 | 0x1.3e0cf5p-14 | 0x1.3e0cf4p-14
9.67498269e-11 | 9 | 0x1.a9829ep-34 | 0x1.a9829fp-34 | 0x1.a982a0p-34
4.1358803e34 | 8 | 0x1.fdc95ep114 | 0x1.fdc95fp114 | 0x1.fdc960p114
7.038531E-26 | 7 | 0x1.5c87fap-84 | 0x1.5c87fbp-84 | 0x1.5c87fcp-84

A quick way to see the rounding errors is to look at the hexadecimal floating-point representations of the conversions. The decimal to float and decimal to double to float values differ by one ULP. (Remember that when used to represent a float, a 0 must be tacked on to fill out the last hex digit. This makes it appear like the float has 25 bits of precision, not 24, and that the double rounding error is two ULPs, when it is only ever one.)

You can verify the decimal to float and decimal to double conversions with my decimal to floating-point converter.
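You can also print these hexadecimal forms yourself; a sketch using Java's `Float.toHexString`/`Double.toHexString` on the first table row:

```kotlin
fun main() {
    val s = "1.3006255030632019"
    println(java.lang.Float.toHexString(s.toFloat()))             // decimal -> float
    println(java.lang.Double.toHexString(s.toDouble()))           // decimal -> double
    println(java.lang.Float.toHexString(s.toDouble().toFloat()))  // decimal -> double -> float
    // The two float results differ by one ULP
    check(s.toFloat() != s.toDouble().toFloat())
}
```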

To understand those hexadecimal floating-point representations of the conversions better, consider the full binary representation of 0.0691026858985424:

0.00010001101100001011011010101111111111111111111111111111101100...

The single-precision rounding bit, significant bit 25 (the first bit highlighted), is 0, so the full binary representation rounds *down* to 0.000100011011000010110110101 when converted correctly to a float.

However, a double rounding error occurs when converting first to a double, and then from the double to a float. The double-precision rounding bit, significant bit 54 (the second bit highlighted), is 1, and there are 1 bits after it, so it rounds *up* to 0.0001000110110000101101101011 as a double. The last bit of the double, which would be bit 25 of a float, is 1, so round-to-nearest-even dictates it rounds *up* to a float as 0.00010001101100001011011011.

Here’s a summary of those three binary representations:

```
float:           0.000100011011000010110110101
double:          0.0001000110110000101101101011
double to float: 0.000100011011000010110110110
```

The full binary representation of 5.79090352403e-4 is

0.000000000010010111110011100001100010000000000000000000000000000000001...

The pattern of bits between bits 24 and 54 is the “complement” of the first example: bit 24 is 0, bit 25 is 1, bits 26-53 are 0s, and bit 54 is a 0. A correctly converted float rounds *up*, but converted through an intermediate double it rounds *down* and then *down* again due to round-to-nearest-even.

There’s nothing special about the examples except that I chose ones with the smallest absolute value exponent.

Having tested all numbers with 9 digits or fewer, I found only one 7-digit number, nine 8-digit numbers, and 51 9-digit numbers. One of the 8-digit numbers was the 7-digit number with a trailing 0, and nine of the 9-digit numbers were a corresponding 8-digit number with a trailing 0.

Examples were hard to come by with testing of randomly generated decimal strings, although longer examples surfaced more frequently. I found 17-digit numbers roughly on the order of one per billion, whereas I found 10-digit numbers roughly on the order of one per ten billion. (Don’t quote me on those ratios: I did not run enough tests to establish them with high confidence.) Is this because shorter decimal numbers are sparser around eligible double-precision numbers?

While all the double rounding errors I found are verifiable, it’s possible that an incorrect conversion in Java (Kotlin uses Java’s FloatingDecimal class for conversions) could have missed some. (I have generally assumed these days though that FloatingDecimal is correct.)

Here are the significant bits of the examples above:

Decimal | Significant Bits of Full Binary Representation
---|---
1.3006255030632019 | 10100110011110101110010101111111111111111111111111111111110…
6.467822313308716 | 11001110111110000110011010000000000000000000000000000001100…
0.0691026858985424 | 10001101100001011011010101111111111111111111111111111101100…
0.025306879542768 | 11001111010100000101111101111111111111111111111111111100011…
4.456769842065e-9 | 10011001001000100010100101111111111111111111111111111100010…
5.79090352403e-4 | 10010111110011100001100010000000000000000000000000000000001…
3.0128387285e-10 | 10100101101000011110111101111111111111111111111111111100011…
7.582917533e-5 | 10011111000001100111101010000000000000000000000000000001000…
9.67498269e-11 | 11010100110000010100111101111111111111111111111111111110101…
4.1358803e34 | 11111110111001001010111101111111111111111111111111111101001…
7.038531E-26 | 10101110010000111111110101111111111111111111111111111110011…

These examples demonstrate two patterns in the significant bits of the full binary representation of a decimal number that cause these double rounding errors:

- Double round up pattern:
  - Bit 24 is 1
  - Bit 25 is 0
  - Bits 26-53 are 1s
  - Bit 54 is 1
- Double round down pattern:
  - Bit 24 is 0
  - Bit 25 is 1
  - Bits 26-53 are 0s
  - Bit 54 is 0
  - At least one bit after bit 54 is 1

The double round up pattern rounds the float up when it should be rounded down; the double round down pattern rounds the float down when it should be rounded up. Both patterns create a double that becomes a float halfway case.

In the double round up case, the rounding up propagates all the way down to bit 25, setting it to 1 and all bits above it to 0. This in turn sets up the float rounding, which is a halfway case that’s resolved by rounding up to the nearest even value. In the double round down case, the rounding down to double removes bits 54 and above, information essential to proper float rounding.

In the double round up pattern, the values of bits 55 and beyond are irrelevant. But to see a case where all those bits are 0s, you have to construct an exactly representable example, such as:

- 0.500000089406967107574786268742172978818416595458984375, which is 2^{-1} + 2^{-24} + 2^{-25} – 2^{-53} + 2^{-54}
- 9007200865353727, which is 2^{53} + 2^{30} + 2^{29} – 2^{1} + 2^{0}

To get a string of 1s from bits 26 to 53, the first example uses 2^{-25} – 2^{-53}, and the second example uses 2^{29} – 2^{1}.

(In this case, if you round the first example to 17 digits, 0.50000008940696711, you still get the double rounding error, although there won’t be all 0s beyond bit 55.)
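Constructions like these are easy to check with BigDecimal; a sketch (the pow2() helper is mine) that rebuilds the first example from its powers of two and confirms the double rounding error:

```kotlin
import java.math.BigDecimal

fun main() {
    val two = BigDecimal(2)
    // Exact power of two; negative exponents via exact division (terminating)
    fun pow2(n: Int): BigDecimal =
        if (n >= 0) two.pow(n) else BigDecimal.ONE.divide(two.pow(-n))

    // 2^-1 + 2^-24 + 2^-25 - 2^-53 + 2^-54, computed exactly
    val x = pow2(-1) + pow2(-24) + pow2(-25) - pow2(-53) + pow2(-54)
    val s = x.toPlainString()
    check(s == "0.500000089406967107574786268742172978818416595458984375")
    check(s.toFloat() != s.toDouble().toFloat()) // the double rounding error occurs
}
```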

There is a third double rounding error pattern that’s a variation of the double round down pattern, but it’s not demonstrated in the examples:

- Double round down alternative pattern:
  - Bit 24 is 0
  - Bit 25 is 1
  - Bits 26-53 are 0s
  - Bit 54 is **1**
  - **All bits after bit 54 are 0**

To invoke this scenario you also have to construct an exactly representable example, like:

- 0.500000029802322443206463731257827021181583404541015625, which is 2^{-1} + 2^{-25} + 2^{-54}
- 9007199791611905, which is 2^{53} + 2^{29} + 2^{0}

(If you round the first example to 17 digits, 0.50000002980232244, you still get the double rounding error, but the bit pattern switches from the double round down alternative pattern to the double round down pattern.)

Per was interested in fast path conversion, where numbers with 15 decimal digits or fewer and power-of-ten exponents between -22 and 22 (relative to the decimal digits viewed as an integer) can be converted correctly using just double-precision arithmetic. Specifically, he wanted to know if double rounding errors were possible if you used double-precision fast path as an intermediate for converting to *float*.

The shortest fast path case is 9 digits, and there are five of them among the 51 9-digit numbers: 9.67498269e-11 (described below), 5.85052973e21, 9.49766107e23, 8.04624287e26, and 8.96981543e28. (Remember, I tested all numbers through 9 digits.) Two of the nine 8-digit numbers, though, 4.1358803e34 and 8.2717606e34, are fast path cases in disguise, meaning zeros can be tacked on to make each a fast path case: 4135880300000e22 and 8271760600000e22, respectively.

Here are the 9-15 digit examples from above (which I selected in the first place because they are fast-path eligible), expressed as a fast path calculation:

Decimal | Significant Digits | Power of Ten | IEEE Operation
---|---|---|---
0.0691026858985424 | 691026858985424 | 10^{16} | Divide
0.025306879542768 | 25306879542768 | 10^{15} | Divide
4.456769842065e-9 | 4456769842065 | 10^{21} | Divide
5.79090352403e-4 | 579090352403 | 10^{15} | Divide
3.0128387285e-10 | 30128387285 | 10^{20} | Divide
7.582917533e-5 | 7582917533 | 10^{14} | Divide
9.67498269e-11 | 967498269 | 10^{19} | Divide

(It’s just arbitrary that all the examples I selected require division, and that none require multiplication.)
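The fast path calculation for the last row can be sketched directly in double arithmetic: both the digits and the power of ten are exactly representable as doubles, so one correctly rounded IEEE divide reproduces the decimal to double conversion, and the error surfaces on the final step to float:

```kotlin
fun main() {
    // 9.67498269e-11 as a fast path calculation: 967498269 / 10^19.
    // Both operands are exact doubles, so the divide is correctly rounded.
    val fastPathDouble = 967498269.0 / 1e19
    check(fastPathDouble == "9.67498269e-11".toDouble())          // same double as a full conversion
    check(fastPathDouble.toFloat() != "9.67498269e-11".toFloat()) // the double rounding error remains
}
```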

The necessary condition that sets up a double rounding error is a double representation of a decimal number that rounds to a tie-breaking rounding case for a float. If the float representation in turn is rounded in the same direction as the decimal to double conversion, the error occurs.

To be correct, the original decimal number (its binary representation that is) must be rounded relative to bits 25 and beyond. If those bits are altered in a certain way by the rounding to the intermediate double, rounding goes in the opposite direction than intended.

While finishing this article I thought of a way to randomly generate double rounding cases directly, using Java’s BigDecimal. You can generate long, exactly represented decimal numbers and round them down (to 17 digits, 16 digits, etc.) until you find the shortest one that still causes the double rounding error.

^{1}Why Kotlin? It’s just the language I’ve been using lately.

The drawing shows the binary code for the ASCII characters that spell “Google”:

01000111 01100111 01101111 01101100 01101111 01100101

(The last ‘1’ is unfinished in the drawing.)

Converted to decimal it reads

71 103 111 108 111 101

Converted to letters by looking up in an ASCII table it reads

G g o l o e
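The decoding above can be reproduced in a few lines, parsing each byte with radix 2 (a sketch):

```kotlin
fun main() {
    val bits = "01000111 01100111 01101111 01101100 01101111 01100101"
    val decimal = bits.split(" ").map { it.toInt(2) }     // parse each byte, radix 2
    println(decimal.joinToString(" "))                    // 71 103 111 108 111 101
    println(decimal.map { it.toChar() }.joinToString(" ")) // G g o l o e
}
```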

(Here are two of his papers to check out: De Progressione Dyadica and Explication de l’Arithmétique Binaire.)

This site doesn’t process sensitive data like logins or credit cards, so technically HTTPS is not necessary. But Internet users have come to expect encryption for everything, including browsing a basic informational site like this one. Also, Google adds another incentive: they use HTTPS as a signal for search result rankings. In any case, using HTTPS makes a site look more modern.

I’ve removed all the “mixed content” warnings I could find, but please tell me if you find any. (A page with mixed content won’t have the “padlock” in the browser address bar.) Also tell me if you experience a degradation of performance (I have not).

Please consider changing your links to my site to use HTTPS: https://www.exploringbinary.com/…/. Old HTTP links will redirect to HTTPS, but direct links would be cleaner.

The maximum digit counts are useful if you want to print the full decimal value of a floating-point number (worst case format specifier and buffer size) or if you are writing or trying to understand a decimal to floating-point conversion routine (worst case number of input digits that must be converted).

For integers, it’s easy to determine the maximum number of decimal digits — just count the digits of the largest floating-point number. (This may seem obvious, but wait until we discuss fractions.) Smaller numbers may have as many digits, but they can never have more.

The largest value of a double, known as DBL_MAX, is a significand of 53 1s starting at the place of the largest power of two exponent, 1023; here it is, expressed in normalized binary scientific notation:

`1.1111111111111111111111111111111111111111111111111111 x 2`^{1023}

Written out longhand in binary it is 1024 bits, 53 1s followed by 971 zeros.

This number can be expressed as (2 – 2^{-52}) · 2^{1023} = (1 – 2^{-53}) · 2^{1024} = 2^{1024} – 2^{971} = (2^{53} – 1) · 2^{971}. In decimal it is

179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368

It has **309 (significant) digits**. You can count them, or use this formula: ⌊log_{10}((2^{53} – 1) · 2^{971})⌋ + 1 = 309.

(Formulas with logarithms can be rewritten so that they are computed more efficiently; for example, the above can be written as ⌊log_{10}(2^{53} – 1) + 971 · log_{10}(2)⌋ + 1. However, to keep things simple, I will not express them that way.)

The largest value of a float, known as FLT_MAX, is a significand of 24 1s starting at the place of the largest power of two exponent, 127:

`1.11111111111111111111111 x 2`^{127}

Written out longhand in binary it is 128 bits, 24 1s followed by 104 zeros.

This number can be expressed as (2 – 2^{-23}) · 2^{127} = (1 – 2^{-24}) · 2^{128} = 2^{128} – 2^{104} = (2^{24} – 1) · 2^{104}. In decimal it is

340282346638528859811704183484516925440

It has **39 (significant) digits**: ⌊log_{10}((2^{24} – 1) · 2^{104})⌋ + 1 = 39.
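Both digit counts can be checked directly by constructing the integers exactly; a sketch using Java's BigInteger:

```kotlin
import java.math.BigInteger

fun main() {
    val two = BigInteger.valueOf(2)
    // DBL_MAX = (2^53 - 1) * 2^971
    val dblMax = (two.pow(53) - BigInteger.ONE) * two.pow(971)
    check(dblMax.toString().length == 309)
    // FLT_MAX = (2^24 - 1) * 2^104
    val fltMax = (two.pow(24) - BigInteger.ONE) * two.pow(104)
    check(fltMax.toString().length == 39)
}
```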

With fractions, finding the maximum required digits is not as simple as counting the digits of the smallest number. Also, we have to specify what we mean by “maximum” — is it the total length of the fraction, which includes leading zeros, or just the length of the significant digits? ^{1} We will be looking for the maximum significant digits, although the maximum number of digits overall will come out in the process.

We’ll first look at the smallest values of a double — the smallest *normal* and *subnormal* numbers — and then we’ll look at the numbers with the most significant digits.

The smallest (positive) normal value of a double, known as DBL_MIN, is 2^{-1022}. In binary it is 1022 bits, 1021 leading zeros followed by a 1. In decimal it is

0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002225073858507201383090232717332404064219215980462331830553327416887204434813918195854283159012511020564067339731035811005152434161553460108856012385377718821130777993532002330479610147442583636071921565046942503734208375250806650616658158948720491179968591639648500635908770118304874799780887753749949451580451605050915399856582470818645113537935804992115981085766051992433352114352390148795699609591288891602992641511063466313393663477586513029371762047325631781485664350872122828637642044846811407613911477062801689853244110024161447421618567166150540154285084716752901903161322778896729707373123334086988983175067838846926092773977972858659654941091369095406136467568702398678315290680984617210924625396728515625

It has **1022 digits**, 307 leading zeros followed by **715 significant digits**. You can count the digits or just use this formula: 1022 + ⌊log_{10}(2^{-1022})⌋ + 1 = 715.

That’s a lot of digits, but it’s neither the maximum total digits nor the maximum significant digits possible.

Let’s look at another number, 2^{-1074}. It’s the smallest subnormal value of a double — the smallest value of a double period. In binary it is 1074 bits, 1073 leading zeros followed by a 1. In decimal it is

0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004940656458412465441765687928682213723650598026143247644255856825006755072702087518652998363616359923797965646954457177309266567103559397963987747960107818781263007131903114045278458171678489821036887186360569987307230500063874091535649843873124733972731696151400317153853980741262385655911710266585566867681870395603106249319452715914924553293054565444011274801297099995419319894090804165633245247571478690147267801593552386115501348035264934720193790268107107491703332226844753335720832431936092382893458368060106011506169809753078342277318329247904982524730776375927247874656084778203734469699533647017972677717585125660551199131504891101451037862738167250955837389733598993664809941164205702637090279242767544565229087538682506419718265533447265625

It has **1074 digits**, 323 leading zeros followed by **751 significant digits**: 1074 + ⌊log_{10}(2^{-1074})⌋ + 1 = 751.
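These fraction digit counts are also easy to confirm by expanding the powers of two exactly with BigDecimal, where scale() gives the total digits after the point and precision() the significant digits (a sketch):

```kotlin
import java.math.BigDecimal

fun main() {
    val two = BigDecimal(2)
    // DBL_MIN = 2^-1022: 1022 total digits, 715 significant
    val dblMin = BigDecimal.ONE.divide(two.pow(1022))
    check(dblMin.scale() == 1022 && dblMin.precision() == 715)
    // Smallest subnormal = 2^-1074: 1074 total digits, 751 significant
    val minSubnormal = BigDecimal.ONE.divide(two.pow(1074))
    check(minSubnormal.scale() == 1074 && minSubnormal.precision() == 751)
}
```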

There aren’t numbers with more total digits, but there are numbers with more significant digits.

That there are numbers with more digits than 2^{-1022} and 2^{-1074} is not surprising if you’ve read this. The most significant digits come from a binary floating-point number that has the maximum length 1s-filled significand ending at the lowest place — place 1074 for doubles.

The double-precision number that fits the bill has a significand of 53 1s starting at the place of the smallest normal power of two exponent, -1022; it is this 53 significant bit number:

`1.1111111111111111111111111111111111111111111111111111 x 2`^{-1022}

Written out longhand in binary it is 1074 bits, 1021 leading zeros followed by 53 1s.

This number can be expressed as (2 – 2^{-52}) · 2^{-1022} = (1 – 2^{-53}) · 2^{-1021} = 2^{-1021} – 2^{-1074} = (2^{53} – 1) · 2^{-1074} = (2^{53} – 1) / 2^{1074}. In decimal it is

0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000044501477170144022721148195934182639518696390927032912960468522194496444440421538910330590478162701758282983178260792422137401728773891892910553144148156412434867599762821265346585071045737627442980259622449029037796981144446145705102663115100318287949527959668236039986479250965780342141637013812613333119898765515451440315261253813266652951306000184917766328660755595837392240989947807556594098101021612198814605258742579179000071675999344145086087205681577915435923018910334964869420614052182892431445797605163650903606514140377217442262561590244668525767372446430075513332450079650686719491377688478005309963967709758965844137894433796621993967316936280457084866613206797017728916080020698679408551343728867675409720757232455434770912461317493580281734466552734375

It has **1074 digits**, 307 leading zeros followed by **767 significant digits**: 1074 + ⌊log_{10}((2^{53} – 1) / 2^{1074})⌋ + 1 = 767.

This is the most significant digits a double can have, but it’s not the only double with that many; some (relatively speaking) smaller numbers have just as many. In this case, any significand with bits 1 and 53 equal to 1 — that is, half of the doubles between (2^{52} + 1) / 2^{1074} and (2^{53} – 1) / 2^{1074} — will have a decimal value with 767 significant digits.

There are also some subnormal numbers with the same number of significant digits. Consider the number that is one ULP below DBL_MIN. It has a significand of *52* 1s starting at the place of the largest subnormal power of two exponent, -1023; it is this 52 significant bit number:

`1.111111111111111111111111111111111111111111111111111 x 2`^{-1023}

Written out longhand in binary it is 1074 bits, 1022 leading zeros followed by 52 1s.

This number can be expressed as (2 – 2^{-51}) · 2^{-1023} = (1 – 2^{-52}) · 2^{-1022} = 2^{-1022} – 2^{-1074} = (2^{52} – 1) · 2^{-1074} = (2^{52} – 1) / 2^{1074}. In decimal it is

0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000022250738585072008890245868760858598876504231122409594654935248025624400092282356951787758888037591552642309780950434312085877387158357291821993020294379224223559819827501242041788969571311791082261043971979604000454897391938079198936081525613113376149842043271751033627391549782731594143828136275113838604094249464942286316695429105080201815926642134996606517803095075913058719846423906068637102005108723282784678843631944515866135041223479014792369585208321597621066375401613736583044193603714778355306682834535634005074073040135602968046375918583163124224521599262546494300836851861719422417646455137135420132217031370496583210154654068035397417906022589503023501937519773030945763173210852507299305089761582519159720757232455434770912461317493580281734466552734375

It has **1074 digits**, 307 leading zeros followed by **767 significant digits**: 1074 + ⌊log_{10}((2^{52} – 1) / 2^{1074})⌋ + 1 = 767.

Why does it also have 767 digits? Well of course because the logarithm comes out the same. But let’s look at it in terms of the structure of the number. It has one less significant bit than the number described above, which makes it lose a significant digit. But that’s offset by the lowered exponent, which in this case — as is the case about 70% of the time — adds back a significant digit. (This by the way explains the difference in the number of significant digits between this number and 2^{-1074}, which has 751; the 51 extra bits adds 51 digits, but moving the exponent 51 places higher subtracts about 0.7 * 51 = 36 digits.)

As it turns out, half of the doubles between (2^{51} + 1) / 2^{1074} and (2^{52} – 1) / 2^{1074} will have a decimal value with 767 significant digits.

The range of 767 significant digit numbers even continues into the next subnormal exponent down (one digit lost, one digit gained), although it does not span all its values; it goes from (2^{50} + 898122626230483) / 2^{1074} through (2^{51} – 1) / 2^{1074}. (I determined that big constant in the first numerator through trial and error.)

Overall, the 767 significant digit numbers live in the range (2^{50} + 898122626230483) / 2^{1074} through (2^{53} – 1) / 2^{1074}. (You can verify this with the logarithm formula.)
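You can verify these counts directly in Python, without any logarithms (this is my own quick check, not code from the article): since a/2^{n} = (a·5^{n})/10^{n}, and an odd numerator guarantees no trailing zeros, the number of significant digits is just the number of digits of a·5^{n}.

```python
# Significant decimal digits of the fraction a/2^n (a odd, a < 2^n):
# its exact decimal expansion is a*5^n / 10^n, so count digits of a*5^n.
def sig_digits(a, n):
    assert a % 2 == 1 and 0 < a < 2**n
    return len(str(a * 5**n))

print(sig_digits(1, 1074))                        # 2^-1074: 751
print(sig_digits(2**53 - 1, 1074))                # worst-case double: 767
print(sig_digits(2**52 - 1, 1074))                # one ULP below DBL_MIN: 767
print(sig_digits(2**50 + 898122626230483, 1074))  # bottom of the range: 767
```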

I’ll do a similar progression for floats, but with less narration.

The smallest normal value of a float, known as FLT_MIN, is 2^{-126}. In binary it is 126 bits, 125 leading zeros followed by a 1. In decimal it is

0.000000000000000000000000000000000000011754943508222875079687365372222456778186655567720875215087517062784172594547271728515625

It has **126 digits**, 37 leading zeros followed by **89 significant digits**: 126 + ⌊log_{10}(2^{-126})⌋ + 1 = 89.

That number has neither the maximum total digits nor the maximum significant digits possible.

Let’s look at another number, 2^{-149}. It’s the smallest subnormal value of a float — the smallest value of a float period. In binary it is 149 bits, 148 leading zeros followed by a 1. In decimal it is

0.00000000000000000000000000000000000000000000140129846432481707092372958328991613128026194187651577175706828388979108268586060148663818836212158203125

It has **149 digits**, 44 leading zeros followed by **105 significant digits**: 149 + ⌊log_{10}(2^{-149})⌋ + 1 = 105.

There aren’t numbers with more total digits, but there are numbers with more significant digits.

A significand of 24 1s starting at the place of the smallest normal power of two exponent (-126) will give us a float with the most significant digits; it is this 24 significant bit number:

`1.11111111111111111111111 x 2`^{-126}

Written out longhand in binary it is 149 bits, 125 leading zeros followed by 24 1s.

This number can be expressed as (2 – 2^{-23}) · 2^{-126} = (1 – 2^{-24}) · 2^{-125} = 2^{-125} – 2^{-149} = (2^{24} – 1) · 2^{-149} = (2^{24} – 1) / 2^{149}. In decimal it is

0.00000000000000000000000000000000000002350988561514728583455765982071533026645717985517980855365926236850006129930346077117064851336181163787841796875

It has **149 digits**, 37 leading zeros followed by **112 significant digits**: 149 + ⌊log_{10}((2^{24} – 1) / 2^{149})⌋ + 1 = 112.

This is not the only float with that many significant digits; some smaller numbers have just as many. In this case, any significand with bits 1 and 24 equal to 1 — that is, half of the floats between (2^{23} + 1) / 2^{149} and (2^{24} – 1) / 2^{149} — will have a decimal value with 112 significant digits.

There are also some subnormal numbers with the same number of significant digits. Consider the number that is one ULP below FLT_MIN. It has a significand of 23 1s starting at the place of the largest subnormal power of two exponent (-127); it is this 23 significant bit number:

`1.1111111111111111111111 x 2`^{-127}

Written out longhand in binary it is 149 bits, 126 leading zeros followed by 23 1s.

This number can be expressed as (2 – 2^{-22}) · 2^{-127} = (1 – 2^{-23}) · 2^{-126} = 2^{-126} – 2^{-149} = (2^{23} – 1) · 2^{-149} = (2^{23} – 1) / 2^{149}. In decimal it is

0.00000000000000000000000000000000000001175494210692441075487029444849287348827052428745893333857174530571588870475618904265502351336181163787841796875

It has **149 digits**, 37 leading zeros followed by **112 significant digits**: 149 + ⌊log_{10}((2^{23} – 1) / 2^{149})⌋ + 1 = 112.

There are more 112 significant digit numbers below that, with the same exponent; they range from (2^{22} + 2941935) / 2^{149} through (2^{23} – 1) / 2^{149}.

Overall, the 112 significant digit numbers live in the range (2^{22} + 2941935) / 2^{149} through (2^{24} – 1) / 2^{149}.

Here are the maximum digit counts for quadruple-precision:

- *Maximum length integer.* An integer with the maximum number of significant digits is (2^{113} – 1) · 2^{16271}. It has ⌊log_{10}((2^{113} – 1) · 2^{16271})⌋ + 1 = **4,933 digits**.
- *Maximum length fraction.* A fraction with the maximum number of significant digits is (2^{113} – 1) / 2^{16494}. It has 16,494 digits: 4,931 leading zeros followed by **11,563 significant digits**: 16494 + ⌊log_{10}((2^{113} – 1) / 2^{16494})⌋ + 1 = 11,563!

You can apply the above analysis to find the maximum number of significant digits in other floating-point formats as well.
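As a sketch of that generalization (my own, with the format parameters taken from IEEE 754), this Python function derives the fraction digit counts for any binary format from its precision *p* in bits and its minimum normal exponent:

```python
# Fraction digit counts for a binary floating-point format with
# p bits of precision and minimum normal exponent emin.
def max_fraction_digits(p, emin):
    n = -emin + (p - 1)         # lowest bit place, e.g. 1074 for doubles
    worst = (2**p - 1) * 5**n   # (2^p - 1)/2^n written over 10^n
    sig = len(str(worst))
    return n, n - sig, sig      # total digits, leading zeros, significant

print(max_fraction_digits(24, -126))     # float:  (149, 37, 112)
print(max_fraction_digits(53, -1022))    # double: (1074, 307, 767)
print(max_fraction_digits(113, -16382))  # quad:   (16494, 4931, 11563)
```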

Some programming languages let you print all these digits. In Python 3, this line will print all 1074 digits of the worst case double-precision example (2^{53} – 1) · 2^{-1074}, as displayed above:

print(format((pow(2,53)-1)*pow(2,-1074),".1074f"))

You can print just the 767 significant digits with this line (uses the ‘g’ presentation type instead of ‘f’):

print(format((pow(2,53)-1)*pow(2,-1074),".767g"))

4.4501477170144022721148195934182639518696390927032912960468522194496444440421538910330590478162701758282983178260792422137401728773891892910553144148156412434867599762821265346585071045737627442980259622449029037796981144446145705102663115100318287949527959668236039986479250965780342141637013812613333119898765515451440315261253813266652951306000184917766328660755595837392240989947807556594098101021612198814605258742579179000071675999344145086087205681577915435923018910334964869420614052182892431445797605163650903606514140377217442262561590244668525767372446430075513332450079650686719491377688478005309963967709758965844137894433796621993967316936280457084866613206797017728916080020698679408551343728867675409720757232455434770912461317493580281734466552734375e-308

In C (GCC and Visual Studio), this line will also print all 1074 digits (must also include *math.h*):

printf("%.1074f\n",(pow(2,53)-1)*pow(2,-1074));

Substituting ‘g’ for ‘f’ will print only the significant digits:

printf("%.767g\n",(pow(2,53)-1)*pow(2,-1074));

(I was happy that all the *pow()* implementations computed 2^{-1074} correctly. It’s a power of two so I expected them to, but you never know.)

Java, PHP, and JavaScript won’t let you print all those digits.

For double-precision, for example, it may seem like 17 significant digits of a decimal input is enough to convert it correctly, but it’s not; 17 digits only helps you return to the proper double once you’ve determined it.

A decimal to floating-point conversion routine has to consider all input digits up to the maximum, plus one digit for rounding. (Any digits beyond can just be considered “sticky”.) For example, for the input decimal representing 2^{-1022} + 2^{-1074} + 2^{-1075}, 1075 digits (768 significant) must be processed.
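Here's a Python sketch of that example (my construction; it assumes the interpreter's decimal-to-float conversion is correctly rounded, which CPython's is). The input 2^{-1022} + 2^{-1074} + 2^{-1075} = (2^{53} + 3)/2^{1075} is exactly halfway between two adjacent doubles, so dropping the trailing digits changes the result:

```python
from math import ldexp

# (2^53 + 3)/2^1075: exactly halfway between two adjacent doubles.
# Its exact decimal string has 1075 fraction digits (768 significant).
digits = str((2**53 + 3) * 5**1075).zfill(1075)
full = "0." + digits

lower = ldexp(1, -1022) + ldexp(1, -1074)  # double just below (odd last bit)
upper = ldexp(1, -1022) + ldexp(1, -1073)  # double just above (even last bit)

print(float(full) == upper)  # True: the tie rounds half-to-even, upward

# Keep only the first 17 significant digits: the value now falls below
# the halfway point and converts to the wrong double.
zeros = len(digits) - len(digits.lstrip("0"))
trunc = "0." + digits[: zeros + 17]
print(float(trunc) == lower)  # True
```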

Here is a summary of the digit counts we derived:

| Format | Max Integer Digits (Total/Significant) | Max Fraction Digits: Total | Leading Zeros | Significant |
|---|---|---|---|---|
| float | 39 | 149 | 37 | 112 |
| double | 309 | 1,074 | 307 | 767 |
| quad | 4,933 | 16,494 | 4,931 | 11,563 |

It is the maximum number of digits in a fraction that determines the maximum number of digits for a given IEEE format.

Notice the near symmetry between the number of integer digits and the number of fractional leading zeros. (If the absolute values of the minimum and maximum exponents of each format were equal, and if we listed the starting place of the significant digits instead of the count of leading zeros, it’d be symmetric.) For the fractions, we’ve put the biggest significand at the lowest place.

^{1} I’ll use the definition that *significant digits* are all the digits following the leading zeros; don’t think of them as digits of precision.

One digit per bit? We know that’s not true for binary integers. But it *is* true for binary fractions; every binary fraction of length *n* has a corresponding equivalent decimal fraction of length *n*.

This is the reason why you get all those “extra” digits when you print the full decimal value of an IEEE binary floating-point fraction, and why glibc strtod() and Visual C++ strtod() were once broken.

(In the text that follows, I will usually refer to decimal digits as just *digits* and binary digits as just *bits* — but where context allows, I will use *digits* to refer to both generically.)

I will prove, in two steps, that every binary fraction has a corresponding decimal fraction of the same length:

- Show that every binary fraction can be written as a/2^{n}, where *a* is a positive integer < 2^{n}.
- Show that every fraction of the form a/2^{n} can be written as a fraction of the form b/10^{n}, where *b* is a positive integer < 10^{n}.

Any fraction with a power of ten denominator represents a decimal fraction, with a number of digits equal to the exponent of the power of ten.

A binary fraction is a finite sum of negative powers of two. For example, 0.00111011011 =

0·2^{-1} + 0·2^{-2} + 1·2^{-3} + 1·2^{-4} + 1·2^{-5} + 0·2^{-6} + 1·2^{-7} + 1·2^{-8} + 0·2^{-9} + 1·2^{-10} + 1·2^{-11}

= 2^{-3} + 2^{-4} + 2^{-5} + 2^{-7} + 2^{-8} + 2^{-10} + 2^{-11}

= 1/2^{3} + 1/2^{4} + 1/2^{5} + 1/2^{7} + 1/2^{8} + 1/2^{10} + 1/2^{11}.

If a binary fraction has *n* bits, its lowest weighted bit has a place value of 2^{-n}, or 1/2^{n}. For our example, that’s 2^{-11}, or 1/2^{11}.

The denominator of each term in the series divides 2^{n}. (The divisors of a nonnegative power of two are all the nonnegative powers of two less than or equal to it.) As such, each term can be rewritten as a fraction with denominator 2^{n}, with a corresponding power of two numerator. All terms can then be combined into a single value a/2^{n}. For our example, we’d have

2^{8}/2^{11} + 2^{7}/2^{11} + 2^{6}/2^{11} + 2^{4}/2^{11} + 2^{3}/2^{11} + 2^{1}/2^{11} + 2^{0}/2^{11}

= (2^{8} + 2^{7} + 2^{6} + 2^{4} + 2^{3} + 2^{1} + 2^{0})/2^{11}

= (256 + 128 + 64 + 16 + 8 + 2 + 1)/2^{11}

= 475/2^{11}.

(A way to view what we’ve shown: every binary fraction is a multiple of the smallest negative power of two contained within it.)

The above proof suggests a simpler way to convert a binary fraction to the form a/2^{n}. First, treat the digits of the binary fraction as a binary integer, and convert that integer to decimal (ignore superfluous leading zeros). Next, divide that number by 2^{n}, where *n* is the number of bits in the binary fraction. For our example, we have 111011011 = 475 as the numerator, and 2^{11} as the denominator: 475/2^{11}.

The second step — rewriting a/2^{n} as b/10^{n} — is trivial: simply multiply a/2^{n} by 5^{n}/5^{n}, getting (a·5^{n})/(2^{n}·5^{n}) = (a·5^{n})/10^{n} = b/10^{n}, where *b* = a·5^{n}. It has *n* digits.

For our example, we have (475·5^{11})/10^{11} = 23193359375/10^{11} = 0.23193359375.

We’ve shown that every binary fraction is a decimal fraction with the same number of digits.
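The whole two-step conversion fits in a few lines of Python (a sketch of the procedure above, not production code):

```python
# Convert a binary fraction (the digits after the point) to its exact
# decimal fraction; both have the same number of digits.
def bin_fraction_to_decimal(bits):
    n = len(bits)
    a = int(bits, 2)             # step 1: a in a/2^n
    b = a * 5**n                 # step 2: a/2^n = (a*5^n)/10^n = b/10^n
    return "0." + str(b).zfill(n)

print(bin_fraction_to_decimal("00111011011"))  # 0.23193359375
print(bin_fraction_to_decimal("11"))           # 0.75
```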

Having understood the above proof, you can reason towards the answer with this shortcut: Think of each place of the *n*-bit binary fraction as having the place value 2^{-m} = 1/2^{m} = (1/2)^{m} = (5/10)^{m} = 5^{m}/10^{m}. This makes it obvious from the outset that the binary fraction is equivalent to an *n*-digit decimal.

In general, a decimal fraction has more significant digits than its corresponding binary fraction has significant bits. ^{1} (A decimal fraction can have an equal number of significant digits, but it can never have fewer.) This may seem counterintuitive at first, since the opposite is true for integers. The difference in the number of significant digits and significant bits depends on three properties of the binary fraction:

**The starting place of the significant bits.**For any given combination of length and value of significant bits, the lower the starting place — that is, the more leading zeros, or smaller the number — the more significant digits, in general. For example, 0.000011 = 0.046875 (2 bits to 5 digits), 0.0000011 = 0.0234375 (2 bits to 6 digits), and 0.00000011 = 0.01171875 (2 bits to 7 digits).

Lowering the starting place won’t always increase the number of significant digits — it may stay the same. For example, continuing the sequence above, 0.000000011 = 0.005859375 (2 bits to 7 digits). The length of the fractions increased by one, but the number of significant digits remained the same; a leading decimal zero accumulated instead.

As you incrementally lower the starting place, the number of significant digits can increase two or three times in a row, after which a leading decimal zero must be added. From my observations the pattern is: significant digits increase twice in a row, then a leading zero is added; significant digits increase twice in a row, then a leading zero is added; significant digits increase *three times* in a row, then a leading zero is added. This pattern then repeats. This reflects a ratio of about 7 significant digits to 3 leading zeros, which is expected (this is the same as saying you get about 3 leading decimal zeros for every 10 leading binary zeros — see “Where log_{10}(2) Fits In” below).

If the binary fraction has no leading zeros, then the decimal fraction won’t either; both will have the same number of significant digits no matter how you vary length or value.

To emphasize the effect that starting place has on the number of significant digits, consider this example, a very small single-bit number, 2^{-100}:

0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001

It has 70 significant digits (and 30 leading zeros):

0.0000000000000000000000000000007888609052210118054117285652827862296732064351090230047702789306640625.

**The number of significant bits it has.**For any given starting place, more significant bits equates to more significant digits. For example, 0.0001 = 0.0625 (1 bit to 3 digits), 0.00011 = 0.09375 (2 bits to 4 digits), and 0.000111 = 0.109375 (3 bits to 6 digits). You almost always get just one significant digit per bit, but sometimes you get two (a leading decimal zero is replaced). From my observations, the latter only happens for some starting places, only once per starting place, and only after just a few significant bits.

**The value of the significant bits.**For any given combination of starting place and number of significant bits, a greater value can lead to more significant digits. For example, 0.000000101 = 0.009765625 (3 bits to 7 digits), and 0.000000111 = 0.013671875 (3 bits to 8 digits). The effect is minimal though, since it can only add up to one significant digit. From my observations, this happens about 30% of the time, when you change the value from its minimum (only its most and least significant bits set) to its maximum (all its bits set).

You can count the number of significant digits indirectly by counting the number of leading zeros and then subtracting that from the length of the binary fraction (decimal fraction). The number of leading zeros is easily deduced from the starting place of the significant digits, which is determined using a logarithm.

The starting place of the significant digits of a number *x* is determined by taking the *floor* of the logarithm of *x* to the base *i* of the number: ⌊log_{i}(x)⌋. ^{2} This value is the exponent of the power of *i* of the place containing the most significant digit. (The logarithm is negative, so remember that *floor* will round it towards negative infinity.) If you negate that value, you get the number of the place (1, 2, 3, etc.) at which the significant digits start, *p*_{i} = -⌊log_{i}(x)⌋. For a binary fraction *b*, **p_{b} = -⌊log_{2}(x)⌋**; for a decimal fraction *d*, **p_{d} = -⌊log_{10}(x)⌋**.

The number of leading zeros z_{i} is simply the starting place of the significant digits minus one. The number of leading binary zeros is **z_{b} = p_{b} – 1**; the number of leading decimal zeros is **z_{d} = p_{d} – 1**.

Knowing the starting place of the significant digits — and hence the number of leading zeros — and the length *n* of the binary fraction (decimal fraction), you can compute the number of significant digits s_{i} = *n* – z_{i}. The number of significant bits is **s_{b} = n – z_{b}**; the number of significant digits is **s_{d} = n – z_{d}**.

For example, for *b* = 0.00111011011, p_{b} = 3, z_{b} = 2, and s_{b} = 9; for *d* = 0.23193359375, p_{d} = 1, z_{d} = 0, and s_{d} = 11.
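These formulas translate directly to Python (my sketch; double-precision *log* is accurate enough for these examples, though care is needed when *x* is adjacent to an exact power of the base):

```python
import math

# Starting place, leading zeros, and significant digit count of a
# length-n fraction x in the given base.
def fraction_stats(x, n, base):
    p = -math.floor(math.log(x, base))  # place of first significant digit
    z = p - 1                           # leading zeros
    return p, z, n - z                  # (p_i, z_i, s_i)

x = 475 / 2**11           # 0.00111011011 in binary, 0.23193359375 in decimal
print(fraction_stats(x, 11, 2))   # (3, 2, 9)
print(fraction_stats(x, 11, 10))  # (1, 0, 11)
```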

For integers, just like for fractions, the logarithm tells you the starting place of the significant digits. But for integers, it tells you more: it also says *how many* significant digits there are. Furthermore, the ratio of significant digits in one base relative to another is predictable, approaching a constant as the integers get large. For example, the ratio of significant digits to significant bits converges to log_{10}(2) ≈ 0.3.

For fractional values, there is no such relationship. The logarithm does not help you count significant digits. There is no significant digit ratio in any case — it can vary from one to essentially infinite. But from the logarithm you can count *leading zeros*, and from that you can determine the ratio of leading decimal zeros to leading binary zeros, z_{d}/z_{b}, as the numbers get small: log_{10}(2) ≈ 0.3.

All binary fractions are decimal fractions, but the reverse is not true. Some decimal fractions *do* convert to binary fractions (when put in lowest terms as a fraction their denominator becomes a power of two), so in this case, the above analysis applies.

However, most decimal fractions *do not* convert to binary fractions, which means that their binary representations are infinite (repeating bicimals as I call them ^{4}). The only part of the above analysis that applies is the computation of the starting place of the significant digits, and hence the number of leading zeros.

^{1} *Significant digits* are the digits following the leading zeros of the fraction — don’t think of them as digits of precision.

^{2} I use ⌊log_{i}(x)⌋ and not ⌈log_{i}(x)⌉ + 1 because the calculation using *ceiling* fails when *i* = *2* and *x* is a power of two or *i* = *10* and *x* is a power of ten.

^{3} I have not specified how to compute ⌊log_{i}(x)⌋ programmatically, but imagine that *x* is in an internal binary representation.

^{4} A binary fraction can be called a *terminating bicimal*, just as a decimal fraction can be called a *terminating decimal*. In this article, I chose to go with the standard terminology, which I think of as inferior in most contexts.

Most of my traffic is desktop — presumably because my readers are computer programmers and engineers — so there wasn’t an urgent need to upgrade. On the other hand, improved readability and improved google search ranking may increase my mobile traffic. Whether mobile traffic increases or not, I am happy with the more modern design.

Mobile-friendly means the site will also respond fluidly when resized in a desktop browser window. This is useful, for example, when you want to keep one of my converters or calculators up on part of your screen.

There are still some tweaks to make, mainly to fix minor formatting issues, but the experience should be good. (Landscape mode will generally give the best results.) If you encounter any problems, please let me know.

On the other hand, an arbitrary, arbitrarily long decimal literal rounded or truncated to 17 digits *may not* convert to the double-precision value it’s supposed to. This is a subtle point, one that has even tripped up implementers of widely used decimal to floating-point conversion routines (glibc strtod() and Visual C++ strtod(), for example).

The job of a decimal to floating-point conversion routine is to pick the floating-point number that is closest to the decimal input. The hardest inputs to convert correctly are those that are halfway or near halfway between consecutive floating-point numbers. These are the very inputs for which more than 17 digits — for doubles, up to 768 of them! — may need to be parsed. The extra digits, despite how little they contribute, can put you just at or beyond the halfway point.

Here’s the interesting thing: we know every double has a 17-digit representative that maps to it (actually there are multiple, but for our purposes we’ll consider only one, the nearest), but it may take more than 17 digits of a decimal input to figure out which 17-digit representative to choose! But once we’ve processed our long decimal input, we could replace it with its simpler, 17-digit representative. We’ve still got the correct double, but now our “handle” to it is unambiguously close. It’s like we are using a more precise number to decide which less precise number to pick, although that extra precision is just an illusion.
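Python (whose decimal-to-float conversion is correctly rounded) can demonstrate this with the 21-digit near-halfway input from Paxson's paper:

```python
# All 21 digits are needed to land on the right double...
x = float("3.08984926168550152811e-32")

# ...but that double's own 17-digit representative round-trips to it.
print(float(format(x, ".17g")) == x)         # True

# Rounding the *input* to 17 digits is not the same thing:
print(float("3.0898492616855015e-32") == x)  # False: one binary ULP low
```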

3.08984926168550152811e-32, an example near-halfway case from Vern Paxson’s paper “A Program for Testing IEEE Decimal–Binary Conversion”, is a 21-digit number that converts to 0x1.40de48676653bp-105. Rounded (same as truncated in this case) to 17 digits it is 3.0898492616855015e-32. However, that converts to 0x1.40de48676653ap-105, which is one binary ULP below the correct answer. Even rounded (truncated) to 20 digits — 3.0898492616855015281e-32 — we still come up one ULP short. So we need all 21 digits.

The problem is that this input converts to a number very nearly halfway between two double-precision numbers. (It is slightly *above* halfway.) Here are its first 130 bits in binary to illustrate this (bits 54 through 130 are the ones beyond double precision):

1.010000001101111001001000011001110110011001010011101010000000000000000000000000000000000000000000000000000000000000000000000000001…p-105

Being so close to halfway, those extra decimal digits are needed to decide which way to go. Ignore those digits, and the conversion will land below the halfway point.

The full decimal value of the desired floating-point number is

3.08984926168550180180110631344083416478369964008307296403646973245891422971698357636226306421889375997125171124935150146484375e-32

Rounded to 17 digits, it’s 3.0898492616855018e-32; that converts to the desired floating-point number.

1.00000000000000033306690738754696212708950042724609375 is an example I concocted using my decimal/binary converter; it is halfway between consecutive doubles:

1.00000000000000000000000000000000000000000000000000011

It is 54 digits, and all 54 are needed to decide that the correct conversion is 0x1.0000000000002p0 (round half to even) and not 0x1.0000000000001p0. Rounding (truncating) to 17 digits — 1.0000000000000003 — does not work.

The full decimal value of the desired floating-point number is

1.000000000000000444089209850062616169452667236328125

Rounded to 17 digits, it’s 1.0000000000000004, which converts to the desired floating-point number.
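A Python check of this example (again leaning on CPython's correctly rounded conversion; *float.fromhex* parses the hexadecimal floating-point constants):

```python
# The 54-digit input is exactly halfway between two doubles; round half
# to even picks the one with the even significand.
s = "1.00000000000000033306690738754696212708950042724609375"
print(float(s) == float.fromhex("0x1.0000000000002p0"))  # True

# Truncated to 17 digits, it falls below halfway and converts one ULP low.
print(float("1.0000000000000003") ==
      float.fromhex("0x1.0000000000001p0"))              # True
```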

8.36168422905420598437e-214, another 21-digit example from Vern Paxson’s paper, is a little less than halfway between two doubles. Here it is in binary:

1.00100000010000000011101001100010100010101001110010100111111111111111111111111111111111111111111111111111111111111111111111111110…p-708

It converts to 0x1.20403a628a9cap-708. But rounded to 17 digits — 8.361684229054206e-214 — it converts up to 0x1.20403a628a9cbp-708. (If you truncated instead of rounded, you would get the correct answer.) Again, assuming you are rounding and not truncating, you need to round it to 18 digits to convert it correctly: 8.36168422905420598e-214.

The full decimal value of the desired floating-point number is

8.36168422905420515990295749156693…(509 digits not shown)…0625e-214

Rounded to 17 digits, it’s 8.3616842290542052e-214.

In case you were thinking I could come up with only contrived examples, take a look at sqrt(2) ≈ 1.414213562373095048801688… . In binary, it’s

1.01101010000010011110011001100111111100111011110011001001… ,

which you can see is somewhat close to halfway. If you test it, you will find that you need 18 digits (rounded to 1.41421356237309505 or truncated to 1.41421356237309504) to get the correct conversion; 1.414213562373095 (rounded/truncated to 17 digits) does not work.

The full decimal value of the desired floating-point number is

1.4142135623730951454746218587388284504413604736328125

Rounded to 17 digits, it’s 1.4142135623730951.

As another realistic example, consider pi/3 ≈ 1.047197551196597746154214… . In binary it’s

1.0000110000010101001000111000001011010111001101100101100001…

which is even closer to halfway than sqrt(2).

19 digits — 1.047197551196597746 — are required to convert it correctly; 1.0471975511965977 (rounded/truncated to 17 digits) does not work.

The full decimal value of the desired floating-point number is

1.047197551196597853362391106202267110347747802734375

Rounded to 17 digits, it’s 1.0471975511965979.
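Both realistic examples can be checked with nothing but string literals from the discussion above (relying once more on correctly rounded conversion in CPython):

```python
# sqrt(2): 18 digits work, 17 do not.
ok18 = float("1.41421356237309505")
print(ok18 == float("1.414213562373095"))  # False: 17 digits miss

# The full decimal value of the desired double converts to that double:
print(float("1.4142135623730951454746218587388284504413604736328125") == ok18)

# pi/3: 19 digits are required; 17 are not enough.
ok19 = float("1.047197551196597746")
print(ok19 == float("1.0471975511965977"))  # False
print(float("1.047197551196597853362391106202267110347747802734375") == ok19)
```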

The same issue applies to floats of course. Whereas you can always find a 9-digit decimal stand-in for a float, you may need more digits than that to convert it correctly.

When entering decimal literals into a computer program, you need to be aware that you may need more than 17 (9) digits to get the correct conversion. Once you know what a given decimal input converts to, it’s easy to find its 17 (9) digit stand-in. But until you convert it, you must assume you need all the digits you have — or even more if your value represents an infinite decimal. Unless you are willing to do an analysis like mine, you won’t know how many digits you need.

**(Please let me know if you know of any “underspecified” literals in real code.)**

Please check out the related article by “carolomeetsbarolo”: “Mathematical Constants in Program Code”.

For example, does an IEEE single-precision binary floating-point number, or *float* as it’s known, have 6-8 digits? 7-8 digits? 6-9 digits? 6 digits? 7 digits? 7.22 digits? 6-112 digits? (I’ve seen all those answers on the Web.)

Part of the reason for multiple answers is that there *is* no single answer; the question is not as well defined as it seems. On the other hand, if you understand what it really means to equate decimal floating-point precision with binary floating-point precision, then only some of those answers make sense. In this article, I will argue that there are only three reasonable answers: “6 digits”, “6-8 digits”, and “slightly more than 7 digits on average”.

(For double-precision binary floating-point numbers, or *doubles*, the three answers are “15 digits”, “15-16 digits”, and “slightly less than 16 digits on average”.)

A common answer is that floats have a precision of about 7.22 digits. While this may be true for integers, where the decimal and binary gaps align and are both of size one, it’s not true for floating-point numbers (the fact that it gets you in the ballpark notwithstanding). I can’t say it better than David Matula himself, as he did in his 1970 paper “A Formalization of Floating-Point Numeric Base Conversion”:

“Converting integer and fixed-point data to an “equivalent” differently based number system is generally achieved by utilizing essentially log_{δ}Β times as many digits in the new base δ as were present for representing numbers in the old base Β system. This simplified notion of equivalence does not extend to the conversion of floating-point systems. Actually, conversion between floating-point number systems introduces subtle difficulties peculiar to the structure of these systems so that no such convenient formula for equating the “numbers of significant digits” is even meaningful.”

Bruce Dawson has an excellent article on floating-point precision. Here is his main definition of precision, and the one I will adopt:

“For most of our purposes when we say that a format has n-digit precision we mean that over some range, typically [10^k, 10^(k+1)), where k is an integer, all n-digit numbers can be uniquely identified.”

So d-digit precision (d a positive integer) means that if we take all d-digit decimal numbers over a specified range, convert them to b-bit floating-point, and then convert them back to decimal — rounding to nearest to d digits — we will recover all of the original d-digit decimal numbers. In other words, all d-digit numbers in the range will round-trip.
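This definition can be tested mechanically. Here is a Python sketch of such a round-trip check (my construction, not Bruce's code; it squeezes each value through single precision with *struct*). Over [1.0, 1.1) every 7-digit decimal survives, but at 8 digits the decimals outnumber the floats, so some must collide:

```python
import struct

def to_float32(x):
    # Round a double to the nearest single-precision value and widen back.
    return struct.unpack("f", struct.pack("f", x))[0]

def round_trips(d, start, count):
    # Check `count` consecutive d-digit decimals in [1, 10), beginning
    # at start/10^(d-1), for a float32 round trip.
    for a in range(start, start + count):
        s = str(a)[0] + "." + str(a)[1:]     # e.g. 1000000 -> "1.000000"
        y = to_float32(float(s))
        if float(format(y, f".{d}g")) != float(s):
            return False
    return True

print(round_trips(7, 10**6, 10**5))  # 7 digits over [1.0, 1.1): True
print(round_trips(8, 10**7, 10**5))  # 8 digits over [1.0, 1.01): False
```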

It is the choice of range which leads to the multiple answers. Powers of ten and two interleave to create segments with different relative gap sizes, and it is relative gap size that determines how many decimal digits will round-trip.

So the answer for precision depends on what you are looking for. Do you want to know the precision for one power of ten exponent? For a power of two exponent? For a power of two exponent that crosses a power of ten exponent? For the whole floating-point format?

These are the main differences between my work and Bruce’s:

- I primarily do an analytical analysis based on relative gap sizes instead of running code to check round-trips. (I have run code tests in the past too, but they capture “coincidental” precision, as I’ll explain below.)
- I include analysis for 8-digit precision, not just 7 and 6 digit precision.
- I talk solely about decimal precision of binary numbers so as not to confound precision with floating-point to decimal to floating-point round-trip theory.
- I do a more granular analysis instead of just assigning the maximum guaranteed precision per power of ten exponent.
- I don’t analyze subnormal numbers, where the precision can go down to as low as 0 digits.

I wrote a PARI/GP script to identify all the different segments with different relative gap sizes over the entire single-precision format. For each segment, I calculated its precision — a single number, 6, 7, or 8. Here is the data condensed by power of ten exponent, which results in a range of precisions for most:

Power | Precision | Power | Precision | Power | Precision
---|---|---|---|---|---
10^{-38} | 6-7 | 10^{-12} | 7 | 10^{14} | 7-8
10^{-37} | 7 | 10^{-11} | 7-8 | 10^{15} | 6-8
10^{-36} | 7-8 | 10^{-10} | 6-8 | 10^{16} | 7
10^{-35} | 6-8 | 10^{-9} | 7 | 10^{17} | 7-8
10^{-34} | 7 | 10^{-8} | 7-8 | 10^{18} | 6-8
10^{-33} | 7-8 | 10^{-7} | 6-8 | 10^{19} | 7
10^{-32} | 6-8 | 10^{-6} | 7 | 10^{20} | 7-8
10^{-31} | 7 | 10^{-5} | 7-8 | 10^{21} | 6-8
10^{-30} | 7-8 | 10^{-4} | 6-8 | 10^{22} | 7
10^{-29} | 7-8 | 10^{-3} | 7 | 10^{23} | 7-8
10^{-28} | 7-8 | 10^{-2} | 7-8 | 10^{24} | 6-8
10^{-27} | 7-8 | 10^{-1} | 7-8 | 10^{25} | 7
10^{-26} | 7-8 | 10^{0} | 7 | 10^{26} | 7-8
10^{-25} | 7-8 | 10^{1} | 7-8 | 10^{27} | 6-8
10^{-24} | 7-8 | 10^{2} | 7-8 | 10^{28} | 7
10^{-23} | 7-8 | 10^{3} | 7-8 | 10^{29} | 7-8
10^{-22} | 6-8 | 10^{4} | 7-8 | 10^{30} | 7-8
10^{-21} | 7 | 10^{5} | 7-8 | 10^{31} | 7-8
10^{-20} | 7-8 | 10^{6} | 7-8 | 10^{32} | 7-8
10^{-19} | 6-8 | 10^{7} | 7-8 | 10^{33} | 7-8
10^{-18} | 7 | 10^{8} | 7-8 | 10^{34} | 7-8
10^{-17} | 7-8 | 10^{9} | 6-8 | 10^{35} | 7-8
10^{-16} | 6-8 | 10^{10} | 7 | 10^{36} | 7-8
10^{-15} | 7 | 10^{11} | 7-8 | 10^{37} | 6-8
10^{-14} | 7-8 | 10^{12} | 6-8 | 10^{38} | 7
10^{-13} | 6-8 | 10^{13} | 7 | |

There are 77 powers of ten, although, being at the extremes, 10^{-38} and 10^{38} are covered only partially.

When the precision shows a range, like 6-8, it means one segment has 6 digits, another has 7 digits, and another has 8 digits. The segments always appear in decreasing order of precision: each power-of-ten range starts with its highest-precision segment and ends with its lowest (as the numbers increase).

A few observations from the table:

- There are 19 powers of ten with a constant precision (and it’s always 7 digits).
- There are 18 powers of ten for which precision dips to 6 digits. (Precision as low as 6 digits may surprise you, especially if you have bought into the log_{10}(2^{24} = 16,777,216) ≈ 7.22 argument.)
- There are three long runs where the precision is 7-8 digits: 10^{-30} through 10^{-23}, 10^{1} through 10^{8}, and 10^{29} through 10^{36}.

A power of ten itself can get one less digit of precision than advertised, if it converts to a floating-point number less than itself: rounding that float back to the advertised number of digits recovers a number just below the power of ten, not the power of ten. For example, 1e-4 has 7 digits of precision, not 8; it converts to 9.99999974737875163555145263671875e-5.

Let’s look at the precision of decimal floating-point numbers with decimal exponent -4, the range [10^{-4}, 10^{-3}):

Between 10^{-4} and 10^{-3} there are five segments: [10^{-4}, 2^{-13}), [2^{-13}, 2^{-12}), [2^{-12}, 2^{-11}), [2^{-11}, 2^{-10}), and [2^{-10}, 10^{-3}); they have decimal precision of 8, 7, 7, 7, and 6 digits, respectively.
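Those per-segment precisions follow from gap sizes alone. Here is a Java sketch of the calculation (a restatement of the analysis; the data above came from a PARI/GP script): for a binade [2^e, 2^(e+1)) overlapping [10^k, 10^(k+1)), find the largest d such that the d-digit decimal spacing 10^(k-d+1) exceeds the float gap 2^(e-23).

```java
// Derive segment precision from relative gap sizes (a sketch of the
// analytical method). In [10^k, 10^(k+1)), d-digit decimals are spaced
// 10^(k-d+1) apart; in [2^e, 2^(e+1)), floats are spaced 2^(e-23) apart.
// All d-digit decimals round-trip when the decimal spacing exceeds the
// float spacing, so precision is the largest such d.
public class SegmentPrecision {
    static int precision(int k, int e) {
        double ulp = Math.pow(2, e - 23);
        int d = 1;
        // Try d+1 digits: their spacing is 10^(k-(d+1)+1) = 10^(k-d).
        while (Math.pow(10, k - d) > ulp) d++;
        return d;
    }

    public static void main(String[] args) {
        // The five segments of [10^-4, 10^-3) described above:
        System.out.println(precision(-4, -14)); // [10^-4, 2^-13):  8
        System.out.println(precision(-4, -13)); // [2^-13, 2^-12):  7
        System.out.println(precision(-4, -12)); // [2^-12, 2^-11):  7
        System.out.println(precision(-4, -11)); // [2^-11, 2^-10):  7
        System.out.println(precision(-4, -10)); // [2^-10, 10^-3):  6
    }
}
```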

Let’s look at examples of numbers represented to each of the three different levels of precision:

- 1.21e-4 converts to the single-precision floating-point value 1.209999973070807754993438720703125e-4, which has 8 digits of precision: rounded to 8 digits it’s 1.21e-4, but rounded to 9 digits it’s 1.20999997e-4.
- 1.23e-4 converts to 1.2300000526010990142822265625e-4, which has 7 digits of precision: rounded to 7 digits it’s 1.23e-4, but rounded to 8 digits it’s 1.2300001e-4.
- 9.86e-4 converts to 9.860000573098659515380859375e-4, which has 6 digits of precision: rounded to 6 digits it’s 9.86e-4, but rounded to 7 digits it’s 9.860001e-4.

(When using my decimal to floating-point converter to compute these values, check the boxes ‘Single’ and ‘Normalized decimal scientific notation’.)

You can’t get less precision than each segment supports, but you can get what looks like more.

In any given segment, you can find examples of numbers that convert to a higher precision than supported by that segment:

- 1.013e-4 converts to 1.01300000096671283245086669921875e-4, which appears to have 9 digits of precision; but it’s in an 8-digit segment.
- 1.24e-4 converts to 1.23999998322688043117523193359375e-4, which appears to have 8 digits of precision; but it’s in a 7-digit segment.
- 9.8e-4 converts to 9.80000011622905731201171875e-4, which appears to have 7 digits of precision; but it’s in a 6-digit segment.
- 2.350988561514728583455765982071533026645717985517980855365926236850006129930346077117064851336181163787841796875e-38, an exactly representable number, converts to itself, so it looks like it has 112 digits of precision!

This is not precision, at least as we have defined it; precision is a property of a *range* of n-digit numbers, not a specific n-digit number. I'll call the above **coincidental precision**.

Let’s look at 1.013e-4 and its six nearest 9-digit neighbors:

1.01299997e-4

1.01299998e-4

1.01299999e-4

1.013e-4

1.01300001e-4

1.01300002e-4

1.01300003e-4

All seven of those numbers map to the same float, which in turn maps back (to 9 digits) to 1.013e-4. There’s not enough precision to represent all 9-digit numbers.

Broadly we can say that precision ranges from 6 to 8 digits — but can we say what the *average* precision is?

I computed a simple average from the 329 segments between FLT_MIN and FLT_MAX. There are 254 powers of two in this range, most of which have a single precision value; those that cross a power of ten have two values. There are 77 powers of ten, 75 of which cross powers of two (10^{-38} is less than FLT_MIN, and 10^{0} = 2^{0}). 254 + 75 = 329.

The denominator for my average was 254, the number of powers of two. For those powers of two with a single precision, I assigned a weight of 1. For those powers of two split by powers of ten, I assigned a fractional weight, proportional to where the split occurs.

For example, 2^{115} has 7 digits of precision, and 2^{116} has 7 digits for about 20% of its length (before 10^{35}) and 8 digits for the remaining 80% of its length (after 10^{35}). The average across just those two powers of two would be (7*1 + 7*0.2 + 8*0.8)/2 = 7.4.
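That 2^{116} split can be checked with a couple of lines (a sketch of the arithmetic, using a linear position within the binade):

```java
// The fraction of the binade [2^116, 2^117) lying below 10^35, measured
// linearly (a sketch of the weighting described above).
public class SplitWeight {
    static double splitFraction() {
        double lo = Math.pow(2, 116);
        return (Math.pow(10, 35) - lo) / lo; // binade width is also 2^116
    }

    public static void main(String[] args) {
        double split = splitFraction();
        double avg = (7 * 1 + 7 * split + 8 * (1 - split)) / 2;
        System.out.printf("split = %.3f, average = %.2f%n", split, avg);
        // split is about 0.204 (the "about 20%" above); avg is about 7.4
    }
}
```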

With that methodology, I came up with an average decimal precision for single-precision floating-point: **7.09 digits**. 89.27% of the range has 7 digits, 10.1% has 8 digits, and 0.63% has 6 digits.

It’s hard to say what that average would mean in practice, since you will likely be using numbers in a specific range and with a particular distribution. But it does tell you that you are likely to get 7 digits.

(I kind of “linearized” a logarithmic concept, but since I’m talking about integer digit counts, it feels OK.)

So after that analysis, what is the bottom line?

If you care about the minimum precision you can get from a float, or equivalently, the maximum number of digits *guaranteed* to round-trip through a float, then 6 digits is your answer. (It’s unfortunate we have to be that conservative; only a small percentage of the float range is limited to 6 digits.) If you want to know the *range* of precision, or equivalently, the range of the number of digits that can round-trip through a float (excluding “coincidental” conversions), then 6-8 digits is your answer. If you want to know how much precision you’ll get on average, then your answer is slightly more than 7 digits.

If you could give only one answer, the **safe answer is 6 digits**. That way there will be no surprises (print precision notwithstanding).

By the same argument above, the precision of a double is not log_{10}(2^{53}) ≈ 15.95.

Doing the same analysis for doubles I computed an average decimal precision of **15.82 digits**. 82.17% of the range has 16 digits and 17.83% has 15 digits.

The three reasonable answers are 15 digits, 15-16 digits, and slightly less than 16 digits on average. The **safe answer is 15 digits**.

Some will say 9 is the upper bound of precision for a float, and likewise, 17 digits is the upper bound for a double (for example, see the Wikipedia articles on single-precision and double-precision). Those numbers come from the theory of round-tripping, from conversions in the opposite direction: floating-point to decimal to floating-point. But you’re not getting that much decimal precision in any range of those IEEE formats. You can determine that based on analyzing gaps, as I did above with my PARI script, or by running some code, as I did with a C program.

For floats, you can iterate through all 9-digit decimal numbers and see if they round-trip. I found that about 97% of 9-digit decimals failed to round-trip. Every float had multiple decimals mapping to it. The minimum count I found was 6; for example, 1.00000004e6 through 1.00000009e6. The maximum count I found was 119, occurring from 9.90353153e27 through 9.90353271e27. (This matches theory, as produced by my PARI/GP script.)
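The minimum-count example is easy to reproduce, here sketched in Java rather than the C program mentioned above:

```java
// Six consecutive 9-digit decimals collapsing onto one float (a sketch
// reproducing the minimum-count example in Java, not C).
public class NineDigitCollision {
    static int collisionCount() {
        // 9-digit decimals near 1e6, written as i * 10^-2:
        float first = Float.parseFloat("100000004e-2"); // 1.00000004e6
        int count = 0;
        for (int i = 100000004; Float.parseFloat(i + "e-2") == first; i++) {
            count++; // 1.00000004e6, 1.00000005e6, ... map to the same float
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(collisionCount()); // 6
        // The decimal just below the run maps to a different float:
        System.out.println(Float.parseFloat("100000003e-2")
                           == Float.parseFloat("100000004e-2")); // false
    }
}
```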

Printing to 9 (or 17) digits is just a way to recover a floating-point number; any one of multiple 9 (or 17) digit numbers will serve that purpose (it doesn’t have to be the closest).

In my article “The Shortest Decimal String That Round-Trips May Not Be The Nearest” I describe some shortest string conversions that are tricky to get right. I tried some in Java and guess what? It got them wrong.

After some research I realized the problem is not limited to the tricky power-of-two conversions described in my article; Java fails to print the shortest strings for many other numbers. This is apparently not a bug, because Java is working as designed.

I had once written this small program to test if Java printed shortest strings:

```java
public class ShortestTest1 {
    public static void main(String[] args) {
        double d = 0;
        for (int i = 1; i <= 10; i++) {
            d += 0.1;
            System.out.println(d);
        }
    }
}
```

It gave me the expected output:

```
0.1
0.2
0.30000000000000004
0.4
0.5
0.6
0.7
0.7999999999999999
0.8999999999999999
0.9999999999999999
```

The floating-point numbers represented by the long strings are printed that way because no shorter strings (e.g., 0.3, 0.8, 0.9, and 1.0) will round-trip. Here are the full-length decimal strings representing the double *d* after each iteration:

```
0.1000000000000000055511151231257827021181583404541015625
0.200000000000000011102230246251565404236316680908203125
0.3000000000000000444089209850062616169452667236328125
0.40000000000000002220446049250313080847263336181640625
0.5
0.59999999999999997779553950749686919152736663818359375
0.6999999999999999555910790149937383830547332763671875
0.79999999999999993338661852249060757458209991455078125
0.899999999999999911182158029987476766109466552734375
0.99999999999999988897769753748434595763683319091796875
```

You can verify that no shorter strings than the ones Java prints will round-trip by using my decimal to floating-point converter.

The values in the Java program are calculated values of 0.1, 0.2, …, 0.9, 1.0, not conversions of them. The conversions are

```
0.1000000000000000055511151231257827021181583404541015625
0.200000000000000011102230246251565404236316680908203125
0.299999999999999988897769753748434595763683319091796875
0.40000000000000002220446049250313080847263336181640625
0.5
0.59999999999999997779553950749686919152736663818359375
0.6999999999999999555910790149937383830547332763671875
0.8000000000000000444089209850062616169452667236328125
0.90000000000000002220446049250313080847263336181640625
1.0
```

The conversions all round-trip when shortened to one digit (because of round-tripping in the other direction, where all decimal numbers 15 digits or less round-trip through floating-point).
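The calculated-versus-converted distinction is easy to demonstrate (a minimal sketch):

```java
// Accumulated 0.1s versus a direct decimal conversion (a minimal sketch).
public class AccumulatedVsConverted {
    public static void main(String[] args) {
        double sum = 0.1 + 0.1 + 0.1; // accumulated: rounding error at each add
        double lit = 0.3;             // one correctly rounded conversion
        System.out.println(sum);      // 0.30000000000000004
        System.out.println(lit);      // 0.3
        System.out.println(sum == lit); // false: two different doubles
    }
}
```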

This program tests one of the problematic powers of two from my article, 2^{-44}:

```java
public class ShortestTest2 {
    public static void main(String[] args) {
        double d = 0.00000000000005684341886080801486968994140625; // 2^-44
        System.out.println(d);
    }
}
```

This prints the 17-digit value 5.6843418860808015E-14, not the expected shortest string: the 16-digit (and not nearest) value 5.684341886080802E-14.

(Other problematic powers of two fail as well.)

I had assumed Java printed shortest strings, and decided to look for a statement to that effect in the specification. The closest thing I found was this paragraph in the documentation for the toString() method of class Double (search within the page for the second occurrence of ‘toString(double d)’):

How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double.

This seems to say Java prints shortest strings. But then I found this old bug report (written in 2001, still open): “Double.toString(double) sometimes produces incorrect results”. (Sometimes that link doesn’t work; try the OpenJDK copy instead.)

In this bug report are:

- Other examples of non-shortest conversions.
- An explanation as to why the Java implementation gets it wrong.
- A link (which unfortunately is dead) to code that would fix the problem.
- Examples of other printing anomalies, like not choosing the nearer of two numbers that round-trip, and printing a number to 18 digits (only up to 17 is ever necessary!).
- A response that explains that Java is working as designed.

It is a very good read.
