The post ChatGPT Will Never Write For Exploring Binary first appeared on Exploring Binary.
Mastering Decimal, Binary, & Two’s Complement Conversion
In the digital world, understanding numerical notations like decimal, binary, and two’s complement can provide a substantial advantage and presents opportunities for improved problem-solving. The journey starts with the basic foundation of decimal …
I did not click through the tracking link to read the rest of the article, but it looks like the generic fluff ChatGPT wrote when I asked it about Exploring Binary. Actually, it’s worse than fluff; I don’t see how knowing numerical notations helps with problem solving.
If you’ve read anything on this site you’d know immediately that I didn’t write that. Will AI ever be able to write an article indistinguishable from one of my own? I don’t think so.
The post ChatGPT Writes Decent Code For Binary to Decimal Conversion first appeared on Exploring Binary.
(The line that got cut off in the screenshot of the second solution is
decimal += (binaryArray[i] - '0') * Math.pow(2.0, (binaryArray.size - 1 - i).toDouble()).toInt()
)
My first observation is that it was smart enough to know that by base 10 I meant decimal, and even named the function binaryToDecimal() as I would have.
The first solution is more clever than I was expecting; it uses the built-in function parseInt() to do the conversion directly. It compiles and runs successfully.
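Here is a minimal sketch of that direct approach (my reconstruction, not ChatGPT’s verbatim output):

```kotlin
// Direct conversion using the built-in parser (a sketch; the exact code
// ChatGPT produced may differ)
fun binaryToDecimal(binary: String): Int =
    Integer.parseInt(binary, 2) // or binary.toInt(radix = 2) in idiomatic Kotlin

fun main() {
    println(binaryToDecimal("1101")) // 13
}
```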
The second solution is along the lines of what I was expecting, although it computes the nonnegative powers of two instead of the more efficient “nested” multiplication by two as I do in function bin2dec_i(). It compiles and runs successfully, although it did have one warning from the IDE about the Math.pow() function: “Should be replaced with Kotlin function. Replace with ‘pow’ function”. I replaced “Math.pow(2.0,…” with “2.0.pow(…” and it got rid of the warning.
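For comparison, the “nested” multiplication by two translates to something like this in Kotlin (a sketch of the idea behind my bin2dec_i(), not that function itself):

```kotlin
// Horner-style conversion: process bits left to right, doubling as we go,
// so no powers of two need to be computed explicitly
fun binaryToDecimalNested(binary: String): Int {
    var decimal = 0
    for (c in binary) {
        decimal = decimal * 2 + (c - '0')
    }
    return decimal
}
```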
I was hoping that it might have generated code to handle arbitrary length integers, like my converters and calculators. I guess I’ll have to be more specific.
(The line that got cut off in the screenshot of the second solution is
decimal = decimal.add(bit.multiply(BigInteger.valueOf(2).pow(binaryArray.size - 1 - i)))
)
It decided to use BigInteger, which is what I would have done.
The first solution compiles and runs successfully. It takes the “shortcut” of using BigInteger itself to convert integers to different bases directly.
The second solution does not compile. The compiler complains about valueOf(): “None of the following functions can be called with the arguments supplied.” But a simple fix — appending .toLong() to (binaryArray[i] - '0') — allowed it to compile and run successfully. It still has the same inefficiencies as the limited-precision solution above though.
Stylistically, it did not use operators for the BigInteger functions to make the code easier to read, like this:
decimal += bit * BigInteger.valueOf(2).pow(binaryArray.size - 1 - i)
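Putting the operator style together with the nested multiplication, an arbitrary-precision integer conversion might look like this (my sketch, assuming Java 9+ for BigInteger.TWO):

```kotlin
import java.math.BigInteger

// Arbitrary-precision Horner-style conversion; Kotlin's operator
// overloads (*, +) make the BigInteger arithmetic read naturally
fun binaryToDecimalBig(binary: String): BigInteger {
    var decimal = BigInteger.ZERO
    for (c in binary) {
        decimal = decimal * BigInteger.TWO + BigInteger.valueOf((c - '0').toLong())
    }
    return decimal
}
```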
When I set out with my query I was really hoping that it would generate code to handle any arbitrary length number, integer or floating-point; let’s add that to the request.
This solution does not compile. (And also has the same warning as above about Math.pow().) Even after fixing the small compiler error (changing “val decimal” to “var decimal”) the code works incorrectly. For the sample test code, it prints 1101.625. It incorrectly converts the integer part, treating it as decimal.
A simple additional fix that gets the sample test running correctly is to add import java.math.BigInteger and to replace
var decimal = BigDecimal(parts[0], MathContext.UNLIMITED)
with
var decimal = BigInteger(parts[0], 2).toBigDecimal()
which is a trick it used in another solution.
But the code still has an error, when the input has no integer part; for example, “.101”. This causes the runtime exception “Zero length BigInteger”. (With the original line that has BigDecimal, the runtime exception is “Index 0 out of bounds for length 0”.)
A simple fix for this is to replace the modified line with this further modified line:
var decimal = (if (parts[0].isEmpty()) BigInteger.ZERO else BigInteger(parts[0], 2)).toBigDecimal()
I tested the new version on this number:
11101011100010101001000110101111010101011111101111111101011010111011.1110101011010001001001101010101010101010101010101010101011111010101010001011101010101
which gave this correct output, as confirmed by my binary to decimal converter:
271560613247152281275.9172538916269938194405650664083933868757033715246596017323099658824503421783447265625
The final fix is to use the BigInteger pow() function. In keeping with the original style of ChatGPT’s solution, and making the minimum change possible, I replaced
fraction += BigDecimal(bit * Math.pow(0.5, i + 1.0))
with
fraction += bit.toBigDecimal() * BigDecimal.ONE.divide(BigInteger.TWO.pow(i + 1).toBigDecimal())
I tested this with fractional values longer than 1074 bits.
Finally, I was expecting it to use BigIntegers with scaling, not BigDecimal, which is overkill. And it computes the negative powers of two instead of the more efficient “nested” division by two as I do in function bin2dec_f().
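The nested division translates to something like this (a sketch of the idea behind bin2dec_f(), using BigDecimal for brevity even though, as noted, I’d use scaled BigIntegers):

```kotlin
import java.math.BigDecimal

// Process fraction bits right to left, halving at each step:
// 0.b1b2...bn = (b1 + (b2 + (... + bn/2 ...)/2)/2)/2
fun binaryFractionToDecimal(bits: String): BigDecimal {
    val half = BigDecimal("0.5")
    var fraction = BigDecimal.ZERO
    for (i in bits.length - 1 downTo 0) {
        fraction = (fraction + BigDecimal(bits[i] - '0')) * half
    }
    return fraction
}
```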
I’m not sure what to make of ChatGPT’s coding ability since I only asked it to code a short, well-known algorithm. But for this case I’d say its results are a great starting point, and using it is easier than searching the Web. On the other hand, searching could open your eyes to other solutions, and to the errors I pointed out.
It’s interesting that ChatGPT wrote decent Kotlin, given that it’s a relatively new language, and considering ChatGPT’s training data is only through 2021. I was also impressed that it printed the code in dark mode, which seems to be what developers prefer.
I wonder: if ChatGPT learned to code by reading others’ code, what happens when it learns from its own code? This sounds like one giant feedback loop converging to incorrect answers. (This is a bigger issue, applying to everything it learns, not just code.)
I published this article yesterday and asked ChatGPT about it today:
I’m really starting to question what it thinks it knows.
Except for a few individual lines, the code is in screenshots. Is it reading images?
A few things in its response that stick out:
The post Binary Numbers Haiku (by ChatGPT) first appeared on Exploring Binary.
Nice, but I was looking for the 5/7/5 syllable format (this is 6/8/5), so I asked for that specifically:
Oddly, that was worse in terms of format (5/9/5). I tried again with the original, shorter prompt, and then for some reason it was able to crank out four in a row of the desired format:
Pretty impressive! The first of the four is my favorite (plus it is number-centric, like I asked).
The post What ChatGPT Knows About Exploring Binary (Spoiler: Not Much) first appeared on Exploring Binary.
It can “write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR” but it’s never heard of my site.
(“Useful resource” is flattering, especially after first hearing it did not know about my site.)
A bit generic but not bad. I do cover the basics of binary numbers, but what makes this blog unique is the discussion of the intricacies of decimal to binary floating-point and binary floating-point to decimal conversions, including the related bugs I’ve found in commercial and open-source software.
Again, pretty generic, and not the specifics I was looking for. My converters and calculators are my most popular pages, followed by the articles Number of Bits in a Decimal Integer, Binary Multiplication, Binary Division, Ten Ways to Check if an Integer Is a Power Of Two in C, and Binary Subtraction. (The “intricacies” articles have a much smaller potential audience.)
(“Clear explanations”. Wait – does it know or is it just continuing to try to flatter me?)
Some of the links I see in my stats from US colleges and universities are from Drexel University, Emory University, New York University, Stanford University, Stony Brook University, University of Texas, University of Washington, Villanova University, and Virginia Tech.
“Bicimal” was a term I found in the bowels of the internet when I was looking years ago for a way to describe fractional binary numbers. I wrote a few articles using it, and the top search results on the term lead to this site.
Another generic answer, although I certainly agree with “make it fun”. I prefer my method though, using tape flags (analogous to base ten blocks) and describing another non-decimal base (base 5) before getting to base 2 (which in its simplicity almost hides the place value pattern).
That’s basically the same answer as for third graders. I found that a different method might be more suited to adults, approaching it by showing that any number can be written as a sum of powers of two (and not calling them “powers”).
“Why does 0.3 + 0.6 = 0.89999999999999991?” was a sentence in my article Why 0.1 Does Not Exist In Floating-Point. It got the answer right at a high level (and with some extra fluff). But I was looking for a more concise answer like “because most decimals have infinite representations in binary, and binary floating-point has finite precision”.
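The behavior is easy to reproduce in Kotlin, which inherits Java’s IEEE doubles (the exact digits printed depend on the formatting routine):

```kotlin
fun main() {
    // 0.3 and 0.6 have no exact binary representation; their doubles
    // sum to a value just below 0.9
    val sum = 0.3 + 0.6
    println(sum)        // a run of 9s, not 0.9
    println(sum == 0.9) // false
}
```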
Pretty close, but wrong. It’s actually the largest subnormal number, not the smallest normal number.
I was looking for PHP Hangs On Numeric Value 2.2250738585072011e-308, or even its offshoot, Java Hangs When Converting 2.2250738585072012e-308. (Those two articles went viral, inasmuch as these kinds of articles do.)
That one hurt. It knows about specific websites, just not mine. It said earlier that my site dealt with two’s complement (a generic guess?), yet it missed my Decimal/Two’s Complement Converter, which has been the top Google search result for many years. To boot, none of those sites listed has a two’s complement converter (as far as I can tell).
I really had no expectations for this one, especially at this point. Overly generic, and bullet 5 is wrong.
I would have bet it would have at least picked one of the many Rick Regans that aren’t me.
“Clear and concise”. That’s three times now; I’ll accept the compliments.
The post Jetpack Compose Byte Converter App: 2022 Version first appeared on Exploring Binary.
The impetus for updating the app was to change how I represented and handled state. In the original code, the observable state is a mutableState byte value and an array of mutableState, one element for each of the eight bits in a byte. I passed the array as a parameter of type Array<MutableState<Boolean>>. Google, however, recommends passing the State’s value and a lambda to set it instead.
Also, as it turns out, as conceptually clean as I found the bit-to-mutable-state mapping, it was overkill; one mutableState for the whole byte does not impact performance (of a tiny app like this one at least). And it lends itself to a better solution, one based on an immutable state object containing an array of bits (Booleans) and an integer value representing them.
All the code needed for the app, except for the imports, is listed in segments below. (If you copy and paste all the segments into an Android Studio Compose project, the IDE will suggest the required imports.) The essential structure and purpose of the composable functions is unchanged, so you can refer to the original article for details I don’t describe here.
I created an additional, app-level state object, acting kind of like a “view model”. It holds the newly-defined byte object (type UiByte) as mutableState, so recomposition is triggered whenever the mutableState is set to a new object. A new object is created with each bit flip, from a copy of the bit array, but with the changed bit.
When constructing UiByte, the byte’s value is calculated with the specified bit values. This has the vibe of Compose’s “declarative” philosophy, which feels more appropriate than mutating a separate state value with each bit flip. (I also made the calculation more efficient, although that was for aesthetics and not observed performance.)
/** Rick Regan, https://www.exploringbinary.com/ */

class AppState {
    var byte by mutableStateOf(UiByte())

    fun bitFlip(bitPosition: Int) {
        byte = byte.bitFlip(bitPosition)
    }
}

val appState = AppState()

class UiByte(
    private val bits: Array<Boolean> = Array(8) { false } // MSB is index 0 (bit 7); LSB is index 7 (bit 0)
) {
    val size = bits.size
    val value = bits.fold(0) { value, bit -> value * 2 + (if (bit) 1 else 0) } // Horner's method

    fun getBit(bitPosition: Int) = bits[bits.size - 1 - bitPosition]

    fun getBitPlaceValue(bitPosition: Int) = bitPlaceValues[bitPosition]

    private companion object {
        val bitPlaceValues = intArrayOf(1, 2, 4, 8, 16, 32, 64, 128)
    }

    fun bitFlip(
        bitPosition: Int
    ): UiByte {
        val newBits = bits.copyOf()
        val bitIndex = bits.size - 1 - bitPosition
        newBits[bitIndex] = !newBits[bitIndex]
        return UiByte(bits = newBits)
    }
}
App() is the new top-level composable function that I added to access the global app state and “launch” the app’s screen:
@Composable
fun App() {
    ByteToDecimal(
        appState.byte,
        appState::bitFlip
    )
}
ByteToDecimal() is the top composable of the app proper; I changed it to pass in the state and lambda, rather than access them globally.
@Composable
fun ByteToDecimal(
    byte: UiByte,
    bitFlip: (bitPosition: Int) -> Unit
) {
    Column(
        horizontalAlignment = Alignment.CenterHorizontally,
        modifier = Modifier.fillMaxWidth()
    ) {
        Byte(
            byte,
            bitFlip
        )
        Spacer(modifier = Modifier.padding(top = 20.dp))
        Decimal("${byte.value}")
    }
}
I changed Byte() to pass in parameter byte as type UiByte instead of type Array<MutableState<Boolean>> (the motivation for updating the app).
@Composable
fun Byte(
    byte: UiByte,
    bitFlip: (bitPosition: Int) -> Unit
) {
    Row {
        for (bitPosition in byte.size - 1 downTo 0) {
            Bit(
                bitPosition = bitPosition,
                bit = byte.getBit(bitPosition),
                bitPlaceValue = byte.getBitPlaceValue(bitPosition),
                bitFlip = bitFlip
            )
        }
    }
}
I made an incidental change to Bit(), passing in bitPlaceValue instead of computing it.
@Composable
fun Bit(
    bitPosition: Int,
    bit: Boolean,
    bitPlaceValue: Int,
    bitFlip: (bitPosition: Int) -> Unit
) {
    Column(
        horizontalAlignment = Alignment.CenterHorizontally
    ) {
        Text(
            text = bitPlaceValue.toString()
        )
        val colors = if (bit) {
            ButtonDefaults.buttonColors(
                backgroundColor = Color.White,
                contentColor = Color.Blue
            )
        } else {
            ButtonDefaults.buttonColors(
                backgroundColor = Color(240, 240, 240),
                contentColor = Color.Blue
            )
        }
        Button(
            onClick = { bitFlip(bitPosition) },
            modifier = Modifier
                .padding(2.dp)
                .size(45.dp),
            colors = colors,
            border = BorderStroke(1.dp, color = Color.Blue)
        ) {
            Text(
                text = if (bit) "1" else "0",
                modifier = if (!bit) Modifier.alpha(0.4f) else Modifier
            )
        }
        Text(
            text = bitPosition.toString(),
            color = Color.Gray
        )
    }
}
Decimal() is unchanged from the original code.
@Composable
fun Decimal(
    decimal: String
) {
    Text(
        text = decimal,
        Modifier
            .width(70.dp)
            .border(BorderStroke(1.dp, color = Color.Black))
            .padding(4.dp),
        textAlign = TextAlign.Center,
        color = Color(0, 180, 0),
        fontSize = 5.em
    )
}
I wrote an alternative implementation for UiByte, using Kotlin’s unsigned byte type (UByte). In this case, the bits and the value are one and the same; the value is the integer value of the UByte, and the bits within it are accessed with masks and bitwise operations. (It felt a little too low-level to use as the primary implementation, at least for the purposes of this article.)
class UiByte(
    val value: UByte = 0u
) {
    val size = 8

    fun getBit(bitPosition: Int) =
        (value and bitPlaceValues[bitPosition].toUByte()) != (0u.toUByte())

    fun getBitPlaceValue(bitPosition: Int) = bitPlaceValues[bitPosition]

    private companion object {
        val bitPlaceValues = intArrayOf(1, 2, 4, 8, 16, 32, 64, 128)
    }

    fun bitFlip(
        bitPosition: Int
    ): UiByte {
        val newValue = value xor bitPlaceValues[bitPosition].toUByte()
        return UiByte(value = newValue)
    }
}
The app worked perfectly fine as coded. And even though I wrote it against a pre-1.0 alpha release, it compiles and runs unchanged on the latest version of Compose/Material2 (Compose UI 1.3.0-alpha03, Compose compiler 1.3.0).
I updated the app because I did not want to pass a State type to my composable functions; I wanted to follow Google’s recommendations. In the process, I consolidated the app’s state into one immutable object.
As newly coded, the app still recomposes efficiently; only the changed bits are recomposed. This either reflects that I didn’t fully understand back then how recomposition worked, or Compose has since added optimizations (or both).
I have learned much more about how to code Compose apps, and how to code better in Kotlin itself. A lot of that can’t be demonstrated with this little demo app though (and getting into those things would be beyond the scope of this blog in any case).
The post Anomalies In IntelliJ Kotlin Floating-Point Literal Inspection first appeared on Exploring Binary.
For Doubles, for example, every literal over 17 digits should be flagged, since it never takes more than 17 digits to specify any double-precision binary floating-point value. Literals with 16 or 17 digits should be flagged only if there is a replacement that is shorter or closer. And no literal with 15 digits or fewer should ever be flagged, since doubles have 15 digits of precision.
But IntelliJ doesn’t always adhere to that, like when it suggests an 18-digit replacement for a 13-digit literal!
You can enable the inspection in IntelliJ IDEA by going to Preferences -> Editor -> Inspections -> Kotlin -> Other Problems and checking Floating-point literal exceeds the available precision. (The inspection was enabled by default for me.)
Kotlin uses Java’s floating-point to decimal conversion routine, the dtoa() method of Java’s FloatingDecimal class. It has a 20-year-old bug filed against it that is the source of the issue. I learned about this bug over six years ago, when writing “Java Doesn’t Print The Shortest Strings That Round-Trip”. When I recently encountered the Kotlin inspection I went back to that bug report and tried out some of its examples; here are eight literals that the inspection flagged, along with their suggested replacements:
Ex. # | Literal | Digits | Replacement | Digits
------|---------|--------|-------------|-------
1 | 4.8726570057E288 | 11 | 4.8726570056999995E288 | 17
2 | 7.68905065813E17 | 12 | 7.6890506581299994E17 | 17
3 | 1.790086667993E18 | 13 | 1.79008666799299994E18 | 18
4 | 2.273317134858E18 | 13 | 2.27331713485799987E18 | 18
5 | 2.82879384806159E17 | 15 | 2.82879384806159008E17 | 18
6 | 1.45800632428665E17 | 15 | 1.45800632428664992E17 | 18
7 | 1.9400994884341945E25 | 17 | 1.9400994884341944E25 | 17
8 | 5.684341886080801486968994140625E-14 | 31 | 5.6843418860808015E-14 | 17
Examples 1-6 should not have been flagged, since they have 15 digits or fewer. Moreover, the replacements all have more digits.
Example 7 should not have been flagged either, because the replacement is another 17-digit number which is further than the original from the floating-point number. Written as integers, the original is 19400994884341945000000000, and the replacement is 19400994884341944000000000. Both convert to floating-point as 19400994884341944949932032 (the floating-point value written exactly in decimal). The original is closer to the floating-point value, with an absolute difference of 50,067,968 vs. 949,932,032.
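These claims are easy to check with BigDecimal, whose double constructor gives the exact decimal value of a double (my illustration):

```kotlin
import java.math.BigDecimal

fun main() {
    // Both 17-digit literals convert to the same double...
    val d = "1.9400994884341945E25".toDouble()
    check(d == "1.9400994884341944E25".toDouble())
    // ...whose exact decimal value is 19400994884341944949932032
    println(BigDecimal(d).toPlainString())
}
```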
In Example 8, the replacement is shorter than the original, but a replacement one digit shorter still, 5.684341886080802E-14, would work as well, even though it is further from the floating-point number than the suggested replacement. This happens because the literal is a power of two (2^{-44}). The gap size above a power of two is double the gap size below it, which in this case leads to a shorter replacement candidate that is not nearer.
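The asymmetric gap around a power of two can be seen directly (a sketch using Math.ulp and Math.nextDown):

```kotlin
fun main() {
    val p = Math.pow(2.0, -44.0)        // 5.684341886080801486968994140625E-14 exactly
    val gapAbove = Math.ulp(p)          // distance to the next larger double
    val gapBelow = p - Math.nextDown(p) // distance to the next smaller double
    check(gapAbove == 2 * gapBelow)     // the gap above a power of two is twice the gap below
}
```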
Replacements could change format with respect to the original. For example, 123456789012345678E-16 becomes 12.345678901234567 (the “E” notation is removed), and 1234.56789012345678E-16 becomes 1.234567890123457E-13 (the number is normalized). This is not an error; it’s just a byproduct of FloatingDecimal’s dtoa() method’s rules for formatting, which are what they are.
It was easy enough to determine how the inspection was implemented; the source code is on GitHub. The code is simple, but I simplified it further for the purposes of this article, keeping only the logic for Doubles.
fun intellijFloatingPointLiteralPrecisionInspection(
    literal: String
): String {
    var replacement = literal.toDouble().toString()
    val literalBD = BigDecimal(literal)
    val replacementBD = BigDecimal(replacement)
    if (literalBD == replacementBD)
        replacement = ""
    return replacement
}
The inspection relies entirely on the conversions done under the covers by the Java code, the decimal to floating-point conversion done by toDouble(), and the floating-point to decimal conversion done by toString(). The assumption is that the literal round-trips back from double-precision in its shortest or nearest form.
BigDecimal is used to factor out formatting differences between the literal and the replacement string that toString() returns. For example, if the literal is 5.198135943396802E6, the replacement is 5198135.943396802. Those are numerically equivalent, so there is no need to suggest a replacement.
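That equality check works because BigDecimal’s equals() compares the unscaled value and scale, which match here despite the different textual forms (my illustration, not from the inspection’s code):

```kotlin
import java.math.BigDecimal

fun main() {
    // Same number written two ways: parsing normalizes both to the same
    // unscaled value (5198135943396802) and scale (9)
    val literal = BigDecimal("5.198135943396802E6")
    val replacement = BigDecimal("5198135.943396802")
    check(literal == replacement) // Kotlin's == calls equals()
}
```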
The implementation of the inspection demonstrates the principle I described in my article “17 Digits Gets You There, Once You’ve Found Your Way”. Literals greater than 17 digits can’t simply be rounded or truncated to 17 digits directly. Depending on how close the literal is to halfway between two binary floating-point numbers, all of its digits may be needed to decide the correct Double to round to. The conversion done with toDouble() takes this into account. Once the floating-point number is identified, its shortened “stand in” is generated by the return-trip conversion, the toString() call.
Subnormal numbers change the rules in that they aren’t subject to the 15/16/17 digit boundaries described above; they have lower, variable effective precision. For example, 8.3616842E-322 has a suggested replacement of 8.35E-322, which is expected because the floating-point number has only 8 bits of precision.
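This can be confirmed by conversion; both strings round to the same subnormal double:

```kotlin
fun main() {
    // 8.3616842E-322 and its suggested replacement 8.35E-322 both round to
    // 169 x 2^-1074, a subnormal with only 8 bits of precision
    check("8.3616842E-322".toDouble() == "8.35E-322".toDouble())
}
```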
(I did not look for anomalous replacements for subnormal numbers.)
These examples would be addressed by a fix for the FloatingDecimal bug: return the shortest replacement or, if there isn’t a shorter one, return an equal-length replacement that is nearer to the floating-point value. (Coincidentally, this bug has been marked “resolved” as of June 2022, which I noticed as I was writing this article. I will look into it when it is released and incorporated into IntelliJ.)
Short of that proper fix, a workaround in the inspection itself could be to filter out most of the bad replacements. It could filter any replacement returned by toString() that has more digits than the original, and it could use BigDecimal to check whether an equal-length replacement is closer. Adding an else to the code above should take care of those two cases, addressing literals like Examples 1-7:
fun intellijFloatingPointLiteralPrecisionInspectionWorkAround(
    literal: String
): String {
    var replacement = literal.toDouble().toString()
    val literalBD = BigDecimal(literal)
    val replacementBD = BigDecimal(replacement)
    if (literalBD == replacementBD)
        replacement = ""
    else { // Filter out false positives
        when {
            literalBD.precision() < replacementBD.precision() ->
                replacement = ""
            literalBD.precision() == replacementBD.precision() -> {
                val doubleBD = BigDecimal(literal.toDouble())
                val literalDeltaBD = (literalBD - doubleBD).abs()
                val replacementDeltaBD = (replacementBD - doubleBD).abs()
                if (literalDeltaBD <= replacementDeltaBD)
                    replacement = ""
            }
        }
    }
    return replacement
}
Coding a workaround for “shortest not nearest” cases like Example 8 would be harder. And it may not be worth the effort, since there are only 54 powers of two for which this comes into play.
I submitted issue KTIJ-22245 against the IntelliJ IDEA Kotlin plugin.
If this inspection worked perfectly, I still don’t think I would find it useful. I encountered it while coding floating point literals to approximate logarithms. The originals can be more visually faithful to the actual decimal value. For example, log_{2}(15) = 3.9068905956085185293…, which I coded (rounded to 17 digits) as 3.9068905956085185, had a suggested replacement of 3.9068905956085187. While both map to the same double-precision binary floating-point value (0x1.f414fdb498226p1), the original literal is closer in decimal, and is a correct decimal rounding of the logarithm; the replacement is not.
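Both literals indeed map to the same double, and the inspection’s suggested replacement is just that double’s toString() form:

```kotlin
fun main() {
    val original = "3.9068905956085185".toDouble()
    val replacement = "3.9068905956085187".toDouble()
    check(original == replacement)                     // the same double...
    check(original.toString() == "3.9068905956085187") // ...printed as the replacement
}
```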
Also, Doubles don’t have 17 digits of precision (nor do they have 16 digits across the whole range of floating-point numbers). When the suggested replacement has 17 digits, it’s not really solving the “cannot be represented with the required precision” problem; it’s only giving what I have dubbed coincidental precision. This makes the name of the inspection itself questionable.
Perhaps a more generic name would be better: “Another literal can represent the same floating-point number”. That name would also have these benefits:
There probably is no perfect name, because to describe it exactly would take too many words: “A shorter or closer decimal literal can represent the same binary floating-point number”.
These are not accompanied by suggested replacements, but it would seem reasonable to suggest 0 for the “conforms to 0” case and either Double.MAX_VALUE or Double.POSITIVE_INFINITY (for positive numbers for example) for the “conforms to infinity” case.
Also, I would suggest a new inspection for subnormal numbers, warning of their lower precision (there would be no replacement to make).
The post Showing n! > 2^{n} When n Is A Power of Two first appeared on Exploring Binary.
When I saw this problem though I wondered if I could solve it in another way: Could the factors of two alone in 64! be greater than 2^{64}? As it turns out, almost.
64! has a factor of 2^{63}; 16! has a factor of 2^{15}; 512! has a factor of 2^{511}. That’s what Wolfram Alpha told me, after I worked out similar but smaller examples on paper. The pattern was clear: the factorial of any positive power of two p, p!, had a factor of 2^{p-1}. (p = 2^{k}, for any integer k ≥ 1.)
It’s not hard to see why. The factors of 2 come from the even numbers, and from the 64 factors in 64! (1, 2, 3, …, 64) there are 32 even numbers: 2, 4, 6, …, 64. If you pull out a factor of two from each number in that list you get a new list: 1, 2, 3, …, 32. In that list there are 16 even numbers (2, 4, 6, …, 32) from which you can pull out another 16 factors of 2. Repeating this process until the list reduces to no even numbers (a list of just 1), you have a total number of factors of two of 32 + 16 + 8 + 4 + 2 + 1 = 63.
To generalize, we are repeatedly halving a power of two with each new list of even numbers, and repeatedly halving a power of two 2^{k} gives a sequence of nonnegative powers of two from 2^{k-1} down to 1. If we sum that sequence of powers of two — 2^{0} + 2^{1} + 2^{2} + … + 2^{k-1} — we get 2^{k} – 1.
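The repeated halving translates directly into code (a sketch of Legendre’s formula for the prime 2, my illustration):

```kotlin
// Count the factors of two in n! by summing the successive halvings:
// n/2 even numbers contribute one factor each, n/4 contribute another, ...
fun factorsOfTwoInFactorial(n: Int): Int {
    var count = 0
    var m = n / 2
    while (m > 0) {
        count += m
        m /= 2
    }
    return count
}

fun main() {
    println(factorsOfTwoInFactorial(64)) // 63
}
```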
(I haven’t thought much about how to do this with an inductive proof, or whether it would be necessary — it certainly isn’t for my purposes. But you’d do it in “reverse”, starting with smaller powers of two and proving the property holds for bigger ones.)
Although there aren’t more than p factors of two in p! like I had wondered, the proof that p! > 2^{p} is still trivial just using the powers of two that are present. For our example, the factors 2^{63} and 3 of 64! make it greater than 2^{64}. Because 3 will always be a factor of p! for any p ≥ 4, this result applies generally to show p! > 2^{p}.
The post Hexadecimal Numbers: Uppercase or Lowercase? first appeared on Exploring Binary.
For example, do you prefer the integer 3102965292 written as B8F37E2C or b8f37e2c? Do you prefer the floating-point number 126.976 written as 0x1.fbe76cp6 or 0x1.FBE76Cp6?
I ran this poll on my sidebar, and after 96 responses, about 70% are “prefer uppercase” and about 9% are “prefer lowercase”. What do you think? (For the “depends on context” answer I meant things other than numeric values, like the memory representation of strings. However, for the purposes of this article, please answer with just numeric values in mind.)
I had always used uppercase but switched to lowercase when I started this blog. (I don’t remember why, but I think I was under the impression that the wider programming community preferred lowercase.) My article examples are in lowercase (I use “%a” rather than “%A” for hexadecimal floating-point constants), as are the outputs of my floating-point converter (the Hexadecimal floating-point constant and Raw hexadecimal output formats) and base converter (with the default digit characters). Since I’m writing an app that displays hexadecimal numbers, I figured this would be a good time to check my assumptions.
(I think lowercase is easier to read, but uppercase fits better with numeral fonts, making it look more uniformly numeric.)
The post Another NaN In the Wild first appeared on Exploring Binary.
(According to Castbox, this is an error in the ad and is out of their control.)
The post A Simple Binary To Decimal Converter App In Jetpack Compose first appeared on Exploring Binary.
(This app has been updated; see Jetpack Compose Byte Converter App: 2022 Version.)
My demo app consists of four composable functions:
ByteToDecimal() is the top-level composable which I call from setContent() in MainActivity.kt. (I start with “Empty Compose Activity” in Android Studio.) ByteToDecimal() calls Byte(), which calls Bit() in a loop eight times to generate a “toggleable” button for each bit. It then calls Decimal() to generate a text composable which displays the current decimal value of the byte.
/** Rick Regan, https://www.exploringbinary.com/ */

val byte = arrayOf(
    mutableStateOf(false), mutableStateOf(false), mutableStateOf(false), mutableStateOf(false),
    mutableStateOf(false), mutableStateOf(false), mutableStateOf(false), mutableStateOf(false)
)
var byteValue by mutableStateOf(0)

fun bitFlip(
    bitPosition: Int
) {
    var bit by byte[bitPosition]
    bit = !bit
    val bitValue = 2.0.pow(bitPosition.toDouble()).toInt()
    byteValue += if (bit) bitValue else -bitValue
}
The app represents a byte as eight individually mutable Booleans, a change to any of which will cause the corresponding button to be recomposed (redrawn). It represents the decimal value as a mutable integer, a change to which will recompose the text displaying the decimal value. The changes to this mutable state are initiated in the UI (button presses in Bit()) but made here in the function bitFlip() through a callback.
The app’s state lives outside of the composable functions so that it can survive configuration changes.
@Composable
fun ByteToDecimal() {
    Column(
        horizontalAlignment = Alignment.CenterHorizontally,
        modifier = Modifier.fillMaxWidth()
    ) {
        Byte(byte, ::bitFlip)
        Spacer(modifier = Modifier.padding(top = 20.dp))
        Decimal("$byteValue")
    }
}
Byte() creates eight buttons by calling the Bit() composable in a loop. It gives each button its bit position, the value of its mutable Boolean representation, and a reference to the bitFlip() function that reacts to the bit being toggled.
@Composable
fun Byte(
    byte: Array<MutableState<Boolean>>,
    bitFlip: (bitPosition: Int) -> Unit
) {
    Row {
        for (bitPosition in 7 downTo 0) {
            Bit(bitPosition, byte[bitPosition].value, bitFlip)
        }
    }
}
Bit() displays a button with its current state (“0” or “1”), its place value equivalent in decimal above it, and its bit position below it. The buttons are styled to make it easier to differentiate between a “0” and a “1”.
@Composable
fun Bit(
    bitPosition: Int,
    bit: Boolean,
    bitFlip: (bitPosition: Int) -> Unit
) {
    Column(
        horizontalAlignment = Alignment.CenterHorizontally
    ) {
        Text(
            text = 2.0.pow(bitPosition.toDouble()).toInt().toString()
        )
        val colors = if (bit) {
            ButtonDefaults.buttonColors(
                backgroundColor = Color.White,
                contentColor = Color.Blue
            )
        } else {
            ButtonDefaults.buttonColors(
                backgroundColor = Color(240, 240, 240),
                contentColor = Color.Blue
            )
        }
        Button(
            onClick = { bitFlip(bitPosition) },
            modifier = Modifier
                .padding(2.dp)
                .size(45.dp),
            colors = colors,
            border = BorderStroke(1.dp, color = Color.Blue)
        ) {
            Text(
                text = if (bit) "1" else "0",
                modifier = if (!bit) Modifier.alpha(0.4f) else Modifier
            )
        }
        Text(
            text = bitPosition.toString(),
            color = Color.Gray
        )
    }
}
Decimal() simply displays the decimal representation of the byte.
@Composable
fun Decimal(
    decimal: String
) {
    Text(
        text = decimal,
        Modifier
            .width(70.dp)
            .border(BorderStroke(1.dp, color = Color.Black))
            .padding(4.dp),
        textAlign = TextAlign.Center,
        color = Color(0, 180, 0),
        fontSize = 5.em
    )
}
Feedback welcome!
The post A Simple Binary To Decimal Converter App In Jetpack Compose first appeared on Exploring Binary.
For my initial exploration of double rounding errors in floating-point conversions I generated examples by hand, setting the required bit patterns indirectly to produce long decimal strings. It occurred to me that some of these long decimal strings could be rounded to much shorter ones — like 0.1000000000000000055511151231257827021181583404541015625 rounds to 0.1 — and map to the same double, in our case the double representation of a float midpoint. From there, it’s only a matter of checking if the double rounding error still occurs, since the decimal number could have shifted to the midpoint or to the other side of it.
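That "map to the same double" observation is easy to check in Java (whose parsing Kotlin uses under the hood); this is an illustration only, not part of the generator:

```java
// Illustration only: the long string is the exact decimal value of the
// double nearest 0.1, so it and "0.1" parse to the identical double.
public class SameDoubleCheck {
    public static void main(String[] args) {
        String exact = "0.1000000000000000055511151231257827021181583404541015625";
        double fromExact = Double.parseDouble(exact);
        double fromShort = Double.parseDouble("0.1");
        System.out.println(fromExact == fromShort); // true
    }
}
```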
Starting with a randomly selected double rounding error bit pattern (which I call a “template” in the code) I fill in the “don’t care” bits with a random value and then randomly shift those example bits left or right a random number of binary places to create a decimal number that will experience a double rounding error upon conversion from decimal to double to float. I do these calculations using BigDecimal operations on powers of two, meaning I create a decimal number indirectly by thinking of its construction in binary. Then, I successively round this long, exact decimal representation to 17 digits, 16 digits, 15 digits, etc., stopping when the double rounding error no longer occurs. (In many cases the 17-digit representation does not even cause an error.)
Here is the Kotlin code I wrote:
/* Rick Regan, https://www.exploringbinary.com/ */

import java.math.BigDecimal
import java.math.MathContext
import java.math.RoundingMode
import kotlin.random.Random

fun findDoubleRoundingExamples(maxLength: Int) {
    val random = Random(1)
    for (i in 1..1_000_000) {
        val (decimal, numSigDigits) = generateDoubleRoundingExample(random)
        if (numSigDigits <= maxLength) { // Only interested in numbers this short or shorter
            println("$numSigDigits digits: $decimal")
        }
    }
}

fun generateDoubleRoundingExample(random: Random): Pair<String, Int> {
    val decimalExact = generateExactDoubleRoundingExample(random)
    var numSigDigits = decimalExact.precision()

    // Try to make the exact number shorter (often will get at least a 17-digit number)
    var decimalPrev = decimalExact.toString()
    for (i in 17 downTo 1) {
        val decimal = decimalExact.round(MathContext(i, RoundingMode.HALF_UP)).toString()
        if (decimal.toFloat() == decimal.toDouble().toFloat())
            break // No double rounding error at this length
        decimalPrev = decimal
        numSigDigits = i
    }
    return Pair(decimalPrev, numSigDigits)
}

fun generateExactDoubleRoundingExample(random: Random): BigDecimal {
    /* Double rounding bit pattern templates (pick one randomly)

                                       1 2-23 (any value)       24 25 26-53                        54 55
       up 1:   18014401730707455  :    1 0000000000000000000000 1  0  1111111111111111111111111111 1  1
       up 2:   18014401730707454  :    1 0000000000000000000000 1  0  1111111111111111111111111111 1  0
       down 1: 18014399583223809  :    1 0000000000000000000000 0  1  0000000000000000000000000000 0  1
       down 2: 18014399583223810  :    1 0000000000000000000000 0  1  0000000000000000000000000000 1  0
    */
    val templateBits = when (random.nextInt(1, 5)) {
        1 -> BigDecimal(18014401730707455)
        2 -> BigDecimal(18014401730707454)
        3 -> BigDecimal(18014399583223809)
        else -> BigDecimal(18014399583223810)
    }

    /* Generate a random 22-bit integer to fill out bits 2-23 (22-bit numbers lie between
       2^21 = 2,097,152 and 2^22 - 1 = 4,194,304 - 1). Shift the number left 32 bits
       (2^32 = 4,294,967,296) so it will start at bit 2 when added in to the template */
    val pow2To21 = 2_097_152
    val pow2To22 = 4_194_304
    val pow2To32 = BigDecimal(4_294_967_296)
    val exampleBits = templateBits + BigDecimal(random.nextInt(pow2To21, pow2To22)) * pow2To32

    /* The max binary exponent for a float is 127, and the min (normal) exponent is -126.
       We have a 55-bit integer, so normalized its binary exponent starts out as 54. This
       means we can only shift the point to the right a max of 73 more bits: 127 - 54 (add
       1 for non-inclusive endpoint to get 74), but we can shift the point to the left as
       much as 180 bits: 126 + 54 (add 1 for non-inclusive endpoint to get 181). */
    return if (random.nextBoolean()) // Randomly pick a left or right shift
        exampleBits * BigDecimal(2).pow(random.nextInt(0, 74))
    else
        // Kotlin quirk: Must use divide() since the / operator does integer division
        exampleBits.divide(BigDecimal(2).pow(random.nextInt(1, 181)))
}

fun main() {
    findDoubleRoundingExamples(12) // Look for examples of 12 digits or fewer (change as desired)
}
Here are some examples the program found, from the initially generated decimal number to the shortest rounding:
17 digits: 16581582576129408000 => 1.6581582576129408E+19
17 digits: 3929563.874999999767169356346130371093750 => 3929563.8749999998 (this looks like it could round up to 10 digits but it doesn’t)
13 digits: 585276137701600012475039744 => 5.852761377016E+26
13 digits: 150821866599299996733401494192128 => 1.508218665993E+32
11 digits: 6.058141011400000204336628621036399613317319814496643298255323900016867931117800261830346267299951534823776455596089363098144531250E-33 => 6.0581410114E-33
10 digits: 5169850375000000058598302970544128 => 5.169850375E+33
10 digits: 9347089477999999790045239508467712 => 9.347089478E+33
The initially generated decimal string is not guaranteed to round to a shorter string that causes a double rounding error. In my tests, on average, about 45% round to at least a 17-digit example. Broken out by pattern type, about 38% of up pattern 1, about 62% of up pattern 2, about 28% of down pattern 1, and about 62% of down pattern 2 round to at least a 17-digit example.
During the sequence of roundings, an initial up pattern 1 can end with a shorter string that exhibits up pattern 2 (and vice versa). Similarly, an initial down pattern 1 can end up with a shorter string that exhibits down pattern 2 (and vice versa).
I was surprised at first to see up pattern 1s converting to up pattern 2s and down pattern 1s converting to down pattern 2s. But after looking at examples I saw that this happened only for roundings to integers — integers with their last 1 bit at bit 54. For example, in an up pattern 1 to up pattern 2 scenario, 28808324560453631 rounded to 28808324560453630, the latter changing bit 55 from 1 to 0 (and keeping bit 54 1). In a down pattern 1 to down pattern 2 scenario, 16301684200308736.5 rounded to 16301684200308737, with the former having bit 54 = 0 and bit 55 = 1, and the latter having bit 54 = 1 (and no bit 55).
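Those two roundings can be reproduced with BigDecimal, using the same HALF_UP rounding as the program above (a standalone sketch, not part of the generator):

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class PatternSwitch {
    public static void main(String[] args) {
        // Up pattern 1 -> up pattern 2: rounding the 17-digit integer to 16 digits
        // clears the trailing 1 at bit 55 while keeping bit 54
        BigDecimal up = new BigDecimal("28808324560453631")
                .round(new MathContext(16, RoundingMode.HALF_UP));
        System.out.println(up.toPlainString()); // 28808324560453630

        // Down pattern 1 -> down pattern 2: the trailing .5 (bit 55) rounds up into bit 54
        BigDecimal down = new BigDecimal("16301684200308736.5")
                .round(new MathContext(17, RoundingMode.HALF_UP));
        System.out.println(down.toPlainString()); // 16301684200308737
    }
}
```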
The post Direct Generation of Double Rounding Error Conversions in Kotlin first appeared on Exploring Binary.
The easiest way for me to approach Per’s question was to search for examples, rather than try to find a way to construct them. As such, I wrote a simple Kotlin^{1} program to generate decimal strings and check them. I tested all float-range (including subnormal) decimal numbers of 9 digits or fewer, and tens of billions of random 10- to 17-digit float-range (normal only) numbers. I found example 7- to 17-digit numbers that, when converted to float through a double, suffer a double rounding error.
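The core test, sketched below in Java since Kotlin delegates its conversions to Java anyway, is simply whether converting a decimal string to float directly differs from converting it through an intermediate double (hasDoubleRoundingError() is my name for it, not from my program):

```java
public class DoubleRoundingCheck {
    // A decimal string suffers a double rounding error if converting it to float
    // directly differs from converting it to double first, then to float
    static boolean hasDoubleRoundingError(String s) {
        float direct = Float.parseFloat(s);
        float viaDouble = (float) Double.parseDouble(s);
        return direct != viaDouble;
    }

    public static void main(String[] args) {
        System.out.println(hasDoubleRoundingError("1.3006255030632019")); // true
        System.out.println(hasDoubleRoundingError("0.5"));                // false (exact)
    }
}
```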
Here are examples I found:
Decimal | Digits | To Float | To Double | To Double to Float |
---|---|---|---|---|
1.3006255030632019 | 17 | 0x1.4cf5cap0 | 0x1.4cf5cbp0 | 0x1.4cf5ccp0 |
6.467822313308716 | 16 | 0x1.9df0cep2 | 0x1.9df0cdp2 | 0x1.9df0ccp2 |
0.0691026858985424 | 15 | 0x1.1b0b6ap-4 | 0x1.1b0b6bp-4 | 0x1.1b0b6cp-4 |
0.025306879542768 | 14 | 0x1.9ea0bep-6 | 0x1.9ea0bfp-6 | 0x1.9ea0c0p-6 |
4.456769842065e-9 | 13 | 0x1.324452p-28 | 0x1.324453p-28 | 0x1.324454p-28 |
5.79090352403e-4 | 12 | 0x1.2f9c32p-11 | 0x1.2f9c31p-11 | 0x1.2f9c30p-11 |
3.0128387285e-10 | 11 | 0x1.4b43dep-32 | 0x1.4b43dfp-32 | 0x1.4b43e0p-32 |
7.582917533e-5 | 10 | 0x1.3e0cf6p-14 | 0x1.3e0cf5p-14 | 0x1.3e0cf4p-14 |
9.67498269e-11 | 9 | 0x1.a9829ep-34 | 0x1.a9829fp-34 | 0x1.a982a0p-34 |
4.1358803e34 | 8 | 0x1.fdc95ep114 | 0x1.fdc95fp114 | 0x1.fdc960p114 |
7.038531E-26 | 7 | 0x1.5c87fap-84 | 0x1.5c87fbp-84 | 0x1.5c87fcp-84 |
A quick way to see the rounding errors is to look at the hexadecimal floating-point representations of the conversions. The decimal to float and decimal to double to float values differ by one ULP. (Remember that when used to represent a float, a 0 must be tacked on to fill out the last hex digit. This makes it appear like the float has 25 bits of precision, not 24, and that the double rounding error is two ULPs, when it is only ever one.)
You can verify the decimal to float and decimal to double conversions with my decimal to floating-point converter.
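In Java, Float.toHexString() makes the one-ULP difference visible directly; here it is for the 15-digit example 0.0691026858985424 from the table (an illustration only):

```java
// Illustration: print the hex float forms of the direct and double-intermediated
// conversions of one example from the table above
public class HexView {
    public static void main(String[] args) {
        String s = "0.0691026858985424";
        float direct = Float.parseFloat(s);
        float viaDouble = (float) Double.parseDouble(s);
        System.out.println(Float.toHexString(direct));    // 0x1.1b0b6ap-4
        System.out.println(Float.toHexString(viaDouble)); // 0x1.1b0b6cp-4
    }
}
```

Note that the last hex digits differ by 2, not 1; that is the padded-zero effect described above, so the difference is still only one float ULP.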
To understand those hexadecimal floating-point representations of the conversions better, consider the full binary representation of 0.0691026858985424:
0.00010001101100001011011010101111111111111111111111111111101100...
The single-precision rounding bit, significant bit 25, is 0, so the full binary representation rounds down to 0.000100011011000010110110101 when converted correctly to a float.
However, a double rounding error occurs when converting first to a double, and then from the double to a float. The double-precision rounding bit, significant bit 54, is 1, and there are 1 bits after it, so it rounds up to 0.0001000110110000101101101011 as a double. The last bit of the double, which would be bit 25 of a float, is 1, so round-to-nearest-even dictates it rounds up to a float as 0.00010001101100001011011011.
Here’s a summary of those three binary representations:
float:           0.000100011011000010110110101
double:          0.0001000110110000101101101011
double to float: 0.000100011011000010110110110
The full binary representation of 5.79090352403e-4 is
0.000000000010010111110011100001100010000000000000000000000000000000001...
The pattern of bits between bits 24 and 54 is the “complement” of the first example: bit 24 is 0, bit 25 is 1, bits 26-53 are 0s, and bit 54 is a 0. A correctly converted float rounds up, but converted through an intermediate double it rounds down and then down again due to round-to-nearest-even.
There’s nothing special about the examples except that I chose ones with the smallest absolute value exponent.
Having tested all numbers with 9 digits or fewer, I found only one 7-digit number, nine 8-digit numbers, and 51 9-digit numbers. One of the 8-digit numbers was the 7-digit number with a trailing 0, and nine of the 9-digit numbers were a corresponding 8-digit number with a trailing 0.
Examples were hard to come by with testing of randomly generated decimal strings, although longer examples surfaced more frequently. I found 17-digit numbers roughly on the order of one per billion, whereas I found 10-digit numbers roughly on the order of one per ten billion. (Don’t quote me on those ratios: I did not run enough tests to establish them with high confidence.) Is this because shorter decimal numbers are sparser around eligible double-precision numbers?
While all the double rounding errors I found are verifiable, it’s possible that an incorrect conversion in Java (Kotlin uses Java’s FloatingDecimal class for conversions) could have caused me to miss some. (These days, though, I generally assume FloatingDecimal is correct.)
Here are the significant bits of the examples above:
Decimal | Significant Bits of Full Binary Representation |
---|---|
1.3006255030632019 | 10100110011110101110010101111111111111111111111111111111110… |
6.467822313308716 | 11001110111110000110011010000000000000000000000000000001100… |
0.0691026858985424 | 10001101100001011011010101111111111111111111111111111101100… |
0.025306879542768 | 11001111010100000101111101111111111111111111111111111100011… |
4.456769842065e-9 | 10011001001000100010100101111111111111111111111111111100010… |
5.79090352403e-4 | 10010111110011100001100010000000000000000000000000000000001… |
3.0128387285e-10 | 10100101101000011110111101111111111111111111111111111100011… |
7.582917533e-5 | 10011111000001100111101010000000000000000000000000000001000… |
9.67498269e-11 | 11010100110000010100111101111111111111111111111111111110101… |
4.1358803e34 | 11111110111001001010111101111111111111111111111111111101001… |
7.038531E-26 | 10101110010000111111110101111111111111111111111111111110011… |
These examples demonstrate the two patterns in the significant bits of the full binary representation of a decimal number that cause these double rounding errors: a double round up pattern and a double round down pattern.
The double round up pattern rounds the float up when it should be rounded down; the double round down pattern rounds the float down when it should be rounded up. Both patterns create a double that becomes a float halfway case.
In the double round up case, the rounding up propagates all the way down to bit 25, setting it to 1 and all bits above it to 0. This in turn sets up the float rounding, which is a halfway case that’s resolved by rounding up to the nearest even value. In the double round down case, the rounding down to double removes bits 54 and above, information essential to proper float rounding.
In the double round up pattern, the values of bits 55 and beyond are irrelevant. But to see a case where all those bits are 0s, you have to construct an exactly representable example.
To get a string of 1s from bits 26 to 53, the first example uses 2^{-25} – 2^{-53}, and the second example uses 2^{29} – 2^{1}.
(In this case, if you round the first example to 17 digits, 0.50000008940696711, you still get the double rounding error, although there won’t be all 0s beyond bit 55.)
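Working backward from the description, one exactly representable value consistent with the first example is 2^{-1} + 2^{-24} + (2^{-25} - 2^{-53}) + 2^{-54}. This is my reconstruction, not necessarily the original number, but BigDecimal confirms that it rounds to the 17-digit string above and that the double rounding error persists:

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class RoundUpPattern {
    // Exact decimal value of 2^n; 1/2^k always terminates in decimal, so divide() is exact
    static BigDecimal pow2(int n) {
        BigDecimal two = BigDecimal.valueOf(2);
        return n >= 0 ? two.pow(n) : BigDecimal.ONE.divide(two.pow(-n));
    }

    public static void main(String[] args) {
        // Bit 1, bit 24, a run of 1s in bits 26-53 (= 2^-25 - 2^-53), and bit 54;
        // every bit beyond 54 is 0, matching the double round up pattern
        BigDecimal x = pow2(-1).add(pow2(-24))
                .add(pow2(-25).subtract(pow2(-53)))
                .add(pow2(-54));
        String s17 = x.round(new MathContext(17, RoundingMode.HALF_UP)).toPlainString();
        System.out.println(s17); // 0.50000008940696711
        // The 17-digit rounding still suffers the double rounding error
        System.out.println(Float.parseFloat(s17) != (float) Double.parseDouble(s17)); // true
    }
}
```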
There is a third double rounding error pattern, a variation of the double round down pattern in which bit 54 is 1 and every bit beyond it is 0 (so the conversion to double is itself a halfway case, resolved downward by round-to-nearest-even), but it’s not demonstrated in the examples.
To invoke this scenario you also have to construct an exactly representable example.
(If you round the first example to 17 digits, 0.50000002980232244, you still get the double rounding error, but the bit pattern switches from the double round down alternative pattern to the double round down pattern.)
Per was interested in fast path conversion, where numbers with 15 decimal digits or fewer and power-of-ten exponents between -22 and 22 (relative to the decimal digits viewed as an integer) can be converted correctly using just double-precision arithmetic. Specifically, he wanted to know if double rounding errors were possible if you used double-precision fast path as an intermediate for converting to float.
The shortest fast path case is 9 digits, and there are five of them among the 51 9-digit numbers: 9.67498269e-11 (shown in the table below), 5.85052973e21, 9.49766107e23, 8.04624287e26, and 8.96981543e28. (Remember, I tested all numbers through 9 digits.) Two of the nine 8-digit numbers, though, 4.1358803e34 and 8.2717606e34, are fast path cases in disguise, meaning zeros can be tacked on to make each a fast path case: 4135880300000e22 and 8271760600000e22, respectively.
Here are the 9-15 digit examples from above (which I selected in the first place because they are fast-path eligible), expressed as a fast path calculation:
Decimal | Significant Digits | Power of Ten | IEEE Operation |
---|---|---|---|
0.0691026858985424 | 691026858985424 | 10^{16} | Divide |
0.025306879542768 | 25306879542768 | 10^{15} | Divide |
4.456769842065e-9 | 4456769842065 | 10^{21} | Divide |
5.79090352403e-4 | 579090352403 | 10^{15} | Divide |
3.0128387285e-10 | 30128387285 | 10^{20} | Divide |
7.582917533e-5 | 7582917533 | 10^{14} | Divide |
9.67498269e-11 | 967498269 | 10^{19} | Divide |
(It’s just arbitrary that all the examples I selected require division, and that none require multiplication.)
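The fast path Per had in mind can be sketched as follows; this is my simplified illustration of the idea (the exact digits as a long, times or divided by an exactly representable power of ten, in one IEEE double operation), not production conversion code:

```java
public class FastPath {
    // Powers of ten through 10^22 are exactly representable as doubles, and every
    // intermediate product below is too, so the loop builds them without rounding error
    static double pow10(int n) {
        double p = 1.0;
        for (int i = 0; i < n; i++) p *= 10.0;
        return p;
    }

    // One correctly rounded IEEE multiply or divide; valid for at most
    // 15 significant digits and |powerOfTen| <= 22
    static double fastPath(long digits, int powerOfTen) {
        return powerOfTen >= 0 ? digits * pow10(powerOfTen)
                               : digits / pow10(-powerOfTen);
    }

    public static void main(String[] args) {
        // 0.0691026858985424 = 691026858985424 / 10^16: the fast path double is
        // correctly rounded, but rounding it once more to float goes the wrong way
        double viaFastPath = fastPath(691026858985424L, -16);
        System.out.println(viaFastPath == Double.parseDouble("0.0691026858985424"));        // true
        System.out.println((float) viaFastPath == Float.parseFloat("0.0691026858985424"));  // false
    }
}
```

The sketch makes the answer to Per's question concrete: the fast path produces a correctly rounded double, yet using that double as a stepping stone to float still produces the one-ULP double rounding error.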
The necessary condition that sets up a double rounding error is a double representation of a decimal number that rounds to a tie-breaking rounding case for a float. If the float representation in turn is rounded in the same direction as the decimal to double conversion, the error occurs.
To be correct, the original decimal number (its binary representation that is) must be rounded relative to bits 25 and beyond. If those bits are altered in a certain way by the rounding to the intermediate double, rounding goes in the opposite direction than intended.
While finishing this article I thought of a way to randomly generate double rounding cases directly, using Java’s BigDecimal. You can generate long, exactly represented decimal numbers and round them to successively fewer digits (17 digits, 16 digits, etc.) until you find the shortest one that still causes the double rounding error.
^{1}Why Kotlin? It’s just the language I’ve been using lately.
The post Double Rounding Errors in Decimal to Double to Float Conversions first appeared on Exploring Binary.