Accuracy

XCALC works (in the floating-point modes) internally with long double numbers (on x86 hardware that is a 64-bit mantissa plus a 15-bit biased exponent; other platforms will vary). This gives an accuracy of approximately 19 significant decimal digits. The arithmetic is, however, not exact, and certain operations may produce rounding errors that accumulate beyond this.
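The digit count follows directly from the mantissa width; a quick check in Python (using ordinary 64-bit doubles for the arithmetic, which is more than enough precision for this estimate):

```python
import math

# A 64-bit mantissa resolves about 64 * log10(2) ~ 19.3 decimal digits.
long_double_digits = 64 * math.log10(2)
print(long_double_digits)

# The corresponding machine epsilon, 2**-63, is the scale of the
# rounding errors seen in the examples below (about 1.08e-19).
print(2.0 ** -63)
```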

For the integer modes (hex, octal, binary) you can set the word length to 8, 16, 32 or 64 bits. For most operations this represents a signed integer of the specified size, but for the shift and rotate operations (shl, shr, rotl, rotr) the value is treated as unsigned.
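A rotate on an unsigned value of a fixed word length behaves like this (a generic sketch of the bit arithmetic, not XCALC's actual source):

```python
def rotl(value, n, width):
    """Rotate an unsigned `width`-bit value left by n bits."""
    mask = (1 << width) - 1
    n %= width
    return ((value << n) | (value >> (width - n))) & mask

def rotr(value, n, width):
    """Rotate an unsigned `width`-bit value right by n bits."""
    return rotl(value, width - (n % width), width)

# In 8-bit binary mode, rotating 1000 0001 left by one bit gives
# 0000 0011: the top bit wraps around into the bottom position.
print(bin(rotl(0b10000001, 1, 8)))  # 0b11
```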

I will give some examples to show what to expect (on x86 hardware):

Example 1:

Calculate the square of the square root of seven, minus 7:

Press: 7 q Q 7 -

The result is not zero, but approximately -4.34e-19. This is because the square root is not calculated exactly.
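You can reproduce the effect with ordinary 64-bit doubles in Python; the error there is on the order of 1e-16 rather than 1e-19, because the mantissa is shorter:

```python
import math

# sqrt(7) cannot be represented exactly, so squaring the rounded
# result does not recover 7 precisely.
err = math.sqrt(7) ** 2 - 7
print(err)  # a tiny value on the order of machine epsilon
```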

Example 2:

Calculate minus 1 to the power of two billion, using a 32-bit binary word length:

Press: 1 m Enter 2e9 ^

The result is 1, as it should be. When the power is a positive or negative integer that can be represented in the current word length (-2,147,483,648…2,147,483,647 in the case of 32 bits), the power is calculated by repeated multiplication. However:
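"Repeated multiplication" here presumably means binary exponentiation (squaring), which needs only about log2(n) multiplications and stays exact for a base of -1; a sketch, not XCALC's actual code:

```python
def ipow(base, exp):
    """Power with an integer exponent via binary exponentiation."""
    if exp < 0:
        return 1.0 / ipow(base, -exp)
    result = 1.0
    while exp:
        if exp & 1:          # multiply in the base where a bit is set
            result *= base
        base *= base         # square the base for the next bit
        exp >>= 1
    return result

# (-1) to an even integer power is exactly 1 -- no logarithms involved.
print(ipow(-1.0, 2_000_000_000))  # 1.0
```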

Example 3:

Calculate minus 1 to the power of three billion, using a 32-bit binary word length:

Press: 1 m Enter 3e9 ^

The result is complex. Here, the power must be represented as a real number, and logarithms are used for the power function, resulting in a small “leakage” into the imaginary part.
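The mechanism can be imitated with Python's cmath module (a sketch of the general a^b = exp(b·ln a) route, not XCALC's code):

```python
import cmath

# When the exponent does not fit the integer path, a^b is computed as
# exp(b * ln(a)).  For a = -1, ln(-1) = pi*i, and the huge imaginary
# argument cannot be represented exactly, so a little error "leaks"
# into the imaginary part.
z = cmath.exp(3e9 * cmath.log(-1))
print(z)  # close to 1, but typically with a tiny imaginary part
```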

Example 4 (not related to complex numbers):

Calculate 2.62 - 2 - 0.62:

Press: 2.62 Enter 2 - 0.62 -

The result is not zero, but about -1.08e-19. This is expected and normal, since XCALC does not use BCD numbers. (If you have a Python interpreter installed, you can try the same exercise and get similar [but slightly worse] results. This is not my fault, but the processor’s…)
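Here is the same exercise with doubles, alongside a decimal (BCD-like) representation that avoids the problem, using Python's standard decimal module:

```python
from decimal import Decimal

# Binary floating point cannot represent 2.62 or 0.62 exactly, so the
# rounding errors of the two subtractions do not cancel completely.
binary_result = 2.62 - 2 - 0.62
print(binary_result)  # a tiny nonzero value instead of 0

# A decimal representation stores these numbers exactly, so the
# difference really is zero.
decimal_result = Decimal("2.62") - Decimal("2") - Decimal("0.62")
print(decimal_result)  # 0.00
```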

See also: