javascript, scheme, rounding

How to implement Scheme's floor, ceil, and round so that they work on exact rationals in JavaScript?


I just found out about these two expressions in Scheme:

(quotient (expt 1000 999) 998001)
(floor (/ (expt 1000 999) 998001))

They return a big integer that looks like this: 100200300 ... 999 (with every three-digit number except 998).

The problem I have is that in my Scheme interpreter written in JavaScript I get +inf.0, because the division creates a float (an inexact number).

This is the implementation of my quotient:

(define (quotient a b)
  "(quotient a b)

   Return quotient from division as integer."
  (typecheck "quotient" a "number")
  (typecheck "quotient" b "number")
  (if (zero? b)
     (throw (new Error "quotient: division by zero"))
     (let ((quotient (/ a b)))
       (if (integer? quotient)
           quotient
           (if (> quotient 0)
               (floor quotient)
               (ceiling quotient))))))

And this is the implementation of floor, ceil, and round:

['floor', 'ceil', 'round'].forEach(fn => {
    LNumber.prototype[fn] = function() {
        if (this.float || LNumber.isFloat(this.__value__)) {
            return LNumber(Math[fn](this.__value__));
        } else {
            return LNumber(Math[fn](this.valueOf()));
        }
    };
});

(function() {
    var map = { ceil: 'ceiling' };
    ['floor', 'round', 'ceil'].forEach(fn => {
        var name = map[fn] ? map[fn] : fn;
        global_env.set(name, doc(name, function(value) {
            typecheck(name, value, 'number');
            if (value instanceof LNumber) {
                return value[fn]();
            }
        }, `(${name} number)

            Function that calculates the ${name} of a number.`));
    });
})();

As you can see, it just uses Math.floor and converts the number to a float (this.valueOf() on rationals returns a float/inexact value).

And this is the constructor for LRational (the code is old, that's why it still uses ES5 prototypes):

function LRational(n, force = false) {
    if (typeof this !== 'undefined' && !(this instanceof LRational) ||
        typeof this === 'undefined') {
        return new LRational(n, force);
    }
    if (!LNumber.isRational(n)) {
        throw new Error('Invalid constructor call for LRational');
    }
    var num, denom;
    if (n instanceof LRational) {
        num = LNumber(n.__num__);
        denom = LNumber(n.__denom__);
    } else {
        num = LNumber(n.num);
        denom = LNumber(n.denom);
    }
    if (!force && denom.cmp(0) !== 0) {
        var is_integer = num.op('%', denom).cmp(0) === 0;
        if (is_integer) {
            return LNumber(num.div(denom));
        }
    }
    this.constant(num, denom);
}
// -------------------------------------------------------------------------
LRational.prototype = Object.create(LNumber.prototype);
LRational.prototype.constructor = LRational;
// -------------------------------------------------------------------------

LRational.prototype.constant = function(num, denom) {
    enumerable(this, '__num__', num);
    enumerable(this, '__denom__', denom);
    enumerable(this, '__type__', 'rational');
};

// -------------------------------------------------------------------------
function enumerable(object, name, value) {
    Object.defineProperty(object, name, {
        value,
        enumerable: true
    });
}

How can I write the floor, ceil, and round functions so that they create a BigInt number when the argument is exact?


Solution

  • JS already has arbitrary-precision BigInts. Dividing two BigInts already produces the quotient: according to the docs, integer division of two BigInts yields a BigInt truncated toward zero, which is exactly what quotient returns.

    Both expressions you mention produce the same value. To find it in JS you can do:

    // (quotient (expt 1000 999) 998001)
    // (floor (/ (expt 1000 999) 998001))
    
    a = BigInt(1000)       //   1000n
    b = BigInt(999)        //    999n
    c = BigInt(998001)     // 998001n
    val = a**b / c         // integer division of two BigInts
    

    val is the value. In decimal notation, it contains 2992 digits.
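
    A quick way to sanity-check this directly in a JS console, for example:

      // compute the quotient and inspect its decimal representation
      const val = 1000n ** 999n / 998001n;
      console.log(val.toString().length);       // 2992 digits
      console.log(val.toString().slice(0, 9));  // "100200300"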

    To have exact ratios, just maintain pairs of BigInts. Define the operations below (a JS sketch of them follows the list):

     // {a,b}   represents  a/b
    
     // a/b + c/d
     {a,b} + {c,d} = {ad+cb,bd}
    
     // a/b - c/d
     {a,b} - {c,d} = {ad-cb,bd}
    
     // a/b * c/d
     {a,b} * {c,d} = {ac,bd}
    
     // a/b / c/d = a/b * d/c
     {a,b} / {c,d} = {ad,bc}
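
    A minimal JS sketch of these helpers, assuming a plain { num, denom } pair of BigInts (the names rat, addRat, etc. are made up for illustration, and results are not reduced to lowest terms):

      // build a ratio from two BigInts
      const rat = (num, denom) => ({ num, denom });

      // a/b + c/d = (ad + cb) / bd
      const addRat = (x, y) => rat(x.num * y.denom + y.num * x.denom, x.denom * y.denom);

      // a/b - c/d = (ad - cb) / bd
      const subRat = (x, y) => rat(x.num * y.denom - y.num * x.denom, x.denom * y.denom);

      // a/b * c/d = ac / bd
      const mulRat = (x, y) => rat(x.num * y.num, x.denom * y.denom);

      // a/b / c/d = ad / bc
      const divRat = (x, y) => rat(x.num * y.denom, x.denom * y.num);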
    

    When you want to print the result to the n-th decimal digit, multiply the numerator by 10^n and perform the BigInt division by the denominator:

    truncateDigits( n, {a,b} ):
      m  = BigInt(10)**BigInt(n)
      q = (a*m) / b           // the integral part of the number (a*10^n)/b
      r = {a*m,b} - {q,1}     // the fractional part
      return [ q, {numerator(r), denominator(r)*m } ]  
    

    The q value contains the digits. To print them, manually insert the decimal point n places before the end.

    The above assumes all values are positive.
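
    A hypothetical runnable version of the above, built on the { num, denom } helpers sketched earlier and, like the pseudocode, assuming positive values:

      // Returns [q, remainder] such that value = q/10^n + remainder,
      // where q is a BigInt made of the digits of num/denom up to the
      // n-th decimal place.
      function truncateDigits(n, { num, denom }) {
          const m = 10n ** BigInt(n);
          const q = (num * m) / denom;             // integral part of (num * 10^n) / denom
          const r = subRat(rat(num * m, denom),    // fractional part as a ratio
                           rat(q, 1n));
          return [q, rat(r.num, r.denom * m)];
      }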

    Calling truncateDigits( 0, {1000n**999n, 998001n} ) will produce the pair of values [ val, {1n,998001n} ], where val is the 2992-digit value mentioned at the top of the answer, the one that looks like 100200300 ... 999, as you mention in the question.

    Another example is truncateDigits( 0, {137n, 12n} ) => [ 11n, {5n,12n} ], and truncateDigits( 4, {137n, 12n} ) => [ 114166n, {8n,120000n} ]. The last one expresses the identity 114166/10000 + 8/120000 = 137/12.

    One word of caution: over a long series of calculations the underlying BigInts in the ratios will likely blow up in size, potentially slowing down the calculations immensely. In such cases it might be advisable to call truncateDigits( n, ...) with some large value of n from time to time and replace the ratio with its thus simplified version. In the above example this means replacing {137n, 12n} with {114166n, 10000n} in the middle of a long computation. This would in effect provide "controlled-precision bigfloats", if you will (as mentioned in the linked Scheme answer). But care must be taken not to let the error introduced this way run up and accumulate, destroying the value itself.

    Such "simplification" is a "poor man's solution", if you will. A mathematically sound way to simplify the values while staying within a given precision of the true value surely exists, e.g. finding the famous 355/113 ratio as an approximation of pi, which is good to 6 decimal places, whereas our half-baked approach would keep the much larger 3141592/1000000 instead.
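
    One standard way to find such approximations is via continued-fraction convergents. A hypothetical sketch (positive values assumed; it returns the last convergent whose denominator fits, which is good though not always the truly optimal semiconvergent):

      // Shrink a ratio of BigInts to a close approximation whose denominator
      // stays at or below maxDenom, by walking its continued-fraction convergents.
      function limitDenominator({ num, denom }, maxDenom) {
          let a = num, b = denom;
          let pPrev = 1n, qPrev = 0n;       // previous convergent p/q
          let p = a / b, q = 1n;            // current convergent p/q
          a -= p * b;                       // remainder of the first division
          while (a !== 0n) {
              [a, b] = [b, a];              // invert the fractional part
              const t = a / b;              // next continued-fraction term
              a -= t * b;
              const pNext = t * p + pPrev;
              const qNext = t * q + qPrev;
              if (qNext > maxDenom) break;  // next convergent is too big, stop here
              [pPrev, qPrev, p, q] = [p, q, pNext, qNext];
          }
          return { num: p, denom: q };
      }

      limitDenominator({ num: 3141592n, denom: 1000000n }, 1000n);
      // => { num: 355n, denom: 113n }, i.e. the famous pi approximation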

    A quote from the linked Scheme answer: "... The last number has about 1.7 million bits in both its numerator and denominator, or over half a million decimal digits ...". In such a situation, using n = 1000 looks like an enormous improvement, especially if we are only interested in, say, just the first 100 decimal digits of the eventual answer. We don't care if the last 100 or even 200 digits of the calculated value are wrong. Still, we must somehow be sure that those first 100 digits out of the calculated 1000 haven't been spoiled by the accumulated error.

    Also, 8/12 is really 2/3, and that's even before considering the simplification algorithm.
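
    Reducing a ratio to lowest terms is just a BigInt gcd away; a minimal sketch (helper names made up for illustration):

      // Euclid's algorithm on BigInts
      function gcd(a, b) {
          while (b !== 0n) { [a, b] = [b, a % b]; }
          return a;
      }

      // reduce a positive ratio to lowest terms, e.g. {8n, 12n} -> {2n, 3n}
      function reduceRat({ num, denom }) {
          const g = gcd(num, denom);
          return { num: num / g, denom: denom / g };
      }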