javascript  node.js  technical-indicator  rsi

What is the correct configuration of input parameters for calculating RSI (and ROC)?


I have tried to use the Technical Indicators library to calculate RSI (and ROC) from candlesticks' closing prices, but when I compare the results with Binance, they are not quite accurate.

I fetch data using this Binance API:
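(The exact request is not shown here; below is a minimal sketch, assuming the standard GET /api/v3/klines endpoint, Node 18+ global fetch, and placeholder symbol/limit values:)

const BINANCE_KLINES = 'https://api.binance.com/api/v3/klines';

// Binance returns klines oldest-first; index 4 of each kline is the
// closing price (as a string), so values[0] ends up being the oldest close.
async function fetchCloses(symbol, interval, limit) {
  const url = `${BINANCE_KLINES}?symbol=${symbol}&interval=${interval}&limit=${limit}`;
  const klines = await (await fetch(url)).json();
  return klines.map(k => parseFloat(k[4]));
}

fetchCloses('BTCUSDT', '1m', 100).then(data => console.log(data.length));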

This is an example of usage for the RSI and ROC indicators:
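(The snippet itself is missing; presumably something along these lines, assuming the technicalindicators npm package, which exports both indicators:)

const { RSI, ROC } = require('technicalindicators');

// both take { values, period } and return an array of indicator values
const rsi = RSI.calculate({ values: data, period: 14 });
const roc = ROC.calculate({ values: data, period: 14 });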

If I do this:

let inputData = {
  values: data, // 15 candlesticks, 1m candlestick data, values[0] is oldest closing price
  period: 14,
};

and I do calculation:

const results_rsi = RSI.calculate(inputData);

I get a single-element array, with a quite inaccurate result compared to the (realtime) data on Binance. (The single element itself is expected: with period: 14, the first RSI value needs 15 closing prices, so 15 candlesticks yield exactly one output.)

If I do this:

let inputData = {
  values: data, // 100 candlesticks, 1m candlestick data, values[0] is oldest closing price
  period: 14,
};

const results_rsi = RSI.calculate(inputData);

I get a result with many elements, and if I compare the last element of results_rsi with Binance's RSI 14 (1m), it is actually very accurate. Also, I have read in one of the GitHub issues that providing more historical data is better.

Now, so far so good... or at least that is what I thought :) Up to this point, both the RSI and ROC results were very accurate.

The thing is, when I applied the same logic but with different parameters, say like this:

let inputData = {
  values: data, // 100 (or even 200 and 500) candlesticks, 1h candlestick data, values[0] is oldest closing price
  period: 30,
};

const results_rsi = RSI.calculate(inputData);
const results_roc = ROC.calculate(inputData);

and I check the last elements of results_rsi and results_roc (which I assume are the actual, current readings, but maybe not?), I am still getting quite good results for RSI, but for ROC I am getting very wrong results. It makes me wonder whether I am using this library correctly at all, and I am not even sure the RSI results are correct, because I haven't tried it with many different parameters / data.

So, the questions:

(from the docs):

var data = 
[11045.27,11167.32,11008.61,11151.83,10926.77,10868.12,10520.32,10380.43,10785.14,10748.26,10896.91,10782.95,10620.16,10625.83,10510.95,10444.37,10068.01,10193.39,10066.57,10043.75];

var period = 12;
        
var expectResult = [-3.85,-4.85,-4.52,-6.34,-7.86,-6.21,-4.31,-3.24];
    
ROC.calculate({period : period, values : data});
  1. What is the actual result of ROC here? Because a whole array is returned. (See the worked check at the end of the Solution below.)
  2. How should the input values be sorted? (What should values[0] be?)
  3. Where am I wrong? :D

Solution

  • What is a sufficient Data-depth for "Accuracy"?
    (better: when do we get equal outputs on screen?)

    RSI is one of several indicators that include an element of prior data. As such, a 14-day RSI based on 15 or 50 days of underlying data will be significantly different from a 14-day RSI based on 500 days of data.
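    A minimal sketch of a Wilder-style RSI shows why (the exact smoothing the library applies is an assumption here, but the principle holds): after an SMA seed, every new smoothed average folds the previous average back in, so every bar of prior history leaves a trace in the latest reading.

    function wilderRSI(values, period) {
      let gain = 0, loss = 0;
      // seed: plain SMA over the first `period` price changes
      for (let i = 1; i <= period; i++) {
        const d = values[i] - values[i - 1];
        if (d >= 0) gain += d; else loss -= d;
      }
      gain /= period;
      loss /= period;
      const rsi = [100 - 100 / (1 + gain / loss)];
      // recursion: each average depends on the previous one,
      // hence on the entire depth-of-prior DATA
      for (let i = period + 1; i < values.length; i++) {
        const d = values[i] - values[i - 1];
        gain = (gain * (period - 1) + Math.max(d, 0)) / period;
        loss = (loss * (period - 1) + Math.max(-d, 0)) / period;
        rsi.push(100 - 100 / (1 + gain / loss));
      }
      return rsi;
    }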

    So, unless all TimeSeries "observers" compute RSI from (a) exactly the same TimeSeries, and (b) using the very same "length" (for depth-of-prior-DATA-dependent underlying computing, here starting with a plain SMA for the very first "observed" period-length of bars), and (c) using the very same numerical properties of the computing methods (with almost all platforms using the same 64-bit IEEE-754 numerical processing, this need not cause problems; hybrid FPGA/GPGPU/SoC/ASIC algos yet may introduce this class of further incoherencies, causing a new breed of differences in results),
    so,
    there is the highest chance to meet all of (a) & (b) & (c) if and only if we all start from the very "beginning" of the DATA in the TimeSeries history (easy if we all use the same source of data; not so easy if some use time-zone-uncorrected, different history-depths from different (T)OHLC(V)-data sources) and use the same numerical-processing methods.

    Some technical-indicators are less susceptible to the depth-of-observation, some more. If this is a core problem (for the sake of shaving latency off / increasing performance / maintaining Quant-models' reproducibility & repeatability of results), try to set your "Accuracy" threshold and test each technical-indicator's dependence on the depth-of-prior DATA, as the sketch below does: once the convergence starts to meet your "Accuracy" threshold, it makes no sense to extend the depth any further, as the results have started to converge and remain stable irrespective of any further extended depth-of-prior-DATA re-processing.
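    (A hypothetical harness for such a threshold test; the function name sufficientDepth and the choice of threshold are assumptions, not part of the library:)

    const { RSI } = require('technicalindicators');

    // find how deep the prior-DATA history must reach before the latest
    // RSI reading stabilises within `threshold` RSI-points
    function sufficientDepth(closes, period, threshold) {
      let prev = null;
      for (let depth = period + 1; depth <= closes.length; depth++) {
        const window = closes.slice(closes.length - depth); // deepest `depth` bars
        const rsi = RSI.calculate({ values: window, period });
        const last = rsi[rsi.length - 1];                   // latest reading
        if (prev !== null && Math.abs(last - prev) < threshold) return depth;
        prev = last;
      }
      return closes.length; // never converged within the available history
    }

    // e.g. sufficientDepth(data, 14, 0.01) on 500 bars of 1m closes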

    In cases where you happen to reach such a "short enough" depth-of-prior DATA, you need not re-process a single bar deeper into the past. Not so in all other cases, where the DATA-depth dependence cannot be avoided; pity, there we all need to take the same depth (often the maximum one, see above) if we want to get the same result(s).
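
  • What does ROC.calculate() actually return, and how must the input be sorted?

    ROC, unlike RSI, carries no recursive element; each reading compares a bar only with the bar period steps before it. A quick cross-check with the standard rate-of-change formula, roc[i] = 100 * (values[i] - values[i - period]) / values[i - period] (a sketch, not the library's source), reproduces the documented expectResult:

    function roc(values, period) {
      const out = [];
      for (let i = period; i < values.length; i++) {
        out.push(100 * (values[i] - values[i - period]) / values[i - period]);
      }
      return out;
    }

    const data = [11045.27, 11167.32, 11008.61, 11151.83, 10926.77,
                  10868.12, 10520.32, 10380.43, 10785.14, 10748.26,
                  10896.91, 10782.95, 10620.16, 10625.83, 10510.95,
                  10444.37, 10068.01, 10193.39, 10066.57, 10043.75];

    console.log(roc(data, 12).map(x => +x.toFixed(2)));
    // [ -3.85, -4.85, -4.52, -6.34, -7.86, -6.21, -4.31, -3.24 ]

    So: the result is the whole array, one reading per bar from index period onward; values[0] must be the oldest price; and the last element is the most recent reading, which is the one to compare against Binance's display.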