The function's details say:
The test evaluates if the second time series causes the first one. Two MLP artificial neural networks are evaluated to perform the test, one using just the target time series (ts1), and the second using both time series.
I'm using the following code:
library(NlinTS)  # provides nlin_causality.test

for (i in series[-5]) {
  # Test whether column i (ts2) causes gap_y (ts1): 4 lags, one hidden
  # layer with a single neuron for each of the two networks.
  prueba <- nlin_causality.test(ts1 = peru[, "gap_y"], ts2 = peru[, i], lag = 4,
                                LayersUniv = 1, LayersBiv = 1,
                                iters = 10000, bias = FALSE)
  og_nl[i, 1] <- round(prueba$Ftest, 4)   # F statistic
  og_nl[i, 2] <- round(prueba$pvalue, 4)  # p-value
}
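For reference, the loop assumes that series and og_nl already exist; a minimal sketch of such a setup, assuming peru is the data frame from the question (the exact initialization may differ):

# Sketch of the assumed setup: 'series' holds the column names of 'peru'
# (the fifth entry is the one dropped by series[-5] above).
series <- colnames(peru)
# Results matrix filled inside the loop: one row per column, F-stat and p-value.
og_nl <- matrix(NA_real_, nrow = length(series), ncol = 2,
                dimnames = list(series, c("F-stat", "P-value")))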
The output is the following table:
+-------------+-----------+----------+
| Variable | F-stat | P-value |
+-------------+-----------+----------+
| Inflación | 0.4468 | 0.7744 |
| Var.PBI | 2.2039 | 0.0766 |
| Var.Emisión | 2.7633 | 0.0335 |
| gap_y | 0.5546 | 0.6963 |
+-------------+-----------+----------+
So, from the function's details, what I understand is that the null hypothesis is that ts2 does cause ts1; in that case, if the p-value is lower than 0.05, can I say that ts2 does not cause ts1?
Thanks
The interpretation of this test is similar to that of the Granger causality test. In general, the p-value of a test is the probability of observing the given result under the assumption that H0 is true. For this test, H0 is the hypothesis of non-causality. So, using a threshold of 5% for example, a p-value greater than 0.05 means that ts2 does not cause ts1.
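Applied to the p-values in the table above, that decision rule looks like this (a quick sketch; the values are copied from the question's output):

# H0: ts2 does not cause ts1. Reject H0 (conclude causality) when p < 0.05.
pvals <- c(Inflación = 0.7744, Var.PBI = 0.0766, Var.Emisión = 0.0335, gap_y = 0.6963)
ifelse(pvals < 0.05, "causes gap_y (reject H0)", "no evidence of causality")

Only Var.Emisión falls below the 5% threshold, so it is the only series for which the hypothesis of non-causality is rejected.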
As a side remark, the sizes of the hidden layers for both the univariate and bivariate models are vectors, not integers. For example, LayersUniv = c(1, 2) corresponds to an MLP model with two hidden layers, where the first contains one neuron and the second two neurons.
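For instance, a sketch of a call with two hidden layers per network, reusing the data and settings from the question (the choice of architecture here is purely illustrative):

# Assumes library(NlinTS) is loaded and 'peru' is the data frame from the question.
# Both networks get two hidden layers: 1 neuron, then 2 neurons.
prueba2 <- nlin_causality.test(ts1 = peru[, "gap_y"], ts2 = peru[, "Inflación"],
                               lag = 4,
                               LayersUniv = c(1, 2),  # univariate model layers
                               LayersBiv  = c(1, 2),  # bivariate model layers
                               iters = 10000, bias = FALSE)
round(c(Ftest = prueba2$Ftest, pvalue = prueba2$pvalue), 4)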
Best, Youssef