When I run Monte Carlo calculations, I notice that the mean of the MC results is systematically higher than the static result.
For instance, with the production of a Li-ion cell in China:
import brightway2 as bw
cell = [act for act in bw.Database('ecoinvent 3.8_APOS') if act['name'] == 'battery cell production, Li-ion, NMC111' and act['location'] == 'CN'][0]
The static LCA score for GWP is 18.3 kgCO2eq/kg of cell:
CC = [method for method in bw.methods if "('EF v3.0 no LT', 'climate change no LT', 'global warming potential (GWP100) no LT')" in str(method)][0]
lca = bw.LCA({cell: 1}, CC)
lca.lci()
lca.lcia()
lca.score
And when we run the Monte Carlo for 100 iterations:
MC = bw.MonteCarloLCA({cell:1}, CC)
scores = [next(MC) for i in range(100)]
The mean MC score is clearly higher than the static one: np.mean(scores) gives something close to 20 kgCO2eq/kg.
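For reference, the mean was computed simply as:
import numpy as np
np.mean(scores)  # roughly 20, while lca.score is about 18.3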
If I repeat the MC computation several times, the MC mean is always higher than the static value (even with a large number of iterations), and I cannot understand why.
I looked at the uncertainty of the activity, and it seems properly defined for each exchange (mostly lognormal uncertainty here). I also tried this for other randomly chosen activities in the ecoinvent database and always reached the same conclusion.
Does it come from the uncertainty representation in the ecoinvent database, or did I miss something?
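Roughly, this is how I inspected the exchange uncertainty (the fields printed are the usual bw2data/stats_arrays ones: 'uncertainty type', 'loc', 'scale'):
for exc in cell.technosphere():
    # 'uncertainty type' follows the stats_arrays codes (2 = lognormal, 3 = normal, ...)
    print(exc.input['name'], exc['amount'],
          exc.get('uncertainty type'), exc.get('loc'), exc.get('scale'))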
Most uncertainty in ecoinvent is defined with the lognormal distribution, and the static value given in the inventory datasets is the median, not the mean, of that distribution. For a lognormal with underlying parameters μ and σ, the median is exp(μ) while the mean is exp(μ + σ²/2), so the mean exceeds the median whenever σ > 0. It is therefore expected that Monte Carlo uncertainty analysis yields average values higher than the static score.
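A quick numerical illustration with plain NumPy (not tied to any particular ecoinvent exchange): sample a lognormal and compare its median and mean.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.0, 0.5  # parameters of the underlying normal distribution
samples = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

print(np.median(samples))  # close to exp(mu) = 1.0, the "static" value
print(samples.mean())      # close to exp(mu + sigma**2 / 2) ≈ 1.13
For the same reason, the median of your Monte Carlo scores should sit much closer to the static 18.3 than their mean does (not exactly equal, though, since the median of a sum of lognormals is not the sum of their medians).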
You can read more about the lognormal in LCA and the representative value of the lognormal.