I recently found this library that provides its own types and operations on real numbers, and claims to be 2 to 3 orders of magnitude faster than normal floating-point arithmetic.
The library is based on a different representation for real numbers, one described as both more efficient and more mathematically accurate than floating point: the posit.
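For context, a posit<nbits, es> packs a sign bit, a variable-length regime field, up to es exponent bits, and fraction bits in whatever space remains, which gives tapered precision that is densest near 1. Here is roughly what using the library looks like; I'm assuming a recent release of Universal, since the header path and namespace have varied across versions:

```cpp
// A minimal sketch, assuming a recent release of the Universal library
// (https://github.com/stillwater-sc/universal); the header path and the
// namespace below have varied across versions.
#include <iostream>
#include <universal/number/posit/posit.hpp>

int main() {
    using namespace sw::universal;

    // posit<32, 2>: 1 sign bit, a variable-length regime field, up to
    // 2 exponent bits, and the remaining bits as fraction.
    posit<32, 2> a = 0.1;
    posit<32, 2> b = 0.2;
    posit<32, 2> c = a + b;  // arithmetic operators are defined on posits

    std::cout << "0.1 + 0.2 = " << c << '\n';
    return 0;
}
```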
If this representation is so efficient, why isn't it widely used in all sorts of applications and implemented in hardware? Or maybe it is? As far as I know, most typical hardware uses some form of the IEEE 754 floating-point representation for real numbers.
Is it perhaps only applicable to some very specific AI research, since that is mostly what they list as examples?
If this representation is not only hundreds to thousands of times faster than floating point but also much more deterministic and designed for use in concurrent systems, why isn't it implemented in GPUs, which are essentially massively concurrent calculators operating on real numbers? Wouldn't it bring huge advances in rendering performance and GPU compute capabilities?
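To illustrate the determinism point: IEEE floating-point addition is not associative, so a parallel reduction can give different answers depending on the order in which threads combine their partial sums. Posits address this with the quire, a wide fixed-point accumulator that sums dot products exactly, making the result independent of reduction order. The snippet below is plain standard C++, with no posit library involved, just to show the non-associativity:

```cpp
// Plain IEEE-754 demo: addition is not associative, so the grouping of a
// sum changes the result. A parallel reduction that combines partial sums
// in a nondeterministic order is therefore not bit-reproducible.
#include <cstdio>

int main() {
    double a = 1e20, b = -1e20, c = 1.0;

    double left  = (a + b) + c;  // (0) + 1 = 1
    double right = a + (b + c);  // c is absorbed into b, so 1e20 - 1e20 = 0

    std::printf("(a + b) + c = %g\n", left);   // prints 1
    std::printf("a + (b + c) = %g\n", right);  // prints 0
    return 0;
}
```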
Update: The people behind the linked Universal library have released a paper about their design and implementation.
The most objective and convincing reason I know of is that posits were introduced less than 4 years ago. That's not enough time to make inroads in the marketplace (people need time to develop implementations), much less take it over (which, among other things, requires overcoming incompatibilities with existing software).
Whether or not the industry wants to make such a change is a separate issue that tends towards subjectivity.