Specifically on how they are advertised. A lot of posit proponents are saying things like "look at this example, my drop-in replacement for float handles it correctly!"
For ML specifically, 8-bit posits do appear to strike a nice balance between int8 and bfloat16: more dynamic range than int8 in the same byte, with precision that tapers away from 1.0. The chance that they make it into scientific computing seems small - doubles were designed for exactly that.
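To make the trade-off concrete, here's a rough sketch of decoding an 8-bit posit, assuming the es=2 layout from the 2022 Posit Standard (early 8-bit proposals used es=0); the decode_posit helper name is just for illustration:

```python
def decode_posit(bits: int, nbits: int = 8, es: int = 2) -> float:
    """Decode an nbits-wide posit with es exponent bits to a float.

    Sketch of the classic decoding rules; the 2022 Posit Standard
    fixes es=2 for every width, including posit8.
    """
    mask = (1 << nbits) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (nbits - 1):
        return float("nan")                      # NaR ("not a real")

    sign = 1.0
    if bits >> (nbits - 1):                      # negative: decode the two's complement
        sign = -1.0
        bits = (-bits) & mask

    body = bits & ((1 << (nbits - 1)) - 1)       # the nbits-1 bits after the sign bit
    pos = nbits - 2                              # start at the first regime bit

    # Regime: run length of identical bits sets the coarse scale.
    regime_bit = (body >> pos) & 1
    run = 0
    while pos >= 0 and ((body >> pos) & 1) == regime_bit:
        run += 1
        pos -= 1
    k = run - 1 if regime_bit else -run
    pos -= 1                                     # skip the regime terminator bit

    # Exponent: up to es bits; bits that fall off the end read as zero.
    exp = 0
    for _ in range(es):
        exp <<= 1
        if pos >= 0:
            exp |= (body >> pos) & 1
            pos -= 1

    # Fraction: whatever bits remain, with an implied leading 1.
    frac_bits = pos + 1
    frac = body & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
    mantissa = 1.0 + (frac / (1 << frac_bits) if frac_bits > 0 else 0.0)

    return sign * 2.0 ** (k * (1 << es) + exp) * mantissa


# Tapered precision in action: fine steps near 1.0, huge range at the extremes.
print(decode_posit(0x40), decode_posit(0x41))    # 1.0, 1.125
print(decode_posit(0x01), decode_posit(0x7F))    # ~5.96e-08, 16777216.0
```

The regime run length is what buys a single byte a roughly 2^-24 to 2^24 dynamic range (int8 stops at ±127), at the cost of fraction bits that taper off away from 1.0 - which is exactly where the int8/bfloat16 middle ground comes from.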