They should have shown the car moving. You'd have seen the reflection slide up the hood in the raytraced demo.
It takes A LOT of work and planning to get light and shadow correct in a raster setting. Most AAA games don't bother. Raytracing makes it easy to get them right, which will make all the difference in the world.
I don't think it _is_ super obvious. Rather, I think the goal is to properly fill in the scene with the very subtle clues we unconsciously look for.
I had a similar experience working a bit with Radiance years ago. The output looked mediocre at best, but it was reporting a real view of the scene; its claim to fame is that it outputs real energy values in physical units (energy per unit solid angle, luminance, and so on). Despite the cartoony models I provided, it demonstrated subtleties that weren't hacks (which mattered for this application), things you wouldn't think to, or bother to, implement directly.
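To give a sense of what "real energy values" means here, this is a minimal sketch of converting an RGB radiance triple to photometric luminance. The Rec. 709 weights and the 683 lm/W peak luminous efficacy are standard constants, but treating each channel this way is a simplification, not Radiance's exact internal convention:

```python
def luminance_cd_per_m2(r, g, b):
    """Rough luminance (cd/m^2) from RGB spectral radiance (W/(sr*m^2)).

    Simplified sketch: weight the channels by Rec. 709 relative-luminance
    coefficients, then scale by the 683 lm/W peak photopic efficacy.
    """
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 relative luminance
    return 683.0 * y  # lm/W at the 555 nm sensitivity peak

# A dim, evenly lit patch at 0.1 W/(sr*m^2) per channel:
print(luminance_cd_per_m2(0.1, 0.1, 0.1))  # -> 68.3 cd/m^2
```

The point is just that the renderer's pixel values map back to measurable quantities, which is what makes those subtle effects fall out for free rather than being hand-tuned.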
This? This captures a lot of the needed visual effects faster and more correctly, without crazy hacks (or so it seems; it does say hybrid rendering).
I think that's because they used poor art assets. Most games look far more pleasing and realistic than those examples while using traditional rasterization pipelines, mostly on the strength of the artists and an offline prebaking stage.
There's a reason Nvidia employs top CG talent for their hardware demos. A pretty demo sells more than an ugly one, even if they essentially pull off the same effect.
Today's mobile devices, especially the higher-end ones, are quite powerful. They offer more processing power, memory, and storage than laptops did just a few years ago, and desktops just a few years before that. So I don't think "doing it on mobile" is that powerful an argument any longer.