Because if you use the protobuf wire format, all the parsing code needs to be written in JS and won't be as performant as JSON parsing built into the browser.
But yes you can in fact transform protocol buffers into JS arrays in the way I described. I'm essentially describing protobuf designed for JS. Imagine your protobuf definitions are read by a compiler which spits out JS classes with getters and setters. These getters and setters access the underlying array with an assigned index. Your minifier inlines these getters and setters into direct array access. Voilà.
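To make that concrete, here's a minimal sketch of what such generated output could look like (the message, field numbering, and fromJSON/toJSON helpers are all made up for illustration):

    // Hypothetical output of a "protobuf-to-JS-array" compiler for a schema like:
    //   message User { string name = 1; uint32 age = 2; }
    // Each field gets a fixed slot in a plain array instead of an object key.
    class User {
      constructor(data = []) {
        this._data = data;
      }
      // Accessors are trivial enough for a minifier/inliner to collapse into
      // direct _data[0] / _data[1] accesses at the call sites.
      get name() { return this._data[0]; }
      set name(v) { this._data[0] = v; }
      get age() { return this._data[1]; }
      set age(v) { this._data[1] = v; }
      // The wire format is then just an array, so the browser's native
      // JSON parser does the heavy lifting.
      toJSON() { return this._data; }
      static fromJSON(arr) { return new User(arr); }
    }

    // The payload on the wire is ["Ada",36] instead of {"name":"Ada","age":36}.
    const user = User.fromJSON(JSON.parse('["Ada",36]'));
    console.log(user.name, user.age); // "Ada" 36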
Right, but the bottleneck is generally not "how long it takes to parse the payload". For anything short of megabytes of data, I doubt you'll notice much of a difference between parsing JSON and flatbuffers. The bottleneck is how large the payload is and how long it takes to send over the wire.
But transferring the parser happens once, transferring JSON happens every time.
Really depends on the use case I guess. But any situation where I'm using JSON arrays instead of keyed objects for efficiency reasons is probably a situation where flatbuffers makes just as much (if not more) sense.
Having spent a LOT of time looking at this: browsers come with built-in, effectively zero-cost JSON parsers.
You need a proto parsing lib and a collection of .proto schemas to even begin using protobufs, so you need to be saving at least that much payload before proto even starts being a win. While the parsing lib can be cached and is largely a rounding error over the long term, every iteration of the .proto files means fetching a new version that contains all the contents of the previous version (or else sacrificing backwards compatibility).
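For comparison, a sketch of the minimum ceremony involved, using protobuf.js as one example of such a lib (the schema URL, endpoint paths, and message name are hypothetical):

    import protobuf from "protobufjs"; // exact import shape depends on your bundler setup

    // JSON: the parser ships with the browser.
    const viaJson = JSON.parse(await (await fetch("/api/user")).text());

    // Protobuf: the parsing lib and the schema both have to reach the
    // client before the first decode can happen.
    const root = await protobuf.load("/schemas/user.proto"); // hypothetical schema URL
    const User = root.lookupType("mypkg.User");              // hypothetical message name
    const bytes = new Uint8Array(await (await fetch("/api/user.pb")).arrayBuffer());
    const viaProto = User.toObject(User.decode(bytes));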
Beyond the additional payload costs, you also have to factor in the API itself. Any win on keys can largely be obtained via compression, so that's only a nominal gain. APIs with many string values are not going to see much benefit either, and may actually be better served by compression. The real win for proto is in large numbers, but there aren't many APIs using lots of values in the 256-65k range (let alone higher). Proto does do really well with booleans and null, though. Unpacked arrays aren't a really strong win for them either (though packed ones are a win for large arrays). They also have weird quirks for maps that keep them from achieving parity with JSON, IIRC.
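A rough back-of-the-envelope for that "large numbers" point, ignoring keys and compression, and assuming a 1-byte field tag (i.e. field numbers under 16):

    // Bytes an unsigned int needs as a protobuf varint (7 payload bits per byte).
    function varintBytes(n) {
      let bytes = 1;
      while (n >= 128) {
        n = Math.floor(n / 128);
        bytes++;
      }
      return bytes;
    }

    for (const n of [7, 300, 70000, 5000000000]) {
      const jsonBytes = String(n).length;    // digits on the wire as JSON text
      const protoBytes = 1 + varintBytes(n); // 1-byte field tag + varint
      console.log(`${n}: JSON ${jsonBytes} bytes vs proto ~${protoBytes} bytes`);
    }

Small numbers come out even or worse for proto; the encoding only pulls ahead as values get large.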
Parsing time is not a huge win given normal API response sizes. I was parsing a JSON blob with 100k values four years ago on a shitty Dell in 2 seconds, and I can't think of anything near that size in the wild. Most API responses are going to be parsed faster than human perception, rendering the point mostly moot.
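If you want to sanity-check that on your own hardware, a quick timing sketch (the blob shape is an arbitrary stand-in, and the numbers will vary wildly by machine):

    // Build a 100k-value JSON blob and time the native parser on it.
    const blob = JSON.stringify(Array.from({ length: 100000 }, () => Math.random()));

    const t0 = performance.now();
    JSON.parse(blob);
    const elapsed = performance.now() - t0;

    console.log(`parsed ${(blob.length / 1024).toFixed(0)} KiB in ${elapsed.toFixed(1)} ms`);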
The real win is the direct impact on bandwidth spend, which scales with payload size, but that comes at the cost of developer productivity, and not everyone has Google's war chest and can afford SWEs memeing about how they got promoted by spending 2 years updating protos.
Having worked at Google, I'd say Protobuf is a solid choice when you're working in multiple languages across multiple internal machines and haven't already bought into other means of serializing data. But it does not particularly shine when targeting browsers unless there is a LOT of data going back and forth and your front-end engineering team doesn't mind working around jspb's quirks, opaque errors, and subtle nuances.