HoloLens secret sauce: A 28nm customized 24-core DSP engine built by TSMC (theregister.co.uk)
267 points by runesoerensen on Aug 23, 2016 | 132 comments


For those who haven't had the pleasure: developing on Tensilica Xtensa cores generally means living within 128-256KB of directly-accessible memory; a windowed register file that makes writing your own exception handlers "interesting"; a 6-year-old GCC bolted to a proprietary backend; per-seat licensing fees to use the compiler; and a corporate owner that's only halfway interested in the ecosystem they now control.

So yeah, kind of wishing it would just die and let ARM take over the embedded space.


I'm not personally opposed to Tensilica "dying", especially if it doesn't involve Cadence dying, since they are a (somewhat indirect) competitor from my point of view, but ARM is not a substitute for Tensilica. You can't extend its ISA unless you license the architecture for $30M. A DSP like Tensilica's is also much more efficient than ARM on a range of tasks, and in particular, having local memory instead of caches is done for a reason. (GPUs, the least efficient accelerator of all and the favorite of academics who have easy access to them, also have this.)

As to their compiler licensing - that's what happens when you develop for a small niche: you get more expensive tools which are worse than the free ones used by the majority. But that doesn't mean the thing doesn't have its uses. I hear that a recent chip by AMD had 40 Tensilica (smallish, inaccessible to most software) cores.

The same is true about CEVA (which was mentioned in a sister thread), more or less.


The least efficient accelerator of all ... GPUs

Interesting, could you please elaborate on why you think this? Any data you know of on the subject? What are better alternatives in your view?


In terms of performance per watt or per dollar, GPUs are only good at graphics. If you're doing deep learning, for instance, GPGPU will lose to almost any other accelerator: Google's TPU is one example, FPGAs are another, and a DSP designed for deep learning by CEVA, Tensilica or Synopsys is a third. By "lose" I mean factors of 5-10x at least, assuming the same process node and operating environment, especially when deploying networks as opposed to training them. But deep learning is just one example; it'll be the same story elsewhere. The strength of GPGPU is high availability and a relatively simple & convenient programming model relative to other accelerators.


Cadence is an EDA company. They have a lot of other software they sell.


Precisely, and I use it, so wouldn't want Tensilica to die as a part of Cadence dying.


Ceva DSP is almost the same story with the compiler. They used to have GCC 2.95 glued to their proprietary backend; 33% of compilations resulted in a crash :-). They got better by moving to GCC 4.4, but still use crazy licensing. Other DSP cores are often even worse, offering as little as only an assembler.

But the DSP core itself is nice, very well thought out, in contrast to mainstream DSPs from TI or Freescale (now NXP).


> But the DSP core itself is nice, very well thought out, in contrast to mainstream DSPs from TI or Freescale (now NXP).

Could you please elaborate on this statement?


Hardware companies usually treat software as an afterthought.

The ones that manage to get software right end up better in the long run.

But I suppose in the case of MS, they will have low-level information to make MS compilers target it directly, without intermediaries.


> a 6-year-old GCC bolted to a proprietary backend

Not only a violation of the GPL, but for code owned by the FSF and even Stallman himself.

That's a bold move. And very douchey.


From what I can tell, they have structured it in a way that it's a violation in spirit, but not in letter.

They don't link to GCC code directly; instead they output the intermediate representation and execute a standalone binary to finish processing the IR. They offer source code for all their GCC modifications, but carefully work around the GPL to accomplish the same thing.
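The two-process structure described above can be sketched roughly like this. The tool names and flags here (xt-gcc, xt-backend, -emit-ir) are hypothetical, invented purely for illustration: the point is only that the GPLed front end and the proprietary back end are separate executables, coupled by nothing except a serialized IR file.

```python
def frontend_cmd(source, ir_out):
    # GPLed, modified GCC front end: parses the C source, dumps IR.
    return ["xt-gcc", "-S", "-emit-ir", source, "-o", ir_out]

def backend_cmd(ir_in, obj_out):
    # Separate proprietary executable: reads the IR, emits machine code.
    return ["xt-backend", ir_in, "-o", obj_out]

def compile_pipeline(source):
    stem = source.rsplit(".", 1)[0]
    ir, obj = stem + ".ir", stem + ".o"
    # Two independent processes; the only coupling is the IR file format.
    return [frontend_cmd(source, ir), backend_cmd(ir, obj)]

stages = compile_pipeline("main.c")
```

Whether that file-format boundary is enough to escape "derivative work" status is exactly the question the rest of this thread argues about.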


Yeah, they tried to obfuscate their internals precisely so this wouldn't happen. Notice how it didn't stop anybody, but instead led to a poor user experience.


Typically those are creative workarounds for the GPL, not actual violations.


The question of whether what they've produced is a derivative work of GCC is one for the courts, which tend to take a dim view of creative workarounds.


Absolutely not. The GPL boundary is very clearly defined and intentionally has never been brought to court.


"The broadest and most established definition of derivative work for software is the abstraction, filtration, and comparison test (“the AFC test”) as created and developed by the Second Circuit. Some circuits, including the Ninth Circuit and the First Circuit, have either adopted narrower versions of the AFC test or have expressly rejected the AFC test in favor of a narrower standard. Further, several other circuits have yet to adopt any definition of derivative work for software.

As an introductory matter, it is important to note that literal copying of a significant portion of source code is not always sufficient to establish that a second work is a derivative work of an original program. Conversely, a second work can be a derivative work of an original program even though absolutely no copying of the literal source code of the original program has been made. This is the case because copyright protection does not always extend to all portions of a program’s code, while, at the same time, it can extend beyond the literal code of a program to its non-literal aspects, such as its architecture, structure, sequence, organization, operational modules, and computer-user interface."

https://copyleft.org/guide/comprehensive-gpl-guidech5.html


How so? The GPL grants permission to distribute derivative works in general under the GPL, and additionally to distribute derivative works consisting of object code that is the compiled form of system libraries (narrowly defined) without including that source (though AIUI still under the GPL); that's all. If someone were distributing a derivative work of a GPLed program not under the GPL and therefore without permission (i.e. without being otherwise licensed by the copyright holder), that would be copyright infringement. The question of what is or is not a derivative work in the general case is absolutely a question for the court. The FSF explicitly says as much: https://www.gnu.org/licenses/gpl-faq.html#MereAggregation


All of this is irrelevant. The GPL stops at the process boundary and that's exactly what these compilers do. They provide the modifications to GCC itself under the GPL and then have separate executables that are proprietary that feed things in and out of GCC.


It's still pretty clearly a violation of the intentions of the gcc developers, and something that only exists to wrap one product seems very suspiciously like a derivative work. I'd be curious to see this tested in court.


But the FSF would not be curious to test this in court, because they know they'd lose. I had this conversation many years ago with people who wrote exactly this type of software, and the general consensus was that the FSF would stay away from probing the GPL there, because there is a good chance the ruling would set a precedent they don't want. The FUD surrounding the license in that regard is preferable.

In general, it's 2016 and no longer relevant. The world has moved on to LLVM, and these GCC-based toolchains are on their way out for good.


Neither the text of the GPL nor the FSF FAQ explicitly supports the "stops at process boundaries" interpretation, though a form of it has been a popular interpretation since at least the GPLv2 period.


> The GPL stops at the process boundary

No it doesn't, and if it did that would only mean that processes that form derivative works of GPLed programs were unlicensed, and therefore a violation of copyright.


What the FSF wants the boundary of the GPL to be is irrelevant, since the copyright law of the relevant jurisdiction determines whether a copyright license is even necessary, and the boundary of derivative works in copyright is both often unclear (as noted in sibling comments, different circuits in the US apply different tests that have not been harmonized by the Supreme Court) and differs between jurisdictions.

The GPL cannot create certainty that doesn't exist in the law on which the GPL relies to have any force.


That's the company/family powering the ESP8266 SoC, isn't it (LX106)? Is it just their DSP chips that use that weird GCC/proprietary frankenstein? I ask because I have an Xtensa LX106 cross-compiler for the ESP8266 chip installed, which appears to be GCC 4.8.2.


The ESP8266 community has their own, community-developed GCC port for the LX106 that they use. It was one of the reasons that it took so long for there to be a practical way to run your own code on it. I believe it was based on an existing out-of-tree port for one of the other not-quite-compatible chips in the family.


I'd be interested in running Rust on the ESP8266 - are you aware of any efforts of getting LLVM to work on the LX106?


The C backend to LLVM was the most promising way to get Rust onto a lot of embedded devices; sadly, it has stalled.


Yeah it is. Maybe there is something about the Tensilica cores, as the ESP8266 still hasn't got a real contender? Apparently they have a special toolchain that lets customers create customized cores in a more or less automated way, and they started as a dataplane/DSP company in the 90s. Quite weird.


Not sure what the competition looks like. I saw this the other day but I haven't read much more about it: http://hackaday.com/2016/07/28/new-chip-alert-rtl8710-a-chea...


True, that one looked ok, but 48k of RAM is a step down, and I don't see any other peripherals (e.g. bluetooth) that would make me want to switch.


The first real contender to the ESP8266 is set to release September 1st: the successor to the ESP8266, the ESP32 - now with dual Tensilica cores (and a bunch of other stuff).


I'm waiting for this one to drop like a little girl waiting for a pony on Christmas. Hope it's not going to be a disappointment for some reason.


That sounds like a violation of the GPL or at least some sort of trickery.


Isn't that par for the course for DSPs? Nowadays I see a lot of hybrid DSPs with ARM cores on which you can run whatever GCC you want, but the main processing cores that do all that DSP goodness usually require some proprietary software from the vendor to operate.


How does the ESP8266 SDK get around this? They're using an Xtensa core but make the SDK freely available (it's horrible, but it's free).


There is a community-driven fork of crosstool-ng, which in turn is based on a GCC fork made by a Tensilica/Cadence employee (jcmvbkbc). Both GCC and GDB are ported to the call0 calling convention (no register windows on the LX106).

It's quite a mess, but it's getting better.

We (Cesanta) packaged and cleaned up the build environment: https://github.com/cesanta/mongoose-iot

You can use our Docker-based toolchain. It contains a few patches (malloc, stdio, moved all text sections to flash, ...), a GDB stub and serial crash dumper, a working OTA solution, and a sensible event-driven networking API (based on the good old Mongoose web server) for those who just can't wrap their heads around espconn.

Please take a look. The embedded JavaScript interpreter can be disabled if not necessary.


That was my thought too. I guess it worked well for them, but it certainly wouldn't be my first choice. I wonder what the 10 custom instructions were?

ARM licensed cores don't have an easy way to add instructions, but they do have the TCM bus which might be low latency enough, depending on what they were trying to do.


I've been involved in design of an ASIC that used a customized Tensilica DSP, circa 2009. The custom instructions were the main reason we opted for that core. There are a ton of other customizations that you can make with Tensilica so you end up with a core that meets 100% of your needs with very little fluff -- almost as if you rolled your own.

I do wish ARM would get in on that action.


Radeon R600 (HD 2000/3000 series) also used Xtensa as an embedded controller and in the Unified Video Decoder.


The "add your own instructions" thing is pretty cool, no doubt. I would be surprised if most ASIC designs really need the low latency that provides, especially considering the other costs involved in adopting a minor-league ISA.


They do... the Cortex A/M coprocessor scheme can be used to add instructions & functionality IIRC.


Would there be any benefits from newer GCC on such specific and simple (compared to general-purpose CPUs) hardware?


- Fixes for compiler bugs; probably not that likely, but I'm used to knowing that on a free toolchain I can always try a newer version to see if it exhibits the same behavior. Being locked to an old version feels risky.

- New language dialects (e.g. C++11). It's annoying to not be able to use the same techniques you use elsewhere because your ASIC core vendor made your compiler decision for you.

- New compiler features. It's no fun to have some clever trick I use other places fail to build because it depends on a compiler warning or some other GCC-ism that's present in all my other environments.

I also like to write unit tests and other simple mockups that can build and run on a plain linux machine, so it's nice to keep the delta between the two compilers as small as possible.


Error messages, for one.


Holographic Processing Unit (HPU) chip used in its virtual reality HoloLens specs

It seems to be insurmountably hard for press to understand that virtual reality and augmented reality are related but distinct concepts.

Beyond that, interesting teardown. The 10W number is great, but my guess is a lot of the high-power processing happens in the sensor suite, because it's using 4 repurposed IR sensor/receiver combos to relay depth data in a highly structured way. That means this processor is the glue between the IMU and the RGBD camera combo. I think in the end this can't scale down to the consumer side with this approach, not to mention the other hindrances to scaling down with the projection system.


> It seems to be insurmountably hard for press to understand that virtual reality and augmented reality are related but distinct concepts.

The media write in the language understood by the reader, not the jargon of the domain expert. Once the AR-VR distinction filters down to most readers, media writers will dutifully follow suit.


Like describing a semi-automatic weapon as "automatic", frequently misusing "assault rifle", or calling a magazine a "clip"? No, the press is full of people who barely understand what they're writing about, they aren't purposefully dumbing down content for the reader. I certainly don't envy them, because it would be impossible for them to always know what color the boat house is unless they stuck to one very narrow field.


The media can describe the distinction when it is relevant to the issue at hand, possibly with the jargon of the field and possibly without. In the article, the distinction between AR and VR wasn't relevant.


They can when the mistake is too obvious to be overlooked. Pretend for a moment that this article is your introduction to the poorly named HoloLens. The distinction between AR and VR is incredibly important to a story about the thing's sensor DSP. AR and VR use sensors very differently, and a VR chip is going to be much less impressive than an AR chip.


Bangy thing, long dakka dakka boomstick, hologram. Just be glad you don't fly quadcopters or you'd get drone rage, too. /s


How can the AR-VR distinction filter down to most readers?

I'd say the media also has a responsibility to broaden the scope of what the readers understand...


In my experience as an AR dev you have to actually experience it to understand.


Why? The difference is easy to explain. AR overlays things on your vision, VR replaces it with something else.


With your description, someone might just think of what you see with a military jet HUD or Google Glass. Saying "overlay" doesn't tell you anything about depth.


A HUD is AR; some HUDs are just more impressive than others. When you are talking about the difference between printers and monitors, the function of either is so different that it doesn't make sense to describe SVGA and XGA monitors separately.


An ordinary person's understanding of holograms (i.e. as a shimmery 3D light projection, like in Star Wars) is actually quite close to AR, which presumably is why MS have chosen to use that term. It's a bit more immediately understandable than "augmented reality".


I think he is referring to the use of "virtual reality" instead of "augmented reality" and not to the use of "holographic".


> It seems to be insurmountably hard for press to understand that virtual reality and augmented reality are related but distinct concepts.

Microsoft hasn't helped in this case, with all their blabbering on about "holograms" when as far as I can tell, the images it produces are not holograms in any conventional sense of the word (although the diffraction-based wave guiding is pretty cool.)


Isn't VR just AR with the camera covered?


Not really. VR is an entire immersive environment to itself, it doesn't have to interact with reality.

AR has to blend into the environment around it, so it's a different class of problem. It has to locate points in the scene to add things to, and project them in a believable manner.


Turn off the lights. It's VR.


Not really, turn out the lights and you'll stop seeing AR at all.

VR will, for example, create a room with a table and put a chess game on that table. This all happens within the VR author's control and designs.

AR has to analyse the scene and find the table, find the angle the table is at relative to the eyes, and compute the position and perspective of the chess board so it looks correct on the table.

Both have the same end result - images projected to a user's eyes - but how they get there is quite different.
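To make that AR-specific step concrete, here is a toy sketch (all numbers and the simple pinhole model are illustrative, not how HoloLens actually works): once the table's pose is estimated, a virtual piece is anchored by transforming its world position into the camera frame and projecting it into pixels.

```python
def transform(R, t, p):
    # Apply a 3x3 rotation matrix (list of rows) and a translation
    # vector to a 3D point: world frame -> camera frame.
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def project(p_cam, f=500.0, cx=320.0, cy=240.0):
    # Pinhole projection of a camera-frame point into pixel coordinates.
    x, y, z = p_cam
    return (f * x / z + cx, f * y / z + cy)

# Identity pose: a point 2 m straight ahead lands at the image center.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = project(transform(identity, [0.0, 0.0, 0.0], [0.0, 0.0, 2.0]))
```

VR skips the hard part entirely: R and t come from the author's virtual scene, not from analysing a live camera feed.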


The HoloLens does not proxy the user's viewport through a camera; the user looks at the real world through a semitransparent visor, and content is overlaid on that visor.

BTW, the viewport is quite narrow in the current version, but I have to say, tracking works REALLY well.


Flippant, but:

VR: Everything is in the computer

AR: The computer is in everything


I was about to disagree with you but then I realized you're right. Yes, exactly, VR is just AR with the camera covered. It's a subset of the AR challenge, which is much more difficult.


I'd flip it around. AR is VR where you're given an exogenously-generated background into which you have to integrate your elements. Said background owes you no duty to coöperate.


HoloLens 3D miniMap of House with Synced Ladies +1 https://www.youtube.com/watch?v=T-JvTZjbwNs GG:)


That cracked me up. Is that blue circle a gaze tracker? Because it keeps pointing at virtual boobs whenever he looks at the models :)) Not to mention sending the avatar to the kitchen, priceless :D


It is! I feel like gaze tracking is one thing that's missing from VR headsets that's in the hololens. Social VR won't work until your eye movement and facial expressions can be translated into the application.


I'm sure gaze tracking would make rendering more efficient: just render the stuff the user is looking at in high def, and the rest with low polygon counts and low-rez textures.


Unlike the head, eyeballs rotate too fast.

Wikipedia says “the peak angular speed of the eye during a saccade reaches up to 900°/s in humans”.

For reality-like experience, you need to have sub-millisecond rendering latency. For a moderately-complex 3D scene, current GPUs can’t do anything close to that.


Modern mouse sensors can track ridiculous speeds and accelerations; 6000-10000 frames per second are the norm nowadays. The eye might rotate fast, but your brain turns off the picture during saccades.


You’re right, a low-latency sensor is a minor problem. Low-latency rendering is much harder to solve.

A good-looking 3D scene usually takes around 10-15 milliseconds to render. For the last few decades, GPUs were optimized for throughput, not for low latency.

One common method existing VR products use to achieve lower latency: render into a larger buffer, then shift the result to account for head rotation since rendering started, then present to the VR headset.

With dynamic LOD based on eye targets, this trick will not work. To show better detail at the center, you have to actually re-render the scene using better LODs/textures near the view center. And that, my friend, is going to take 10-15ms.
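A quick back-of-envelope check of that argument, using the 900°/s peak saccade speed and the 10-15 ms render times quoted above:

```python
# How far does the eye travel during one foveated re-render?
# 900 deg/s is the peak saccade speed cited above; 10-15 ms is the
# render time for a good-looking scene, also from the comments above.
PEAK_SACCADE_DEG_PER_S = 900.0

def eye_travel_deg(render_ms):
    return PEAK_SACCADE_DEG_PER_S * render_ms / 1000.0

lag_fast = eye_travel_deg(10)   # degrees moved during a 10 ms render
lag_slow = eye_travel_deg(15)   # degrees moved during a 15 ms render
```

At peak saccade speed the eye sweeps roughly 9-13.5° during a single re-render, far more than the few degrees of high-acuity fovea (that fovea figure is my own assumption, not from the thread), which is why naive re-rendering can't keep up with the gaze.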


That's a great idea, actually... Microsoft needs to work on the peripheral field of view.


Yes, the blue circle is a gaze cursor... great attention to detail! Subconsciousness took over.


Interesting how he walked around the 3D characters instead of taking the easy way through them.


It's kind of unpleasant to have your head stuck into a virtual object. People generally avoid objects in VR, too.


Yes, because walking through them sometimes causes minor turbulence :)


Lay speculation here:

If the first run is on a 28nm process, does this suggest the second generation on e.g. Intel's 14nm process might yield a drastically more powerful HoloLens model in the current form factor, or at least a more compact one capable of all the same (assuming the optics were also compacted) for revision 2?

There's a whole ton of stuff I'm not taking into account such as the physical size of the current HPU, but my guess is that this is the largest point of improvement just going by the process alone.


Pure speculation: 28nm masks and manufacturing are significantly cheaper than 16FF and newer, which likely helps meet the cost of HoloLens at the volume they're forecasting. Moving to 16FF or Intel's 14nm allows for more processing cores at a fixed die size, but also makes it more expensive.

There's a reason it's called the bleeding edge. :)


Correct. My co backed off 16FF in favor of 28nm for exactly this reason.


What is your company and product? I’m just curious.


Would they want to compact the optics though?

I have absolutely no knowledge of the physics involved, but it looks like the projection style that is used has a very limited field of view that won't be resolved with more processing power. I don't see how shrinking the optics could help this.

Because of this, it looks like future immersive improvements will be restricted, but I would be very happy to have it explained to me why I am wrong :)

Edit: my pessimism around FOV was from reading this: http://doc-ok.org/?p=1274


The main thing people complain about in the HoloLens is the field of view; I don't know how much these chips are the bottleneck for that, but it seems like a problem you can throw more cores at, so I'm sort of expecting them to solve it that way, but I don't know a whole lot about hardware.


I believe it's not the CPU or GPU limiting the field of view. If you read the patents behind the waveguides, the real limitation seems to be in the waveguides themselves: they need to be thin enough for light to pass through, while at the same time refracting light into your eyes at angles that don't cause major image distortion.


I haven't tried them, so I'm going by the models and demos I've seen.

I think the field of view issue is a physical limitation, not software, and only slightly hardware-related. The peripheral vision is blocked out by the components. To achieve a wider field of view you have to shift components further back, out of viewing range, and then modify the screens to wrap around the new viewing frame.

I don't think this has any impact on field of view, for that reason.


That's almost certainly a limitation of the projector.


Depending on what format the IP was provided in, and the level at which Microsoft made their customization, it's not necessarily possible to just transplant it onto a newer process node.


No. In fact, pure performance will very likely be lower on the 14nm process. They might get better power/performance, but I would be willing to bet that they won't even achieve that. The only thing guaranteed is higher density, aka more of those Tensilica cores.


Why would that be the case? 14nm FinFet designs tend to have a higher clockspeed (at least at the lower TDPs) while still consuming less power.


  e.g. Intel's 14nm process
Intel doesn't let anyone else use their fabs. TSMC has their own fabs, and their own fab tech.

14nm is outdated, by the way. TSMC says they'll start shipping 10nm parts by the end of the year: http://en.ctimes.com.tw/DispNews.asp?O=HJZ4GC65UYSSAA00NW


That changed a week ago (Intel making chips for others).

http://www.recode.net/2016/8/16/12507216/lg-chip-manufacture...

Also, it's important to understand that chip process measurements are not standardized across the industry. "14nm" means different measurements for different components across vendors. See: http://www.extremetech.com/computing/221532-tsmc-will-begin-...



Yeah, Intel Foundry has been alive and chugging for a while now, fabbing chips for others.


Intel absolutely lets customers use their fabs.

http://www.oregonlive.com/silicon-forest/index.ssf/2014/07/i...


> Intel doesn't let anyone else use their fabs.

I don't think this is true any more.


Did they just buy a license for ARM or announce they'd let people use their fabs for ARM?


You're talking about the Altera deal a while ago? Intel bought them last year.


Well, this explains a lot. At my university (Russia, Mathematics and Mechanics Faculty of SPbU) there is a lot of investment in computer vision stuff, and it is almost impossible to build fast, low-TDP software for CV without DSP/FPGA/etc.


This article gave a more in-depth look at optics (waveguides) and other parts of Hololens: http://www.tomshardware.com/news/microsoft-hololens-componen...


Only 10W for all that? While I'm a little sceptical, that is an impressive number.


Yeah, but 10W all getting radiated inside a single 12x12mm BGA package? Isn't that a lot? It sounds to me like that'd run very hot...

I had a project with a Raspberry Pi 2 which was only drawing ~2.5W all up (including the wifi dongle), and it got flaky when the sun shone on the case and warmed it up a bit.


10W is probably on the high-end for passive cooling of such a package, but zero problem with a fan.


For reference: an Intel Atom (e.g. Braswell) CPU has a TDP of 6W and a pretty large passive heatsink.


Also for reference, ~10W is Pentium ~100MHz territory. AFAIR the 133MHz part was the first one that required a fan on top of the heatsink.


FWIW my 486DX2-66 had a fan, and when that fan died we noticed, because of corrupted memory crashing the system. Compared to modern CPU coolers it was downright tiny, but required.


From memory, they were closer to 20mm or 30mm square though, right? (At least the outside package, which I'm assuming is what the 12x12mm spec is referring to.)


DSPs have insanely tiny power per peak OPS. The problem is that they are not good at running general purpose loads.


Lots of small cores is a significantly more power-efficient model than one big core. Picochip, for instance, managed to do spectacular things on a low power budget, but you needed to really think out your model and how you were going to apportion work to the cores.


They should thank Cadence for the IP cores I guess.

On a similar note, an RPi model B consumes ~5W, which is damn good for a full-fledged computer.


I love to see vendors actually modify a chip instead of just using off-the-shelf parts.

But even then, low bar....


Several independent review articles about HoloLens mention the limited view (relatively small view angle) as underwhelming and off-putting. (You can immediately spot a sponsored article if it doesn't even mention that issue.) The iPhone 6 and high-end Android smartphones already have augmented reality apps with SLAM technology that is far more impressive than what MS PR is suggesting HoloLens might one day be able to offer. The E3 presentations in 2015 and 2016 were faked, as we know in the meantime.


> iPhone 6 and high-end Android smartphones already have augmented reality apps with SLAM technology that is far more impressive

This is the most wrong comment I've seen on HN in a long time. The SLAM in Hololens is incredibly impressive and nothing on any smartphone is one hundredth as good, with the possible exception of the still unreleased Project Tango phone.


Could you possibly explain for those of us out of the loop as to why? Otherwise I could simply say:

"This is the most wrong comment I've seen on HN in a long time. The SLAM in smartphones are the same as the one in Hololens. Project Tango should prove even more impressive."

And no one would be any wiser.


SLAM stands for "simultaneous localization and mapping": being in an unknown environment which you need to both map and localize yourself in. I'm familiar with it in the context of robotic navigation, but it seems the HoloLens deals with it too: mapping the environment to place holograms, and localizing within that environment because you need to be able to interact with them.

Most smartphones simply don't have to do this. They don't have to map their environments without user input, and they don't have to localize themselves within a map of the environment that they've built themselves. Project Tango does do AR and environment mapping on a smartphone, so it does deal with the SLAM problem too.
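As a toy illustration of why this is harder than plain mapping or plain localization: the device has to refine its own pose and the map from the same noisy measurements. This 1D "filter" is purely illustrative (real systems use EKFs or graph optimization), and all the numbers are made up.

```python
def slam_step(pose, landmark, odom, range_obs, gain=0.5):
    """One predict/correct cycle of a toy 1D SLAM filter."""
    pose = pose + odom                   # predict: dead-reckon forward
    predicted_range = landmark - pose    # what we expect to measure
    innovation = range_obs - predicted_range
    # Split the correction between our pose and the landmark estimate:
    pose = pose - gain * innovation / 2
    landmark = landmark + gain * innovation / 2
    return pose, landmark

# Robot truly at 0, landmark truly at 10; odometry says we moved 1.0,
# but the range sensor reads 9.2, a slight disagreement to reconcile.
pose, landmark = slam_step(pose=0.0, landmark=10.0, odom=1.0, range_obs=9.2)
```

The key point the code makes: the same measurement residual nudges both the self-position estimate and the map, which is exactly the coupling a phone AR overlay never has to solve.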


Have you even tried one? The small field of view is disappointing, but only because what is in it is so impressive. I'm not sure what you're talking about with the smartphones, but I'm pretty sure they can't show objects in a room with natural-looking distances and depth of field like the HoloLens can.


The HoloLens does not have an accommodative display (natural-looking depth of field). I believe its display is collimated, so that everything is always in focus.


They have gaze tracking so this should be fixable.


gaze tracking is not eye tracking


These kinds of comments are really dismissive. Sure, MS PR did fire up the hype train a bit too much, but in the end they only want to show what it will be capable of in the future, as they are not selling the current version to consumers yet.

I'm pretty sure it will eventually reach the level of the E3 presentation when the hardware gets faster/smaller/cheaper. For now it's just a tech demo, and they have a lot more to actually show than Magic Leap, for example.


This kind of comment I call FUD.

Please watch some of the many available videos on YT and you will see that the HoloLens is a really impressive device (despite the fact that it's just the 1st version).

------

PS. Microsoft announced a couple of days ago that HoloLens is "ready for business" [1]

[1] https://www.microsoft.com/microsoft-hololens/en-us/commercia...


An article that mentions the field of view: http://www.heise.de/newsticker/meldung/Microsoft-HoloLens-im...

https://translate.google.com/translate?sl=de&tl=en&js=y&prev... (scroll down a bit)

Machine translated:

  Compared to VR solutions, however, the HoloLens has a
  significantly limited field of view: especially when viewing
  video on the large virtual canvas, an ugly proscenium effect
  was noticeable. In addition, with black-and-white content the
  glasses showed distinct RGB artifacts, similar to the rainbow
  effect caused by the color wheel of DLP projectors.


This is why I've decided to wait 3+ years to jump aboard the AR-VR train. The tech is still too raw.


Yeah. That [1] is like pokemon go. Or faked.

[1] https://www.youtube.com/watch?v=T-JvTZjbwNs posted elsewhere in the comments.


TLDR; It's running Windows 10 :)


This is a GPU rather than a DSP (the tricky part is rendering, not filtering). Core count is a bit of a pointless metric unless you know how big/powerful a core is.

I like the idea of a self-contained VR headset with no wires, but I don't think we're going to see one for about a decade; there's just too much processing power needed for a realistic experience. I hope I'm wrong!

Edit: It appears this is in fact a DSP (digital signal processor). That's a huge amount of power for dedicated signal processing; I'm intrigued to know what can be done with it.


  This is a GPU rather than a DSP
No? It's a bunch of Tensilica DSP cores on a chip, for image processing stuff. It doesn't do any rendering: http://ip.cadence.com/vision

The Cherry Trail SoC has the GPU hardware.


I think you're wrong about this being a GPU. The first bullet point in the slide titled "HPU: Architecture" says "Sensor aggregator with environment and gesture processing". I don't see any indication of the HPU doing rendering tasks.



castAR is also 'pivoting' from a computer accessory to a self-contained unit.


I'd guess they do the rendering on the DSPs; they're not GPUs, but they'd be a lot better at that than an Atom CPU...


the 'too much' will come in 18 months :D



