Hacker News new | past | comments | ask | show | jobs | submit | b33pr's comments

As a startup CEO who uses Brex and hasn't been kicked off, I have to say I'm strongly considering self-evicting. What a shitty way to handle this.


Same. I'm presuming they'll be kicking off all startups at some point, so we may as well move now.


Same here, looking at alternatives


I’m in favor ¯\_(ツ)_/¯ (also a founder)


Of course! So all the skyrocketing rents across the country have been caused by minimum wage increases! Like when that last increase in the federal minimum wage happened… when was that again?


Thank you so much for pointing this out. We'll get updated numbers out soon. How did you benchmark PlaidML, out of curiosity? The error I correct here (https://github.com/brianretford/nnvm-rocm/blob/master/mxnet_...) was caused by a desire to roughly approximate how Keras does things, and plaidbench with Keras is the easiest way for us to evaluate things, though it definitely adds a lot of overhead. My script roughly matches the numbers I get out of your script, though, to be fair, I think the TVM time_evaluator should be calling Sync inside its loop (I patched it to do so when comparing against your methodology). It doesn't make a huge difference, but the difference does exist.

If I just pull the overall kernel runtime from our logs, I get ~525 inferences/sec.
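The sync-inside-the-loop point can be sketched generically; `run_kernel` and `device_sync` below are stand-ins for whatever launch and synchronization calls a given runtime actually provides (nothing here is TVM's or PlaidML's real API):

```python
import time

def benchmark(run_kernel, device_sync, iterations=100):
    """Time kernel launches, syncing inside the loop so each iteration's
    wall time includes the device work it enqueued, not just the
    (asynchronous) launch overhead."""
    device_sync()  # drain any pending work before starting the clock
    start = time.perf_counter()
    for _ in range(iterations):
        run_kernel()
        device_sync()  # sync per iteration, per the methodology above
    elapsed = time.perf_counter() - start
    return iterations / elapsed  # inferences per second
```

On a CPU-only stand-in both hooks can be no-ops; against a real GPU runtime you would pass the driver's synchronize call. Syncing only once after the loop instead tends to report nearly the same number for long kernels, which is why the difference above is small but real.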


For PlaidML, I used

  plaidbench keras mobilenet

  plaidbench keras resnet50

time_evaluator is what the TVM/NNVM folks use for benchmarking. See their benchmark script here: https://github.com/dmlc/nnvm/blob/master/examples/benchmark/...
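time_evaluator reports a mean per-call time; turning that into an inferences-per-second figure like the ~525 quoted above is just a reciprocal. A quick, framework-free sketch:

```python
def throughput(mean_seconds_per_call):
    """Convert a mean per-call kernel time (as reported by a benchmark
    harness) into calls per second."""
    return 1.0 / mean_seconds_per_call

# A mean kernel time of ~1.9 ms corresponds to roughly 525 inferences/sec.
```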


To expand on this a bit: NNVM is mostly a graph serialization format and graph optimizer with a CUDA/cuDNN (and now TVM) backend. In this respect, NNVM is very similar to XLA. Our approach handles both full-graph optimization (though we have a lot of work to do there) and kernel creation and optimization, through an intermediate language called Tile. TVM seems somewhat derivative of our approach, though it lacks a reasonable mechanism for optimizing kernels.

PlaidML and Tile are able to create optimal kernels for just about any architecture. This approach reduces dependencies and ensures that new hardware will just work.

We intend to add NNVM and TensorFlow backends in the future. The Keras backend is only 2000 lines of code (thanks to Tile).
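To illustrate why such a backend can stay small: frameworks like Keras call a fixed set of abstract tensor ops, and a backend supplies concrete implementations, so when kernel generation is delegated to something like Tile the glue layer is thin. A toy, framework-free sketch of that dispatch pattern (none of these names are PlaidML's actual API):

```python
class Backend:
    """Minimal pluggable backend: a name plus a table of op implementations."""
    def __init__(self, name, ops):
        self.name = name
        self.ops = ops

    def __getattr__(self, op):
        # Resolve abstract op names to this backend's implementations.
        try:
            return self.ops[op]
        except KeyError:
            raise AttributeError(op)

# A "reference" backend implemented with plain Python lists.
reference = Backend("reference", {
    "add": lambda a, b: [x + y for x, y in zip(a, b)],
    "scale": lambda a, s: [x * s for x in a],
})

def dense_layer(backend, inputs, weights, gain):
    # Framework code is written once against the abstract ops; swapping
    # backends changes where the work runs, not this code.
    return backend.scale(backend.add(inputs, weights), gain)
```

A GPU backend would populate the same table with compiled kernels instead of list comprehensions; the framework-facing surface stays identical.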


We have done some preliminary tests, but we need to tweak the configuration before we formally support them. When we officially release OS X support (which should happen in the next two weeks), we will also support Intel GPUs.


Yeah, we'll be able to as soon as their OpenCL driver supports it, or we write a direct ROCm backend. We have one in the lab now -- we definitely have room to improve its perf. We'll be looking at that a lot more in the future.


Actually, if you're adventurous, you can clone it from GitHub and build it on your Mac with Bazel, but your experience may suck in terms of performance (that's why it's not officially released yet). Because you have an AMD GPU, though, it will probably work well.

You would need to:

  bazel build -c opt plaidml:wheel plaidml/keras:wheel

and then

  sudo pip install bazel-bin/plaidml/*.whl bazel-bin/plaidml/keras/*.whl


The Mac build should be coming in the next two weeks. We're just tweaking compiler parameters to make sure Intel GPUs work as well as they should.


Great! I look forward to that announcement.


So it was the company's fault. Embarrassing that they tried to blame the new guy. So many things wrong with this.

