
There is zero growth in GPU performance from the 520 to the 620.


Selling like crazy to Slice users. These stats are useless.


You can actually have both: Tesla Model S P100D http://www.fueleconomy.gov/feg/Find.do?action=sbs&id=38172


You understood my point...


You understand that your point is flawed? Compromise isn't always necessary! The OP was looking for innovation in this space, similar to, say, what Tesla did for autos.


-.- but it is. You can't make thinner devices with more battery. The way battery tech stands right now, making a device thinner means sacrificing battery volume and consequently capacity. Nothing short of a major technological breakthrough will change this, and nobody on earth has shown even the slightest sign of having produced a better battery technology, let alone one ready for consumer tech. So no, my point is not in the least bit flawed.


20% smaller battery on a device that consumes 50% less power = thinner device and longer battery life.


...

Which is a 1.6x increase in battery life, and would be a 2x increase if you just kept the thickness as it was. That's. the. point.

Jesus, when people want to be dense...
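For what it's worth, the arithmetic can be spelled out (the 20% and 50% figures are the hypotheticals from the comment above, not real device specs):

```python
# Battery life scales with capacity and inversely with power draw.
capacity_ratio = 0.8   # 20% smaller battery
power_ratio = 0.5      # device consumes 50% less power

thinner_device = capacity_ratio / power_ratio   # thinner device, longer life
same_thickness = 1.0 / power_ratio              # keep thickness as it was

print(thinner_device, same_thickness)  # 1.6 2.0
```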


It's funny cuz' it's true.


This is good info to know; it helps me as a sysadmin to be confident in making decisions for my customers and their data. I regularly use a tool called CrystalDiskInfo to check the SMART stats of drives. I will pay more attention to the raw values in the future.


It's interesting that most people rely on the raw values, since the standard does not require them to be meaningful, and depending on the vendor they could be anything.

I suspect this is because the value, worst, and threshold columns are kind of confusing to understand.


There aren't too many vendors of spinning disks, and if you have a lot of disks it doesn't take long to see that the sector count metrics correspond to sectors. In my experience, bad sector count is a good predictor of future trouble; back when we ran disks until they threw read errors (before we were running SMART monitoring), they all had lots of bad sectors. That said, there's a threshold: getting to 100 slowly is probably OK, a thousand is probably not.

SSDs, though, just disappear from the bus when they fail, so I haven't been able to look at a dead one and see anything that looks like a useful predictor. I have seen some SSDs reallocating a big block, which kills performance while it's going on...


"SSDs though, they just disappear from the bus when they fail"

This isn't always true, and actually shouldn't ever be true - it's a particular failure mode you're seeing, and while it appears to be common across a number of SSD controllers, it's still a pretty sorry fact that it happens.

All SSDs (at least all not-complete-rubbish ones) report some kind of flash/media wearout indicator via SMART, which isn't necessarily an imminent failure indicator (SSDs will generally continue to work long past the technical wearout point), but is a very strong indicator that you should replace it soon and should probably buy a better one next time.

SSDs do suffer from sector reallocations in the normal way, and the same kind of metric monitoring can be done. It's pretty vendor-specific which SMART attributes they report, but attributes like available reserved space, total flash writes, and flash erase and write failure counts are pretty common.
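As a rough illustration of monitoring raw values, here's a sketch that pulls them out of the attribute table printed by `smartctl -A` (the sample output and attribute names below are made up for illustration; real drives report different attributes):

```python
# Sample of the table `smartctl -A` prints; columns are fixed, names vary by vendor.
sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       12
177 Wear_Leveling_Count     0x0013   097   097   000    Pre-fail  Always       -       63
"""

def raw_values(smartctl_output):
    """Map attribute name -> raw value for each data row of the table."""
    attrs = {}
    for line in smartctl_output.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():   # data rows start with the attribute ID
            attrs[fields[1]] = int(fields[9])
    return attrs

print(raw_values(sample))
# {'Reallocated_Sector_Ct': 12, 'Wear_Leveling_Count': 63}
```

A monitoring script would then alert when something like the reallocated sector count starts climbing, rather than trusting the normalized value/worst/threshold columns.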


With thousands of SATA SSDs, I've seen one fail in the traditional fashion (some sectors weren't readable, otherwise mostly fine); the rest of the maybe hundred that failed just disappeared from the bus. I don't monitor the wearout indicators, but from occasional looks we're never near a significant fraction of the wear capacity. I'm very happy not to have any more spinning disks in production, because the SSDs fail less often; it's just that the failures are more annoying, because it's hard to have an orderly shutdown when disks disappear.


Funny how ~18 years later I still have compact flash devices plugged into IDE ports that have never failed. In fact, across a broad spectrum of applications and installs, I have never seen a working CF device fail in the field.

SSDs on the other hand ...

I use SSDs for caching (ZFS read cache and mirrored SLOGs) and I use them for mirrored boot devices in modern, production systems that should have a fast OS device.

But if I want a system to run forever ... if I am optimizing for longevity ... I use compact flash, even in 2016.

(yes, of course I set them to be read-only and disable swap)
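For what that setup might look like in practice, here's a hypothetical /etc/fstab sketch (device names, filesystems, and mount options are illustrative, not from any particular system):

```
# Root filesystem on the CF card, mounted read-only to avoid wear.
/dev/sda1  /     ext4   ro,noatime  0  1
# Writable scratch space kept in RAM instead of on flash; no swap entry at all.
tmpfs      /tmp  tmpfs  size=64m    0  0
```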


You don't need the ability to delete; with create and append access one could also just corrupt all the backups.


LG, Samsung, and Sony, to name a few. Having vertical integration can lead to huge wins.


I'm partial to the Hikvision 4MP IP cameras and Blue Iris DVR software. What is great about Blue Iris is that you can mix and match IP cameras from different OEMs, and it doesn't require any plug-ins to view live streams or recorded footage from any web browser. The only issue I've been running into is that having too many 2K-resolution cameras at 20+ FPS eats up a lot of CPU on the DVR PC. I don't know if the Unifi DVR software is better optimized or if it only works with Unifi gear.


The Windows build doesn't appear to work at all. In addition to getting file-path-too-long issues, once I've accommodated the app by extracting it to the root of my drive, it never loads past a white screen.


That's more like 1 cent. Why do you love it? What do you love about it?


Off-topic, but is there a difference based on how many cents you use in this idiom?


The idiom is always "two cents". The parent was implying that because the GP didn't add much to the conversation, this subtracted a cent.


There is a little use of "five cents" here. Not a lot, but it's not unknown for the amount to change.

...still no correlation, though.


Only if you have zero sense.


Also, from Windows 8 and up, the built-in Task Manager has per-process network usage along with disk usage. Makes it real easy to find bottlenecks or resource hogs.


Does this require Administrator rights to see this granularity? In my office, as non-admin users, we have noticed that Task Manager seems really crippled.


You won't see other users' processes (at least on my box), but the detail is there for yours. It's probably possible for it to be crippled by group policy.


In previous versions of Windows, the Task Manager menus could be toggled by double-clicking an empty area in the window. You should also be able to select additional columns to display in the options. Note: I haven't used the Win8 taskmgr.


Task Manager or Resource Monitor (which you can launch from Task Manager)? I see lots of per-process stuff in the latter but not the former.


Windows 10 Task Manager http://imgur.com/R9mbMZY

