If you haven't been following the deep learning trend: a "Cat vs. Dog" picture classifier, or some variation of it, is now taught in introductory AI classes. Applying the techniques from those courses, you can easily reach 99% accuracy yourself on run-of-the-mill hardware. Extrapolate that to Google's or Microsoft's resources, and it's a clear yes.
Based on all the images on the product page, can SOMEONE please tell me which side is the front of this car? The side with the AWS logo looks the most car-like, but the logo makes it look like it drives with the black side first.
As a network engineer for over 2 decades, I've been skeptical of IPv6 for the majority of them.
I still am.
As IPv4 addresses become more scarce, two economic forces kick in:
1) They become more valuable (read: more desired). IPv4 has all the network effects going for it: it's where 99.9% of the Internet already is, the remaining 0.1% being IPv6-only devices.
2) To counter the rising value/cost, workarounds/kludges/alternatives to every device needing a globally unique address are tried. Everyone is going to reply with how awful NAT is, and I concede it has its flaws. However, it is hard to deny its success so far. Businesses then do the cost-benefit analysis of the shortcomings of things like NAT vs. selling their now-valuable IPv4 address space; where do you think they are going to come down?
Having rolled out IPv6-only networks experimentally at conferences, I was struck by how well it worked. By the end of the conference, half the delegates were using the IPv6 network (it was a separate SSID). There was not one complaint. (I had the pleasure of having a few network engineers tell me their phones, tablets, and PCs would never work on an IPv6-only network, only to discover later that they were already on it.)
In particular, NAT works both ways, meaning IPv6-only devices have zero issues accessing the IPv4 world via NAT. Used that way, you get the best of both worlds: a globally routable IPv6 address and perfect backward compatibility.
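To make the "IPv6-only devices reaching the IPv4 world" mechanism concrete, here's a small sketch of the NAT64 address synthesis defined in RFC 6052: the gateway embeds the target IPv4 address into the well-known 64:ff9b::/96 prefix, so an IPv6-only client can address an IPv4-only server. (The use of the well-known prefix is the standard approach; real deployments may use a network-specific prefix instead.)

```python
import ipaddress

# RFC 6052 well-known prefix used by NAT64/DNS64 deployments.
NAT64_PREFIX = ipaddress.ip_network("64:ff9b::/96")

def synthesize(ipv4_str: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix."""
    v4 = ipaddress.IPv4Address(ipv4_str)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

def extract(ipv6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    """Recover the original IPv4 address from a synthesized address."""
    return ipaddress.IPv4Address(int(ipv6) & 0xFFFFFFFF)

addr = synthesize("192.0.2.1")
print(addr)           # 64:ff9b::c000:201
print(extract(addr))  # 192.0.2.1
```

A DNS64 resolver does exactly this synthesis when an IPv4-only host has no AAAA record, which is why the client never notices it is talking through a translator.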
In the meantime, I've rolled out a dual IPv4/IPv6 network that spans the country for my company. Almost zero issues (aside from a few routing bugs). I was blown away to discover that any device smart enough to get itself an IPv6 address was smart enough to just work in the mixed environment. And this is in an environment running everything from XP machines to the latest Android and iPhones.
So we appear to have reached the stage where there are no reasons not to use IPv6. Of course, there is also very little reason to move to IPv6 if you have an existing stable IPv4 network. Our networks aren't stable, of course, and we are continually adding new connections. We (or rather I) demand a routable IP address for each of them; life is just too hard otherwise. We are in the APNIC allocation pool, which ran out of IPv4 addresses ages ago. Right now the ISPs aren't handing out IPv6 addresses. If that ever changes to "you have to pay for an IPv4 address, or you can have an IPv6 address for free", I know which way I will be jumping. It's a no-brainer.
That would be an interesting choice to make, seeing as the surrounding infrastructure is still transitioning and you can just as easily support both stacks. The real obstacles are legacy devices and misconfigured appliances, which can't simply switch over without anyone noticing.
Yes, but we could look at the total amount of data transmitted. Audio compression is well understood, so one can infer, within a range of usable quality, whether any excess voice or other data is being sent over the network.
So what you're saying is, if a company like Amazon or Google has the excess bandwidth, it is beneficial for them to send way too much data in the first place in order to disguise what data is actually being sent.
ASR is a hugely complex process that is handled by ML algorithms on Amazon's servers. The Echo simply does not have the hardware to handle this on its own.
Is it though? Not trying to be argumentative, but I remember using Dragon NaturallySpeaking to do voice dictation way back in '98, on a processor that makes today's average smartphone look like a supercomputer. I thought all the ML stuff was for figuring out context and the like, but straight transcription?
We're an anarcho-syndicalist commune. We take it in turns to act as a sort of 'Satoshi Nakamoto' for the week.
But all the decisions of that Satoshi have to be ratified at a special biweekly meeting.