
Yes, and I think that's actually intentional: they're rewarding renewables way over the odds without needing to give politically controversial benefits. The rewards are just an inherent result of the existing system. This is why renewables are growing rapidly in the UK.

Of course we'll need a way to resolve fluctuations, both rapid and slow. Rapid fluctuations are handled by pumped hydro and increasingly by batteries.

The slow fluctuations (day/night all the way to summer/winter and good/bad weather patterns) are much trickier. I think it's still unclear how we'll handle them, but it will certainly be partly handled by having an excess of renewables, though we'll likely need some other solutions too; nuclear is probably one of them.


The irony is that your comment should be entirely inverted. Renewables are not rewarded way over the odds; in fact the ruling party banned onshore wind entirely, and I remember them blocking at least one offshore wind farm. Luckily it is very cheap to build.

Now Hinkley Point C is another story. It's a hugely expensive boondoggle which is taking decades to construct at enormous cost, and the reward at the end is a strike price that is 3x that of solar and wind. That is an obscene subsidy forced onto customers for a power source that can't even do load following and doesn't help with fluctuations in supply and demand.

The slow fluctuations on cold, windless nights or when nuke plants are down for unplanned maintenance are going to be managed with gas.

Maybe one day it'll be gas synthesized with electricity from solar+wind overproduction on a day like today. The roundtrip is expensive, but will still be cheaper than nuclear power on a windy, summer day.


The implied part is "within common conditions": it's a law limited to a specific regime, much like Newtonian physics. We know it's not universally true, but we can see it's often true in common scenarios.

In the extreme case, as road coverage approaches 100%, the city stops containing buildings, so traffic will drop towards zero; there's actually a balance point somewhere. But roads are quite inefficient for high-density cities, so the balance point would probably be less about fulfilling traffic demand and more about reducing demand by demolishing most of the city.


I know the thread is about TVs, but since gaming has come up, it's worth noting that at computer viewing distances the differences between 1080p/1440p and 4k really are very visible (though in my case I have a 4k monitor for media and a 1440p monitor for gaming, since there's zero chance I could run games at 4k anyway).


The obvious use-case for unsafe is to implement alternative memory regimes that don't exist in Rust already, so you can write safe abstractions over them.

Rust doesn't have the kind of high-performance garbage collection you'd want for this, so starting with unsafe makes perfect sense to me. Hopefully they keep the unsafe layer small to minimise mistakes, but the approach seems reasonable.
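As a purely illustrative sketch of the shape involved (my own, not the project's actual code): the unsafe allocation work lives inside a small type, and everything outside interacts with it through a safe API. A real GC is obviously far more involved, but the same layering applies.

```rust
use std::alloc::{alloc, dealloc, handle_alloc_error, Layout};
use std::ptr::NonNull;

/// A stripped-down Box<T>: all the unsafe raw-allocation work lives here,
/// behind an API that safe code can use freely.
pub struct MyBox<T> {
    ptr: NonNull<T>,
}

impl<T> MyBox<T> {
    pub fn new(value: T) -> Self {
        assert!(std::mem::size_of::<T>() != 0, "ZSTs not handled in this sketch");
        let layout = Layout::new::<T>();
        // SAFETY: the layout has non-zero size (asserted above), allocation
        // failure is handled, and the slot is initialised before use.
        unsafe {
            let raw = alloc(layout) as *mut T;
            let ptr = NonNull::new(raw).unwrap_or_else(|| handle_alloc_error(layout));
            ptr.as_ptr().write(value);
            MyBox { ptr }
        }
    }

    pub fn get(&self) -> &T {
        // SAFETY: `ptr` is valid and initialised for as long as `self` lives.
        unsafe { self.ptr.as_ref() }
    }
}

impl<T> Drop for MyBox<T> {
    fn drop(&mut self) {
        // SAFETY: we own the allocation and it was created with this layout.
        unsafe {
            self.ptr.as_ptr().drop_in_place();
            dealloc(self.ptr.as_ptr() as *mut u8, Layout::new::<T>());
        }
    }
}
```

The point is just that the unsafe surface stays small and auditable while callers never touch raw pointers.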


I'm curious whether it can be done entirely in Rust, though. Maybe some assembly instructions are required, e.g. for trapping or setting memory fences.


If it comes to it, Rust has excellent support for inline assembly.
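For example, the stabilised asm! macro (since Rust 1.59) lets you drop to a raw instruction where needed; a minimal x86_64-only sketch:

```rust
use std::arch::asm;

// Read the CPU timestamp counter with rdtsc (x86_64 only).
#[cfg(target_arch = "x86_64")]
fn rdtsc() -> u64 {
    let lo: u32;
    let hi: u32;
    // SAFETY: rdtsc only reads a counter register; it touches no memory.
    unsafe {
        asm!("rdtsc", out("eax") lo, out("edx") hi, options(nomem, nostack));
    }
    ((hi as u64) << 32) | (lo as u64)
}
```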


But how well does it play with memory fences?


You don’t even need inline assembly for those https://doc.rust-lang.org/stable/std/sync/atomic/fn.fence.ht...
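For example, the classic release/acquire publish pattern needs nothing beyond the standard library (illustrative sketch only):

```rust
use std::sync::atomic::{fence, AtomicBool, Ordering};

static READY: AtomicBool = AtomicBool::new(false);
// static mut just keeps the sketch short; real code would use a sound wrapper.
static mut DATA: u64 = 0;

fn publish() {
    unsafe { DATA = 42 };                  // plain, non-atomic write
    fence(Ordering::Release);              // everything above is visible...
    READY.store(true, Ordering::Relaxed);  // ...before the flag is seen as set
}

fn consume() -> Option<u64> {
    if READY.load(Ordering::Relaxed) {
        fence(Ordering::Acquire);          // pairs with the release fence
        Some(unsafe { DATA })
    } else {
        None
    }
}
```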


You make a strong case for voice, but that doesn't necessarily invalidate their argument; they never said voice should be replaced.

Here are some ideas:

1. A data side channel.

2. Use it to send the originator of each message, with a unique note on the other end per sender so they don't need to check visually, but also shown on their display so a corrupted or suspicious sender can be verified in desperate circumstances (rather than the current case of "that cannot be done at all").

3. Digital audio, allowing actual high-quality audio, which we know does improve comprehension and which should not be optional in this context.

4. Take some lessons from modern comms systems on how to handle overlapping transmissions, plus the extra bandwidth from digital, so overlapping comms are handled gracefully (I realise the realtime nature prevents being too clever, but perhaps blocking all but the first to speak and playing a tone if you're being blocked), perhaps with some sensible overrides like ATC and anyone declaring an emergency getting priority. Currently overlap obliterates both messages, and it's possible for senders to not even know their message was lost. This has contributed to accidents; while basic direct radio transmissions cannot avoid it, smart algorithms with some networking could definitely reduce the failure cases to very rare and extreme scenarios.

5. Let ATC interact with the flight planners on aircraft: show the aircraft's actual locally programmed flight plan to ATC, with clear icons if it differs from the filed plan ATC has, and perhaps, as an emergency-only measure, allow ATC to submit a flight plan to the aircraft (not replacing the active plan of course, just as a suggestion/support for struggling pilots: "since you have not understood my instructions 3 times, please review the submitted plan on your flight computer and note how it differs from what you programmed").

6. Aircraft usually know where they are and which ATC they're meant to be communicating with, so have the data channels talk even when the audio channel is not set correctly. If incompetent pilots forget to switch channel, you can force an alarm instead of launching a fighter jet, or just have a button for "connect to correct ATC" and a red light when you're not on the correct one.

Those are just the ideas I've come up with just now. Idea 4 is probably quite hard to get right, and idea 5 could add workload, so both should be done carefully. But it's hard to believe the current system is technically optimal, or even vaguely close to it.

Admittedly, I know the real reason is that having one working system for everyone is better than a theoretically great system that is barely implemented plus a complicated mess of handoffs between the two. But with care things can absolutely be improved; it feels like they're moving a few decades slower than they should be.


The article explains the weaknesses of the password-centric approach:

> whether by phishing or exploiting the fact the passwords are weak or have been reused

1. Phishing is harder when you only ever enter your password into 1 place, and that one place is designed to be secure and consistent.

2. Much easier to have exactly 1 strong password than unique strong passwords for every website.

Is it better than a vault full of random passwords? Probably not, beyond pressuring the user into using the more secure method.


Not disputing the obvious advantages, but since you asked:

Being forced to maintain compatibility for all previously written APIs (and quite a large array of private details or undocumented features that applications ended up depending on) means Windows is quite restricted in how it can evolve.

As a random example, any developer who has written significant cross-platform software can attest that the file system on Windows is painfully slow compared to other platforms (MS actually had to add a virtual file system to Git at one point after they transitioned to it, because they have a massive repo that would struggle on any OS but choked especially badly on Windows). The main cause (at least according to one Windows dev blog post I remember reading) is that Windows added APIs to make it easy to react to filesystem changes. That's an obviously useful feature, but in retrospect it was a major error: so much depends on the filesystem that giving anything the ability to delay filesystem interaction really hurts everything. But now lots of software is built on that feature, so they're stuck with it.
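To illustrate what "react to filesystem changes" looks like from the user-code side, here's a rough sketch using the third-party Rust notify crate (assuming its v6-style API; on Windows it sits on top of ReadDirectoryChangesW, which is presumably only the user-mode face of the hooking machinery that blog post was describing):

```rust
use notify::{recommended_watcher, Event, RecursiveMode, Watcher};
use std::path::Path;
use std::time::Duration;

fn main() -> notify::Result<()> {
    // Get a callback for every create/modify/delete under the watched tree.
    // Supporting hooks like this OS-wide is part of what makes each
    // filesystem operation on Windows comparatively expensive.
    let mut watcher = recommended_watcher(|res: notify::Result<Event>| {
        if let Ok(event) = res {
            println!("fs change: {:?} {:?}", event.kind, event.paths);
        }
    })?;
    watcher.watch(Path::new("."), RecursiveMode::Recursive)?;
    std::thread::sleep(Duration::from_secs(60)); // keep watching for a minute
    Ok(())
}
```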

On the other hand, I believe the Linux kernel has very strict compatibility requirements; they just don't extend to the rest of the OS, so it's not as if there's one strict rule for how it's all handled.

Linux has the obvious advantage that almost all the software will have source code available, meaning the cost of recompiling most of your apps for each update with adjusted APIs is much smaller.

And for old software that you need, there’s always VMs.


Kind of a bad example, firstly because you are comparing Windows with the Linux kernel. The Linux kernel has excellent backwards compatibility: every feature introduced will be kept if removing it could break a userland application.

Linus is very adamant about "not breaking userspace".

The main problem with backwards compatibility (imho) is glibc. You could always ship your software with all the dynamic libs that you need, but glibc makes that hard because it likes to change in awkward ways and break things.


Glibc is one of the few userspace libraries with backwards compatibility in the form of symbol versioning. Any program compiled for glibc 2.1 (1999!) or later, using the publicly exposed parts of the ABI, will run on modern glibc.

The trouble is usually with other dynamically linked libraries not being available anymore on modern distributions.


There's a classic Yes Minister skit on how dubious polls can be: https://youtube.com/watch?v=ahgjEjJkZks&t=45s


Easy: provide high-quality output when being tested for a new task. The moment you are done outperforming the competition in the tests and have hit production, you slowly ramp down quality, perhaps with exceptions when the queries look like more testing.

Same problem as AI safety, except the actual problem is now the corporate greed of the humans behind the AI rather than an actual AGI trying to manipulate you.


This is confusing on so many levels.


See also: Volkswagen emissions test scandal


I assume they're referring to ag-gag laws; https://en.wikipedia.org/wiki/Ag-gag gives a reasonable background by the looks of it.

