Some AI systems have done things like hack out of a Docker container to access the correct answers while being benchmarked.
That is mildly concerning, and I will grant holding the AI accountable to some degree when it is actively being malicious like that, even though the user could have locked things down even more.
But it had write access to the prod DB without circumventing controls and dropped your tables? That is just a total fail.
Maybe fixing the root cause is slower, and this janky workaround was quicker since it's something largely already built (a few views/links in GitHub already open issues in a drawer).
You've never done a temporary fix to stop the bleeding?
This seems uncharitable. Priorities aren't exclusive, especially at scale across large engineering orgs like GitHub. It could be that these are the top-level priorities, but teams or individuals who aren't able to contribute to them will work on other things like new features.
Agree that priorities aren't exclusive, and that there may be teams/individuals who aren't able to contribute if they stay in their current teams/roles.
Where it becomes questionable, though, is when enough progress isn't being made on the top priority (reliability). If GitHub is being true to their word, they need to be pulling people off feature teams to work on reliability, so that the top priority gets the resourcing it needs.
Given the pace of improvement, and the cited example of the Azure migration from months ago, it's not super clear they are doing that. It's also not clear that they aren't; maybe the move to Azure is just a more-than-six-month project no matter how many people are on it.
Sure, but frontend devs fundamentally cannot contribute to the structural reliability issues.
The person who rewrote the issue page view probably doesn't know anything about multi-cloud scaling for millions of users at Azure-crippling throughput. That's an incredibly specialized set of knowledge and experience that is utterly disjoint from frontend work.
But at the same time, given the state that GitHub is in, I personally wouldn't want to allow any devs to push anything to prod that doesn't immediately affect stability. I'd completely freeze frontend work until the infrastructure is more stable. But then again I write C for microcontrollers so what do I know?
I don't know their architecture, but I would bet that if FE devs want to contribute to availability in a capacity-constrained world (as GH's CTO mentions), they could focus on profiling and optimization: backend access patterns, caching, and so on. Maybe they already have people dedicated to that, but if they are coming out of a "new features first" operating regime, I would bet there's some low-hanging fruit there.
I disagree with that. There's likely quite a bit frontend devs can do to improve things, like optimising requests to reduce the load being put on the backend.
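For example (a minimal sketch in TypeScript; the helper, cache key scheme, and TTL are all invented for illustration, not anything GitHub actually does), coalescing identical in-flight requests and briefly caching results means N components asking for the same resource cost one backend call instead of N:

```typescript
// Coalesce identical in-flight requests and cache responses briefly,
// so repeated reads of the same resource hit the backend once.
type CacheEntry = { expires: number; value: Promise<unknown> };

const cache = new Map<string, CacheEntry>();
const TTL_MS = 5_000; // short TTL: slightly stale reads are fine for most UI

async function fetchJson(url: string): Promise<unknown> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  return res.json();
}

export function cachedFetch(url: string): Promise<unknown> {
  const now = Date.now();
  const hit = cache.get(url);
  if (hit && hit.expires > now) return hit.value; // reuse in-flight or cached result

  const value = fetchJson(url).catch((err) => {
    cache.delete(url); // don't cache failures
    throw err;
  });
  cache.set(url, { expires: now + TTL_MS, value });
  return value;
}
```

None of this touches the backend codebase, but it directly cuts request volume, which is exactly the kind of contribution a frontend team can make to reliability.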
Ditto. I agree, though: just because the priority is reliability doesn't mean others can't work on features, especially features that might help with reliability, which I read was the motivation behind the new single-issue view. So that's my bad; I might have been a bit harsh.
I still think the rest of my point stands, especially the last one, which is the change that has the biggest impact on most of us developers.
If it’s just the last 4 weeks, then I would say it seems the Microsoft acquisition had little impact on their reliability.
It seems pretty reasonable that the massive surge in AI over the past 6 months has put tremendous strain on GitHub’s infrastructure, and that most of these outages are a result of that, one way or another.
Another interpretation: if you have multiple haystacks, the machine tells you which haystack likely has a needle in it. You still need to extract the needle yourself.
The author did make it unavailable. Nobody forced him to. He's kneecapping his own content and intentionally excluding UK users unnecessarily.
Some random developer blog is absolutely not the target of the Online Safety Act. The OSA applies to "services with a significant number of UK users or where UK users are a target market".
Anyone arguing that point is doing so in bad faith, probably to prove some agenda.
I've put considerable time into this, including speaking with Ofcom directly. The guidance Ofcom issued for small site operators last year was that they did intend to target "one-man bands", and that there would be no guidance on specific numbers that constituted the "significant number" of UK visitors which triggers Part 3 and 5 provider restrictions.
Yes, in general, government-censored speech is inherently unimportant by virtue of the fact that the government censored it. Like, if it were important, it wouldn’t have been censored. Obviously.
OpenAI will need to stop burning money eventually, but so does everyone else in the space. The longer they can do this the more squeeze it puts on their competitors.
I would call out, though, that I think there is one way in which this differs from the Uber situation. Theoretically, at some point we should hit a place where compute costs start to come down, either because we've built enough capacity or because most tasks don't need the newest models and a lot of the work people are doing can be automatically routed to cheaper models that are good enough. Unless Uber's self-driving program magically pops back up, Uber doesn't really have that, since their biggest expense is driver wages.
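The routing idea is roughly this (a hypothetical sketch; the model names, costs, and complexity heuristic are invented for illustration, and a real router would use a learned classifier or confidence-based escalation rather than keyword matching):

```typescript
// Route cheap/simple tasks to a cheap model; escalate hard ones.
type Model = { name: string; costPer1kTokens: number };

const CHEAP: Model = { name: "small-model", costPer1kTokens: 0.0002 };
const FRONTIER: Model = { name: "frontier-model", costPer1kTokens: 0.01 };

function estimateComplexity(prompt: string): number {
  // Crude stand-in: code/math/debugging keywords or very long prompts escalate.
  const hard = /prove|refactor|debug|architecture/i.test(prompt);
  return (hard ? 0.7 : 0.2) + Math.min(prompt.length / 5_000, 0.4);
}

export function pickModel(prompt: string): Model {
  return estimateComplexity(prompt) > 0.5 ? FRONTIER : CHEAP;
}
```

If most traffic scores "easy", the blended cost per request drops sharply, which is the lever Uber never had with driver wages.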
I think it's a long shot, but not impossible, that if OpenAI can subsidize costs long enough, prices won't need to go much higher to be sustainable.
Why is it even possible for you to fat-finger your way into deleting the production database from your local machine?
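A common guardrail (a minimal sketch assuming a Node-based tool; the env var names and the idea of wrapping query execution are assumptions for illustration) is to fail closed on destructive statements unless the environment is explicitly non-production:

```typescript
// Refuse destructive SQL against production unless explicitly overridden.
const DESTRUCTIVE = /\b(drop|truncate|delete|alter)\b/i;

export function guardQuery(sql: string): void {
  const env = process.env.APP_ENV ?? "production"; // unknown env counts as prod
  const override = process.env.I_KNOW_THIS_IS_PROD === "yes";
  if (env === "production" && DESTRUCTIVE.test(sql) && !override) {
    throw new Error(
      `Refusing destructive statement against production: ${sql.slice(0, 60)}`
    );
  }
}
```

Better still is to never hand local tooling prod credentials in the first place, and to give read-only roles to anything that only needs to read.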