If you want to use that for unit testing, then I think it would be better to mock the calls to AWS services. That way you test only your implementation, in an environment you control.
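A minimal sketch of that mocking idea, using plain dependency injection rather than any specific mocking library (the function and client names are illustrative, not from a real codebase):

```javascript
// Inject the AWS client so tests can swap in a fake.
// Only our own logic is under test; the client is a boundary we control.
function saveReport(s3Client, report) {
  return s3Client.putObject({
    Bucket: "reports",
    Key: `${report.id}.json`,
    Body: JSON.stringify(report),
  });
}

// In a unit test, pass a hand-rolled fake instead of the real SDK client:
function makeFakeS3() {
  const calls = [];
  return {
    calls,
    putObject(params) {
      calls.push(params);
      return Promise.resolve({ ETag: "fake" });
    },
  };
}
```

The test then asserts on `fake.calls` without any network access; libraries like aws-sdk-client-mock automate the same idea for the real AWS SDK clients.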
If you want to use that for local development, then I think it would be better to provision a test environment (using Terraform or any other IaC tool). That way you don't run the risk of a bug slipping into prod because the emulator has a different behaviour than the real service.
Trunk-based development fits nicely when you have a single deployment product like a SaaS and you don't need to maintain old versions of your software. You only have one prod environment.
If you build software that you distribute so people can deploy it themselves (a library, a self-hostable solution, ...), then you most likely need semantic versioning. In that case, the best model is to use what semantic-release offers.
Using an LLM to generate an image of a diagram is not a good idea, but you can get really good results if you ask it to generate a draw.io SVG (or a Miro diagram through their MCP).
I sometimes ask Claude to read some code and generate a process diagram of it, and it works surprisingly well!
When doing advanced terminal UI work, you might at some point have to lay out content inside the terminal, in boxes for example. Later, you might need to update the content of those boxes because the state of the underlying app has changed. At that point, refreshing and diffing can make sense. For some, the way React organizes the logic to render and update a UI is nice and can be used in other contexts.
How big would the UI state have to be for it to make sense to bring in React and the related accidental complexity? I'm ready to bet that no TUI has that big of a state.
> Berulis found that on March 3 one of the DOGE accounts created an opaque, virtual environment known as a “container,” which can be used to build and run programs or scripts without revealing its activities to the rest of the world. Berulis said the container caught his attention because he polled his colleagues and found none of them had ever used containers within the NLRB network.
Not at all: it says DOGE appears to have created a container in a place where containers were never created by NLRB. Tell THAT to someone who doesn't know what Docker is, and it is less informative.
I think it sounds a bit off in the same way as "Linux, a computer program commonly used by hackers, was found on the suspect's machine" does, though not to that extent.
It's not saying anything technically untrue, and emphasising the aspects it does arguably makes sense within the context of what the concept is being brought up for, but it comes across as an odd framing for people familiar with the concept in general (using containers for standardization/scaling/etc.)
If you installed Linux in a network that didn't typically have Linux machines, and then had no accountability for what was running on said machine... yes, that would be suspicious and of note.
My point isn't that it couldn't be of note, but rather that - even when relevant - the phrasing makes for a strange-sounding definition to people already familiar with containers/Linux in a general context (and people who weren't familiar with containers/Linux might come away with that lopsided impression of them, even while having an accurate impression of how they were relevant to the article).
I think it could potentially be improved with a more general/typical definition first ("Containers are self-contained environments that bundle all dependencies a piece of software needs to run and are commonly used to streamline deployment across different machines, but can also ...")
And this, guys, is how you get a $200-per-hour consultant saying "I'm on my 15th sprint, still trying to figure out how to transform a CSV using PowerShell. Maybe next week it will be done."
It's only odd for people in the middle segment of "just smart enough to understand why you want containers, not experienced enough to understand how they work"
We use them for standardization and scaling exactly because they are opaque. I personally believe the explanation shows a deep understanding of the technology, but also a good grasp of what matters politically.
From the email shown in the photo, it seems like DOGE was trying to build and run a docker container using Integuru (YC W24) https://news.ycombinator.com/item?id=41983409 to scrape the system
I was wondering when Y Combinator affiliated companies were going to show up to help DOGE dismantle democracy, and it looks like we've found the first instance.
It's just Docker containers. As a technical person, I was confused reading that at least 3 times until I made the mental connection that it's Docker containers. So yes, you are right, it's made to sound more opaque and nefarious than one would normally assume in our field. If they have a policy that says we can't run Docker containers in network A or zone B, then just say so, but don't lie to make it sound like Russian hackers. That's the kind of shit that makes fence-sitters and reasonable people across the aisle not trust your motives.
Anywho, this whole "opaque" or "untrusted" code running in a VM is the same lingo that big corporates use to gatekeep newer technologies that bypass traditional processes. E.g. "oh sorry you can't test locally because you need to use our officially licensed and expensive Oracle DB instance. Oh and BTW, you can't use the free container image that Oracle provides free of charge. It's running 'untrusted' code in our network." and endless variations of that.
This is a smoking gun. I'm a little shocked at how little MSM coverage this is getting and at the moral gymnastics some commentators are performing to lend a veneer of innocence to this. It's an incident on par with the 1950s Cambridge ring [0], and I cannot understand why an investigation team from the Pentagon is not all over this, kicking in doors and taking names.
There will be coverage, but it has little point. The information network in America is centre, left, and centre-right orgs, and then there is the hermetically sealed Fox and related ecosystem.
So even if 2/3rds of America decide this is too much, they aren’t sufficient to shift what is covered in the idea economy and the political economy.
I just found out there's even a book that did the groundwork to make this case, in 2018. (Network Propaganda.)
This is the prime reason I recommend all democracies look beyond their current leaders and grapple with the structural issues caused by capture of the media ecosystem.
Do note - this isn’t an issue of bias. There’s a protectionist economy on the right, where reality is whatever storyline they need to share.
At this point I wonder if it's fear. They were able to cover the Clinton story because they knew no harm would come to them - the government wouldn't prosecute the press. But these stories, under this government, are the sort of thing that could end up on the wrong side of an unchecked tyrant who is increasingly vocal about their desire to ignore due process.
The media companies ate so well and grew so fat covering the rise of fascism that they didn't think about what would happen when it finally gained power.
It's not about your union stopping them from pulling you out of bed, it's about what happens after that. Rumeysa Ozturk, the student who was abducted in Massachusetts was a member of a union and her union immediately sprang into action. Part of the reason this was national news so quickly was because her union took to the streets.
I think a part of it is simply that the space is absolutely flooded and the public becomes almost numb to it: This administration is so absolutely rampant with criminality, constitution shredding, and just rank incompetence that reports of more of the same just doesn't trend. I mean, it's similar to the fact that Trump lies about everything constantly -- even the most meaningless facts like his height and weight -- and soon it just isn't noteworthy that he continues lying about everything constantly. When Trump is caught in an obvious lie, which is basically a daily occurrence, he doesn't apologize, he doubles down, and this is his super power, at least among his incredibly stupid fans and base.
"But her emails" was when Hillary using a private server was actually so exceptional it was like the singular thing. Trump's crew of misfits and clowns and self-dealing grifters have turned the government into a circus. They're all insider trading, launching shitcoins, turning the WH lawn into a pathetic infomercial while your commerce secretary -- Howard "Used Car Salesman" Lutnick -- is pushing stocks.
I don't recommend resolving actions on the server in any situation:
For actions that require secret information, you would filter any secret information out of the actions sent to the client and make sure the code handling the action can handle both the full action and the filtered one.
For actions involving RNG, make all randomness rely on a seed. This seed would be stored server-side and passed along with the action when sent to the client. This makes sure the clients can deterministically reproduce the update.
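A minimal sketch of that seeded-RNG idea, using mulberry32 (a small, well-known public-domain PRNG); the action shape and handler name are illustrative assumptions:

```javascript
// mulberry32: tiny deterministic PRNG, same output for the same seed everywhere.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Hypothetical action handler: every roll comes from the seed stored with the
// action, so the server and every client compute the exact same result.
function resolveAttack(action) {
  const rng = mulberry32(action.seed);
  return {
    attackRoll: Math.floor(rng() * 6) + 1,
    defenseRoll: Math.floor(rng() * 6) + 1,
  };
}
```

Because the seed travels with the action, a client that replays the action log reproduces the state byte-for-byte, with no need to ship the resolved result itself.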
I had the same issue in AGoT:BG and I solved it by representing the state of the game as a tree. At any point of the game, the current game state is a leaf of the tree.
You'd represent this kind of choice as a child node. When the user has made their choice, the code can return to the parent node with the choice being made so it can continue with the next "step" of the game.
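A small sketch of that tree representation (class and method names are my own, not from AGoT:BG's actual code):

```javascript
// Game state as a tree: the current game state is always a leaf.
class GameNode {
  constructor(state, parent = null) {
    this.state = state;   // snapshot of the game at this point
    this.parent = parent;
    this.children = [];   // one child per possible choice
  }

  // Offer a choice: create one child node per option, each holding the
  // state that results from picking that option.
  offerChoice(options, apply) {
    for (const option of options) {
      this.children.push(new GameNode(apply(this.state, option), this));
    }
    return this.children;
  }
}

// When the player picks a child, that child becomes the current leaf and the
// game continues from its state; the sibling branches are simply never visited.
function choose(node, index) {
  return node.children[index];
}
```

Keeping the parent pointer is what lets the engine "return to the parent node with the choice made" and continue with the next step of the game.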
This is the correct response. Hearthstone is structured like this internally.
If you are curious about it, I wrote a CC0 spec which stores Hearthstone game state in XML. It's based on how Hearthstone stores game state on the server and client, and it was the first replay format created for Hearthstone:
https://hearthsim.info/hsreplay/
Incidentally, the UI we wrote for Hearthstone replays is a React app. It's funny because, looking back, it was the first time I used React and TypeScript, and neither was widely adopted by the JS community yet at the time.
The way I wanted to implement this in my turn-based game engine:
If you implement the deterministic update pattern to handle state synchronisation, you can add "events" inside the update-handling logic that pause the processing, allowing your animations to be played. In JS, for example:
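A minimal sketch of what that could look like; `emitEvents`, `applyUpdate`, and the queue are my assumed names, since the original snippet isn't shown:

```javascript
// Client-side: listeners may return promises, and processing awaits them.
// That await is what lets the UI pause the update to play an animation.
const listeners = [];
function onGameEvent(listener) { listeners.push(listener); }

async function emitEvents(event) {
  // Server-side no listeners are registered, so this is effectively a
  // no-op and everything resolves synchronously there.
  await Promise.all(listeners.map((listener) => listener(event)));
}

async function applyUpdate(state, update) {
  state.health -= update.damage;
  // Pause here until every listener (e.g. a damage animation) resolves.
  await emitEvents({ type: "damage", amount: update.damage });
  state.turn += 1;
}

// Updates arriving while one is being handled are queued and run in order.
const queue = [];
let busy = false;
async function enqueueUpdate(state, update) {
  queue.push(update);
  if (busy) return;
  busy = true;
  while (queue.length > 0) {
    await applyUpdate(state, queue.shift());
  }
  busy = false;
}
```

The UI's event listener resolves its promise when the animation finishes, which resumes the update logic exactly where it paused.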
Server-side, "emitEvents" would be a no-op. Everything would resolve synchronously.
Client-side, the UI can listen to those events to pause the updating of the game state to see the intermediary state of the game and play animations. When the animation is done, it can resolve the promise, resuming the game updating logic.
If an update arrives while an update is being handled, it can be queued so it can be played after the current update finishes.
I would agree with you, if HCL wasn't a bad language in itself:
* You can't use variables in an import block (for example, to specify a different "id" value for each workspace)
* There is no explicit way to make a resource conditional based on variables, only a hacky workaround using "count = foo ? 1 : 0"
* You can't have variables in the backend configuration, making it impossible to store states in different places depending on the environment.
* You can't have variables in the "ignore_changes" field of a resource, making it impossible to dynamically ignore changes for a field (for example, based on module variables).
* The VSCode extension for HCL is slow and buggy. Using TS with pulumi or TFCDK makes it possible to use all the existing tooling of the language.
This massively depends on your provider code. Using loops to manage tf stuff can get you into really “fun” scenarios when you want to e.g. delete an openstack firewall rule from the middle of the array.
I’ve been burned so many times here that I hate all of this stuff with an extreme passion.
Crossplane seems to be a genuinely better way out but there are big gotchas there also like resources that can simply never be deleted