> I always found it odd that scaling down an image now and then scaling it back to its original size 2 seconds later with the same tool resulted in a loss of quality
Maybe it's because I grew up with Paint Shop Pro 6 and such, but that seems completely normal and expected to me
I was using Photoshop back when non-destructive scaling first became available; I don't remember exactly when, but it's probably in the 15-20 year range. I don't remember ever not having it. Glad to see GIMP is moving in this direction.
Yep, there were people hand-building wooden PC cases, building a fish tank into their case, painting fancy colors and patterns on it, ... And there were colored LEDs too, but they didn't come with bloatware OS-dependent software, because they didn't need software
"The dangling cables of wired headphones are a must-have fashion accessory in 2026"
Gee, is that the kind of stuff that makes people want this, rather than actual usefulness-related reasons?
I want it because I don't want yet another thing to have to charge, and because I'd want to be able to throw some cheap headphones in my backpack that I can use the one time in a month that I actually need them in combination with a phone (which of course isn't possible anymore today)
Also, why are ANC headphones today worse for gaming than in 2018, when they supported aptX with its lower latency? Technology is going backwards?
Great point about the transparency grid for image search SEO. Hadn't thought about that... Might add checkerboard thumbnails for search engines while keeping the downloads as pure transparent PNGs. Thanks!
If you block name reuse globally, you introduce a new attack surface: permanent denial by squatting on retired names. Companies mess up names all the time from typos, failed rollouts, or legal issues. A one-shot policy locks everyone into their worst error or creates a regulatory mess over who can undo registrations.
Namespaces are annoying but at least let you reorganize or fix mistakes. If you want to prevent squatting, rate limiting creation and deletion or using a quarantine window is more practical. No recovery path just rewards trolls and messes with anyone whose processes aren't perfect.
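The quarantine-window idea is easy to sketch. A minimal, purely illustrative version in Python (the registry structure, the injectable clock, and the 30-day window are all invented here, not anything AWS actually does):

```python
import time

QUARANTINE_SECONDS = 30 * 24 * 3600  # e.g. a 30-day cooldown

class NameRegistry:
    """Toy registry: deleted names can't be re-registered until a cooldown elapses."""

    def __init__(self, now=time.time):
        self._now = now          # injectable clock, handy for testing
        self._active = set()
        self._retired = {}       # name -> deletion timestamp

    def create(self, name):
        if name in self._active:
            raise ValueError(f"{name!r} is already in use")
        retired_at = self._retired.get(name)
        if retired_at is not None and self._now() - retired_at < QUARANTINE_SECONDS:
            raise ValueError(f"{name!r} is quarantined after deletion")
        # Quarantine passed (or never deleted): the name is free again.
        self._retired.pop(name, None)
        self._active.add(name)

    def delete(self, name):
        self._active.remove(name)
        self._retired[name] = self._now()
```

The point of the sketch: unlike a permanent ban, the owner (or anyone) gets the name back after the window, so a fat-fingered deletion isn't a life sentence, but a squatter can't instantly grab a just-retired name either.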
Potential reasons I can think of for why they don't disallow name reuse:
a) AWS would need to maintain a database of all historical bucket names to know what to disallow. This is hard per region and even harder globally. It's easier to know what is currently in use than what has ever been used historically.
b) Even if they maintained a database of all historically used bucket names, the latency to query whether something exists in it may be large enough to be annoying during the bucket creation process. Knowing AWS, they'd charge you per 1000 requests for "checking if bucket name exists" :p
c) AWS builds many of its own services on S3 (as indicated in the article), and I can imagine there may be many internal services that just rely on the existing behaviour, i.e. being able to re-create a bucket with the same name.
I can't accept a) or b). They already need to keep a database of all existing bucket names globally, and they already need to check it on bucket creation. Adding a flag for deleted names doesn't seem like a big lift.
That would be a huge breaking change. Any workload that relies on re-using a bucket name would be broken, and at the scale of S3 that would have a non-trivial customer impact.
Not to mention the ergonomics would suck - suddenly your terraform destroy/apply loop breaks if there’s a bucket involved
Any workload that relies on re-using a bucket name is broken by design. If someone else can get it, then it's Undefined Behaviour. So it's in keeping with the contract for AWS to prevent re-use. Surely?
I think a better policy would be to disallow bucket names that follow the account regional namespace convention, but don’t match the account id indicated in the name.
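A rough sketch of what that check could look like, assuming a hypothetical `<prefix>--<account-id>--<region>` naming convention (the pattern, the function name, and the convention itself are invented for illustration, not actual S3 rules):

```python
import re

# Hypothetical convention: names like "my-data--123456789012--usw2"
# embed the owning 12-digit account id between double dashes.
CONVENTION = re.compile(
    r"^(?P<prefix>[a-z0-9-]+)--(?P<account>\d{12})--(?P<region>[a-z0-9-]+)$"
)

def may_create(name: str, caller_account: str) -> bool:
    """Reject names that follow the convention but embed someone else's account id."""
    m = CONVENTION.match(name)
    if m is None:
        return True  # plain names: this rule imposes no restriction
    return m.group("account") == caller_account
```

Under this policy, nobody can squat on a name that advertises itself as belonging to another account, while ordinary free-form names are unaffected.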
I followed the inquiry when it was ongoing — all of the depositions were live on YouTube. The level of both hubris and incompetence involved in that case was breathtaking.
Office chair technology has also really advanced since then (looking at the chair in the picture, the kind commonly seen near computers in photos of this era)
Indeed, the Aeron chair, which became a design classic and apparently the best-selling office chair ever in the US, only came out in 1994. So about the same time as the web. Not sure if it's the only office chair design with a dedicated Wikipedia page? https://en.wikipedia.org/wiki/Aeron_chair
Python is extra annoying though with refusing to support division by zero the way other programming languages with IEEE floats do (i.e. output inf or nan instead of throwing an exception), even though it has no problem doing things like float('inf') / float('inf') --> nan. It specifically does this for division by zero, as if it wants to be a junior grade calculator just for this one thing. They could at least have fixed this when breaking backwards compatibility from Python 2 to 3...
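For reference, the asymmetry in a Python 3 session:

```python
import math

# Division by zero always raises, even with floats on both sides:
try:
    print(1.0 / 0.0)
except ZeroDivisionError as e:
    print("raised:", e)   # raised: float division by zero

# ...while other IEEE 754 special cases pass through silently:
print(float('inf') / float('inf'))               # nan
print(math.isnan(float('inf') - float('inf')))   # True
```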
In most languages, `x: float = 0` involves an implicit conversion from int to float. In Python, type annotations have no impact on runtime behavior, so even though the type checker accepts this code, `type(x)` will be `int` -- Python acts as if `int` were a subtype of `float`.
It would be weird if the behavior of `1 / x` was different depending on whether `0` or `0.0` was passed to a `x: float` parameter -- if `int` is a subtype of `float`, then any operation allowed on `float` (e.g. division) should have the same behavior on both types.
This means Python had to choose at least one:
1. Division violates the Liskov substitution principle.
2. Division by zero involving only integer inputs returns NaN.
3. Division by zero involving only float inputs throws an exception.
4. It's a type error to pass an int where a float is expected.
They went with option 3, and I think I agree that this is the least harmful/surprising choice. Proper statically typed languages don't have to make this unfortunate tradeoff.
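Both points are easy to see in a short session (the function name is just for illustration):

```python
def reciprocal(x: float) -> float:
    return 1 / x

# The annotation is a no-op at runtime: assigning 0 leaves x an int.
x: float = 0
print(type(x))                          # <class 'int'>

# Option 3 in action: int and float arguments behave identically,
# including at zero, where both raise.
print(reciprocal(2), reciprocal(2.0))   # 0.5 0.5
for zero in (0, 0.0):
    try:
        reciprocal(zero)
    except ZeroDivisionError:
        print(type(zero).__name__, "-> ZeroDivisionError")
```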
C does different things for 0.0 / 0.0 and 0 / 0 and it's not that weird to deal with (though it has other issues, like what happens being platform-dependent). JS has no problem with it either: 0.0 / 0.0 gives NaN, while 0n / 0n throws an exception since those are BigInt integers.
Python is the only language doing this (of the ones I use at least).
I don't think the notation `x: float = 0` existed when it was new by the way so that can't be the design reason?
since Python handles integer-by-integer division as float division (e.g. 5 / 2 outputs 2.5), 0 / 0 giving nan would seem to be the expected result there
> liskov substitution principle
that would imply one is a subtype of another, is that really the case here? there are floats that can't be represented as an integer (e.g. 0.5) and integers that can't be represented as a double precision float (e.g. 18446744073709551615)
Python chose, quite some time ago, not to follow C's lead on division:
PEP 238 – Changing the Division Operator (2001) [1]
The rationale is basically that newcomers to Python should see the results that they would expect from grade school mathematics, not the results that an experienced programmer would expect from knowing C. While the PEP above doesn't touch on division by zero, it does point toward the objective being a cohesive, layman-friendly numeric system.
C and JavaScript both treat integers and floats as separate types. In Python, ints and floats with the same numeric value are considered identical for almost all purposes.
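A quick demonstration of that interchangeability, plus PEP 238's true division:

```python
# ints and floats with the same value compare equal, hash equal,
# and collide as dict/set keys:
print(1 == 1.0)                   # True
print(hash(1) == hash(1.0))       # True
print({1: "int", 1.0: "float"})   # {1: 'float'} -- one key, last value wins
print(len({0, 0.0}))              # 1

# And / is "true division" per PEP 238, even for two ints:
print(5 / 2)    # 2.5
print(5 // 2)   # 2  (floor division keeps the old int behavior)
```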
Nah. Python gets it right; all high-level languages should operate this way. Division by zero is a bug 90% of the time. Errors should never pass silently. Special cases aren't special enough to break the rules.
IEEE floats should be a base on which more reasonable math semantics are built. Saying that Python should return NaN or inf instead of throwing an error is like saying that Python should return a random value from memory or segfault when reading an out-of-bounds list index.
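And when IEEE results are genuinely what you want, opting in explicitly is cheap. A minimal sketch, assuming the usual signed-infinity and NaN conventions (the helper name is invented; numpy takes a similar route, returning inf/nan and emitting a configurable warning):

```python
import math

def ieee_div(a: float, b: float) -> float:
    """Division with IEEE 754 semantics: 0/0 -> nan, x/0 -> signed infinity."""
    try:
        return a / b
    except ZeroDivisionError:
        if a == 0:
            return math.nan
        # Sign of the result follows the signs of both operands,
        # including b being a signed zero.
        return math.copysign(math.inf, a) * math.copysign(1.0, b)

print(ieee_div(1.0, 0.0))    # inf
print(ieee_div(-1.0, 0.0))   # -inf
print(ieee_div(0.0, 0.0))    # nan
```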
Exactly. Programming language design should be coherent and consistent, not just for floats, but arrays, classes, pointers, or anything else that it offers.
And the sensible thing will depend on that language.
I thought about that recently while researching how VCRs work before attempting to fix one. I didn't even think about seeing the actual video signal, I was just curious what the diagonal lines and control pulses on the tape look like. There are many other things as well that would be interesting to look at (all kinds of tapes, all kinds of floppies, hard drive platters, magstripe cards), but unfortunately I don't think there exists a technology capable of visualizing magnetic fields with enough precision.
Thoughts on that: (1) you'd need a way to visualize the magnetic fields, (2) the data is frequency modulated, (3) due to helical scan, the video field lines do not line up evenly one over the next as they did so nicely in the Laserdisc / CED (there'd be a skew).
So I don't want to say it's impossible, but I think it would require a lot more creativity.