On a phone at normal reading distance, with the article's styling, it's really hard to tell the difference between n and u without zooming, and the decimal points get lost; scanning the tables is hard.
It's a good article, but I think you could sum it up neatly by saying "photographic masses search for photographic rules, come up empty." I've been doing amateur/hobbyist photography, as I suspect many here have as well, for quite some time.[0]
For a while, I followed the rule. But as a physics professor of mine once aptly put it, "Stop trying to look for a formula all the time. You have the tools to derive the formulas yourself." The rule of thirds, golden ratio, golden mean, golden doodle, whatever: they're just hodgepodge tools used by people who want to take a better picture than the standard iPhone eye-level shot (or the old Kodak 35mm point-and-shoot).
The image is interesting because of the curve of the street, the Escher-esque staircase, and the fact that a bicyclist in motion happens to be moving past the only dead area of the image.
And that gets to the main point: is the image interesting? If it's not an interesting image in the first place, no magic formula is going to fix it. That's where the creativity comes in. Find the non-obvious angle that gives the shot some interest, find a subject that's a little less obvious than the influencer instacrap wingspan shots, find a location that's a little off the beaten path. Do that 10,000 times and you'll train your eye and develop a unique style that can last you through life.
Burk Uzzle is famously quoted as saying "Photography is a love affair with life" and I wholeheartedly agree. Life is beautiful, so just get out there and shoot it. You don't need a formula to find the love in a good shot.
The site is a bit more enjoyable imho once you realize that even the satire is self-reflecting satire.
(And checking html for comments in 2024? We've forgotten more than we ever knew.)
On a more serious note, it pains me a bit that our legends are slowly passing into obscurity as surely as they will soon pass away. Donald Knuth deserves a Presidential Medal of Freedom or some other high award for his many accomplishments and gifts to the field.
I highly recommend doing this. I also highly recommend not doing it digitally.
I recently came across an email I'd sent to myself a decade ago. It was a serendipitous find and could've easily been lost for all time among the 100K emails floating around. But the process of writing it is worth it, and the reading of it some time later can be deeply rewarding.
Hah... the Swiss Post has an app that lets you design a postcard and send it; it even has an "every user can send one free postcard every day" feature that cool kids use as a meme-printing service. I feel like they should also offer a "print-a-letter-in-the-future and send it to the user" service. As a bonus, they can probably track the movements of their customers because of their "address change" service (if you move houses and use this service, they'll let companies know of your new address). And it's the Swiss Post; it shouldn't be disappearing any time soon.
Processors are inherently awesome at branching, adding, shifting, etc. And shifting to get powers of 2 (i.e., KB vs. GB) is a superpower of its own. They're a little less awesome when it comes to math.pow(), math.log(), and math.log() / math.log().
That 300K+ people copied this in the first place shows some basic level of ignorance about what's happening under the hood.[1]
As someone who's been at this for decades now and knows my own failings better than ever, it also shows how developers can be too attracted by shiny things (ooh look, you can solve it with logs instead, how clever!) at the expense of readable, maintainable code.
[1] But hey, maybe that's why we were all on StackOverflow in the first place
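For comparison, here's a sketch of the plain-loop alternative being argued for (Python here, though the StackOverflow snippet under discussion may have been in another language; the function name and unit labels are my own):

```python
def human_bytes(n: int) -> str:
    """Format a byte count with binary (1024-based) units using a plain loop."""
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB"]
    value = float(n)
    for unit in units:
        if value < 1024 or unit == units[-1]:
            # No pow()/log() needed: each pass just divides once by 1024.
            return f"{int(value)} {unit}" if unit == "B" else f"{value:.1f} {unit}"
        value /= 1024
```

Nothing fancier than a comparison and a division per unit, and it reads exactly like the spec: "keep dividing by 1024 until the number is small enough."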
> Processors are inherently awesome at branching, adding, shifting, etc. And shifting to get powers of 2 (i.e., KB vs. GB) is a superpower of its own. They're a little less awesome when it comes to math.pow(), math.log(), and math.log() / math.log().
And here's something to consider -- if you're converting a number to human-readable format, it's more likely than not you're about to do I/O with the resulting string, which is probably going to be an order of magnitude more expensive than the little function here.
Great point, I wish I'd mentioned it. The expense of the printf dwarfs the log / log (a double divided by a double, then cast to an int), which itself costs more than a few repeated comparisons in a for loop.
It's key to be able to recognize this when thinking about performant code.
In other words, the entire exercise is silliness because the eventual printf is going to blow away any nanoseconds of savings by a smarter/shorter routine.
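A rough way to sanity-check the relative costs being claimed here (a hedged micro-benchmark sketch; absolute numbers vary by machine and runtime, and the function names and unit labels are mine):

```python
import math
import timeit

N = 123_456_789

def exponent_only(n=N):
    # Just the "clever" part: log / log, a double division, cast to int.
    return int(math.log(n) / math.log(1024))

def full_format(n=N):
    # The whole job, ending in string formatting (a stand-in for the printf).
    units = ["B", "KB", "MB", "GB", "TB"]
    e = int(math.log(n) / math.log(1024))
    return f"{n / 1024 ** e:.1f} {units[e]}"

t_exp = timeit.timeit(exponent_only, number=100_000)
t_fmt = timeit.timeit(full_format, number=100_000)
print(f"exponent only: {t_exp:.3f}s  full format: {t_fmt:.3f}s")
```

The point isn't the exact figures, only that the string-formatting step tends to dominate, which is why shaving nanoseconds off the exponent calculation is bikeshedding.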
It's not that we think it's arcane or that we are in our own "bubbles of thought", it's that we aren't doing math. We're programming a computer. And a competent programmer would know, or at least suspect, that doing it with logarithms will be slower and more complicated for a computer. The author even points out that even he wouldn't use his solution.
I'm having a hard time imagining a situation where "printing out the number in a human readable format" is more time consuming than "figuring out what the number is".
I think a competent programmer might also ask themselves "am I prematurely optimizing?" if their first instinct is to pick the method that only works on a computer. I've operated in this space long enough that bit shifting is synonymous with doing the logarithm in my mind, but if I had to explain how my code works, I would use the logarithm explanation. I would be sure to point out that the computer does log (base 2) of a number much much MUCH faster than any other base.
It's probably excessive to say that literally everyone is taught logarithms as the ideal solution to this problem, but logarithms are almost universally introduced by explaining that the floor of the log (base 10) of a number, plus one, gives the number of digits in that base 10 number. So if you completed a high school education in the United States, you have almost certainly heard that much at least.
edit: printing out the number is almost always gonna be faster than figuring out the value of the number, if the speed of the operation matters. My original post implied the opposite. Part of being a competent programmer is recognizing that optimizing is sometimes bikeshedding.
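To make that textbook relationship concrete (a Python sketch; the function names are mine): floor(log10(n)) + 1 equals the digit count of any positive integer n, and a "dumb" division loop computes the same thing without touching floating point.

```python
import math

def digits_log(n: int) -> int:
    # floor(log10(n)) + 1 is the digit count for a positive integer n.
    return math.floor(math.log10(n)) + 1

def digits_loop(n: int) -> int:
    # The loop version: count how many times n can be divided by 10.
    count = 0
    while n > 0:
        n //= 10
        count += 1
    return count

for n in (1, 9, 10, 999, 1000, 123456789):
    assert digits_log(n) == digits_loop(n) == len(str(n))
```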
The author's final suggested solution at the bottom of the article still relies on logarithms.
> doing it with logarithms will be slower and more complicated for a computer
This is a fascinating point of view, and while it isn't wrong from a certain "low-level optimization golf" perspective, it's based in part on old assumptions about early chipsets that haven't been true in decades. Most FPUs in modern computers will do basic logarithms in nearly as few cycles as any other floating point math. It is marvelous technology. That many languages wrap these CPU features in what look like library function calls, like Math.log(), instead of having some sort of "log operator", is as much a historical accident of mathematical notation, and of the fact that logarithms were extremely slow for a human to compute.
Logarithms used to be the domain of lookup books (you might have one or more volumes, if not a shelf-full); they were one of the keys to the existence of slide rules, and the reason an engineer would actually own a set of slide rules in different logarithmic bases. Mathematicians would spend lifetimes doing the complex calculations needed to fill a book of logarithm tables.
Today's computers excel at it. Early CPU designs saved transistors and made logarithms a problem for application/language design. Some of the most famous game designs did interesting hacks, pre-computing logarithm tables for a specific set of needs and embedding them in ROM as a memory-versus-CPU-time trade-off. Today's CPU designs have plenty of transistors, and logarithm support in hardware is just about guaranteed. (That's just CPU designs; GPU designs can be logarithmic monsters in how many logarithms they can compute, and how fast.)
Yesterday's mathematicians envy the speed at which a modern computer can calculate logarithms.
In 2023, if you are trying to optimize an algorithm away from logarithms to some other mix of arithmetic, you are either writing retro games for a classic chipset like the MOS 6502, stuck by your bosses in a history-challenged backwards language such as COBOL, or massively prematurely optimizing what the CPU can already optimize better for you. I wish that were something any competent programmer would know, or at least suspect. It's 2023; it's okay to use logarithms like a mathematician, because you aren't going to need that "optimization" of bit shifts and addition/subtraction/multiplication/division that obscures your actual high-level algorithm and its complexity.
> what are you people even programming that you need to know so absolutely little about how anything else in the entire world works
Feoren, your comment takes an incredibly superior attitude and accuses its reader, every reader, of being stupid.
When taking the log of a number, the value in general requires an infinite number of digits to represent. Computing log(100) / log(10) should return 2.0 exactly, but since log(100) returns a fixed number of digits and log(10) returns a fixed number of digits, are you 100% confident that the ratio will be exactly 2.0?
Maybe you test it and it does return exactly 2.0 (to the degree floating point can be exactly any value). Are you confident that such a calculation will also work for any power of 10? Maybe they all work on this Intel machine -- does it work on every Arm CPU? Every RISC-V CPU? Etc. I wouldn't be, but if I wrote a dumb "for" loop I'd be far more confident that I'd get the right result in every case.
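The concern is easy to demonstrate (a Python sketch; the exact floating-point results are platform-dependent, which is exactly the point, so I don't assert them):

```python
import math

# log(1000) / log(10) "should" be exactly 3.0, but each log() result is
# rounded to a fixed number of bits, so the ratio may land just below 3.0
# on a given platform -- and int() then truncates it to 2: an off-by-one.
ratio = math.log(1000) / math.log(10)
print(ratio, int(ratio))

# The dumb loop has no such hazard for integers:
def exponent10(n: int) -> int:
    # Count how many times n can be divided by 10 before dropping below 10.
    e = 0
    while n >= 10:
        n //= 10
        e += 1
    return e

print(exponent10(1000))  # exactly 3, on every platform
```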
> your comment takes an incredibly superior attitude and accuses its reader, every reader, of being stupid.
It's also an incredibly superior attitude to think that the discipline of software development is so uniquely special that other subjects, even basic math, have nothing to offer it, and that one could be an effective and productive software developer without having to besmirch your perfect code with concepts from other schools of thought.
And "stupid" would mean "incapable of understanding basic math". This is more like "unwilling to even try". Mere stupidity would be fine: stupid people need jobs too. But a statement that the operation everyone else in the world would use is "unmaintainable" because the programmer is unwilling to refresh themselves on how logarithms work with a quick scan of its Wikipedia article, that's not stupidity. That's bordering on malpractice.
> When taking the log of a number, the value in general requires an infinite number of digits to represent.
So does taking a third of a number. So? Do you consider the code "x / 3.0" unmaintainable?
> Computing log(100) / log(10) should return 2.0 exactly, but since log(100) returns a fixed number of digits and log(10) returns a fixed number of digits, are you 100% confident that the ratio will be exactly 2.0?
Exactness was never a requirement. Do you really never use floating point? The reality is that showing "1000 kB" the 1% of the time you should have shown "1.0 MB" is actually fine -- nobody cares, everyone understands what it means -- and that applies to almost all floating point imprecision. It's important to know when it does matter, but it usually doesn't. It's important for a professional to know when not to care. How much of your client's money are you going to spend worrying about tiny details they don't care about?
> Are you confident that such a calculation will also work for any power of 10? Maybe they all work on this Intel machine -- does it work on every Arm CPU? Every RISC-V CPU? Etc. I wouldn't be, but if I wrote a dumb "for" loop I'd be far more confident that I'd get the right result in every case.
Except a 0.00001% imprecision doesn't matter in most cases, but an off-by-one error does. For loops are a much more common source of error than logarithms are.
> You're all literally writing CRUD React front-end javascript by copy-pasting "for" loops from StackOverflow?
To an approximation, yes.
The underlying calculations at my bank were probably written once in 1970 in COBOL and haven't changed meaningfully since. But the front-end UI to access it has gone from teletypes and punch cards to glass terminals to networked DOS to Win32 to ActiveX to Web 2.0 to React and mobile apps. Lots and lots of churn and work on the CRUD part, zero churn and work on the "need to remember logarithms" part.
AI? You have core teams building ChatGPT, Midjourney, etc. Then huge numbers of people accessing those via API, building CRUD sites to aggregate Midjourney results and prompts, etc. Even Apple has made a drag-and-drop UI to train an AI object classifier; the ratio of people using that to the people who had to know the math to build it is probably way above 100,000:1.
Well, maybe not exactly unmaintainable, but I think most of us have learned that floating point operations are not to be trusted, especially if the code needs to run on different processors.
Furthermore, calling such math operations is overkill most of the time. I would definitely never consider it for such a simple operation. I actually agree with you that it might look cleaner and easier to understand, but in my mind it would be such heavyweight overkill that I would never use it.
Ahhh fonts, where everyone gets an opinion and they're all super important.
If you're designing a font not just for legibility but primarily for safety, then it seems extremely important that each glyph be uniquely distinguishable from every other glyph. Although this font has distinct characters for 1/I/l, at a quick glance an uppercase I could still be confused with a pipe (|), and 0 (zero) with O (capital o).[1] I'm sure there are more. So from that standpoint, this font fails for me on legibility/safety.
Also a nitpick, but assuming Chrome is using 60pt B612 font for the title (../fonts/B612-Regular.woff), the "B/6/1" glyphs are hideously formed (that "1" puke) and make me doubt the rest of the character set.
The font has a slashed 0 for use in alphanumerics and an open 0 for use in numerics. There's discussion of it in other comments here.
I imagine pipes aren't used beside uppercase I in cockpits.
The odd serif on the 1 is to ensure it degrades correctly at low resolution.
When you design for safety, you also have to ask "safety in what context". It's neat that they released the font with an open license, but they didn't design it for anything other than Airbus cockpits.
> I have to question any article that claims task manager items are "randomly moving around."
"Randomly" is colloquially used to mean "chaotically" in a lot of contexts, and the chaotic they mean is "unpredictable enough as to seem or feel random".
I don't think I could even describe what the current Task Manager window looks like because I've been using Process Explorer as a replacement for probably a decade now.
Mostly agree with the other posters here. To add my own anecdotal piece, I had a majority of the 12 symptoms[1][2] defined as long covid. Caught covid right at the start, in Northern Thailand, Dec 2019. Over the next 24 months I went from a healthy (perhaps hyper-fit) 40-something male to someone who could barely drag himself to the gym, struggled to complete workouts, and went through bouts of mental fog, heart arrhythmias, chronic exhaustion, etc. Power outputs were down ~20% on pre- vs. post-covid tracked workouts. Doctors were unsympathetic and prescribed vitamin D for lack of sunlight.
But it's impossible to know if "long covid" is causal vs. ancillary. I'm also 3 years older now, and who knows, maybe that's just part of life and the aging process. I'll say that what helped me the most was just consistently getting out of the house and getting exercise. I started small, but it had its own compounding effects. I'm up to about 4 - 5 times per week now. Whether that's walking, running, lifting, or even a Murph, every bit of exercise seems to have helped me down the (long) road to recovery.
I'll probably never be back to my old fitness level, but things are at least better now. Long covid is as undefinable as it is real, but it can be beat imho. ymmv of course.
Good luck to all who are struggling with this.
That was one of the ostensible reasons, although the real driver likely had more to do with commercial than technical reasons.
There were probably some AOL types who accidentally opened toolbars and couldn't figure out how to close them. But no one who knew how to use Word launched two dozen toolbars as in that infamous image. Keep in mind that toolbars (plural) arose from a single bar of tools limited by the width of the screen. Someone who knows more than me may want to comment, but my guess is that product groups didn't like seeing their features buried three layers deep in flyout menus (Tools... Options...) and used by no one. At some point some PM declared that every menu-accessible function needed a corresponding toolbar icon, and the carnage started.
The best explanation of the ribbon I can remember comes from Jensen Harris's old blog[1]. Unfortunately, many of his goals and designs were dropped to make the original ship date, leaving the ribbon the mess that it was, and it's not much better today imho.
The biggest loss was for keyboard users, for whom correctly chorded menu mnemonics (win32 parlance: accelerator keys) were dropped in favor of nonsensical ribbon chords where the keyboard letter had almost nothing to do with the desired feature. And to add insult to injury, menu operations that used to happen in milliseconds took nearly a full second on capable machines of the time (and even today).
> At some point some PM declared that every menu-accessible function needed a corresponding toolbar icon and the carnage started.
I'm pretty sure Office had customizable toolbars, so yes, every menu item could be a toolbar item, but it's unlikely that anyone actually wanted them all enabled. And yeah, customizing toolbars is pretty far up the learning curve, so I understand the motivation for the Ribbon; but I managed to stop using Office before I had to experience it, so I don't know how well it actually worked.