But a synchronous function can make network calls or write to files, and many do. "Async" is a rather vague signal about a function's behaviour, unlike the absence of the IO monad in a Haskell type, which is an actual guarantee.
To me the difficulty is more with writing generic code and maintaining abstraction boundaries. Unless the language provides a way to generalise over the asyncness of functions, we need a combinatorial explosion of async variants of generic functions. Consider a simple filter algorithm: it needs versions for (synchronous vs asynchronous iterator) times (synchronous vs asynchronous predicate). We end up with a pragmatic but ugly solution: provide two versions of each algorithm, an async and a sync one, and force the user of the async one to wrap their synchronous arguments.
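A minimal sketch of that split in Python (the function names and wrappers are invented for illustration): the async variant demands async-shaped arguments, so a caller with an ordinary list and an ordinary predicate has to wrap both.

```python
import asyncio
from typing import AsyncIterator, Awaitable, Callable, Iterable, Iterator, TypeVar

T = TypeVar("T")

# Sync variant: sync iterable, sync predicate.
def filter_sync(pred: Callable[[T], bool], items: Iterable[T]) -> Iterator[T]:
    return (x for x in items if pred(x))

# Async variant: async iterator, async predicate.
async def filter_async(pred: Callable[[T], Awaitable[bool]],
                       items: AsyncIterator[T]) -> AsyncIterator[T]:
    async for x in items:
        if await pred(x):
            yield x

# Wrappers the user of the async variant is forced to write
# for plain synchronous arguments:
async def as_async_iter(items):
    for x in items:
        yield x

def as_async_pred(pred):
    async def wrapped(x):
        return pred(x)
    return wrapped

async def main():
    evens = [x async for x in filter_async(as_async_pred(lambda x: x % 2 == 0),
                                           as_async_iter(range(10)))]
    print(evens)  # [0, 2, 4, 6, 8]

asyncio.run(main())
```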
Similarly, changing some implementation detail of a function might turn it from synchronous to asynchronous, and this change must then propagate through the entire call chain (or the function must start its own async runtime). Again we end up in a place where the most future-proof promise an abstraction barrier can make is to mark everything as async.
> But a synchronous function can make network calls or write to files, and many do
This, for me, is the main drawback of async/await, at least as it is implemented in, for example, Python. When you call a synchronous function which makes network calls, it blocks the event loop, which is pretty disastrous: for the duration of that call you lose all concurrency. And it's a fairly easy footgun to set off.
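The footgun is easy to demonstrate with `time.sleep()` standing in for a blocking network call (any synchronous socket or `requests` call behaves the same way): two "concurrent" tasks end up running serially.

```python
import asyncio
import time

async def blocking_handler():
    # Looks async, but time.sleep() is a synchronous call: it blocks
    # the event loop, just like a synchronous network request would.
    time.sleep(0.2)

async def cooperative_handler():
    # Yields to the event loop, so other tasks keep running.
    await asyncio.sleep(0.2)

async def timed(handler):
    start = time.perf_counter()
    await asyncio.gather(handler(), handler())
    return time.perf_counter() - start

# Two blocking handlers run back to back: ~0.4 s, concurrency is lost.
print(asyncio.run(timed(blocking_handler)))
# Two cooperative handlers overlap: ~0.2 s.
print(asyncio.run(timed(cooperative_handler)))
```

(The standard escape hatch is `loop.run_in_executor()` / `asyncio.to_thread()`, which moves the blocking call off the event loop thread.)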
> "Async" is a rather vague signal about a function's behaviour, unlike the absence of the IO monad in a Haskell type
I'm happy you mentioned the IO monad! For me, in the languages people pay me to write in (which sadly does not include Haskell or F#), async/await functions as a poor man's IO monad.
> Again we end up in a place where the most future-proof promise an abstraction barrier can make is to mark everything as async.
Yes, this is one way to write async code. But to me this smells the same as writing every Haskell program as one giant do block because the internals might want to do I/O at some point. Async/await makes changing side-effect-free internals to effectful ones painful, which pushes you towards doing the I/O at the boundaries of your system (where it belongs) rather than all over the call stack. In a ports-and-adapters architecture, it's perfectly feasible to restrict network I/O to your service layer and leave your domain entirely synchronous. E.g. something like:
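A minimal sketch of that shape (all names here are invented for illustration; the in-memory repository stands in for a real adapter that would do network I/O):

```python
import asyncio

# Domain: pure, synchronous business logic. No awaits, no I/O.
def apply_discount(price: float, loyalty_years: int) -> float:
    return price * (0.95 if loyalty_years >= 2 else 1.0)

# Port: the interface the service layer depends on.
class PriceRepository:
    async def fetch_price(self, sku: str) -> float:
        raise NotImplementedError

# Adapter: a fake for the sketch; a real one would call a database or API.
class InMemoryPriceRepository(PriceRepository):
    def __init__(self, prices):
        self._prices = prices

    async def fetch_price(self, sku):
        return self._prices[sku]

# Service layer: the only async code. It does I/O at the boundary,
# then calls into the synchronous domain.
async def quote(repo: PriceRepository, sku: str, loyalty_years: int) -> float:
    price = await repo.fetch_price(sku)          # I/O at the edge
    return apply_discount(price, loyalty_years)  # pure domain call

repo = InMemoryPriceRepository({"book": 10.0})
print(asyncio.run(quote(repo, "book", loyalty_years=3)))  # 9.5
```

Only `quote` is async; `apply_discount` can change freely without any await propagating anywhere.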
Async/await pushes you to code in a certain way that I believe makes a codebase more maintainable, in a way similar to the IO monad. And as with the IO monad, you can subvert this push by making everything async (or writing everything in a do block), but there are better ways of working with them, and judging them based on this subversion is not entirely fair.
> ugly solution: provide two versions of each algorithm, an async and a sync one
I see your point, and I think it's entirely valid. But having worked in a couple of async codebases over a couple of years, the number of things I (or one of my collaborators) have had to duplicate for this reason can, I think, be counted on one hand. In practice this cost seems fairly low.
What you write is eerily similar to one of the pain points of Haskell. You can write a compiler that is purely functional. But then you want logging, so you must wrap it in the IO monad. And then also every function that calls the compiler, and so on.
It is not a matter of the age of your hardware but of your software. The iPad mini 4 got its latest OS update in 2023. The problem is that in the Android ecosystem it is unfortunately common to have only a short window of OS updates from the release date of the hardware.
Yes, but parent was specifically talking about hardware specs. And a 4x increase since 2016 isn't much progress compared to the 'good ole times' ;)
Would you care to elaborate on which device it is? Twelve years old means it could at most have come with Honeycomb when you bought it. Has the base OS been updated since? It is great that you bought a device which presumably got OS updates for many years; unfortunately that is not common for Android devices. And the problem is, if it did not get an OS update, the trust store did not get updated, and thus it will stop trusting the Let's Encrypt root CA sometime next year. Except in Firefox, which luckily comes with its own root certificate list.
The iPhone 7 got an iOS update to 15.7.7 in 2023, and its trust store contains the ISRG Root X1 certificate: https://support.apple.com/en-gb/HT212773 I am unsure which apps you cannot install, but a quick look in the App Store indicates that Zoom, LinkedIn, Notability, 1Password, Disney+ and Netflix all support iOS 15.7. And in my anecdotal experience I could find no app with a minimum OS requirement greater than 15.7. As far as I can tell you need to go back to the iPhone 4S to find an iPhone which does not support the Let's Encrypt root certificate. That device is only 11 years old, so still worse than your Android device. And I do not think there is a workaround of using a different browser, as there is for Android.
Interesting - what apps were working as new installs on a 12-year-old Android device but not on an iPhone 7? That's unusual. iPhones' update cycle is much longer, and they usually ship with a recent iOS version when released.
But usually these other devices can also update their software; it is fairly unique to the Android ecosystem to keep running old, unupdatable software.
Or they should be, if they are web-facing. For security reasons at least.
But would you version it by storing it as output in an ipynb file, where it is overwritten if you rerun that cell? I would store the data in a versioned database or as separate data files in the repository (possibly in git-lfs). And I would store results of the analysis as data files / image files / whatever else, NOT as ephemeral outputs in an ipynb file. But I am pretty far down the "ipynb files are for local use only" path.
Yeah, if your analysis takes hours to run, you should really split up the number-crunching code and the result analysis / visualization. Not only does it make version control of the code easier, you can save the output in an organized, labeled manner (time-stamped, etc.), and, if you lose power or the kernel crashes, you don't need to rerun the lengthy analysis to make a change further down the pipeline.
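A minimal sketch of that split (file names and the result dict are invented for illustration): the expensive stage writes a time-stamped file, and the visualization stage only ever reads from disk.

```python
import json
import time
from pathlib import Path

def run_expensive_analysis():
    # Stand-in for the hours-long number crunching.
    return {"mean": 0.5, "n": 1000}

def save_result(result, out_dir="results"):
    # Label each run with a timestamp so reruns never overwrite
    # earlier outputs, and old runs stay available downstream.
    Path(out_dir).mkdir(exist_ok=True)
    path = Path(out_dir) / f"analysis-{time.strftime('%Y%m%d-%H%M%S')}.json"
    path.write_text(json.dumps(result))
    return path

path = save_result(run_expensive_analysis())

# Visualization code (possibly a notebook) reloads from disk,
# so a crash or a tweak downstream never triggers a recompute.
result = json.loads(path.read_text())
print(result["mean"])  # 0.5
```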
It was an extreme example to drive home the point that one is "human scale time" and one is "computer scale time", people are reading far far too much into my choice of hours specifically there.
If for some reason you really wanted to compute Fib(n) for ridiculously large n, you would probably use the fact that [Fib(n), Fib(n-1)] = A [Fib(n-1), Fib(n-2)] for the transition matrix A = [[1, 1], [1, 0]], and thus [Fib(n+1), Fib(n)] = A^n [Fib(1), Fib(0)], and then use exponentiation by squaring to compute A^n, and thus Fib(n), in log_2(n) matrix multiplications.
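A sketch of exactly that, with hand-rolled 2x2 matrices so nothing outside the standard library is needed:

```python
def mat_mul(a, b):
    # 2x2 integer matrix product.
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    # Exponentiation by squaring: O(log n) matrix multiplications.
    result = [[1, 0], [0, 1]]  # identity
    while n:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result

def fib(n):
    # A^n = [[Fib(n+1), Fib(n)], [Fib(n), Fib(n-1)]] for A = [[1, 1], [1, 0]]
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

With Python's arbitrary-precision integers this handles very large n directly (though each multiplication itself gets more expensive as the numbers grow).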
I am not a cryptographer, but to my understanding the number of PBKDF iterations is really only of concern for weak (low-entropy) passwords. If you know that your password has high entropy (>128 bit), for example because you generated it uniformly at random from at least 2^128 possible outcomes[1], you are safe even if you used only 1 iteration. PBKDF is all about password strengthening, so if you are making changes for yourself, the most effective change is just to use a secure password and stop worrying about key derivation functions.
[1] 28 characters if single-case letters, 23 characters if both upper and lower case are used, 22 characters if you also include numbers, 12 words if you use a word list of 2000 words and sample uniformly
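Those counts follow from each uniformly chosen symbol contributing log2(alphabet size) bits, and can be checked in a few lines:

```python
import math

def chars_needed(alphabet_size, target_bits=128):
    # Each uniformly chosen symbol contributes log2(alphabet_size) bits,
    # so round up target_bits / bits-per-symbol.
    return math.ceil(target_bits / math.log2(alphabet_size))

print(chars_needed(26))    # 28  (single-case letters)
print(chars_needed(52))    # 23  (mixed case)
print(chars_needed(62))    # 22  (mixed case + digits)
print(chars_needed(2000))  # 12  (word list of 2000 words)
```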
> If you know that your password has high entropy (>128 bit)
I don't think that is practical for most users: 12 words (or 10 taken from a 10k list), or 22 random alphanumeric characters, is hard to remember, and long enough to be difficult to type correctly. 70 bits might be a more sensible goal, but still long (6-7 words, or 12 characters from a set of 62).
This is the "trust anchor", so something the user needs to remember and type in - and from what I've seen, remembering and inputting 128 random bits is tricky.
And with modest stretching and a salt, probably overkill anyway.
I think your point is valid and important, especially considering the average user. However, in my experience it worked surprisingly well with a long word-based master password. Since I only needed to remember one password that I then used daily, it was not that difficult. And typing it was quick, since it was all lowercase, which most keyboards are optimized for. The issue came when I started using my password vault on my phone and tablet: I was way too slow at typing on them. I now have a 22-character password, which takes about the same time for me to type on a keyboard, maybe a bit slower, but is faster on my phone, though still annoyingly slow.
As for a 70-bit password, it might be enough, but you need a lot of iterations (2^58) if you want to completely recover the lost security margin, which would also be unusably slow in practice.
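The arithmetic behind that number: each doubling of the iteration count adds roughly one bit of effective strength, so closing the 128 - 70 = 58 bit gap needs 2^58 iterations. For comparison, 600,000 iterations (a commonly cited modern PBKDF2 count) buys only about 19 bits:

```python
import math

gap_bits = 128 - 70
print(2 ** gap_bits)       # 288230376151711744 iterations (2**58)

# A typical real-world iteration count adds only ~19 bits:
print(math.log2(600_000))
```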