If I'm understanding the problem correctly, this should be solved by pnpm [1]. It stores packages in a global content-addressable store and hardlinks them into the local node_modules, so running install in a new worktree should be near-instant.
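To illustrate the mechanism: a hardlink is just a second directory entry for the same inode, so "copying" a package into node_modules costs almost nothing. This is a minimal sketch using Node's fs module with made-up file names standing in for the store and the project copy — not pnpm's actual layout.

```typescript
// Hardlink sketch: two directory entries sharing one inode, which is why a
// second install can be cheap — node_modules entries point at files already
// in the global store. Paths here are invented for illustration.
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

const dir = fs.mkdtempSync(path.join(os.tmpdir(), "hardlink-demo-"));
const store = path.join(dir, "store.txt");          // stands in for the store copy
const local = path.join(dir, "node_modules.txt");   // stands in for the project copy

fs.writeFileSync(store, "package contents");
fs.linkSync(store, local); // a hardlink, not a copy

// Both paths resolve to the same inode: no duplicated disk space.
console.log(fs.statSync(store).ino === fs.statSync(local).ino);
```

Because every worktree's node_modules links into the same store, only the first install pays the download and unpack cost.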
With that feature the external editor is launched via Joplin, so the editing is still happening in the ecosystem. You can’t just open the notes from outside Joplin.
He mentions input latency [1] as one of four aspects of being fast that were considered during development. I’m not aware of how that was tested, but I’d trust that it outperforms iTerm2 in that regard.
> Algorithmically, this is an interesting problem but it should be quite solvable. Just, for some reason, nobody has worked on this yet. So, thanks for writing this post and bringing more attention to this problem!
I'm skeptical that an algorithmic solution will be possible, but I can see this being handled in a UX layer built on top. For example, a client could detect that there's been a conflict based on the editing traces, and show a conflict resolution dialog that makes a new edit based on the resolution. The tricky part is marking a conflict as resolved. I suspect it could be as simple as adding a field to the CRDT, but maybe then it counts as an algorithmic solution?
I should have been more clear in my original comment.
I don't think that the conflict detection/resolution needs to live inside the CRDT data structure. Ultimately you might want to bake it in out of convenience, but it should be possible to handle separately (of course the resolution will ultimately need to be written to the CRDT, but this can be a regular edit).
Keeping the conflict resolution in the application layer allows for CRDT libraries that don't need to be aware of human-in-the-loop conflicts, and can serve a wider range of downstream needs. For example, a note app and a version control system might both operate on plain text, but their conflict resolution needs to be handled completely differently. Another example would be collaborative offline vs. online use; as noted above, they are very different use cases.
I’m not sure I agree that that approach would work. There are two reasons:
1. The CRDT has an awful lot of information at its disposal while merging branches. I think “branch merging” algorithms should ideally be able to make use of that information.
2. There’s a potential problem in your approach where two users concurrently merge branches together. In that case, you don’t want the merges themselves to also conflict with one another. What you actually want is for the standard CRDT convergence properties to hold for the merge operation itself - so that if two people concurrently merge branches together (in any order) then it all behaves correctly.
For that to happen, I think you need some coordination between the merging and the CRDT’s internal data structures. And why not? They’re right there.
A sketch of a solution would be for the merge operation to perform the normal optimistic merge - but also annotate the document with a set of locations which need human review. If all peers use the same algorithm to figure out where those conflict annotations go, then the merge itself should be idempotent. The conflict annotations can be part of the shared data model.
Another, maybe simpler approach would be for the CRDT library itself to just return the list of conflicting locations out of band when you do a merge operation. (I.e., we don’t change the CRDT data structure itself - we just also return a list of conflicting lines as a return value from the merge function.) The editor takes the list of conflicting locations and builds a UI around the list so the user can do manual review.
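A toy sketch of that out-of-band idea, heavily simplified: instead of a real CRDT, this uses a plain line-based three-way merge (same-length documents, edits only). The point is the shape of the API — the merge picks a deterministic winner so all peers converge, and reports conflict locations as a return value rather than baking them into the data structure. `mergeWithConflicts` and everything in it are invented names, not from any real library.

```typescript
// Hypothetical sketch: a line-based three-way merge that converges
// deterministically and returns conflict locations out of band.
// A real CRDT would track per-character operations instead.

interface MergeResult {
  merged: string[];    // optimistically merged lines
  conflicts: number[]; // line indices needing human review
}

function mergeWithConflicts(base: string[], a: string[], b: string[]): MergeResult {
  // Assumes equal-length line arrays for simplicity (no inserts/deletes).
  const merged: string[] = [];
  const conflicts: number[] = [];
  for (let i = 0; i < base.length; i++) {
    const changedA = a[i] !== base[i];
    const changedB = b[i] !== base[i];
    if (changedA && changedB && a[i] !== b[i]) {
      // Both sides edited the same line differently: pick a winner by a
      // rule that depends only on content (lexicographic order here), so
      // every peer converges regardless of merge order; flag for review.
      merged.push(a[i] < b[i] ? a[i] : b[i]);
      conflicts.push(i);
    } else {
      merged.push(changedA ? a[i] : changedB ? b[i] : base[i]);
    }
  }
  return { merged, conflicts };
}
```

Because the winner is chosen from the content alone, merging (a, b) and (b, a) produce the same document and the same conflict list — which is exactly the convergence property point 2 above asks for.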
I used an 8-bit PIC micro a couple years ago for power applications (think non-IoT lighting). The specific microcontroller we used had nice peripherals for sensing, and controlling diodes, but no FPU. I remember looking into getting something external to handle the PID, but the cost and board layout constraints made it challenging.
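For what it's worth, a PID loop doesn't strictly need an FPU: with fixed-point arithmetic the whole thing reduces to integer multiplies, adds, and shifts. Here's a rough sketch (in TypeScript for readability; on an 8-bit PIC it would be the same integer math in C). All gains, scaling, and limits are made-up illustrative values, not from any real design.

```typescript
// Fixed-point PID sketch: gains are integers scaled by 2^8 (Q8 format,
// so 1.0 === 256). The loop needs only integer multiply/add/shift — no FPU.
// Gains and limits below are illustrative only.

const SHIFT = 8;
const KP = Math.round(1.5 * 256);  // proportional gain in Q8
const KI = Math.round(0.02 * 256); // integral gain in Q8
const KD = Math.round(0.1 * 256);  // derivative gain in Q8

let integral = 0;
let prevError = 0;

function pidStep(setpoint: number, measured: number): number {
  const error = setpoint - measured;

  integral += error;
  // Clamp the accumulator to a 16-bit range to avoid integral wind-up.
  integral = Math.max(-32768, Math.min(32767, integral));

  const derivative = error - prevError;
  prevError = error;

  // Accumulate in full precision, then shift back down to output units.
  const out = (KP * error + KI * integral + KD * derivative) >> SHIFT;

  // Saturate to an 8-bit PWM duty cycle.
  return Math.max(0, Math.min(255, out));
}
```

The shift-by-8 at the end undoes the Q8 scaling of the gains, so the per-step cost is three multiplies and a handful of adds — well within reach of an 8-bit core.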
They have a solution for that: a session link. It passes a token through a query parameter that authenticates you to your account. You set that as the default search engine in each browser, so there's no need to repeatedly sign in.
Kagi uses multiple search APIs (including their own) and implements their own ranking and mixing of results.
Basically you get the same results with spam removed, plus some truly unique results from the in-house engine.
It is; the QR code spec defines eight specific mask patterns. The encoder tries each one and keeps whichever produces the best QR code (best here meaning easiest for a scanner to read: the spec scores each masked result with penalties for long runs and large blocks of a single color, and the lowest-penalty mask wins).
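Each of the eight masks is just a boolean condition on a module's (row, column); applying a mask flips every data module where the condition holds. A small sketch of that (the mask formulas are the standard ones from the spec; the tiny `applyMask` helper and its name are mine, and a real encoder would skip function/timing patterns rather than masking the whole matrix):

```typescript
// The eight QR mask conditions, indexed 0-7, evaluated per module (r = row,
// c = column). The encoder scores all eight masked matrices and keeps the
// one with the lowest penalty.
const MASKS: Array<(r: number, c: number) => boolean> = [
  (r, c) => (r + c) % 2 === 0,
  (r, _c) => r % 2 === 0,
  (_r, c) => c % 3 === 0,
  (r, c) => (r + c) % 3 === 0,
  (r, c) => (Math.floor(r / 2) + Math.floor(c / 3)) % 2 === 0,
  (r, c) => ((r * c) % 2) + ((r * c) % 3) === 0,
  (r, c) => (((r * c) % 2) + ((r * c) % 3)) % 2 === 0,
  (r, c) => (((r + c) % 2) + ((r * c) % 3)) % 2 === 0,
];

// Toy helper: flip every module where mask k's condition holds.
// (A real encoder masks only data modules, not function patterns.)
function applyMask(matrix: boolean[][], k: number): boolean[][] {
  return matrix.map((row, r) => row.map((m, c) => (MASKS[k](r, c) ? !m : m)));
}
```

Mask 0, for instance, produces a checkerboard — exactly the kind of pattern that breaks up large uniform regions.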
that would actually be... quite good. But I'd love to see more ambition (and community support) for Thunderbird. I would like it (or some future converged app) to be my RSS reader and, why not, my fediverse client.
[1]: https://pnpm.io/motivation