I had the same thought. I remember exporting image-based webpages in Macromedia Fireworks using HTML maps. No reason they shouldn't work today, even if they carry the same problems now as they did then (image-based websites are massive).
Yeah, my first thought was <map>, then I moused over it and noticed how utterly trivial this would be to re-style in comparison.
This is a good approach, I think. It's kinda fine-tuned for reMarkable output, as it's mostly pure black or white pixels, but clearly they're fine with that. And the end result works quite well, and seems as easy or easier to implement.
This is all also kinda made moot by there being a link to an excellent transcript at the bottom. Accessibility needs are better covered by a straightforward alternate page than <map> alone.
Not sure if I triggered some sort of race with fast mouse movement, or by my mouse being over the link before it appeared, but for me on Firefox on Linux the blue color went away when I hovered over a link. I'm not sure that's really a benefit; it honestly confused me and made me think I wasn't actually hovering over a link at first, since that's not how traditional links work.
Do you maybe have a user-stylesheet that overrides something about links? But I can't think of anything that makes sense that wouldn't also break the non-hovering version too.
Does opacity() combined with mix-blend-mode maybe not work right on your box? I'm not sure why it wouldn't, but you can convince me of just about anything by saying "Firefox does weird things on some GPUs".
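For anyone wanting to poke at this locally, here's a minimal sketch of the general technique under discussion (not the page's actual markup; class names, colors, and geometry are all invented): a blue rectangle blended over the handwriting image so that black ink reads as blue while the white paper stays white.

```html
<style>
  .page { position: relative; }
  .page img { display: block; }
  .link-area {
    position: absolute;
    background: blue;
    /* screen blend: black ink under the rectangle becomes blue,
       white paper stays white */
    mix-blend-mode: screen;
    opacity: 0.9;
  }
</style>
<div class="page">
  <img src="page.png" alt="Handwritten blog post">
  <a class="link-area" href="/transcript"
     style="left: 12%; top: 20%; width: 18%; height: 3%;"></a>
</div>
```

If hovering misbehaves on a particular GPU/driver combination, toggling `opacity` on `:hover` in a testcase like this is a cheap way to check whether blending or compositing is at fault.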
Blending stuff is a particularly unreliable part of the web platform, because GPU drivers are astonishingly bad, and a lot of the functionality is poorly exercised. The situation has improved enormously in the past decade as a few things have started to use the GPU and so there’s been more attention to it all. For Firefox, the Quantum Render project ironed out almost all the problems so that bugs are now fairly rare. For web content in general and especially in other browsers, I suspect that Google Maps switching to WebGL single-handedly drove most of the improvement that has happened (even when it comes to identifying and blacklisting bad drivers and deliberately falling back to a software implementation).
But quite seriously, rejoice that you’re using Firefox. It has very few blending issues in general. I’ve found the Chromium family to be vastly buggier in this area, and I’ve learned when doing almost anything with filters and blending to develop with Chromium—previously I developed with Firefox and everything would just work, and then I’d test it in Chromium and discover some important bug or fundamental undocumented limitation in Chromium which meant I had to start again with a different approach, e.g. https://bugs.chromium.org/p/chromium/issues/detail?id=992398 where a perfectly normal combination of things just stopped rendering the page after 8,192 pixels (which isn’t all that much); or for something more recent (earlier this year) look at the source of https://temp.chrismorgan.info/2022-10-03-hand-drawn-with-lin... and see some of the compromises I made for Chromium.
I do have some custom CSS stylesheets installed, but none of them seem to be active on that URL. I did do some messing with my GTK and Gnome shell themes a few weeks ago, and I noticed that it strangely seemed to have some parts of the NY Times crossword site use a dark background where they didn't before, so maybe that's affecting it somehow?
<map> was introduced in HTML 3.2 in 1997. The other technique I often saw was to cut the image into a grid of separate image files. They would look to the user as if they were a single image, but you could hyperlink some of the images so the effect was that parts of the overall image were hyperlinked and some weren't. Developers came up with all kinds of interesting tricks in those days!
My first websites were built like this. I'd design PSDs and then slice them into images that I'd place in tables. I completely forgot about that approach!
I have to imagine some can, as the article mentions on-the-fly OCR of handwriting to generate the transcription, and I can select text from the handwriting on my phone (iOS 15 currently).
I was initially surprised to see it’s not transparent text overlaying the image, but apparently the text recognition is actually that good.
<map> and <area> are fine for a11y if marked up responsibly (they accept ARIA attrs and alternative text for example). I learned Amazon still uses them regularly!
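For anyone who hasn't touched image maps since the Fireworks days, a hedged sketch of what accessible `<map>` markup could look like here (coordinates, file names, and URLs are invented; 1404×1872 is the reMarkable's screen resolution):

```html
<img src="page.png" usemap="#page-links" width="1404" height="1872"
     alt="Handwritten blog post; transcript linked below">
<map name="page-links">
  <!-- coords are x1,y1,x2,y2 in image pixels; each <area> gets alt text -->
  <area shape="rect" coords="120,80,360,120"
        href="/transcript" alt="Read the transcript">
  <area shape="rect" coords="120,1700,400,1740"
        href="/" alt="Back to main page">
</map>
```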
Imagine if you could hyperlink as you handwrote notes, fluidly with the motion or gesture of the pen. As with text editors like emacs or *vim, you'd create new modes of thinking as you interact with a machine at the speed of thought, with the underpinning physical motions being too fast and too easy for conscious attention. Make wormholes from one part of the writing surface to a distant one, recording connections, asides, without conscious effort. No more than you expend on the shapes of letters, or the layout of QWERTY.
Instead, we're kind of stuck on the UX of "ordered sequence of rectangles", with some usability hacks to mitigate the severe limitations of this skeuomorphism.
Surely there must be some astounding low-hanging fruit here, in the UX of epaper handwriting as a tool of exploratory thought.
My current note-taking stack is an Obsidian Vault on my iPad with the mazec handwriting keyboard [1] (this post was written with it), occasionally switching to Nebo [2] if I want a long-form writing experience. Apple's built-in scribble functionality would be great if it weren't so inaccurate and kludgy when correcting mistakes. I won't wax poetic about Obsidian more than others already have, but this setup gets me very close to the link-as-you-write workflow that I sorely wished I had in grad school.
It would be cool if there were a gesture for this within Remarkable, like if you underline a word and draw a line to another word or it pulls up a table of contents you could point to, then when you finish the line, your “link line” would disappear leaving only the underline.
That's a neat idea. I was hoping the author would share an interesting strategy for linking to external documents inside a handwritten post. But I don't see any way it could be much more elegant than "I just wrote the URL down manually and linked the text to it," which is what they did. For linking to points inside the document, though, you could get pretty clever.
My sense is that the design goal of Zettelkasten is basically this. Why note cards? Why only a few sentences per card? So you can insert text in a fine-grained manner -- and so you can insert cross-links in a fine-grained manner.
More generally, with handwritten (or other physically-manifested writings), an indexing and referencing system substitutes for hyperlinks. The Bullet Journal index and references operate in a similar manner. The O'Reilly book UNIX Power Tools offers a print hypertext experience by numbering each individual article and section, and liberally referencing between these. The first edition is superior to later ones in that the text is printed in multiple colours (black and blue) which distinguish the text and navigational elements, much as early Web documents did with the standardisation of blue link colours.
For those who remember such things, the paper and print card catalogues of libraries operated similarly.
Gestures on epaper will be absurdly, hilariously overpowered. Gesturing with fingers on touchscreen is sluggish and imprecise; it's not at all like handwriting (and who in the pre-digital era wrote books with the technique of finger-painting? It'd be absurd). But, how many distinct glyphs can you draw with a fine-tipped pen in a quarter of a second? 50? 100? Easily the entire Latin and Greek alphabets, just as a starting point. It's a high-bandwidth input channel, just like a PC keyboard.
All you would need to control an epaper with gestures is to reserve some subset of the character-glyph space as an in-band control channel. Glyph shapes that are never semantically valid as written text. Shapes that are easy to reliably parse, which are highly "orthogonal" to written text, as well as to other control gestures.
- "How do you click the links though?"
I'd suggest they be interpreted as special "active" regions with a single default action, so any user input works, if it's within the region. Scratch a line over them and they open. My thinking is controlling the machine, and writing, should be the same actions and combine together fluidly: everything is a pen stroke.
On my monitor I found the text very difficult to read in giant print, so I tried to zoom out to make it smaller... and was somewhat disappointed to find that the text countered that zoom and remained the same size. This is an accessibility problem; many folks will need the inverse. Please don't do this.
(If you're in the same boat as a user, you can work around the issue by resizing the browser window. Well, you can on PC, I'm not sure that's possible on a tablet.)
Not bad at all. Sure, it would be better if the transcript were just part of the page, but I understand the tradeoff, and it's quick enough to get to the transcript.
Yeah, the page could be more accessible to users who don't use a screen reader but don't want to parse small handwriting.
If you had a screen reader you'd hear literally only:
- Link, image with alt handwritten.blog
- Link, image with alt Comment on this post
- Link, image with alt Read the transcript
- Link, image with alt Back to main page
I think it'd be less confusing if the main body had an alt. It would just be a short FYI that there's also a big image of text and that the transcript reproduces it.
But the transcript would be a lot harder to miss with a screen reader!
When I got my reMarkable two years ago I tried their app’s vector output: it was awful, because they were using completely the wrong technique: taking intersecting paths, taking the union, and then simplifying the path very lossily, which particularly ruins intersections because of the union step.
It’s possible they’ve improved the technique now, but I’d be a little surprised if they have, given that it amounts to throwing away the entire thing and starting from scratch.
There are a few open-source libraries to convert the .rm format to SVG. None are good for anything but the fineliner. I’ve written the closest one to decent for tools like the pen (though I haven’t got round to publishing it), but it still lacks the texture (pretty much unavoidable) and is thus never going to be excellent. SVG just lacks the necessary primitives.
That's a lot of work for I don't know how much gain. And, more importantly, I have to agree with a number of posters here that that page (and anything like it) is an accessibility nightmare.
Edit: I've thought about this more and think this is wrong. There should be a non-handwritten transcript link at the top. Embarrassingly, I focused on thinking about screen readers rather than accessibility more broadly.
---
(I've read a bunch about accessibility, but I'm no expert).
I don't know, I actually think this blog is more accessible than most.
You might have missed the transcript link at the bottom.
The only change I'd make is adding a summary alt to the image saying something like "The handwritten body of the blog post. A transcript is linked below".
(Without any alt the page might be confusingly blank, but as many screen readers poorly support long alt text you don't want to copy the transcript in.)
Having support for those who use screen readers is a huge improvement over the majority of sites, surely?
May I ask, what other accessibility conditions do you feel this site doesn't meet?
Also, for a non-corporate website, why should accessibility be a relevant factor? For a personal project, I don't see why anyone has to target every case.
I like the irony that the link to this completely inaccessible post, written as a PNG image, sits on HN right next to a link to Bogotá's government page, which has sign-language subtitles :)
Wouldn't it be more the opposite? Someone elsewhere in these comments notes that a screen reader can navigate through the structure of paragraphs much better than blobs of alt text.
Subtitles are blobs of text alternatives (to sound) consumed alongside the other content. As you point out, it's important that they're accurate and well-constructed!
Zooming is a bit hit-and-miss (works great in Firefox, not at all in Edge). But I just tried it with a screen reader and that experience is great: every image has alt text, the link to the transcript is about the fourth element, and the transcript is beautiful semantic HTML that the screen reader can easily make sense of.
Even then it would not really be accessible. Alternative texts are just text blobs without any semantics, and screen readers read them without any interruption.
There's massive controversy in the disabled community over alternate site versions that's far too complicated to get into.
Everyone I've heard from thinks a separate site is better than a massive alt though. Many screen readers handle them extremely poorly, and they can't be annotated with html tags to indicate semantics (link, list, etc).
Is that really a goal of accessibility, though? Are image descriptions not "separate"? If this is not enough, should no one have any images on the web at all?
I don't think we should discourage people from setting up systems that have this kind of accessibility, even if it's not as "pure" from the perspective of duplicated content.
Indeed, the Web Content Accessibility Guidelines explicitly allow creating a separate accessible version of content and linking to it. They call it a "conforming alternate version": https://www.w3.org/TR/2016/NOTE-WCAG20-TECHS-20161007/G190
(The guidelines say it's preferred for everything to conform directly, but they use "artistic integrity" as an example of a situation where a conforming alternate version could be used. I'd say that applies here.)
You can't make a text description of a guitar solo that's "equal", much less not "separate"; I don't buy that people really think that we therefore shouldn't have music.
Yeah but how do you encode the smell of a rose to someone without a nose? You're being incredibly discriminatory by only giving the smelless a warning and nothing else.
Great question! People with a typical sense of smell get to experience the smell of expired food already; adding expiration dates for those who can't is inclusive, not exclusive.
But even then it appears to be a different OCR process than what the author intended; spelling mistakes and text-to-speech errors abound. Not accessible.
I gave up writing in cursive in my late high school days, as print was substantially more readable for both myself and the teachers who had to grade my work. It's not uncommon; I think print had a good 30% share by the end of it.
As for my coworkers' notes these days, I'd say print even has the majority, though most people now use computer documents instead of handwriting anyway.
I was truly puzzled at whether it was done with one of those "realistic handwriting" fonts with many character variations (look at the lowercase 'a') and spent a bit more time than I wish I did trying to figure that out.
That said, I do wonder why. The fastest I can write manually is roughly 20-25wpm, and that's already going to produce something far more illegible than this article and be extremely fatiguing, whereas if I'm using a keyboard, I can do 5-7x that for long periods of time without tiring.
The trick of overlaying colorful rectangles on top of handwritten glyphs is the same thing I did for New Kind of Paper [0]. Apple PencilKit has this weird behavior where, when you change the color of a stroke, the canvas freezes for some time and the experience is horrible. Overlaying SwiftUI rectangles on top is the easiest way.
I was hoping this would be an innovative way to write hyperlinks, kind of like what bit.ly or QR codes do, where you just point a camera at what you drew and it takes you there.
You could do this with a standard marker (like the corner parts of a QR code) on a piece of transparent adhesive plastic, paired with a database. Turn anything into a QR code.
In theory you could do it without the plastic sticker on the lookup side, but it seems like you'd use far more resources for little gain (the principal gain being that you only need one transparent marker; the principal loss, that you have to search every image for every "marker code" ever created, across every part of the image).
I’ve been toying with basically the same concepts. It’s fun.
My preference has been to use SVG for the links (independent of whether the strokes are done raster or SVG). It’s not fundamentally any more expressive, but it’s much more convenient for a number of techniques like adding background colour or link underlines (I found changing only text colour with mix-blend-mode unsatisfactory), and links that span a line break.
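A sketch of what that approach could look like (path data, geometry, colors, and the URL are placeholders, not actual generated output):

```html
<svg viewBox="0 0 400 60" xmlns="http://www.w3.org/2000/svg">
  <a href="https://example.com">
    <!-- background shading behind the linked word -->
    <rect x="10" y="8" width="180" height="44" fill="#d6e4ff"/>
    <!-- the handwritten strokes themselves -->
    <path d="M 20 40 C 40 10, 60 50, 80 30 S 120 45, 150 25"
          fill="none" stroke="black" stroke-width="3"/>
    <!-- link underline; a line-spanning link repeats one per line -->
    <line x1="14" y1="50" x2="186" y2="50" stroke="blue" stroke-width="2"/>
  </a>
</svg>
```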
Here’s a snapshot of roughly where I last left this a few months ago. (I’ve had to substitute in an inferior document, as I was working with a lined-paper prose document that I’m not willing to share at this time. It works better with that type of content.) You can see that I’ve leaned into the pagination, styling it more paperily, and also that there’s a fairly significant overload of SVG filters which tend to ruin performance. I’ll pare that down significantly if I go ahead with it. (Still keeping some of the page-edge and maybe link shape roughness, but with a simpler paper texture.)
https://temp.chrismorgan.info/2022-10-03-hand-drawn-with-lin... (almost 800KB because the pen uses a variable-width stroke and my SVG generation for that isn’t particularly optimised because I’ve skimped on some of the fancier trigonometry involved—but I should note that it’s much more accurate than the “lots of short line segments” approach used by every existing .rm-to-.svg library I found).
Also if I go ahead with this, I’ll use the colour support that came in version 2.13 of the reMarkable system software, drawing my hyperlinks in blue from the start and reserving some highlighter colour if I want the background shading. Using distinct colours like this will make identifying and marking up links mechanically straightforward (since all the strokes in a given layer appear in draw order), leaving only the entry of the URL.
—⁂—
One other thing about the presentation on this site: the image scales to the viewport width, which is very, very not good: it means that regular desktop-style zoom (most importantly zoom out) just doesn’t work. The most common desktop screen sizes are getting the equivalent of a body font-size in excess of 50px (three times what is generally reasonable). You should add something like `.post { width: 40em }` which fixes the problem without causing any other trouble.
If this works for them, fine. It would be weird if everyone liked the same workflows. But for me, every extra step reduces my likelihood to write something.
Right now, it's three steps:
> make newpost NAME='name of this post'
[edit the skeleton until it has what I want in it]
I'd go for the SVG format instead of rasterizing to PNG. It's going to be lighter and give you a similar crisp feeling.
Have you tried passing the pages through an OCR transcription system like Tesseract? You also get this for free with the text-parsing feature of the reMarkable 2, and I was thinking you could use the [link title] syntax to mark links and have some sort of automatic mapping, so you don't have to specify the links by hand.
Anyhow, great idea. It's a bit small to read on iPhone.
A cool idea would be to convert each word to an individual image and put each of them inside an inline element to make the article responsive.
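Something like the following, presumably (file names and alt text invented; because the images are inline, the browser wraps them like words, which is what makes the page responsive):

```html
<p class="handwritten">
  <img src="words/imagine.png" alt="Imagine">
  <img src="words/if.png" alt="if">
  <img src="words/you.png" alt="you">
  <img src="words/could.png" alt="could">
  <!-- …one image per word, each carrying the word as its alt text -->
</p>
```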
I experimented with this approach on my blog[1]. I had an image of the handwritten blog post and used a feature in Inkscape to get SVG and make clickable text. I decided to place a little plain text in there to help Google's search crawlers know what content is on the page. I'd really prefer to use handwriting more.
My problems boil down to the flow of the tools sucking: I didn't want to have to attach multiple pages together, Inkscape was honestly a bit clunky to work with, and I didn't have any personal tools to generate the full HTML from the SVG.
A solution could be to OCR the site and overlay the recognized text on the page as HTML elements that are hidden but useful for Google to index the contents.
I can see the pain and friction that doing that manually would entail though.
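The hidden-overlay idea could be sketched like this, using the common "visually hidden" pattern (class name and text content are illustrative; whether search engines reward it is a separate question):

```html
<style>
  /* keeps the text in the DOM and the accessibility tree,
     but visually collapses it to a 1px box */
  .visually-hidden {
    position: absolute;
    width: 1px; height: 1px;
    overflow: hidden;
    clip-path: inset(50%);
    white-space: nowrap;
  }
</style>
<figure>
  <img src="page.png" alt="Handwritten blog post">
  <p class="visually-hidden">The OCR'd text of the page would go here.</p>
</figure>
```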
I clicked through to the transcript after reading the handwritten part, and it felt like a punchline because of how readable it was! It would be nice if the transcript were hidden on the same page behind a JS toggle, to improve accessibility.
I think the problem with doing something like this (besides accessibility) is you can never get the comfortable reading size right because the screen size/shape will never align with the flow of your text.
Personally, whenever I write by hand, the pace of my writing forces me to think more slowly and to choose wording that's short while keeping the intention clear.
I'm curious: if you write with pen and paper, does it look the same as how the reMarkable 2 renders it? I returned my RM2 because it made my handwriting look wobbly and difficult to read. Now I've gone back to paper, but it's more effort to OCR.
I'm reminded of Dijkstra's notes but with hyperlinks.
It's unfortunate that the OP didn't turn their writing into SVG, and it's somewhat stunning that turning a webpage into a PNG at a ridiculous resolution results in a _smaller_ page size than most webpages with similar functionality today.
The goal here is to present the content in hand-written form. That service is just OCRing and discarding the handwriting medium, which is not at all what’s wanted.
tangentially related, but wow this is eerie: 2 days ago, I randomly thought about starting a blog that uses only my handwriting for everything and how unique that would be. And here we go ...
Interestingly, the transcript linked at the bottom of the page does not activate my FF Android reader mode either. However, that page _is_ quite readable.
I hadn’t noticed there was a transcript, thanks. The transcript page does support Safari Reader mode. I think that Reader mode could also be supported on the main page, given that highlighting and copying text is possible there.
I have been playing lately with the source code of https://www.tldraw.com/ to do something similar: allow writing blogs / websites entirely on an iPad. I managed to insert links into the SVGs that were clickable...
My thinking was to allow people to doodle / make little comics on the web. Right now artsy people are confined to exporting PNGs and including them as images. I was thinking somebody should just grab an iPad and start drawing. If they want to write text, they should be able to do it with pen, keyboard or voice transcription...
If it were me, I'd go for the <map> tag instead of fancy new CSS!