Hyperlinks in Handwriting (handwritten.blog)
237 points by nathell on Oct 2, 2022 | 129 comments


https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ma...

If it were me, I'd go for the <map> tag instead of fancy new CSS!
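For anyone who hasn't touched it since the old days, a minimal sketch (coordinates and URL made up):

    <img src="post.png" usemap="#post-links" alt="Handwritten post">
    <map name="post-links">
        <area shape="rect" coords="120,340,480,390"
              href="https://example.com" alt="Example link">
    </map>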


I had the same thought. I remember exporting image-based webpages in Macromedia Fireworks using HTML maps. No reason they shouldn't work today, even if they carry the same problems now as they did then (image-based websites are massive).


What IDEs have a GUI for creating image maps anymore? Last one I remember using was in Macromedia Dreamweaver.

> image-based websites are massive

With the huge gains we've made in image optimization since <map>s were popular, this isn't always true anymore.


That's more due to the huge gains in javascript bloat offsetting the cost of images.


No it's due to image optimization. 5 bloated apps can easily be smaller than 1 bad gif.


AFAICT, there's not a way to get those nice hover effects with a <map>, at least not without a bunch of JS.


Yeah, my first thought was <map>, then I moused over it and noticed how utterly trivial this would be to re-style in comparison.

This is a good approach, I think. It's kinda fine-tuned for reMarkable output, as it's mostly pure black or white pixels, but clearly they're fine with that. And the end result works quite well, and seems as easy or easier to implement.

This is all also kinda made moot by there being a link to an excellent transcript at the bottom. Accessibility needs are better covered by a straightforward alternate page than <map> alone.


Not sure if I triggered some sort of race with fast mouse movement or my mouse being over the link before it appeared or something, but for me on Firefox on Linux, the blue color went away when I hovered my mouse over a link. I'm not sure if that's really a benefit; it honestly confused me and made me think that I wasn't actually hovering over a link at first, since that's not how traditional links work.


(Also on Firefox on Linux)

I can't begin to fathom how that could happen. It's not some fancy JS thing; it's a very simple CSS effect:

    a:hover {
        filter: opacity(0.5) !important;
    }
    
    .post a {
        background-color: #35c;
        mix-blend-mode: screen;
        position: absolute;
        display: block;
    }
Do you maybe have a user-stylesheet that overrides something about links? But I can't think of anything that makes sense that wouldn't also break the non-hovering version too.

Does opacity() combined with mix-blend-mode maybe not work right on your box? I'm not sure why it wouldn't, but you can convince me of just about anything by saying "Firefox does weird things on some GPUs".


Blending stuff is a particularly unreliable part of the web platform, because GPU drivers are astonishingly bad, and a lot of the functionality is poorly exercised. The situation has improved enormously in the past decade as a few things have started to use the GPU and so there’s been more attention to it all. For Firefox, the Quantum Render project ironed out almost all the problems so that bugs are now fairly rare. For web content in general and especially in other browsers, I suspect that Google Maps switching to WebGL single-handedly drove most of the improvement that has happened (even when it comes to identifying and blacklisting bad drivers and deliberately falling back to a software implementation).

But quite seriously, rejoice that you’re using Firefox. It has very few blending issues in general. I’ve found the Chromium family to be vastly buggier in this area, and I’ve learned when doing almost anything with filters and blending to develop with Chromium—previously I developed with Firefox and everything would just work, and then I’d test it in Chromium and discover some important bug or fundamental undocumented limitation in Chromium which meant I had to start again with a different approach, e.g. https://bugs.chromium.org/p/chromium/issues/detail?id=992398 where a perfectly normal combination of things just stopped rendering the page after 8,192 pixels (which isn’t all that much); or for something more recent (earlier this year) look at the source of https://temp.chrismorgan.info/2022-10-03-hand-drawn-with-lin... and see some of the compromises I made for Chromium.


I do have some custom CSS stylesheets installed, but none of them seem to be active on that URL. I did do some messing with my GTK and Gnome shell themes a few weeks ago, and I noticed that it strangely seemed to have some parts of the NY Times crossword site use a dark background where they didn't before, so maybe that's affecting it somehow?


I don't know if this is the same way image maps were made in the early '90s, but that's what I immediately thought of too.


<map> was introduced in HTML 3.2 in 1997. The other technique I often saw was to cut the image into a grid of separate image files. They would look to the user as if they were a single image, but you could hyperlink some of the images so the effect was that parts of the overall image were hyperlinked and some weren't. Developers came up with all kinds of interesting tricks in those days!
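The slicing trick usually relied on a borderless table so the pieces butted up seamlessly. Roughly (file names hypothetical):

    <table border="0" cellspacing="0" cellpadding="0">
        <tr>
            <td><img src="header_left.gif" alt=""></td>
            <td><a href="about.html"><img src="header_right.gif"
                border="0" alt="About us"></a></td>
        </tr>
    </table>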


My first websites were built like this. I'd design PSDs and then slice them into images that I'd place in tables. I completely forgot about that approach!


When I clicked the first link in the article, I thought it was an image map too.


Map is no good for SEO or WCAG though.


Are images of handwritten text good for that?


Maybe there's an opportunity here with on-the-fly OCR. Imagine if a screen reader could really read.


I have to imagine some can, as the article mentions on the fly OCR of handwriting to generate the transcription, and I can select text from the handwriting on my phone (iOS 15 currently).

I was initially surprised to see it’s not transparent text overlaying the image, but apparently the text recognition is actually that good.


Safari already has this.


Safari has Live Text; that's just Google Lens with fewer features.


On the fly for each end user loading seems wasteful, why reprocess each time?


No, they aren't.


<map> and <area> are fine for a11y if marked up responsibly (they accept ARIA attrs and alternative text for example). I learned Amazon still uses them regularly!
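A sketch of what responsible markup looks like (coordinates and labels hypothetical): each <area> gets its own alternative text, so a screen reader announces the regions as distinct links:

    <img src="carousel.png" usemap="#products" alt="Featured products">
    <map name="products">
        <area shape="rect" coords="0,0,300,400"
              href="/product/1" alt="Product one">
        <area shape="rect" coords="300,0,600,400"
              href="/product/2" alt="Product two">
    </map>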


That's interesting, do you know what Amazon uses them for?


I've seen them in promotional carousels. One large bg image, each individual product is its own <area>.


Oh, that's not what I thought the title meant!

Imagine if you could hyperlink as you handwrote notes, fluidly with the motion or gesture of the pen. As with text editors like emacs or *vim, you'd create new modes of thinking as you interact with a machine at the speed of thought, with the underpinning physical motions being too fast and too easy for conscious attention. Make wormholes from one part of the writing surface to a distant one, recording connections, asides, without conscious effort. No more than you expend on the shapes of letters, or the layout of QWERTY.

Instead, we're kind of stuck on the UX of "ordered sequence of rectangles", with some usability hacks to mitigate the severe limitations of this skeuomorphism.

Surely there must be some astounding low-hanging fruit here, in the UX of epaper handwriting as a tool of exploratory thought.


My current note-taking stack is an Obsidian Vault on my iPad with the mazec handwriting keyboard [1] (this post was written with it), occasionally switching to Nebo [2] if I want a long-form writing experience. Apple's built-in scribble functionality would be great if it weren't so inaccurate and kludgy when correcting mistakes. I won't wax poetic about Obsidian more than others already have, but this setup gets me very close to the link-as-you-write workflow that I sorely wished I had in grad school.

[1] https://apps.apple.com/us/app/mazec/id943711253

[2] https://www.nebo.app/


It would be cool if there were a gesture for this within reMarkable: say, you underline a word and draw a line to another word (or it pulls up a table of contents you could point to), and when you finish the line, your "link line" disappears, leaving only the underline.


That's a neat idea. I was hoping the author would share an interesting strategy for linking to external documents inside a handwritten post, but I don't see any way it could be much more elegant than "I just wrote the URL down manually and linked the text to it," which is what they did. For linking to points inside the document, though, you could get pretty clever.


My sense is that the design goal of Zettelkasten is basically this. Why note cards? Why only a few sentences per card? So you can insert text in a fine-grained manner -- and so you can insert cross-links in a fine-grained manner.


Pretty much what I was going to suggest.

More generally, with handwritten (or other physically-manifested writings), an indexing and referencing system substitutes for hyperlinks. The Bullet Journal index and references operate in a similar manner. The O'Reilly book UNIX Power Tools offers a print hypertext experience by numbering each individual article and section, and liberally referencing between these. The first edition is superior to later ones in that the text is printed in multiple colours (black and blue) which distinguish the text and navigational elements, much as early Web documents did with the standardisation of blue link colours.

For those who remember such things, the paper and print card catalogues of libraries operated similarly.


Something similar is being carried out by the team at Muse: https://museapp.com/memos/2022-09-linked-cards/


Oh, haven't read the article yet, but expected the same thing.

To make this work, I need a nice copy/paste method for pen+touch input to input the link, along with nice app switching.

Otherwise, it should be drawing a circle around the marks, right click, add link.

How do you click the links though?


Gestures on epaper will be absurdly, hilariously overpowered. Gesturing with fingers on touchscreen is sluggish and imprecise; it's not at all like handwriting (and who in the pre-digital era wrote books with the technique of finger-painting? It'd be absurd). But, how many distinct glyphs can you draw with a fine-tipped pen in a quarter of a second? 50? 100? Easily the entire Latin and Greek alphabets, just as a starting point. It's a high-bandwidth input channel, just like a PC keyboard.

All you would need to control an epaper with gestures is to reserve some subset of the character-glyph space as an in-band control channel. Glyph shapes that are never semantically valid as written text. Shapes that are easy to reliably parse, which are highly "orthogonal" to written text, as well as to other control gestures.

- "How do you click the links though?"

I'd suggest they be interpreted as special "active" regions with a single default action, so any user input works, if it's within the region. Scratch a line over them and they open. My thinking is controlling the machine, and writing, should be the same actions and combine together fluidly: everything is a pen stroke.


On my monitor I found the text very difficult to read in giant print, so I tried to zoom out to make it smaller... and was somewhat disappointed to find that the text countered that zoom and remained the same size. This is an accessibility problem; many folks will need the inverse. Please don't do this.

(If you're in the same boat as a user, you can work around the issue by resizing the browser window. Well, you can on PC, I'm not sure that's possible on a tablet.)


Same, you can zoom in but you can't zoom out? That's a quick exit from the website.


    <img src="xyz" alt="Handwritten post">
Thanks for the info!

I'm with the author in that this is a nifty trick, similar in some way to the map element (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ma...), but it's one of the least accessible web pages I've ever seen.


I find the screen reader experience of

- image: handwritten.blog

- image: handwritten post

- link (image): comment on this post

- link (image): read the transcript of this post

- link (image): back to main page

not bad at all. Sure, it would be better if the transcript was just part of the page, but I understand the tradeoff and it's quick enough to get to the transcript.


I didn't even notice the transcript text down there! That's better but still far from ideal.


Yeah, the page could be more accessible to users who don't use a screen reader but don't want to parse small handwriting.

If you had a screen reader you'd hear literally only:

- Link, image with alt handwritten.blog

- Link, image with alt Comment on this post

- Link, image with alt Read the transcript

- Link, image with alt Back to main page

I think it'd be less confusing if the main body had an alt. It would just be a short FYI that there's also a big image of text and that the transcript reproduces it.

But the transcript would be a lot harder to miss with a screen reader!


He could do an aria-describedby pointing to the link and it'd be a fair bit better.
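Something like this (ids hypothetical), so assistive tech announces the pointer to the transcript along with the image:

    <img src="post.png" alt="Handwritten post"
         aria-describedby="transcript-note">
    <p id="transcript-note">
        A plain-text transcript of this post is linked below.
    </p>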


Alternative idea: convert the file imported from remarkable to vector images (SVG) and enclose the hyperlinks in <a> tags
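Roughly like this (path data elided), with the linked strokes wrapped in an SVG <a>:

    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1404 1872">
        <!-- the bulk of the handwriting -->
        <path d="…" fill="black"/>
        <!-- the strokes of the linked words, clickable -->
        <a href="https://example.com">
            <path d="…" fill="#35c"/>
        </a>
    </svg>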


I believe the reMarkable 2 can output SVG.


When I got my reMarkable two years ago I tried their app’s vector output: it was awful, because they were using completely the wrong technique: taking intersecting paths, taking the union, and then simplifying the path very lossily, which particularly ruins intersections because of the union step.

It’s possible they’ve improved the technique now, but I’d be a little surprised if they have, given that it amounts to throwing away the entire thing and starting from scratch.

There are a few open-source libraries to convert the .rm format to SVG. None are good for anything but the fineliner. I’ve written the closest one to decent for tools like the pen (though I haven’t got round to publishing it), but it still lacks the texture (pretty much unavoidable) and is thus never going to be excellent. SVG just lacks the necessary primitives.


That's a lot of work for I don't know how much gain. And, more importantly, I have to agree with a number of posters here that the page (and anything like it) is an accessibility nightmare.


Edit: I've thought about this more and think this is wrong. There should be a non-handwritten transcript link at the top. Embarrassingly, I focused on thinking about screen readers rather than accessibility more broadly.

---

(I've read a bunch about accessibility, but I'm no expert).

I don't know, I actually think this blog is more accessible than most.

You might have missed the transcript link at the bottom.

The only change I'd make is adding a summary alt to the image, saying something like "The handwritten body of the blog post. A transcript is linked below".

(Without any alt the page might be confusingly blank, but as many screen readers poorly support long alt text you don't want to copy the transcript in.)


Adding a link to an accessible version at the very end, after the visitors who haven't already left have fought through the full inaccessible article, isn't great…

edit: Yes, it works okayish in screen readers. Accessibility is about far more than screen readers.


Having support for those who use screen readers is a huge improvement over the majority of sites, surely?

May I ask, what other accessibility conditions do you feel this site doesn't meet?

Also, for a non-corporate website, why should accessibility be a relevant factor? For a personal project, I don't see why anyone has to make it targeted for every case.


Another blog with handwritten posts is [1]... This person is also an Emacs contributor, and writes the blog with a Supernote ([2]), not a reMarkable.

[1]: https://sachachua.com/blog/2022/09/monthly-review-august-202... [2]: https://supernote.com/


I like how sarcastic it is that the link to this completely inaccessible post, written as a PNG image, sits on HN right next to the link to Bogotá's government page, which has sign-language titles :)


There is a link to the transcript, and the link has alt text indicating what it is.


Why bother with subtitles when I have a copy of the screenplay right here?


Wouldn't it be more the opposite? Someone elsewhere in these comments is noting that a screen reader can navigate through the structure of paragraphs much better than blobs of alt text.


Subtitles are blobs of text alternatives (to sound) consumed alongside the other content. As you point out, it's important that they're accurate and well-constructed!


Did you mean ironic?


I created a handwritten blog back in 2005, then discovered that there was a Flickr group devoted to handwritten blogs: https://www.flickr.com/groups/handwrittenblogs

It looks like that group is pretty much frozen in 2005-2006, and it's really interesting to see how these blogs were handled back then.

The domain on which I hosted my blog is no longer active, but I've still got all the source code, and it's fun to look back on it as an experiment.

For me, the hardest part was the SEO. I ended up transcribing my handwriting as much as I could, then had the images cover the text.


This webpage is an accessibility nightmare. Can't even zoom out.


Zooming is a bit hit-and-miss (works great in Firefox, not at all in Edge). But I just tried it with a screen reader and that experience is great: every image has alt text, the link to the transcript is about the fourth element, and the transcript is beautiful semantic HTML that the screen reader can easily make sense of.


> on this blog, I want to thoughtfully embrace imperfection.

Jokes aside, yes it is a nightmare. However, you can find a transcript at the bottom of the page, which makes it a little more readable.


Could you put the text in the alt text? I was kinda hoping this was using a handwriting font.


Even then it would not really be accessible. Alternative texts are just text blobs without any semantics and are read by screenreaders without any interruption.


There's a link to the transcript at the bottom.


Goes against the idea that all users should have "full and equal enjoyment" of a site instead of a "separate but equal" version just for "them."


There's massive controversy in the disabled community over alternate site versions that's far too complicated to get into.

Everyone I've heard from thinks a separate site is better than a massive alt though. Many screen readers handle them extremely poorly, and they can't be annotated with html tags to indicate semantics (link, list, etc).


Is that really a goal of accessibility, though? Are image descriptions not "separate"? If this is not enough, should no one have any images on the web at all?

I don't think we should discourage people from setting up systems that have this kind of accessibility, even if it's not as "pure" from the perspective of duplicated content.


Indeed, the Web Content Accessibility Guidelines explicitly allow creating a separate accessible version of content and linking to it. They call it a "conforming alternate version": https://www.w3.org/TR/2016/NOTE-WCAG20-TECHS-20161007/G190

(The guidelines say it's preferred for everything to conform directly, but they use "artistic integrity" as an example of a situation where a conforming alternate version could be used. I'd say that applies here.)


Yes, it's an explicit goal of accessibility, and it predates the web.


Can you cite someone talking about this? Another commenter has pointed to w3 stuff on non-conforming versions and from there I got here: https://www.w3.org/TR/UNDERSTANDING-WCAG20/conformance.html#... .

You can't make a text description of a guitar solo that's "equal", much less not "separate"; I don't buy that people really think that we therefore shouldn't have music.


So how did we encode smells for the noseless before the web?


Losing our sense of smell is actually somewhat dangerous and more common than you think (can be lost temporarily due to illness).

So we install smoke detectors and put expiration dates on things to detect danger without smell.


Yeah but how do you encode the smell of a rose to someone without a nose? You're being incredibly discriminatory by only giving the smelless a warning and nothing else.


Great question! People with a typical sense of smell get to experience the smell of expired food already; adding expiration dates for those who can't is inclusive, not exclusive.


That's not the question I asked.


Yes and?


I opened this on iPhone with Safari and you can actually select the text and copy paste


Can't select in normal browsers.

I remember when whiteboard coding interviews still existed; I was shocked to find someone actually writing code by hand on purpose.


Haha, coding with pencil on paper (but modern) is actually pretty awesome: https://mlajtos.mu/posts/new-kind-of-paper


But even then it appears to be a different OCR process than what the author intended; spelling mistakes and text to speech errors abound. Not accessible.


Why is this person handwriting in print instead of cursive?

Is it what society has come to?


> Why is this person handwriting in print instead of cursive?

Perhaps they want it to be readable? :-)

> Is it what society has come to?

Thankfully, yes.


I gave up handwriting in cursive in my late high school days, as print was substantially more readable for both myself and the teachers who had to grade my work. It's not uncommon; I think print had a good 30% share by the end of it.

As for my coworkers' notes these days, I'd say print even has a majority, but these days most people use computer documents instead of handwriting anyway.


I pretty much had to do the same in high school because apparently it was unreadable to them (not to me though...)


Did it hinder communication in any way? Were you unable to understand what was written?



In my country, kids are required to use fountain pens.


Yes, but we all have typewriters in our pocket these days.


Why not?


I was truly puzzled at whether it was done with one of those "realistic handwriting" fonts with many character variations (look at the lowercase 'a') and spent a bit more time than I wish I did trying to figure that out.

That said, I do wonder why. The fastest I can write manually is roughly 20-25wpm, and that's already going to produce something far more illegible than this article and be extremely fatiguing, whereas if I'm using a keyboard, I can do 5-7x that for long periods of time without tiring.


Thus, handwriting encourages more thoughtful writing, and brevity.


Personally, writing slower encourages me to put more effort into choosing the right words. My writing is better for it.


The trick of overlaying a colorful rectangle on top of handwritten glyphs is the same thing I did for New Kind of Paper [0]. Apple PencilKit has this weird behavior where, when you change the color of a stroke, the canvas freezes for some time and the experience is horrible. Overlaying SwiftUI rectangles on top is the easiest way.

[0] https://mlajtos.mu/posts/new-kind-of-paper


I was hoping this would be an innovative way to write hyperlinks, kind of like what bit.ly or QR codes do, where you just point a camera at what you drew and it takes you there.


You could do this with a standard marker (like the corner parts of a QR code) on a piece of transparent adhesive plastic, paired with a database. Turn anything into a QR code.

In theory you could do it without the plastic sticker on the lookup side, but it seems like you'd use far more resources with little gain (the principal gain being you only need one transparent marker; the principal loss, you have to search every image for every "marker code" ever created, across every part of the image).


Just write the url and a camera—on iOS at least—will find it and make it clickable.


Ah now I’m not going to stop thinking about this all evening.


I’ve been toying with basically the same concepts. It’s fun.

My preference has been to use SVG for the links (independent of whether the strokes are done raster or SVG). It’s not fundamentally any more expressive, but it’s much more convenient for a number of techniques like adding background colour or link underlines (I found changing only text colour with mix-blend-mode unsatisfactory), and links that span a line break.
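For instance (coordinates hypothetical, path data elided), the background shading and the strokes can both live inside the link, with a second rect handling the continuation after a line break:

    <a href="https://example.com">
        <!-- shading behind the linked words, split across the line break -->
        <rect x="100" y="300" width="360" height="52" fill="#cdf"/>
        <rect x="40" y="352" width="120" height="52" fill="#cdf"/>
        <!-- the pen strokes themselves -->
        <path d="…" fill="black"/>
    </a>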

Here’s a snapshot of roughly where I last left this a few months ago. (I’ve had to substitute in an inferior document, as I was working with a lined-paper prose document that I’m not willing to share at this time. It works better with that type of content.) You can see that I’ve leaned into the pagination, styling it more paperily, and also that there’s a fairly significant overload of SVG filters which tend to ruin performance. I’ll pare that down significantly if I go ahead with it. (Still keeping some of the page-edge and maybe link shape roughness, but with a simpler paper texture.)

https://temp.chrismorgan.info/2022-10-03-hand-drawn-with-lin... (almost 800KB because the pen uses a variable-width stroke and my SVG generation for that isn’t particularly optimised because I’ve skimped on some of the fancier trigonometry involved—but I should note that it’s much more accurate than the “lots of short line segments” approach used by every existing .rm-to-.svg library I found).

Also if I go ahead with this, I’ll use the colour support that came in version 2.13 of the reMarkable system software, drawing my hyperlinks in blue from the start and reserving some highlighter colour if I want the background shading. Using distinct colours like this will make identifying and marking up links mechanically straightforward (since all the strokes in a given layer appear in draw order), leaving only the entry of the URL.

—⁂—

One other thing about the presentation on this site: the image scales to the viewport width, which is very, very not good: it means that regular desktop-style zoom (most importantly zoom out) just doesn’t work. The most common desktop screen sizes are getting the equivalent of a body font-size in excess of 50px (three times what is generally reasonable). You should add something like `.post { width: 40em }` which fixes the problem without causing any other trouble.


If this works for them, fine. It would be weird if everyone liked the same workflows. But for me, every extra step reduces my likelihood to write something.

Right now, it's three steps:

> make newpost NAME='name of this post'

[edit the skeleton until it has what I want in it]

> make rsync

I still don't write as often as I think about it.


Every <a> is empty! No text at all! This site is barely usable let alone accessible.


I realize you could click the "view transcript" at the bottom of the page to get accessibility.


A leftover from a bygone era, kind of like "click here to view the Netscape version of this page!"

The transcript appears to only have 7 steps, there are 9 on this page...so is it a transcript?


I see 7 steps on both pages.


This site should be part of the marketing demos for the iPhone/iOS.

Without any effort on my part, the system recognizes handwriting inside an image and lets me copy/paste it like I was working with standard text.

It’s 100% seamless and feels magical.


Not sure why this would be a demo specifically for iOS; Android has been doing this for a few years now.


I'd go for the SVG format instead of rasterizing to PNG. It's going to be lighter and give you a similar crisp feeling.

Have you tried passing the pages through an OCR transcription system like Tesseract? You also get this for free with the text-parsing feature of the reMarkable 2, and I was thinking you could use the [link title] syntax to mark links and have some sort of automatic mapping to the links so you don't have to specify the links "by hand."

Anyhow, great idea. It's a bit small to read on iPhone.

A cool idea would be to convert each word to an individual image and put each of them inside of an inline div to make the article responsive.
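A sketch of that idea (file names hypothetical), using inline images so the words reflow like text:

    <p class="handwritten">
        <img src="word-001.png" alt="Hyperlinks">
        <img src="word-002.png" alt="in">
        <a href="/handwriting"><img src="word-003.png" alt="handwriting"></a>
    </p>

with something like `.handwritten img { height: 1.2em; vertical-align: baseline; }` so line breaks fall wherever the viewport needs them. Word-level alt text this granular would be noisy for screen readers, though.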


I experimented with this approach on my blog[1]. I had an image of the handwritten blog post and used a feature in Inkscape to get SVG and make clickable text. I decided to place a little plain text in there to help Google search crawlers know what content is on the page. I really would prefer handwriting more.

My problems boil down to the fact that the flow of the tools sucked. I didn't want to have to attach multiple pages together. Inkscape was honestly a bit clunky to work with, and I didn't have any personal tools to generate the full HTML from the SVG.

I might just be inspired to try again though.

[1] https://blog.alew.is/insertionsort.html


Nice!

A solution could be to OCR the site and overlay the actual recognized text as HTML elements that can be hidden but are useful for Google to index the contents.
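One way to sketch that (class name hypothetical) is the usual visually-hidden pattern, which keeps the recognized text in the DOM for crawlers and screen readers without displaying it:

    <div class="post">
        <img src="post.png" alt="">
        <p class="visually-hidden">The recognized text of the post…</p>
    </div>

    <style>
        .visually-hidden {
            position: absolute;
            width: 1px;
            height: 1px;
            overflow: hidden;
            clip-path: inset(50%);
            white-space: nowrap;
        }
    </style>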

I can see the pain and friction that doing that manually would entail though.


I clicked through to the transcript after reading the handwritten part and it felt like a punchline because of how readable it was! It would be nice if the transcript were somehow hidden on the same page behind a JS toggle, to improve accessibility.
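A no-JS sketch of that: a native <details> element gives you the toggle on the same page:

    <details>
        <summary>Read the transcript</summary>
        <p>The transcribed text of the post…</p>
    </details>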

I think the problem with doing something like this (besides accessibility) is you can never get the comfortable reading size right because the screen size/shape will never align with the flow of your text.


The size seems perfect for mobile phones, I think. At least it was great for me. But maybe it can indeed be confusing on desktops, true.


Personally, whenever I write, the pace of my writing forces me to think more slowly and to choose wording that's short enough while keeping the intention clear.




You can’t implement the colouring with an image map.


You're right. The image itself would need to be colored and then swapped out, perhaps.

Or use an SVG.


I'm curious: if you write with pen and paper, does it look the same as how the reMarkable 2 renders it? I returned my RM2 because it made my handwriting look wobbly and difficult to read. Now I've returned to paper, but it's more effort to OCR.


Everything that’s old is new again. Reminds me of using Frontpage in the 90s.


I'm reminded of Dijkstra's notes but with hyperlinks.

It's unfortunate that the OP didn't turn their writing into SVG, and it's somewhat stunning that turning a webpage into a PNG with a ridiculous resolution results in a _smaller_ page size than most webpages with similar functionality today.

The new web: forget HTML, use PNG. \s


There’s always a twist. Now you get to do the old things with an expensive gadget.


Maybe just try using this service instead? https://paperwebsite.com/


The goal here is to present the content in hand-written form. That service is just OCRing and discarding the handwriting medium, which is not at all what’s wanted.


In a similar vein: https://www.jeffbridges.com/


On a side note, I think I'd prefer footnote links in handwriting instead of blue inked words.


So for every edit, typo, etc, repeat steps 3–9 (and update the transcript)?!


tangentially related, but wow this is eerie: 2 days ago, I randomly thought about starting a blog that uses only my handwriting for everything and how unique that would be. And here we go ...


1MB to only deliver a small text? You've got to be kidding.


Unfortunately the site doesn’t support Safari Reader mode.


Interestingly, the transcript linked at the bottom of the page does not activate my FF Android reader mode either. However, that page _is_ quite readable.


I hadn’t noticed there was a transcript, thanks. The transcript page does support Safari Reader mode. I think that Reader mode could also be supported on the main page, given that highlighting and copying text is possible there.


I have been playing lately with the source code of https://www.tldraw.com/ to do something similar: allow writing blogs / websites completely on an iPad. I managed to insert links into the SVGs that were clickable...

My thinking was to allow people to doodle / make little comics on the web. Right now artsy people are confined to exporting to PNGs and including them as images. I was thinking somebody should just grab an iPad and start drawing. If they want to write text they should be able to do so with pen, keyboard, or voice transcription...



