With my Librem 5 developer hat on, I'm surprised that a SPI chip matters at all.
The Librem 5 eMMC contains a hidden "partition", which the CPU prefers to boot from over the "data partition" where the OS boot loader resides. It should be possible to put Tow-Boot into this hidden area, and the OS stored in the data area will have no say. No separate flash chip is needed to have an independent boot loader.
I wonder why the PinePhone needs a separate chip for this purpose. Is the CPU unaware of hidden areas on the eMMC?
That does work on the original PinePhone; the A64 SoC can boot from the mmcblk0boot0 partition. The RK3399 in the PinePhone Pro is way more problematic, which is why it's important to have the SPI and also to have the SPI flashed from the factory.
The PineBook Pro batch that has shipped now did not end up with Tow-Boot; instead it has the BSP U-Boot on the eMMC, which makes it unable to boot anything other than the shipped Manjaro by default. Of course, it's only the other distributions supporting this platform that have to deal with this.
The boot ROM in the RK3399 in the PinePhone Pro has a hardcoded boot order, and doesn't use the special boot partition of the eMMC - it only looks for a bootloader on the data partition at a fixed sector (64).
Would it be possible to flash Tow-Boot to that sector and use only the rest of the partition as actual data storage? It seems like it would effectively work the same as having a separate SPI hardware chip on-device, while also restoring distribution independence. The downside is that this might require some special-case support, though it could likely be implemented most conveniently with, e.g., dm-linear, so only configuration changes would be needed, not new kernel code.
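To make the dm-linear idea concrete, here's a rough sketch. All the numbers are made up for illustration (a hypothetical 16 MiB reserved region and an example eMMC size; the real PinePhone Pro geometry would differ) -- only the "bootloader at sector 64" fact comes from the boot ROM behaviour described above:

```shell
# Hypothetical layout: reserve the start of the eMMC (which contains the
# bootloader at sector 64) and expose everything after it as a clean block
# device via dm-linear, so the OS never touches the bootloader region.
DISK=/dev/mmcblk2                 # hypothetical eMMC node
RESERVED_SECTORS=32768            # 16 MiB at 512 B/sector, reserved for Tow-Boot
TOTAL_SECTORS=61071360            # example size; in practice: blockdev --getsz "$DISK"
DATA_SECTORS=$(( TOTAL_SECTORS - RESERVED_SECTORS ))

# dm-linear table format: <start> <length> linear <device> <offset>
echo "0 $DATA_SECTORS linear $DISK $RESERVED_SECTORS"
# To actually create the mapping (needs root):
# dmsetup create emmc-data --table "0 $DATA_SECTORS linear $DISK $RESERVED_SECTORS"
```

The OS would then install to /dev/mapper/emmc-data and never see the reserved region at all.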
The issue with that is it's really easily wiped or changed when installing an OS. In the end the SPI hardware did get added and it just works, it's by far the simplest and most reliable solution.
> In the end the SPI hardware did get added and it just works
What about their new PineBook Pro, though? Is that situation still in flux?
> The issue with that is it's really easily wiped or changed when installing an OS.
I'm not really seeing this. If the OS supports this special scheme, they need only deal with the soft block device that's created by mapping "the rest" of the partition. And if they don't, then all bets are off anyway.
It seems like in the end the PineBook Pro did get the SPI chip on it, but then they flashed a closed-source U-Boot to the eMMC instead, which does not allow booting from SD. So it's again a complete pain for the other distributions to help these users.
For the wiping issue, a lot of times I see suggestions to wipe the first few MB of the storage when there's booting and flashing issues to get rid of an old U-Boot, which is exactly what you don't want in the Tow-Boot case.
Not useless, but users will need to manually flash tow-boot when that should just be the default. If Pine64 had made better moves, their customers would have never had to worry about firmware or bootloaders, only which (standard) distro install media to use.
I'm not a Rust-only developer, but a general systems programmer, and my foray into embedded is limited to a side project (see my profile), but no, I haven't switched to C/C++. Rust brings too much goodness to give it up.
Rust-embedded is an easy ecosystem to work with (if immature), and if you want more flexibility, Tock OS [0] is trying to cover that space (also immature, but I'm working on it).
I can figure out how to send you a patch set via email (see my Linux kernel contributions), but if I can avoid doing that, sure as heck I will. Your project must be really important to me, or I have to get paid.
Based on your first example of running git send-email without providing it any patch files or revision list, you appear to be making the assumption that someone doesn't bother reading the documentation before using the tool.
This would be like someone trying out make for the first time and not understanding why it isn't working, because they didn't realize the rules in a makefile require literal tab characters. But if they don't read the documentation, there's no way they would know that.
The real problem is people trying to figure out how tools work by experimentation as opposed to reading documentation. If someone reads the documentation of git send-email and the project's contrib document contains the preferred settings for that utility, then submitting patches should not be an issue.
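For what it's worth, the one-time setup the documentation walks you through is small. A hypothetical ~/.gitconfig fragment (every server detail here is a placeholder; a project's contributing guide would supply the real values):

```ini
[sendemail]
	smtpServer = smtp.example.org
	smtpUser = you@example.org
	smtpEncryption = tls
	smtpServerPort = 587
```

With that in place, git send-email stops asking most of its interactive questions.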
That's a valid observation of one of the reasons I won't use git-send-email.
I have limited mental resources, and given the choice between a tool where I have to spend half an hour before I can begin using it, and a tool which will guide me, I'll always choose the latter. After I'm done, I can even forget I ever used the latter tool! It's a boon for one-offs.
Keep reading, there's more criticism on other aspects of the tool.
After reading through the rest of the post, I do see your point. When I last tried it, I thought that most of the email formatting should be done with git format-patch and then git send-email should be used to actually send the email without having to answer any questions.
That would address one of your concerns about saving the email on disk and also ensuring that the headers have the correct contents.
If the project's contrib document contained information about what settings to use for format-patch and send-email, then the process would be much more seamless. I haven't looked at the kernel (or subsystems) documentation on that. The git project itself doesn't seem to contain that information though.
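To illustrate the split of duties I mean, here's a self-contained demo in a throwaway repo (the file name, commit message, and maintainer address are all made up). format-patch writes the mails to disk so the headers can be inspected, and send-email then only does the delivery:

```shell
# Create a throwaway repo with one commit to format.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo "stub" > driver.c
git add driver.c
git -c user.name="You" -c user.email="you@example.org" commit -q -m "demo: add driver stub"

# Write the patch mail(s) to disk for review before anything is sent.
git format-patch -1 -o outgoing/
# After reviewing outgoing/*.patch, deliver with (not run here):
# git send-email --to=maintainer@example.org outgoing/*.patch
```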
Regarding your other point about using your email client to handle sending the emails, git does have a utility called imap-send that would allow you to upload the patches to an IMAP folder, which, I believe, would allow you to then send the messages using your MUA of choice instead of git send-email.
You could mail patches as plain-text attachments if there are concerns about clients mangling them. You could also try some easier plain-text clients instead of mutt; Claws Mail is a simple GUI-based one.
Honestly, if you can't be arsed to format an email then why should you expect anyone to spend the effort to review your patches and maintain your additions going forward.
Author here. While the essay comes from the side of smartphones, it's not really limited to them. As I mention, even some laptops use setups that require complex infrastructure to support. libcamera itself is also used in the Raspberry Pi, and the interfaces in the Linux kernel are used by the Axiom camera, which is truly a photo camera.
The problem of camera diversity is not limited to open source either, because a similar infrastructure to handle all the different cases must be replicated by closed drivers as well. I don't know about Macs, but the Surface laptop is a Windows beast.
Can you reuse some of the algorithms provided by open source RAW image processing pipelines for SLRs?
Many SLRs are already well supported, though the open source stuff doesn't focus on low-latency conversions, which are needed for viewfinders, focus control, etc.
Cutting edge cell phone camera performance is absolutely mad.
I did a side-by-side comparison of a Micro Four Thirds camera (4/3" sensor) and an iPhone SE (1/3" sensor) and the performance was... pretty much the same.
And I'm not talking about some ML interpolation wizardry or automatic face beautification; I was photographing barcodes and testing how many were readable in the resulting images - hardly something Apple would have specially optimised for.
The iPhone has a much smaller sensor, a much smaller lens, costs less, and manages to pack in a bunch of non-camera features. To be competitive in the modern cell phone market your camera has to be straight up magic.
It actually wouldn't surprise me if they optimized for bar/QR code readability. I wrote something years ago that used industrial cameras to read QR codes as well as very precise metrology features. I had to optimize the optical/lighting setup for the feature measurement, and then wrote some finely-tuned operations to identify the QR code, window down to the code only, clean up edges/features with expensive convolutions (mostly median filter), and then finally read the code. None of this was visible to the operator, but if you saw the final image of the QR code it was essentially binary color space and looked a bit cartoon-like.
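A toy sketch of the clean-up pass described above, assuming nothing about the real pipeline: a 3x3 median filter to knock out salt-and-pepper noise, then a global threshold to collapse the image to that near-binary, cartoon-like look. Real systems would use optimised kernels; this just shows the idea:

```python
from statistics import median

def median_filter_3x3(img):
    """img: list of rows of grayscale values (0-255). Edges are clamped."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the 3x3 neighbourhood, clamping at the image borders.
            window = [
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ]
            out[y][x] = median(window)
    return out

def binarize(img, threshold=128):
    """Collapse to pure black/white, like the final pass before decoding."""
    return [[255 if v >= threshold else 0 for v in row] for row in img]

# A white field with a single speck of noise: the median filter removes it.
noisy = [[255] * 5 for _ in range(5)]
noisy[2][2] = 0
clean = binarize(median_filter_3x3(noisy))
```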
They might have some optimisation for photographing documents, it's true.
But when I say the performance is good, I'm not just declaring the images good because the portraits have simulated bokeh, or face-detecting autoexposure, or image stabilisation, or tasteful HDR, or a beauty mode that airbrushes out blemishes and makes photos of sunsets really pop.
Even in applications where none of those features come into play, the iPhone can still go toe-to-toe with cameras with much larger sensors.
Did you compare raw sensor output, or post-processed?
Big sensors capture more light and have more bokeh. With enough light, the first doesn't matter, and bokeh is not a thing for QR codes.
If you didn't have enough light, then it's probably a question of how the denoising was done, and which details were guessed by fancy algorithms. Geometric shapes are easy to guess, but when I look at pictures of landscapes, they typically devolve into a painting beyond 1000×1000 px if taken on a phone.
Totally, RAW processing is planned for after resolution changing works correctly. Do you have a recommendation about which implementation is easy to understand and work with?
If I have to rewrite stuff for low latency, I'd rather start it as an independent library so that other projects can reuse the code.
If I were looking for such a thing, I'd check out darktable and go upstream through its RAW-processing pipeline. Whatever they're using may not be the best, but I'd imagine that it is average or better...
At Purism, our goal is not just to build a phone or two, but to contribute to the ecosystem as well. That means the Linux kernel and the Linux camera infrastructure. Now we have two choices: either contribute support for our hardware, or use some hardware that is already supported.
In reality, UVC is not suitable for a phone, so we can't leverage that. There are some camera drivers in the kernel already, but not necessarily for hardware that we could buy or that meets our expectations.
Even if there were, that still leaves us with the problem of connecting the cameras to applications in a standard way, so we can't really avoid working on libcamera.
There is a standard that all the normal desktop userspace apps are already using: it's v4l2, and in particular the single /dev/video# device use case, with all the high-level controls exposed on that device directly.
For the likes of the Librem 5 and PinePhone, the high-level controls either don't exist at the HW level, or they do exist but are exposed not on the video device itself, rather on various v4l2 subdevices that form the video pipeline.
One way to support all the already existing apps would be to implement what they already expect (see above); that is, to make the video device controllable by the usual means they already possess. Instead of extending all the apps to use libcamera and leaving the rest behind, we could simply proxy the video controls from the ioctls where all apps expect them to some userspace daemon, which would then configure the complex, HW-specific media pipeline behind the scenes (basically all the media-system subdevices for sensors, sensor interfaces, ISP, etc.).
In other words, to implement in a userspace daemon what a USB microcontroller implements in UVC webcams, while keeping the existing userspace interface expectations for simple camera usage.
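As a toy illustration of the proxy idea: at its core the daemon needs a routing table from the standard control IDs (as apps would set them on /dev/video0) to whichever subdevice actually implements each control. The device paths and the sensor/ISP split below are entirely made up; only the two control IDs are real V4L2 constants. The real daemon would of course then issue VIDIOC_S_CTRL ioctls on the resolved node:

```python
# Real V4L2 control IDs (V4L2_CID_BASE + 17 and + 19 from videodev2.h).
V4L2_CID_EXPOSURE = 0x00980911
V4L2_CID_GAIN     = 0x00980913

# Hypothetical routing table for an imaginary sensor + ISP pipeline.
CONTROL_ROUTES = {
    V4L2_CID_EXPOSURE: "/dev/v4l-subdev0",  # pretend: sensor subdevice
    V4L2_CID_GAIN:     "/dev/v4l-subdev1",  # pretend: ISP subdevice
}

def route_control(cid: int) -> str:
    """Return the subdevice node that should receive the control for cid."""
    try:
        return CONTROL_ROUTES[cid]
    except KeyError:
        raise ValueError(f"control 0x{cid:08x} not handled by this pipeline")
```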
This is kinda orthogonal to the libcamera effort, I guess. Just wanted to say that there already is a standard. :)
It's not orthogonal. In fact, it's a very good observation, and even libcamera itself recognizes it by providing v4l2 emulation.
It could be a viable way to get the basic use case of video streaming where special controls are not needed. It's worth considering, although then it makes sense to leverage work already in libcamera to implement the extra layer.
It's hard to tell because I don't know what Android phones do exactly. Does it vary by manufacturer? Do we include AI tricks and high speed video encoding?
I think it's going to be a long way to get there, but also the openness of the drivers will let us find our own strengths (I have high hopes for super-high FPS recording).
> (I have high hopes for super-high FPS recording)
That would be very cool. Google’s phone does 4K HDR stabilized video at 60 fps. Their slo-mo is 240 fps but I don’t know what resolution that would be.
https://puri.sm/posts/cameras-its-complicated/