Hacker News | new | past | comments | ask | show | jobs | submit | fendale's comments

Are you saying that on iOS 18, if you enable developer mode, then each time you forget the network it gets a new MAC address? But without developer mode it does not get a new MAC each time you forget it? The Apple docs linked elsewhere in this thread suggest it only gets a new MAC once per 24 hours when you forget the network normally. I'm going on a long boat trip next week where this trick might work for me if so!


> local enterprise-grade SSDs support multiple namespaces (with their own internal queues)

What do you mean by namespaces here? Are they created by having different partitions or LVM volumes? As you mentioned, consumer-grade SSDs only have a single namespace, so I'm guessing this is something that needs some config when mounting the drive?


With SSDs that support namespaces, you can use commands like "nvme create-ns" to create a logical "partitioning" of the underlying device, so you'll end up with device names like this (also in my blog above):

/dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 ...

Consumer disks support only a single namespace, as far as I've seen. Different namespaces give you extra flexibility; I think some drives even support different sector sizes for different namespaces.

So under the hood you'd still be using the same NAND storage, but the controller can now process incoming I/Os with awareness of which "logical device" they came from. So even if your data volume has managed to submit a burst of 1000 in-flight I/O requests via its namespace, the controller can still pick up the latest I/Os from other (redo volume) namespaces and serve them too, without having to drain the first burst first.

So, you can create a high-priority queue by using multiple namespaces on the same device. It's a logical partitioning of the SSD's I/O-handling capability, not physical partitioning of disk space like OS-level "fdisk" partitioning. OS "fdisk" partitioning and LVM mapping are not related to NVMe namespaces at all.
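For reference, a minimal sketch of what this looks like with the nvme-cli tool. The device name, sizes, and controller ID here are made up; it needs root and a drive whose controller actually supports multiple namespaces:

```shell
# Sketch only - /dev/nvme0 and the sizes below are hypothetical; requires root
# and an enterprise SSD that supports more than one namespace.

# How many namespaces does the controller support? (the "nn" field)
nvme id-ctrl /dev/nvme0 | grep -i '^nn'

# Create two namespaces; sizes are in logical blocks (e.g. 512 B each)
nvme create-ns /dev/nvme0 --nsze=268435456 --ncap=268435456 --flbas=0
nvme create-ns /dev/nvme0 --nsze=16777216 --ncap=16777216 --flbas=0

# Attach them to controller 0 so the OS sees them as block devices
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0
nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0

# They now show up as separate devices, each with its own queues
ls /dev/nvme0n*    # /dev/nvme0n1 /dev/nvme0n2
```

After that, each /dev/nvme0nX can be formatted and mounted independently, which is what gives the separate hardware-level queues.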

Also, I'm not a NVMe SSD expert, but this is my understanding and my test results agree so far.


Ah ok - so googling a bit on this, you do specify the size when creating the namespace. So if you have multiple namespaces, they appear as separate devices to the OS, and then you can mkfs and mount each as if it's a different disk. Then you get the different I/O queues at the hardware level, unlike with traditional partitioning.


Yep, exactly - with OS level partitioning or logical volumes, you'd still end up with a single underlying block device (and a single queue) at the end of the day.


Over the last few months I have been seeing a push toward using SQLite in production, where appropriate. Some of this has been coming from 37signals with their "Once" products, which use Rails and a SQLite DB.

SQLite can go pretty far if you have a fast SSD. The biggest problem is that your app is constrained to a single host. For many apps, with backups and a failover plan, that may be OK. For others it's a non-starter.
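As a rough illustration (the sqlite3 CLI and the db path are assumed): enabling WAL mode is the usual first step for server-side SQLite, since it lets readers proceed concurrently with a single writer.

```shell
# Hypothetical app database - enable WAL once (the setting is persistent),
# then use the database normally.
sqlite3 /tmp/app.db "PRAGMA journal_mode=WAL;"    # prints: wal
sqlite3 /tmp/app.db "CREATE TABLE IF NOT EXISTS visits(id INTEGER PRIMARY KEY, path TEXT);
                     INSERT INTO visits(path) VALUES ('/home');
                     SELECT COUNT(*) FROM visits;"
```

With WAL plus a fast SSD, a single host can serve a surprising amount of traffic before the single-writer limit bites.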


Where are you seeing these sort of roles? Are they labeled as "devops" or something else?


Platform engineers, perhaps - that's what my company is hiring for at the moment to get off Heroku.


One reason we have so many poor engineering managers is that the top engineers get promoted into management. The skills for the two jobs are very different.

I'm sure many good engineers have tried management and found it to be not for them. Have an open discussion with your manager and see where it goes - if you don't, you will likely end up applying for an engineering role elsewhere and leaving anyway.


I was interested in a position at Aiven some time back. Did a recruiter screen and then a manager screen, and then got asked to do a 20-hour take-home assignment in Python that was to be completed in a single week. I politely declined at that point. Is that still part of the hiring pipeline?


Yes, there is still a take home assignment involved, I think it has been tinkered with to reduce the time required slightly.


I have a Shelly EM on my main grid circuit and solar array and find it to be an excellent product. They have lots of other automation products too.


I'm kind of similar. I write up how I set something up or some strange problem I solved, mainly for my own reference, but if it also helps someone else, great. Once or twice I've googled a problem and landed on the solution in my own blog or a Stack Overflow post I made years ago!


For a job that needs to access hundreds of thousands of small files, the ability to read the metadata quickly is very important.

This is the wider issue with small files. On HDFS each file uses up some Namenode memory, but if there are jobs that need to touch 100k+ files (and I have seen plenty of them), that puts a real strain on the Namenode too.

I have no experience with S3 to know how it would behave in terms of metadata queries for lots of small objects.


Small files on S3 are both slow and expensive too. But at least one bad query won't be able to kill your whole cluster like it can on HDFS.


Apache Ozone https://hadoop.apache.org/ozone/ is an attempt to make a more scalable (for small files / metadata) HDFS-compatible object store with an S3 interface. Solving the metadata problem in the HDFS Namenode itself will probably never happen now - too much of the Namenode code expects all the metadata to be in memory. Efforts to improve Namenode scalability have instead been around "read from standby", which offers impressive results.

The metadata is not the only problem with small files. Massively parallel jobs that need to read tiny files will always be slower than if the files were larger. The overhead of getting the metadata for a file and setting up a connection to do the read is quite large when you only read a few hundred KB or a few MB.
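A quick local sketch of the effect (paths and sizes invented). On a local filesystem the per-file overhead is tiny; on HDFS or S3 every open is a metadata lookup plus a network round-trip, so the gap is far larger, but the shape of the problem is the same:

```shell
# Same 10 MB of data: 1000 x 10 KB files vs one 10 MB file.
mkdir -p /tmp/many && rm -f /tmp/many/*
for i in $(seq 1 1000); do head -c 10240 /dev/zero > /tmp/many/f$i; done
head -c 10485760 /dev/zero > /tmp/one.bin

# 1000 metadata lookups + opens + reads...
time cat /tmp/many/* > /dev/null
# ...vs one open and a single sequential read of the same amount of data.
time cat /tmp/one.bin > /dev/null
```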

The other issue with the HDFS Namenode is that it has a single read/write lock protecting all the in-memory data. Breaking that lock into a more fine-grained set of locks would be a big win, but quite tricky at this stage.

