Hacker News | ptx's comments

Although PowerShell borrows the syntax, it (as usual!) completely screws up the semantics. The examples in the docs [1] show first setting descriptor 2 to descriptor 1 and then setting descriptor 1 to a newly opened file, which of course is backwards and doesn't give the intended result in Unix; e.g. their example 1:

  dir C:\, fakepath 2>&1 > .\dir.log
Also, according to the same docs, the operators "now preserve the byte-stream data when redirecting output from a native command" starting with PowerShell 7.4, i.e. they presumably corrupted data in all previous versions, including version 5.1, which is still bundled with Windows. And they apparently still do so, mysteriously, "when redirecting stderr output to stdout".

[1] https://learn.microsoft.com/en-us/powershell/module/microsof...
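The ordering point is easy to check against a POSIX shell, where redirections are processed left to right. A rough sketch in Python (Unix-only, since it shells out to /bin/sh):

```python
import os
import subprocess
import tempfile

# In a POSIX shell, redirections apply left to right.
# "2>&1 > log" first makes fd 2 a copy of the *current* fd 1 (here, the
# pipe back to us), and only then points fd 1 at the log file, so stderr
# never reaches the file. "> log 2>&1" is the order that captures both.
cmd = "echo out; echo err >&2"

with tempfile.TemporaryDirectory() as d:
    log = os.path.join(d, "dir.log")

    wrong = subprocess.run(["sh", "-c", f"{{ {cmd}; }} 2>&1 > {log}"],
                           capture_output=True, text=True)
    wrong_log = open(log).read()   # only "out" lands in the file

    right = subprocess.run(["sh", "-c", f"{{ {cmd}; }} > {log} 2>&1"],
                           capture_output=True, text=True)
    right_log = open(log).read()   # both "out" and "err" land in the file
```

In the "wrong" ordering, "err" comes back on the captured stdout instead of ending up in the log file, which is exactly the result the PowerShell docs' example would produce under Unix semantics.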


IIRC PowerShell would convert your command's stream to your console encoding. I forget if this is according to how `chcp.com` was set or how `[Console]::OutputEncoding` was set (which is still a pain I feel in my bones for knowing today).

It's also not a file descriptor. It's a PowerShell stream, of which there are six you can redirect (Success, Error, Warning, Verbose, Debug, Information), and which are similar to log levels.


But the rest of QBASIC is missing.

Well... Right here on the very first website, Tim Berners-Lee talks about how to build interactive web applications (here called "gateways"), albeit server-side rather than client-side: https://info.cern.ch/hypertext/WWW/FAQ/Server.html

Couldn't they simply switch to zip files? Those have an index and allow opening individual files within the archive without reading the whole thing.
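For illustration, the index in question is the ZIP central directory at the end of the file; Python's zipfile shows how it lets you pull out one member without touching the rest. The member names here are invented, not a real KDBX-as-zip layout:

```python
import io
import zipfile

# Build a small archive in memory (hypothetical layout, for illustration).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("database.xml", "<KeePassFile/>")
    z.writestr("attachments/key.bin", b"\x00" * 1024)

# Reading: the central directory at the end of the file is the index,
# so one entry can be located and read without scanning the others.
with zipfile.ZipFile(buf) as z:
    names = z.namelist()            # comes straight from the index
    xml = z.read("database.xml")    # seeks directly to this entry
```
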

Also, I don't understand how using XML makes for a brittle schema and how SQL would solve it. If clients choke on unexpected XML elements, they could also do a "SELECT *" in SQL and choke on unexpected columns. And the problem with people adding different attributes seems like just the thing XML namespaces was designed for.


It's a single XML file. Zip sounds like the worst of both worlds. You would need a new schema that had individual files at some level (probably at the "row" level). The article mentions SQLCipher, which allows encrypting individual values separately with different keys. Using different keys for different parts of a kdbx sounds ridiculous, but I could totally imagine each row being encrypted with a compound key (a database-level key and a row-level key), or using PKI with a hardware token, so that you don't need to decrypt the whole row to read a single field, and a passive observer with access to the machine's memory can't gain access to secrets the user didn't explicitly request.

ZIP files can have block-like relatives of the SQLite page. It could still be a single XML file and have piecewise encryption, in a way where saving a change doesn't require an entire file rewrite, just the blocks that changed and the updated "File Directory" at the end of the ZIP file.

Though there would be opportunity to use more of the ZIP "folder structure" especially for binary attachments and icons, it wouldn't necessarily be "required", especially not for a first pass.

(That said there are security benefits to whole file encryption over piecewise encryption and it should probably be an option whether or not you want in-place saves with piecewise encryption or whole file replacement with whole file encryption.)
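The in-place save described above (append new copies of the changed blocks, then write an updated directory at the end) can be sketched with Python's zipfile in append mode; the member name is invented, and note that the superseded copy lingers in the file until a compacting rewrite:

```python
import io
import warnings
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("database.xml", "<rev>1</rev>")

# "Save" by appending a new copy of the changed member; on close, zipfile
# rewrites the central directory at the end, so the new copy wins on lookup.
with warnings.catch_warnings():
    warnings.simplefilter("ignore")      # zipfile warns about duplicate names
    with zipfile.ZipFile(buf, "a") as z:
        z.writestr("database.xml", "<rev>2</rev>")

with zipfile.ZipFile(buf) as z:
    current = z.read("database.xml")     # the appended copy
    copies = z.namelist().count("database.xml")  # the old one is still there
```
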


A ZIP file with solid encryption (i.e., the archive is encrypted as a single whole) has all of the same tradeoffs as a KDBX file as far as incremental updates are concerned.

A ZIP file with incremental encryption (i.e., each file is individually encrypted as a separate item) has its own problems. Notably: the file names are exposed (though this can be mitigated), the file metadata is not authenticated, and the central directory is not authenticated. So sure, you can read that index, but you can't trust it, so what good is it doing? Also, to support incremental updates, you'd either have to keep all the old versions of a file around, or else remove them and end up rewriting most/all of the archive anyway. It's frankly just not a very good format.


> My cursory (lol, get it?) understanding may be incorrect.

I don't get it. Is this a reference to database cursors? Or is it implying that the blog post was AI generated?


Fortunately for everyone, database cursors. Took me a second to even realize how this could be related to AI. (I've never used Cursor!)

Apparently "AI is speeding up the onboarding process". But isn't that because the onboarding process is about learning, and by having an AI regurgitate the answers you can complete the process without learning anything? That might speed it up, but it completely defeats the purpose.

Yes, that's how I'd interpret it, too.

According to the article, onboarding speed is measured as “time to the 10th Pull Request (PR).”

As we have seen on public GitHub projects, LLMs have made it really easy to submit a large number of low-effort pull requests without having any understanding of a project.

Obviously, that kind of increased onboarding speed is not necessarily good for an organization.


Yeah it should only count ACCEPTED pull requests.

I think there's definite scope for that being true; not because you can start doing stuff before you understand it (you can), but because you can ask questions of a codebase you're unfamiliar with and learn about it faster.

I'd guess the time till first being able to make useful changes has dropped to near zero, but the time to gain mastery of the codebase has gone towards infinity.

Is that mastery still useful as time goes on, though? It's always felt a bit unhealthy for code to have people with mastery of it. It's a sign of a bad bus factor. Every effort I've ever seen around code quality and documentation improvement has been to make that code mastery and full understanding irrelevant.


Correct. Reading code is important. The details are in the minutiae, and with code, the minutiae are what matter.

Summarizing this with AI makes you lose that context.


This has been my experience as a dev, and it always confuses me when people say they prefer to work at a “higher level”. The minutiae are often just as important as some of the higher level decisions. Not everything, but not an insignificant portion either. This applies to basic things like correctness, performance, and security - craft, style, and taste are not involved.

> this new method is possible to work because FreeBSD switched from Heimdal Kerberos implementation to MIT Kerberos in FreeBSD 15.0-RELEASE … and I am really glad that FreeBSD finally did it.

What was the problem with Heimdal? The FreeBSD wiki says they used an old version, but why not upgrade to a newer version of Heimdal instead of switching to an entirely different implementation?


Because we (Heimdal) need to make a release, darn it. I'm going to cut an 8.0 beta within a week or two.

Basically, an 8.0 release is super pent up -- years. It's got lots of very necessary stuff, including support for the extended GSS-API "cred store" APIs, which are very handy. Lots of iprop enhancements, "virtual service principal namespaces", "synthetic client principals", lots of PKINIT enhancements, modern public key cryptography (but not PQ), etc.

The issue is that the maintainers (myself included) have been busy with other things. But the pressure to do a release has ramped up significantly recently.


Also things like support for GSS-API pre-authentication mechanisms (so, you can use an arbitrary security mechanism such as EAP to authenticate yourself to the KDC), the new SAnon mechanism, pulling in some changes from Apple's fork, replacing builtin crypto with OpenSSL, etc. Lack of release has been typical OSS lack of resources: no one is paid to work on Heimdal full time.

Oh yeah, it's huge.

Also included are experimental:

- httpkadmind (which together with virtual service principal namespaces makes a very nice keytab orchestration system)

- bx509d (an online CA)

- JWT support for the above


This [0] may provide a hint. Heimdal was developed outside of the US and was not subject to export restrictions, unlike MIT's implementation. So perhaps MIT Kerberos wasn't the package of choice to begin with.

And this [1] says for interoperability reasons.

[0] https://docs-archive.freebsd.org/doc/11.1-RELEASE/usr/local/...

[1] https://freebsdfoundation.org/project/import-mit-kerberos-in...


I don't think that has anything to do with FreeBSD's choice of MIT Kerberos or Heimdal.

Well, except the FreeBSD Foundation explicitly says MIT was chosen for interoperability.

Are you disputing the FreeBSD Foundation document?


Er, sorry, I meant the whole thing about Heimdal being non-U.S. based.

Is that safe? Microsoft's policy [1] seems to say that anyone can publish an update to a package as long as it passes "an automated process" which checks that it's "not known to be malicious".

[1] https://learn.microsoft.com/en-us/windows/package-manager/pa...


It’s not. And it gets worse. A WinGet package can suddenly be introduced for software you have already installed and then the next "update all" will install whatever. Could be something completely different!

WinGet is not only unreliable, it is but one step removed from Remote Code Execution as a Service. Well, maybe one-and-a-half, if package repo maintainers were to pay attention, but that’s not realistic.


It would have prevented both this 7-Zip attack and the recent Notepad++ one.


Is this the same compiler that famously spurred Richard Stallman to create GCC [1] when its author "responded derisively, stating that the university was free but the compiler was not"?

It seems to be free now anyway, since 2005 according to the git history, under a 3-clause BSD license.

[1] https://www.gnu.org/gnu/thegnuproject.en.html


The relevant bit:

"Shortly before beginning the GNU Project, I heard about the Free University Compiler Kit, also known as VUCK. (The Dutch word for “free” is written with a v.) This was a compiler designed to handle multiple languages, including C and Pascal, and to support multiple target machines. I wrote to its author asking if GNU could use it.

He responded derisively, stating that the university was free but the compiler was not. I therefore decided that my first program for the GNU Project would be a multilanguage, multiplatform compiler."

And not only was the university 'free' and the compiler not, neither was 'Minix', which was put out there through Prentice Hall in a series of books that you had to pay a fairly ridiculous amount of money for if you were a student there.

So the VU had the two main components of the free software world in their hand and botched them both because of simple greed.

I love it how RMS has both these quotes in the same text:

"Please don't fall into the practice of calling the whole system “Linux,” since that means attributing our work to someone else. Please give us equal mention."

"This makes it difficult to write free drivers so that Linux and XFree86 can support new hardware."

And there are only a few lines between those quotes.


I was one of those students saving up the large sum for the book, when Linux was announced. There were other tensions at the time - the biggest was that Minix on 8086 was 16 bit real mode only. Someone had developed patches to run in 32 bit protected mode, but they were invasive and large, and the Minix maintainers would not integrate them as the increased complexity would not help the mission of Minix being easy to learn and tinker with. The filesystem code was also single threaded, essentially doing one request at a time. IIRC there were patches to address that too, also not integrated for the same reason. (Note that the books included print outs of the source so keeping it short did matter.)

This explains the final 2 sentences of the original Linux announcement:

> PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.

The book publisher is blamed for preventing Minix from being freely distributed: https://en.wikipedia.org/wiki/Minix#Licensing


Tanenbaum made that deal. He collected royalties from the book (as was his right), but it clearly was a way for him to make money. Just another part of the textbook grift, because students were forced to work on Minix long after that made any sense at all.

Ironically, that single threaded nature of the FS made it a perfect match for my own little OS and I happily hacked it to pieces to bootstrap it using message passing into a FS executable. That trick probably saved me a year in bringing up the kernel to the point that the OS could compile itself, which greatly sped up development.


> students were forced to work on Minix long after that made any sense at all

Not to defend the textbook grift or the lack of vision here, but I strongly suspect an undergraduate minix course taught at VU would be very good. It’s not obvious to me that it would be inferior to the xv6-based course taught at MIT, for example.


That's fair, but it would be no less effective than a similar course based on Linux which would actually give the graduate a far more practical amount of knowledge. Acquisition of knowledge isn't free and to purposefully use a toy when the real thing is freely available for commercial reasons is just grift and AT and VU were well aware of this.

Note that all I'm doing here is taking AT at his word that he developed Minix solely because the source to Unix wasn't free to universities to hack on. They could have adopted Linux from the day that it became available then, or at least the beginning of the next academic year.


I believe in the present day, the premise motivating these undergrad books and courses based on alternatives (VU and Minix, MIT and xv6, Purdue and Xinu, God knows what else) is that Linux has become too complicated for an introductory course. I honestly don’t have any instinct as to whether this is correct pedagogically. I suspect the two main factors are how well the software facilitates getting students situated and in a position to do meaningful programming assignments quickly, and how motivated the students are to work on the software.

I remember taking a security-oriented class ages ago and hacking on an operating system that was already dead as a trilobite, and we were all smart enough to realize this was not a triumph we'd be bragging about to our future children (or recruiters). Bleh.


> Linux has become too complicated for an introductory course.

So that already suggests a fantastic way to make some progress.

I think Tanenbaum had a unique vision at the time, but he went about it in the most ham handed manner possible and if not for VU Minix wouldn't even be remembered today. Linus had a huge advantage: he didn't have a lifestyle to support just yet.


Terrible mistakes. People keep repeating these mistakes. Makes me think of Larry McVoy.


Re your last paragraphs: I think RMS really meant just the Linux kernel when he wrote that (the topic is drivers, after all), not GNU/Linux the OS or GNU/Linux "the system". So it can be argued that he isn't really contradicting himself.


Agreed. As a practical example, Alpine Linux isn't a GNU/Linux OS, but it does use Linux+Xorg graphics drivers.


Selling ACK meant money for research into distributed systems (Amoeba) and parallel programming languages. I can see that money for research is more attractive than open source.

For MINIX the situation was different and I think more unfortunate. AST wanted to make sure that everybody could obtain MINIX and made his publisher agree to distributing the MINIX sources and binaries on floppies. Not something the publisher really wanted, they want to sell AST's book. In return the publisher got (as is usual for books) the exclusive right to distribute MINIX.

Right at the start that was fine, but when Usenet and the Internet took off, that became quite painful. People trying to maintain and distribute patch sets.


I disagreed strongly with that at the time and still do. The money we're talking about here was a pittance compared to the money already contributed by Dutch society to the university where these people were working. Besides that some of these royalty streams went into private pockets.

A friend of mine was studying under Andy and I had a chat with him about this at his Amstelveen residence prior to the release. He was dead set on doing it that way. As a non-student and relatively poor programmer I pointed out to him that his chosen strategy would make Minix effectively unaffordable to me in spite of his stated goal of 'unlocking unix'. So I ended up in Torvalds' camp when he released Linux as FOSS (I never contributed to either, but I figured as a user I should pick the one that would win the race, even if from a tech perspective I agreed more with Tanenbaum than with Torvalds).

Minix was (is?) flogged to students of VU for much longer than was beneficial to those students; all that time and effort (many hundreds of man-years by now) could have gone into structurally improving Linux. But that would have required admitting a mistake.


Universities get paid for teaching and research. Any software that is produced is a by-product. Producing production-quality software in a university is not easy, and the university has to find a way to fund it.

MINIX was originally a private project of ast. It worked very well for the goal of teaching students the basics of operating systems.

One thing that might have been a waste of time is making the MINIX utilities POSIX compliant. Then again, many students would like an opportunity to work on something like that. The ones that wanted to work on Linux could just do that. Students worked in their free time on lots of interesting projects that were unrelated to the university.


> The ones that wanted to work on Linux could just do that.

Sure, but time is a very finite quantity, and wasting a couple of years on Tanenbaum's pet project may have resulted in some residual knowledge about how operating systems in general work. But looking at most of the developments they pursued, the bulk were such dead ends that even outside of VU there was relatively little adoption. The world had moved to Linux and VU refused to move with it.

From being ahead they ended up being behind.


I wonder who you are thinking of who 'wasted a couple of years'. Regular students do one course in operating systems. That is a series of lectures and some practical work. The practical work is a couple of weeks at most if you know what you are doing.

Some people spent a lot more time on MINIX, but that was either as a hobby or the PhD students who worked on MINIX3. But MINIX3 generated lots of papers with a best paper award, so that can hardly be seen as wasted from an academic point of view.


I have some friends that went that route. They did not come away with anything that helped their careers later on and the 'academic point of view' in CS in NL hasn't been the best way to put food on your table since the days of Dijkstra.


> I love it how RMS has both these quotes in the same text:
>
> "Please don't fall into the practice of calling the whole system “Linux,” since that means attributing our work to someone else. Please give us equal mention."
>
> "This makes it difficult to write free drivers so that Linux and XFree86 can support new hardware."
>
> And there are only a few lines between those quotes.

I'll be honest, I don't understand your point here?


RMS calls it Linux, not GNU/Linux in the second quote.


He means Linux the kernel, getting new drivers.

Another interesting fact: before Linux came to be, GCC only became relevant because Sun started the trend among UNIX vendors of splitting UNIX into user and developer SKUs, thus putting the whole development tooling behind an additional license.


> the Free University Compiler Kit, also known as VUCK. (The Dutch word for “free” is written with a v.)

I'm not sure if I'm reading satire or they are having some fun trolling.


Of course RMS understood the overtone perfectly, but Vrije Universiteit (vu.nl) is the real name of the university. Its name can be translated to "liberated university". As I understand it, it's a free university in the sense that historically, students of all religions were eligible to attend, as opposed to e.g. Katholieke Universiteit which was Catholic.

https://en.wikipedia.org/wiki/Vrije_Universiteit_Amsterdam


The liberated part means free from government control. Until the VU, all Dutch universities belonged (indirectly) to the Dutch government.


Some universities, especially in Latin America, use the term "autonomous". Is that the same thing as "free" in this context?


Yes. Absence of direct control by the government. The VU was founded for religious reasons, so the main goal was to be able to teach theology according to the particular type of protestant Christianity that the founders of the VU believed in.


Vrije as in "Not Catholic", not as in beer.


Sounds like Katholieke Universiteit ought to release their own Compiler Kit ;)


Catholic University Compiler Kit? It would have to use one of the eponymous licenses if it didn't want to cause a paradox, heh.

https://lukesmith.xyz/articles/why-i-use-the-gpl-and-not-cuc...


I think the part that he - and you - missed is that tuition at the time was entirely free, so it wasn't just 'free' in one sense of the word.


The adjective meaning "free" is "vrij" or "vrije" in Dutch.

Amusingly, the Dutch verb "vrijen" does, in fact, mean to have sex.


I like the Afrikaans (evolved from Dutch) even better for its streamlined spelling and double-use depending on context:

Vry == "free" (noun) or "to court/kiss/have sex" (verb, contextual).


You really just made an account now to make that point?


his comment was more useful than yours


He made an account name called vrijen which is having sex in dutch.. as he himself explained. Not sure if you noticed that part


But it’s correct. :)

Linux the kernel has the drivers.


UniPress, RMS's arch enemy Evil Software Hoarder, sold a commercial version of the Amsterdam Compiler Kit as well as Gosling's Emacs.

https://compilers.iecc.com/comparch/article/92-04-041

UniPress made a PostScript back-end for ACK that they marketed with the NeWS version of Emacs, whose slogan was "C for yourself: PostScript for NeWS!"

https://news.ycombinator.com/item?id=42838736

>UniPress ported and sold a commercial version of the "Extended Amsterdam Compiler Kit" for Andrew Tanenbaum for many CPUs and versions of Unix (like they also ported and sold his Unix version of Emacs for James Gosling), so Emacs might have been compiled with ACK on the Cray, but I don't recall.

>During the late 80's and early 90's, UniPress's Enhanced ACK cost $9,995 for a full source license, $995 for an educational source license, with front ends for C, Pascal, BASIC, Modula-2, Occam, and Fortran, and backends for VAX, 68020, NS32000, Sparc, 80368, and others, on many contemporary versions of Unix.

>Rehmi Post at UniPress also made a back-end for ACK that compiled C to PostScript for the NeWS window system and PostScript printers, called "c2ps", which cost $2,995 for binaries or $14,995 for sources.

>Independently Arthur van Hoff wrote a different C to PostScript compiler called "PdB" at the Turing Institute, not related to c2ps. It was a much simpler, more powerful, more direct compiler written from scratch, and it supported object oriented PostScript programming in NeWS, subclassing PostScript from C or C from PostScript. I can't remember how much Turing sold it for, but I think it was less than c2ps.


https://donhopkins.com/home/archive/NeWS/NeScheme.txt


> cost $2,995 for binaries or $14,995 for sources

My goodness, this is hard to imagine today, when open source has driven the price of software (the code itself) to nil. And that's the price from decades ago. While I'm glad I don't have to pay $15K for a C-to-PostScript compiler, as someone who might have written similar software if I'd lived back in those days, I can imagine an alternate timeline where I'd be getting paid to write such tools instead of doing it as a hobby project.

> NeScheme.txt

Nice rabbit hole about LispScript, what a cool idea. I've been re-studying Scheme recently, its history and variants like s7, and was appreciating its elegance and smallness as a language, how relevant it still is. One of the books I'm reading uses Scheme for algorithmic music composition. (Notes from the Metalevel: An Introduction to Computer Composition)


This does not surprise me at all, if other stories I heard are true.


Go on...


Nothing bad, it just doesn't surprise me, given the reaction he gave to Stallman.


To avoid confusion, since you say the process is reversible, you might want to use the term pseudonymization rather than anonymization.

