Hacker News | rphln's comments

Flamegraphs are a really lovely tool for visualizing trees. Slightly related anecdote:

A while ago I was experimenting with interactive exploration of (huge) Monte Carlo Tree Search trees. Inspired by file system visualization tools, my first attempts were also tree maps and sunburst graphs, but I ran into the same problems as in the article.

I tried flamegraphs next with the following setup:

- The number of visits in each node maps to the width and order of each bar (i.e., the most visited node was first and was the largest)

- The expected value maps to the color of each bar.

And then it was a perfect fit: it's easy to see what's going on in each branch at the first levels, and the deeper levels can be explored through drilling down.
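As a sketch of that mapping: flamegraph renderers typically consume "collapsed stack" lines of the form `path;to;node weight`. Assigning each MCTS node a self-weight of its visits minus its children's visits makes every frame's rendered width proportional to its total visit count, and sorting children by visits puts the most-visited branch first. The `Node` class here is hypothetical, and the expected-value-to-color part would live in the renderer, not in this format:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Hypothetical MCTS node: a move label, a visit count, and children.
    move: str
    visits: int
    children: list["Node"] = field(default_factory=list)

def collapsed_stacks(node, path=()):
    """Emit flamegraph 'collapsed stack' lines: "root;...;move weight".

    Each node's self-weight is its visits minus its children's visits,
    so the total width of every frame ends up proportional to the
    node's visit count.
    """
    path = path + (node.move,)
    self_weight = node.visits - sum(c.visits for c in node.children)
    lines = []
    if self_weight > 0:
        lines.append(";".join(path) + f" {self_weight}")
    # Most-visited child first, so it is drawn leftmost.
    for child in sorted(node.children, key=lambda c: -c.visits):
        lines.extend(collapsed_stacks(child, path))
    return lines

root = Node("root", 10, [Node("a", 6, [Node("a1", 4)]), Node("b", 3)])
print("\n".join(collapsed_stacks(root)))
```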


Can't speak about this one in particular, but the face scans for banking that I know of also include liveness checks to avoid AI impersonation (and presumably dead bodies).


What kind of liveness checks?


The app asks you to move the camera or change your expression throughout the verification.

I'm not familiar with how it works on the implementation side. An ML conference that I attended had a presenter working in this area, and beating AI impersonation was their #1 priority (along with other, more trivial spoofing approaches).

For what it's worth, this might also just be security theater from the banks, though.


> The app asks you to move the camera or change your expression throughout the verification.

The Natwest app will sometimes ask you to say words / numbers when doing a verification for e.g. a large transfer. Probably beatable by decent OCR feeding into voice synthesis with facial animation though (they don't have voice prints to check against that I'm aware of but even if they did...)


Mixing them should be relatively common when denoting intervals, as in "(a, b]" or "[a, b)", so that'd be one cause for being unbalanced. But even so, the math on their usage still doesn't add up.


This apparently fits the bill: https://patricksurry.github.io/posts/flowsnake/. Amit's post led me to search about recursive coordinates for hexes again and I came across it in a StackExchange comment.


> with the S and Q in Portuguese

Guilty as charged. Pronouncing letters and numbers in English requires an active mental effort for me, so that happens if I'm not paying attention. I guess it's because they don't really register as another language, even with context. My inner monologue also does letters and numbers in Portuguese, which doesn't help either.


There's already `ls --hyperlink` for clickable results, but that depends on your terminal supporting the URL escape sequence.
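The escape sequence in question is OSC 8; a minimal sketch of emitting a clickable link by hand (rendering depends entirely on terminal support):

```python
def osc8_link(url: str, text: str) -> str:
    """Wrap text in an OSC 8 hyperlink escape sequence.

    Layout: ESC ] 8 ; params ; URI ST <text> ESC ] 8 ; ; ST
    where ST (the string terminator) is ESC followed by backslash.
    """
    return f"\x1b]8;;{url}\x1b\\{text}\x1b]8;;\x1b\\"

# In a supporting terminal this renders as the clickable text "notes.txt".
print(osc8_link("file:///home/user/notes.txt", "notes.txt"))
```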


This is nice, but a poor substitute for what Genera was doing.

You see, Genera knows the actual type of everything that is clickable. When a program needs an input, objects of the wrong type _lose their interactivity_ for the duration. So if you list the files in some directory, the names of those files are indeed links that you can click on. Clicking on one would bring up a context menu of relevant actions (view, edit, print, delete, etc). If a program asks for a filename as input then clicking on a file instead supplies the file object to the program. Clicking on objects of other types does nothing.


> Genera knows the actual type of everything

I have this side-project fantasy of a very simple terminal pipe-types project. The basic idea is a small set of standardized types, demarcated using escape sequences: dates, filenames, URLs, numbers, and possibly one or two numeric units as well (time periods and file sizes only).

Tools that already produce columnar data (ls) get a flag that lets them output this format, and tools that work with piped data (cut, sort, uniq) get equivalents or modes that let them easily work with this.

Essentially, simple typed tables held in text, with enhancements for existing tooling to know how to deal with it. Would make my day-to-day on the command line much easier.
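One way the idea could be sketched, purely hypothetically: cells tagged with a type name, joined by the ASCII unit separator (0x1f) and record separator (0x1e). The delimiters, type names, and helpers here are all invented for illustration, not any existing format:

```python
# Hypothetical wire format: each cell is "<type>:<value>", cells joined
# by the unit separator, records by the record separator.
US, RS = "\x1f", "\x1e"

def encode(rows):
    """Serialize rows of (type_name, value) cells into one string."""
    return RS.join(US.join(f"{t}:{v}" for t, v in row) for row in rows)

def decode(data):
    """Parse the format back into rows of (type_name, value) tuples."""
    return [[tuple(cell.split(":", 1)) for cell in rec.split(US)]
            for rec in data.split(RS)]

def sort_by(rows, type_name, key=int):
    """The kind of mode `sort` could grow: order rows by a typed column."""
    return sorted(rows, key=lambda row: next(
        key(v) for t, v in row if t == type_name))

rows = [[("filename", "notes.txt"), ("size", "1024")],
        [("filename", "todo.txt"), ("size", "64")]]
assert decode(encode(rows)) == rows          # lossless round-trip
print(sort_by(rows, "size"))                 # smallest file first
```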


Could be fun :)

But note that on the Lisp Machine/Genera, every type has a presentation and can be “printed” to the REPL. This includes any new classes that you create as part of your own programs. It’s not just a small list of standard types, but every type.

The standard tutorial for the system is to implement Conway’s Game of Life. It has you create a class to hold the game board and then guides you through the process of defining a presentation for it so that it can be displayed easily.


I think PowerShell works this way essentially. As I understand, all data is structured which makes formatting and piping to other programs much simpler.


Arcan is experimenting with something like this (among others): https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger...

See also:

* NuShell (https://www.nushell.sh/)


nushell goes in that direction. Programs can output tables, and the shell (or other tools) know how to work with this structured data.


I always thought to do that by having a virtual file system that tags my files, so they become available at specific locations if they fit the bill.


https://kellyjonbrazil.github.io/jc/docs/parsers/ls.html

...glom on to this: "+JSONSchema" with some sort of UNIX-ish taxonomy. Everything from `man test`, add in `man du`, `date`, `... ago` (relative time) as you'd mentioned.

`jc ls | add_schema...` => `jq ...`

...or `jc ls --with-schema | jq ...`

(it appears as though `jc` already supports schemas, so perhaps it'd be `jc ls --with-types` or something, but there's your starting point!)


That's neat and a similar idea. I think JSON probably ends up being too expressive (not just an array of identically-shaped shallow objects), too restrictive (too few useful primitives), and also too verbose a format, but a wrapping command like that is a good starting point.


I'll share this comment from 7 months ago with you:

https://news.ycombinator.com/item?id=40100069

"prefer shallow arrays of 'records', possibly with a deeply nested 'uri'-style identifier"

...the clutch result is: "it can be loaded into a database and treated as a table".

The origin of this technique for me was someone saying, back in the 2000-ish timeframe (and effectively modernized here):

    sqlite-utils insert example.db ls_lart <( jc ls -lart )
    sqlite3 example.db --json \
      "SELECT COUNT(*) AS c, flags FROM ls_lart GROUP BY flags"
    [
      {
        "c": 9,
        "flags": "-rw-r--r--"
      },
      {
        "c": 2,
        "flags": "drwxr-xr-x"
      }
    ]
...this is a 'trivial' example, but it puts a really fine point on the capabilities it unlocks. You're not restricted to building a single pipeline, you can use full relational queries (eg: `... WHERE date > ...`, `... LEFT JOIN files ON git_status...`), you can refer to things by column names rather than weird regexes or `awk` scripts.

This particular example is "dumb" (but ayyyy, I didn't get a UUOC cat award!) in that you can easily muddle through it in different (existing pipeline) ways, but SQL crushes the primitive POSIX relationship tooling (so old, ugly, and unused they're tough to find!), eg: `comm`, `paste`, `uniq`, `awk`


Tab completion has developed some similar features. I've seen shells that will only autocomplete what seem to be appropriate choices.


I typically turn this off. Many times it's too slow, and many times it hides local filenames, and I do want local filenames.


That's one aspect I prefer in playing with TempleOS over Linux. The rest of the command line is a bit of a pain, with no history, C-as-a-shell, etc.


  $ man ls | grep '\--hyperlink' -A 1
  --hyperlink[=WHEN]
         hyperlink file names WHEN


Would've been perfect had you said jndex.


I'm so sorry


I think it'd be interesting to compare against F#, especially since Godot already has first-class support for .NET. When the bindings get fully fleshed out, what will OCaml bring to the table to warrant the extra hoops for a user who has no prior preference for either language?


Godot uses source generators to fill out C# partial classes, so it doesn't quite have native F# support. You'd at least need C# shims for your types that inherit from Godot classes.


Bummer, I had no idea about that. But I guess that, at least for turn-based games, it wouldn't be super annoying if you manage to split the logic (F#) from the UI glue (C#), right?

(I never got around to that point though, I usually start making games as libraries and then get nerd sniped into reading about game theory for the AI before I make them playable for humans.)


This is why I haven't done f# with godot yet, haven't felt like building out the shims.


Is it first-class support? I hear lots of godot devs saying "I can't use X because I'm using C# instead of gdscript"


C# support is great. But yes, if you need to call a library/extension written in gdscript from the C# code, you'll need to write some C# bindings to make it practical.


The only thing not available in C# in godot 4 is web exports - they'll come whenever upstream support arrives.

Using C# actually nets you more features, not less.


It is first-class support; as far as I know, everything is accessible from C#. I developed YKnytt, an open source game, fully in C# and never had the issue of something not working in C# as it did in GDScript.


> the extra hoops for the user

The idea of Godot is "extra hoops."


Can you elaborate on that? I thought it was a fairly batteries included game engine, if you use gdscript at least.


Previous discussion: https://news.ycombinator.com/item?id=13857116

Love it, thanks for posting.

Reading through, this method sounds a lot like Thompson sampling, and sure enough, the previous thread came up after a quick search for comparisons.

Sadly, no immediate empirical results came up, but having another tool in the belt is good on its own.


Often I wish that:

    enum Auth { Guest, User(u32), Bot(u32) }
was just syntactic sugar for

    struct Auth::Guest;
    struct Auth::User(u32);
    struct Auth::Bot(u32);

    type Auth = Auth::Guest | Auth::User | Auth::Bot;
where `|` denotes an anonymous union. Using enums feels very clunky when you start needing to work with subsets of them.


I 100% agree. I tried a few workarounds to get the same result, but it quickly gets out of hand. The two main solutions I'm using are: define the individual variants as in your post, plus one enum per union I care about (even with macros it gets very verbose, and it requires explicit conversions, so it's not zero cost); or define a generic enum where each variant corresponds to a generic parameter, and set that parameter to an uninhabited type (such as Infallible) when the variant is not possible. Converting is not zero-cost there either. You can recover the zero cost manually, but then you need to deal with the discriminants yourself and the compiler can't help you.

The pattern type work I mentioned earlier would bring us very close to what you have at the end:

    enum Auth { Guest, User(u32), Bot(u32) }

    type FullAuth = pattern_type!(Auth, Auth::Guest | Auth::User(_) | Auth::Bot(_)); // useless, but works
    type IdAuth = pattern_type!(Auth, Auth::User(_) | Auth::Bot(_));
The full enum still needs to be defined; in particular, it serves as the basis for the in-memory representation. All the subsets can be handled by the compiler, though.

