That's not all: there's also the cost of (down)loading the dead code, and the startup cost of loading the unoptimised code. It's even more critical when you consider environments where you can't JIT, like React Native on iOS, which AFAIK was one of the many motivations for Prepack.
> What will Verve do well, that no (major) language does well?
I'm always thinking about that, and I honestly wouldn't write a "proper" language (as in working on it full time, and expecting people to really adopt it) without having an answer for that. Nonetheless, I still see the value in writing things that are not ground-breaking, for the sake of learning. Just reading about compilers and PLT without any practice was really hard for me, and working on this language has been super helpful from a learning perspective.
I have had an idea concerning error handling, but I don't really know if it makes sense. There has been a push in the functional programming community to do error handling with Maybe or Either monads, or something similar to them. However, this creates a situation where many functions return these monads (two-track values), but most functions accept simple values (one-track values) as input. So there is a constant need to lift functions. I only know a little Haskell, so maybe the burden of constantly lifting is not so awful, but it currently seems that way to me.

I was thinking that a language that could detect this and automatically lift the functions might be cool. However, the normal case is simply to push failures through the system by skipping all functions after the failure, and this is not always the behavior needed. Perhaps it is OK to continue if only one of a set of functions succeeds, or perhaps one wants to retry some number of times, or for some duration, before giving up. It would seem possible to use monads to handle all the desired cases, but not with auto-lifting.

In any case, I really think that a better system for handling errors would be the killer feature that makes a language worthwhile. This is something that would pervade the language's standard library, and thus makes sense for a new language. This video partially inspired the idea: Scott Wlaschin - Railway Oriented Programming — error handling in functional languages. https://vimeo.com/97344498 The other inspiration is the language Icon, where every expression has a success/failure property with automatic backtracking.
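To make the lifting burden concrete, here's a rough sketch in Python (not tied to any particular language proposal): failure is modelled as `None` standing in for Maybe's Nothing, `lift` converts an ordinary one-track function into a failure-propagating one, and `bind` chains functions that can themselves fail. The "auto-lifting" idea would be for the compiler to insert these wrappers automatically.

```python
def lift(f):
    """Turn a one-track function (a -> b) into a two-track one (Maybe a -> Maybe b)."""
    return lambda x: None if x is None else f(x)

def bind(f):
    """Chain a failure-producing function (a -> Maybe b) onto a Maybe a,
    skipping it entirely once a failure has occurred upstream."""
    return lambda x: None if x is None else f(x)

def parse_int(s):
    """Two-track: may fail, returning None."""
    try:
        return int(s)
    except ValueError:
        return None

def double(n):
    """One-track: plain value in, plain value out."""
    return n * 2

def pipeline(s):
    # Manually lifted composition: parse, then double, with failures skipped.
    return lift(double)(bind(parse_int)(s))

pipeline("21")    # -> 42
pipeline("oops")  # -> None: the failure flows past `double` untouched
```

The default behavior here is exactly "skip everything after the first failure" — variations like retry-N-times or first-of-several-alternatives would need different combinators, which is why auto-lifting alone doesn't cover all the cases.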
What you're describing with lifting is not unlike algebraic effects, as found in Eff[1] and Idris[2]. Basically, you can have some sort of "exception" effect, limited to a particular type of error, and the system will mix effects without monad transformers. This particular application is not too dissimilar to checked exceptions, but the algebraic effect approach gives you a lot more power. It also makes it easy to deal with pure vs. impure functions, IO, etc. seamlessly.
Responding to internal errors differently based on the caller's wishes is possible in Lisp; there's a good overview here[3]. Algebraic effects can apparently do something similar, as described in this paper[4] (search for "lisp" and the relevant portion should show up).
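The Lisp condition-system idea can be sketched in Python (hypothetical names, not real Lisp semantics): the low-level code signals a condition without unwinding the stack, and a dynamically installed handler chosen by the caller decides how to recover.

```python
# A minimal condition-system analogy: handlers are dynamically scoped, and
# the signalling code keeps running with whatever value the handler returns,
# instead of unwinding as an exception would.
handler_stack = []

def with_handler(handler, thunk):
    """Run thunk() with `handler` installed as the innermost condition handler."""
    handler_stack.append(handler)
    try:
        return thunk()
    finally:
        handler_stack.pop()

def signal(condition):
    """Ask the innermost handler how to recover; return its answer to the signaller."""
    return handler_stack[-1](condition)

def parse_int(s):
    """Low-level code: on bad input it signals a condition and uses
    whatever replacement value the caller's handler supplies."""
    try:
        return int(s)
    except ValueError:
        return signal(("malformed-int", s))

# The caller, not parse_int, decides the policy: substitute 0 for bad input.
result = with_handler(lambda cond: 0, lambda: parse_int("oops"))
# result == 0; parse_int("7") under the same handler still returns 7.
```

The key difference from exceptions is that the decision point stays at the signalling site: the handler runs *before* any unwinding, so "retry", "use a default", or "abort" are all choices the caller can make per call site.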
Another language worth looking into if you're interested in effect systems is Nim. It includes an effect system that covers both exceptions and other effects (for example, IO read/write effects): http://nim-lang.org/docs/manual.html#effect-system-exception.... The exception tracking is similar to a checked exception system, but far less annoying than Java's. In my opinion it is the best way to ensure exceptions are handled.
I have a suggestion: what about having only fractions, rather than integers and floating-point numbers? Some kinds of mathematics are faster and more precise when calculated with fractions instead of floating-point numbers.
It would be an interesting twist if fractions were the default.
Sorry, I shouldn't have implied you were intentionally misleading. I'm simply pointing out that by describing your language as dependency-free, you're just swapping the explicit IR layer for a hard-wired ISA layer (and the Linux ABI), which is no less of a dependency.
I mean, GAS/x86-64 is cool, I enjoy toy languages as much as the next person—I just wouldn't describe it any other way than architecture-specific (and therefore useless to most people) so I don't have to click through if I can't play with it.
I started with colons indicating the return type, but IMO it gets too confusing when you have functions as parameters. e.g. `foo(bar: (int, string): float): float` but that might be just personal preference.
Forth and Factor use a double dash to separate the inputs from the outputs, and multiple outputs are allowed. So your example could look something like `foo(bar: (int, string -- float) -- float)` or `foo(bar: (int, string -> float) -> float)`.
`extern` means it's implemented natively, but it has to conform to the interface the VM provides, and the function has to be registered with the VM. At runtime, the interpreter knows whether a function is local (i.e. in the bytecode) or extern (i.e. C++). If it's local, it'll jump to the function's offset in the bytecode; if it's extern, the interpreter will make a native call.
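A hedged sketch of that dispatch (illustrative only — not Verve's actual implementation, and all names here are made up): the function table records each function's kind, and calls branch on it.

```python
def native_strlen(args):
    """An 'extern' function: ordinary host-language code registered with the VM."""
    return len(args[0])

# name -> ("local", bytecode offset) or ("extern", native callable)
functions = {
    "hello":  ("local", 0x40),
    "length": ("extern", native_strlen),
}

def run_bytecode_at(offset, args):
    """Stub standing in for the interpreter's main loop."""
    return ("ran bytecode at", offset, args)

def call(name, args):
    kind, target = functions[name]
    if kind == "extern":
        return target(args)                    # direct native call
    return run_bytecode_at(target, args)       # jump to the bytecode offset

call("length", ["abc"])  # -> 3, via the native function
call("hello", [])        # -> dispatched into the bytecode interpreter
```

In a real VM the "extern" branch would go through a C calling convention or FFI, and registration would install the function pointer in the table; the Python dict here is just the function table made explicit.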
Right now it runs on its own VM. What I meant by "the VM is no longer necessary" was that the language went from being dynamic (when I was just prototyping with lisp) to static (in the current state), so it should be easy to generate machine code ahead of time, instead of having an interpreter (or adding a JIT).
The reason the definitions link to Wikipedia is that this is the first piece of documentation on Verve ever. Hopefully I'll be able to cover most of it in proper docs later, but I thought for now I'd add the link to the definitions in case someone reading through was not familiar with them (I personally hate to stop reading to start googling for acronyms).