This sums up quite a lot about Lisps in general. I'm amazed the OP got to this "insight" so fast :)
(And this is probably Lisp's greatest weakness as well – with this level of
possible diversity, everyone has to use the “lowest common denominator”
simply because nobody can agree on which alternative syntax / library / etc.
is better and should be used.)
Off-topic: it's not enough to give everyone the opportunity to improve the
language; you have to choose the winners and promote them heavily. The rules
of the free market don't work here; people won't use the best thing, they'll use
the one that's available out of the box and that their peers are using.
As a lisper, I can say that this insight is very true, but it is still just a simplification. Different lisps deal with it in different ways.
In the scheme world, the way they dealt with this was by wasting a decade on decisions about the language that should have been made in the 80s. The result is that scheme essentially split into scheme and racket. Now we have an awesome language and a nice little ecosystem that's good for teaching and research, and possibly even real work(tm). Classic scheme, unfortunately, is fragmented into implementations that all do things slightly differently and occupy their own niches.
In the clojure world, they have a) a BDFL who sets the course of the language, and b) a very strong core community of very smart people who have managed to build a nice culture based on shared ideas about software and design.
In the common lisp world, because we have a very high-quality standard, implementations are almost completely compatible. Compatibility libraries make it easy to write very portable code, avoiding the scheme problem, while implementations remain free to experiment. The other problem, everybody developing their own little universe, tends to be rare. Because common lisp is so old, we have a long history and traditions that guide future design without constraining it. There is a subtle balance here: we have a lot of old examples to learn from, but we are not locked in by too many bad old decisions (not always the case, but good enough in practice).
A few examples where this does not work include utility libraries and things like JSON libraries, libraries for outputting HTML, etc. Since we don't have a BDFL, we are left to figure things out among ourselves, and sometimes, as with utility libraries (there are dozens of them, which is ridiculous), it doesn't work. In other cases it works very well: quicklisp, ASDF, bordeaux-threads, closer-mop, hunchentoot, etc. are either de-facto standards or sufficiently popular to be a very good default. As with clojure, there are a lot of shared ideas in the community about what good design is, and we have a lot of examples to learn from, as I mentioned.
In the end, at least in the case of common lisp and clojure, I see this "level of possible diversity" as an advantage; it's what has kept lisp alive for 50+ years! The fact that lisp can adapt to each new era of software development philosophy is a great reason to study it. It will be with us for many more decades because of this.
Show me a language with >1 implementation where at least two are completely compatible. Common Lisp has one of the best track records in this aspect. C, C++ and JavaScript are all far worse.
C++ is similar with regards to C code. They're compatible enough that a lot of people will write "C/C++", but they're still incompatible enough that you'll get yelled at by a lot of people if you write "C/C++".
That's very true, and I think it's related to the topic of Rich Hickey's talk "Simple Made Easy" [1].
Maybe what we need is to study the economics of software and come up with a system in which the market outcome is the promotion of good libraries. I think the social and economic dynamics of software development play a huge role in building a successful product, both free and commercial. Has anyone studied the subject in greater detail?
I dream of a modular language in which languages, or dialects, can be built from a small base language, which can then be extended, and so on. Of the languages I've seen, Racket looks the most promising. Forth might be good for this too, but building large hierarchies of languages seems the antithesis of Forth's philosophy, or at least of Chuck Moore's.
On the other hand, such a language might just end up as an incredibly fragmented mess of different languages with little interoperability between them: someone makes a 'typed' version of the language, someone else takes that version and tunes it just enough to make it incompatible with the original typed version and, in turn, with all languages built on top of it, and so on.
Maybe incredible modularity is fundamentally at odds with (social) interoperability.
It's ironic that Haskell is such a language: it grows from a tight, small core into ever greater capability as you learn it, even without using its extensions.
I agree that Common Lisp is a very powerful language, but I can't live with all that power uncontrollably thrown at me. Common Lisp grossly lacks self-discipline and self-limitation where they're needed.
...but to be honest, in everyday work Javascript feels a lot like a bondage-and-discipline language, because it lacks so many features, and in practice you always use a restrictive "coding guideline" and a linter configured for the maximum strictness you can get, so you end up with a pretty verbose, hobbled, and restricted dynamic language. To keep your sanity and be able to work in a team in Javascript, you basically have to throw away the baby and keep the bathwater :)
I suspect the parent thread was dreaming of a framework where languages with arbitrary syntax can be mixed and matched in user-defined ways.
Haskell isn't necessarily that language, partly because it still requires centralized coordination of the development of these "extensions" to ensure they're interoperable - that is, there is only one parser for the language and its supported extensions, and many of them are built into the compiler rather than added as libraries, except those extensions done through quasi-quotation, as with MetaHaskell or some EDSL. Even that has its own problems: you'll have issues parsing if your quoted language happens to contain delimiters which conflict with Haskell's quasiquoting delimiters `[| |]`, producing syntax which cannot be parsed unambiguously (perhaps very rare or unlikely though).
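To make the quasi-quotation route concrete, here's a minimal sketch of a GHC QuasiQuoter; the `calc` quoter and its toy sum language are invented for illustration, not from any real library:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Calc (calc) where

import Language.Haskell.TH (litE, integerL)
import Language.Haskell.TH.Quote (QuasiQuoter(..))

-- A toy quoter: parses "+"-separated integers at compile time
-- and splices in their sum as an integer literal.
calc :: QuasiQuoter
calc = QuasiQuoter
  { quoteExp  = \s -> litE (integerL (sum (map read (splitOn '+' s))))
  , quotePat  = \_ -> fail "calc: expressions only"
  , quoteType = \_ -> fail "calc: expressions only"
  , quoteDec  = \_ -> fail "calc: expressions only"
  }
  where
    splitOn c str = case break (== c) str of
      (chunk, [])     -> [chunk]
      (chunk, _:rest) -> chunk : splitOn c rest
```

From another module (the Template Haskell stage restriction requires that) you would write `[calc| 1 + 2 + 3 |]`, which becomes the literal 6 at compile time. Note how a quoted language that itself contained `|]` would collide with the closing delimiter - exactly the clash mentioned above.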
Perhaps the biggest hurdle to having a modular language is that we do not understand how to unambiguously parse the combination of two or more syntaxes. We only know that the composition of two CFGs results in another CFG, with no guarantee of unambiguity; and other approaches such as PEGs rely on ordered choice, where the computer can't decide which choice you really want.
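As a throwaway illustration (this hand-rolled parser is a sketch, not a real library), here's how ordered choice silently resolves an overlap between two composed grammars instead of reporting the ambiguity:

```haskell
-- A minimal PEG-style parser: on success, return the value and the rest.
newtype P a = P { runP :: String -> Maybe (a, String) }

-- Ordered choice: commit to the left alternative if it succeeds at all.
orElse :: P a -> P a -> P a
orElse (P p) (P q) = P (\s -> maybe (q s) Just (p s))

keyword :: String -> P String
keyword t = P (\s -> if take (length t) s == t
                       then Just (t, drop (length t) s)
                       else Nothing)

identifier :: P String
identifier = P (\s -> case span (`elem` ['a'..'z']) s of
                        ("", _) -> Nothing
                        (w, r)  -> Just (w, r))

main :: IO ()
main = do
  -- Composing the same two sub-grammars in different orders gives
  -- different parses of the same input, with no ambiguity error:
  print (runP (keyword "for" `orElse` identifier) "formula")
  -- Just ("for","mula")
  print (runP (identifier `orElse` keyword "for") "formula")
  -- Just ("formula","")
```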
What makes lisps great for the composition of languages (or "EDSLs" in market speak) is that they bypass the parsing problem by asking you to write your language directly in terms of the syntax tree which a parser would generate - and perhaps use macros or other functions to simplify the use of that tree. Instead of a language being vocabulary+syntax, we create new vocabulary for what would be done through syntax in other languages - and we can thus refer to it unambiguously. Something similar can be done in Haskell too, through regular functions and quotation.
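For example, here's a sketch of the "vocabulary instead of syntax" idea in Haskell, building what would be HTML markup directly as the tree a parser would have produced (the names are illustrative):

```haskell
-- The "syntax tree" is just a data type; no parser involved.
data Node = Elem String [Node] | Text String

-- Each tag becomes ordinary vocabulary: a function, not syntax.
html, body, p :: [Node] -> Node
html = Elem "html"
body = Elem "body"
p    = Elem "p"

render :: Node -> String
render (Text s)      = s
render (Elem t kids) = "<" ++ t ++ ">" ++ concatMap render kids ++ "</" ++ t ++ ">"

main :: IO ()
main = putStrLn (render (html [body [p [Text "hello"]]]))
-- <html><body><p>hello</p></body></html>
```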
The parsing problem is only really a problem because we're stuck with this silly model of "sequential text files" for describing code, and we're required to limit our languages so that a parser can take one of these text files and make sense of it. When we break out of this model and use intelligent editors, we can reach the point where syntaxes can be composed arbitrarily, because we can indicate where each new syntax begins and ends. Diekmann and Tratt have demonstrated how this can be done while still appearing much like traditional text editing, with what they call Language Boxes.[1][2]
Language Boxes only provide the means to compose syntaxes, but handling the semantic composition of languages is left to the authors of the languages which are being composed. Haskell is perhaps a good choice of language for providing the kind of glue needed here, where we can decide where languages can be composed based on the types returned by their parsers.
I don't think the problem stops at syntax. It's possibly an even bigger issue that mixing different language semantics can be awkward. As a big obvious example, a language where all objects are nullable will interface awkwardly with one that only has option types. Similarly, interfacing with something like Smalltalk (which uses methods for flow control) or Forth (which…is Forth) would be awkward from a language that's more like C++.
Even in an environment like the JVM which specifies a lot of stuff for you, it's awkward to call into Clojure from Java because of the semantic differences.
I wasn't implying there is no problem with semantics, just that it's much easier to deal with when you already have the parsed trees, because they're easier to reason about in code - and we can project them unambiguously.
We already write tools for such language interoperability for specific pairs of languages; it's often really awkward because it requires re-implementing the parsers, and it only deals with entire code files rather than specific productions in the syntax.
It's pointless to compose languages unless it makes sense semantically, which would need to be decided on a per-language (or per-production-rule) basis. That's where I was hinting at using Haskell as the glue for such interoperability: if we encode the semantics into the type system, so that one syntax expects a language box of type T in its grammar, then one should be able to use any other language whose parser returns a T, and the semantics will be well-defined for it.
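A rough sketch of that idea, assuming a hypothetical `Box` type that tags each mini-language's parser with the type it produces:

```haskell
import Text.Read (readMaybe)

-- A "language box": a parser tagged by the type it yields.
newtype Box a = Box { runBox :: String -> Maybe a }

-- Two unrelated mini-languages that both happen to produce an Int:
decimal :: Box Int
decimal = Box readMaybe

roman :: Box Int
roman = Box (\s -> lookup s [("I", 1), ("II", 2), ("III", 3)])

-- A host-grammar slot expecting "any box of type Int":
-- either language can fill it, because the types line up.
double :: Box Int -> String -> Maybe Int
double box s = (* 2) <$> runBox box s

main :: IO ()
main = do
  print (double decimal "21")  -- Just 42
  print (double roman "III")   -- Just 6
```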
It could also provide the glue for converting between nullable types and option types, for example, by requiring that a language returning a "Nullable T" be wrapped in some function "ToOption" which converts "Nullable T" into "Option T". Attempting to use the Nullable where an Option is expected would fail to parse. How ToOption is implemented is left to the author of the code.
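Continuing the sketch, that glue might look like the following; `Nullable`, `Option`, and `toOption` are hypothetical names taken from the comment above, not a real library:

```haskell
data Nullable a = Null | NotNull a
data Option a   = None | Some a

-- The glue the language author must supply:
toOption :: Nullable a -> Option a
toOption Null        = None
toOption (NotNull x) = Some x

-- A host slot that accepts only Option values; a Nullable-producing
-- language has to go through toOption, or composition fails to type-check.
expectsOption :: Option Int -> String
expectsOption None     = "nothing"
expectsOption (Some n) = show n

main :: IO ()
main = putStrLn (expectsOption (toOption (NotNull 7)))  -- prints "7"
```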
It's much easier to have interoperability between individual production rules in different languages (which share many parts in common) than between the "whole text files" we currently have, which basically require the languages to be almost equivalent before you can convert between them.
Also, as a result of storing the semantic information rather than sequential text, it would be possible for the user to choose their preferred syntax for any semantic element in the tree, since they're just working on a pretty-printed version. Most concerns about "code style" disappear, because style is detached from the actual meaning that is stored.
Yes, Haskell does not allow the free composition of syntax extensions as easily as Lisp does. But it gets semantic composition right, using monads to encapsulate semantics and monad transformers to compose effects easily and in a controlled way. I think the latter is much more valuable.
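As a minimal illustration (using the standard transformers package), here is a toy stack language whose semantics are "state plus failure", built by stacking the two effects rather than entangling them:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, get, put, runStateT)

-- The language's semantics, composed from two separate effects:
type Stack = StateT [Int] (Either String)

push :: Int -> Stack ()
push x = get >>= \s -> put (x : s)

pop :: Stack Int
pop = get >>= \s -> case s of
  []     -> lift (Left "pop: empty stack")  -- failure from the inner layer
  (x:xs) -> put xs >> return x

addTop :: Stack ()
addTop = do { a <- pop; b <- pop; push (a + b) }

main :: IO ()
main = do
  print (runStateT (push 1 >> push 2 >> addTop >> pop) [])
  -- Right (3,[])
  print (runStateT pop [])
  -- Left "pop: empty stack"
```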