At one point I was seriously fascinated by the lambda calculus, and particularly the S and K combinators: two very simple functions from which you can build any computation.
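Concretely, the two combinators can be sketched as curried Python lambdas; everything else (even identity) falls out of applying them to each other:

```python
# A minimal sketch of the S and K combinators as curried functions.
# S x y z = x z (y z)    K x y = x
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x

# Identity is derivable: I = S K K, since S K K z = K z (K z) = z.
I = S(K)(K)
print(I(42))  # 42
```

The same trick scales up: any closed lambda term can be translated into a tree of S and K applications.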
Actually, one of the many things I learned from doing a final-year project for my CS degree on "benchmarking" different sets of combinators was how to write a garbage collector. I was doing this in '87/'88 on a Unix mini (an HLH Orion); memory was tight and my trees were comparatively huge!
It's worth noting, BTW, in case anyone is still following this thread, that what is going on here in all of these single-instruction languages is not particularly interesting. You're just encoding the program semantics in the arguments, or in the structure of the tree, or the length of the program, or something like that, rather than in the instructions.
If programming with just one instruction seems too low-level, I have just the solution for you! Now you can compile the user-friendly, high-level language of Brainfuck to this single-instruction machine!
I wrote a little implementation of an OISC virtual machine many years ago [1]. Feel free to have a play. There are some sample 'one instruction assembler' programs in the test programs folder.
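The classic choice for such a machine is subleq (subtract and branch if less than or equal to zero). The linked VM's exact conventions may differ, but the textbook variant is small enough to sketch in a few lines:

```python
# A minimal subleq interpreter: the canonical one-instruction machine.
# Each instruction is three words a, b, c meaning:
#   mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall through.
# Here a negative jump target halts the machine (conventions vary).
def subleq(mem):
    pc = 0
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Example: clear mem[7] by subtracting it from itself, then halt (c = -1).
print(subleq([7, 7, -1, 0, 0, 0, 0, 5])[7])  # 0
```

From "subtract self" you can build copy, add, and unconditional jump, which is why subleq compilers (including Brainfuck-to-subleq ones) are practical to write.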
My final year project was a hardware implementation of this. My group made an 8-core coprocessor and an OpenCL-like implementation, complete with an LLVM IR to SUBLEQ translator. We actually got better performance than the proprietary Nios II/e soft processor on our image processing demo (Y colorspace conversion).
Well it could be interesting as a test architecture for a new logic family as long as you have wide memory.
The Manchester Baby was the first electronic stored-program computer and was close to being a single-instruction computer. It can emulate the single-instruction computer with 5 of its 7 instructions: load, subtract, store, skip if negative, and jump. The two extra instructions were halt and relative jump.
Side note: Douglas W. Jones was touting this [1] in a computer architecture class in the mid-1980s. Fun stuff. He later went on to do a lot of work with computerized voting [2][3]. Great instructor...
How about the one instruction "RunInst a, b, c, d", whose behavior is defined as "Do the instruction described by parameter a in [your favorite architecture], upon parameters b, c, d"?
Yeah, the notion of "a single instruction" really doesn't mean a whole lot, just like how (in many languages) "a single line of code" doesn't mean much.
OISC is something you sometimes get to build in EE labs, usually as the last project of the class. It's something you can put together out of 7400-series chips and, at least in our case, an EEPROM for a lookup table. It's a fun thing to do and I recommend it.
I always thought MADD (multiply-add) made a great candidate for a single-instruction computer, provided the program counter was usable as an operand of the instruction.
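To see why the PC-as-operand part matters, here is a toy interpreter for a hypothetical MADD machine (a sketch, not any real ISA): each instruction is three addresses a, b, c meaning mem[c] += mem[a] * mem[b], with the program counter living at address 0, so a MADD targeting address 0 performs a computed jump.

```python
# Hypothetical MADD machine: mem[c] += mem[a] * mem[b], PC at address 0.
PC = 0

def run(mem, max_steps=1000):
    for _ in range(max_steps):
        pc = mem[PC]
        if not 0 <= pc <= len(mem) - 3:
            break                      # jumping out of range halts
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[PC] = pc + 3               # advance past the 3-word instruction
        mem[c] += mem[a] * mem[b]      # c == 0 would overwrite the PC
    return mem

# mem[11] += mem[9] * mem[10] (6 * 7), then force the PC negative to halt.
prog = [1,          # address 0: the PC; execution starts at address 1
        9, 10, 11,  # MADD 9,10,11 -> mem[11] = 42
        12, 13, 0,  # MADD 12,13,0 -> PC += -100, i.e. halt
        0, 0,       # padding
        6, 7, 0,    # data: 6, 7, result slot
        -100, 1]    # data used to smash the PC
print(run(prog)[11])  # 42
```

Without the PC being addressable there is no conditional control flow at all, which is why the qualifier in the comment above does the real work.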
http://en.wikipedia.org/wiki/SKI_combinator_calculus
More recently I found out about the U combinator which is a single combinator that can be used to define S & K:
http://en.wikipedia.org/wiki/Iota_and_Jot
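Taking the single combinator to be iota from the linked page, defined as ι x = x S K, a short sketch shows I, K, and S all falling out of repeated self-application (the particular encodings below follow the standard iota derivations):

```python
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
iota = lambda x: x(S)(K)   # the single combinator: iota x = x S K

I2 = iota(iota)                        # I = (iota iota)
K2 = iota(iota(iota(iota)))            # K = iota (iota (iota iota))
S2 = iota(iota(iota(iota(iota))))      # S = iota (iota (iota (iota iota)))
print(I2(1), K2(1)(2), S2(K)(K)(3))    # 1 1 3
```

So a program can, in principle, be written as a string over a one-symbol alphabet plus application, which is the basis of the Iota and Jot languages on that page.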
Having a system that implemented recursion using the Y combinator in terms of S & K and have it actually compute stuff was rather amusing. I'd be fascinated to see what it would look like using just U.
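For anyone wanting to try the recursion part: in an eagerly evaluated language like Python the plain Y combinator diverges, so this sketch uses its applicative-order variant (often called Z), which delays the self-application behind an extra lambda:

```python
# Recursion with no self-reference, via the applicative-order Y (Z):
# Z f = (lambda x: f (lambda v: x x v)) (lambda x: f (lambda v: x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial written without naming itself; `rec` is supplied by Z
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```

Z itself is just a lambda term, so it can in turn be compiled down to S and K applications, which is exactly the amusing part described above.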