And if anyone is interested in knowing more about Factor, this tutorial is pretty good, covering everything from the basics to web development with Furnace (Factor's web framework) - https://andreaferretti.github.io/factor-tutorial/ :-)
It certainly seems like a neat language, but I could never find enough beginner material to make any headway (not sure if anything has changed in the past few years).
There is a Google Tech Talk where Slava presents Factor.
Factor is definitely very cool. Apart from being stack-based, it's also impressive for compiling to native code on Windows, Linux and OS X without going via LLVM or anything similar. It can even produce self-contained executables.
IIRC, named local variables have been available as a library feature for a long time now, so you don't have to write in a concatenative style when it's cumbersome to do so.
How is the Factor compiler different from something like SBCL? I ask because Factor has plans to support ARM/Android/iOS; how hard are those goals to achieve? It's interesting because Common Lisp (SBCL, CCL), Scheme (Chez Scheme), and Smalltalk (Pharo, Squeak) implementations also have native compilers, yet none of them supports iOS/Android. SBCL even has self-contained executables and an ARM port. What are the biggest problems keeping these projects from having iOS/Android support?
The iOS situation is difficult, because Apple generally does not allow code compiled or loaded at runtime. LispWorks pre-compiles Lisp code, and one creates an app with the help of Apple's Xcode. LispWorks still has an interpreter at runtime. Generally it is a very complete implementation of Common Lisp (minus runtime compilation), but without a Lisp-based GUI library. They also had to write a special garbage collector for 64-bit ARM iOS; it seems it was not possible to use their usual, more advanced GC, which is available for example with their 64-bit ARM Linux port of LispWorks.
That's not exactly Apple's policy: there are React Native apps in the iOS App Store, and those execute JavaScript at runtime. Apple specifically forbids applications that download and run code, or that otherwise allow code outside of the application's bundle to be run. I.e., any and all code that your application will execute has to be submitted with your app.
> On iOS, V8 cannot run because the operating system forbids just-in-time compilation; so instead of V8, we use our own port of the ChakraCore engine, on top of the integration with Node that Microsoft created in Node.js on ChakraCore. ChakraCore has a well-optimized, pure interpreter mode which complies with iOS’ restrictions.
> Apple does not allow Just-In-Time compilation on iOS (except for its own JavaScriptCore engine).
So the claim is that Apple will not allow a third-party JavaScript engine that provides a JIT, even though interpreted engines are allowed, which is what their Node.js version used.
My impression is also that this is a technical restriction.
I'll have to check out the new 0.98 release. I always enjoy playing with Factor, and I even wrote a program to munge data from a sensor many years ago for work. It's one of those languages that if it took off, I'd be happy to use it for work all the time. It is a lot of fun though!
Everything is based on composition, which makes it easy to build computations up like building blocks. Stacks are also efficient. The JVM bytecode is actually a stack language, for example.
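That building-block style is easy to sketch outside of Factor itself; here is a toy Python rendering (the word names and list-as-stack representation are my own illustration, not Factor's model):

```python
# Concatenative sketch: each "word" is a function from stack to stack,
# and a program is just the left-to-right composition of its words.

def dup(s):
    return s + [s[-1]]                 # duplicate the top of the stack

def mul(s):
    return s[:-2] + [s[-2] * s[-1]]    # pop two, push their product

def compose(*words):
    def program(stack):
        for w in words:
            stack = w(stack)
        return stack
    return program

square = compose(dup, mul)             # roughly Factor's  : square dup * ;
print(square([5]))                     # [25]
```

New words are defined purely by composing existing ones, which is the "building blocks" quality the comment describes.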
There's a newer one called Kitten that is statically typed and uses term rewriting to allow for more normal syntax when you want. It's pretty cool. Too bad none of these languages will ever take off.
> Stacks are also efficient. The JVM bytecode is actually a stack language, for example.
The advantage of stack-based instruction sets is that instructions need not say where their arguments live, making them smaller. (If you have 16 integer registers, for example, a simple "add register i to register j, store result in register k" instruction needs 12 bits just to encode i, j, and k. That's quite a lot if your instructions are 16 bits.)
The disadvantage is that you have to move data to where instructions expect to see it before you can do computations. That makes your code larger and slower.
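The 12-bit figure above checks out directly; a quick sketch of the arithmetic (the 16-register, 16-bit-instruction machine is the parent comment's example, not any particular ISA):

```python
import math

NUM_REGS = 16
# Naming one of 16 registers takes log2(16) = 4 bits.
bits_per_operand = int(math.log2(NUM_REGS))
# A three-address "add ri, rj -> rk" names three registers: 12 bits,
# leaving only 4 bits of a 16-bit instruction word for the opcode.
print(3 * bits_per_operand)  # 12
```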
> Stacks are also efficient. The JVM bytecode is actually a stack language, for example.
Being stack-based doesn't help JVM efficiency. Modern JVMs take that bytecode and immediately translate it to some other non-stack-based form and then compile it. JVM bytecode is effectively a serialization format for syntax trees. (Stacks are a nice notation for serializing a tree.)
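That last point is easy to demonstrate without assuming anything about the real JVM format: a postorder walk of an expression tree yields stack code, and a stack machine replaying that code recovers the tree's value. A small sketch:

```python
import operator

OPS = {'+': operator.add, '*': operator.mul}

def to_postfix(tree):
    """Serialize a nested-tuple expression tree to stack code (postorder)."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return to_postfix(left) + to_postfix(right) + [op]
    return [tree]  # leaf: becomes a "push literal" instruction

def run(code):
    """A stack machine evaluating the serialized tree."""
    stack = []
    for instr in code:
        if instr in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[instr](a, b))
        else:
            stack.append(instr)
    return stack[0]

tree = ('*', ('+', 1, 2), 4)       # (1 + 2) * 4
print(to_postfix(tree))            # [1, 2, '+', 4, '*']
print(run(to_postfix(tree)))       # 12
```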
And since composition is seamless and they're relatively trivial to implement, they make great glue/DSL/process-description languages. Take off as in replace C++ or Java? Probably not. I see them more as a complement to existing languages, a more convenient way to glue the pieces together. I even wrote my own [0] to see how far it's possible to push that idea in C.
Interesting. But does cixl really do refcounting on stack values? It should only be done on aggregates or objects reaching out into the heap, not on int or float primitives on the stack.
Forth, Factor, Joy, et al. provide a fundamentally better mental environment for thinking about problem solving (in code). You'd have to read "Thinking Forth" by Brodie (out of print; PDF made available here: http://thinking-forth.sourceforge.net/ ) and actually play with them for a bit to catch on though; there's no substitute, I think.
I played with Factor when it was new and never managed to advance to a stage where I wouldn't have to come back and fix stack corruption. And then again, and again, even for simple pieces of code. YMMV.
What I find a bit strange is that outside of Kitten (and Cat before it) nobody seems to have tried to make any stack-based languages with a strong typing system that prevents this. I mean, shouldn't it be trivial to follow which type of data a word expects from a stack, and which type it returns? The only limitation I can think of is words that leave a variable amount of data on the stack (one workaround would be to forbid that, or to have the type system be able to express that as well).
I suspect something like Julia's type system, with Bits types and Abstract types, would work perfectly here[0][1].
People have investigated typing for stack-based languages before; [1] is a good overview. I've been implementing a dialect of Joy in Python and it turns out that it is pretty straightforward to implement type inference and checking. [2]
However, I recently realized that I was doing waaaay too much work. I reimplemented Joy in Prolog and got typing for free. Several pages of Python became two pages of Prolog and the interpreter is more powerful, I could go on. It's really weird being so excited on the one hand and so rueful on the other. This is so cool but I wasted so much time!
Anyhow, going by what you're saying, the_grue, about passing variables in the wrong order, I think you may have been going at it in the wrong way.
Using Joy and deriving new definitions with it I've only very occasionally had bugs related to "typos" in argument order of stack items. If you start with little pieces and build up stable conceptual interfaces as you go the process seems to flow and lead to correct code. It's like playing with Legos, or deriving a mathematical theorem.
Looking forward to reading both links tonight after work!
> It's really weird being so excited on the one hand and so rueful on the other. This is so cool but I wasted so much time!
I wouldn't be surprised if the whole struggle to create a Python implementation was necessary to get a clear enough picture of how it all comes together, which is why it then was so easy to reimplement it in far fewer lines of code. That is how learning how to program works in my experience, at least. So don't be too hard on yourself there.
> I wouldn't be surprised if the whole struggle to create a Python implementation was necessary to get a clear enough picture of how it all comes together, which is why it then was so easy to reimplement it in far fewer lines of code.
There was definitely some of that, but much less than you might think. The timeframe I'm talking about is a bit longer: a friend of mine tried to interest me in Prolog twenty years ago and the penny has dropped only now. Better late than never. But I estimate I may have wasted, outright wasted, three to five man-years of work. (I spend a lot of waking hours on programming, a lot.)
The impact of the realization is so severe that I've coined a new personal rule (the first in over a decade), to wit: always use the highest-level language/system, and only "drop down" to a lesser paradigm if I absolutely have to (for efficiency or expressiveness).
(The paradigm hierarchy being: Logical/Relational >= Functional >= Imperative)
I find myself pivoting from being an expert Python programmer to a tyro Prolog programmer. It's disorienting, exciting, it makes me giddy at moments. I'm already more productive, and it's easier to write bug-free code.
In the particular case of implementing Joy in Python and in Prolog, I had previously learned about Logic Programming and Unification by studying an implementation of miniKanren in Python[1], so I knew what I was doing when implementing the type inference. There even came a moment when I realized that I should reimplement in Kanren or Prolog if I wanted to do things like propagate constraints ("value types", etc.).
Then a link here on HN to "Logic Programming and Compiler Writing" by David H. D. Warren[2] finally pushed me to do it. I was in the middle of writing a compiler (to Python/Cython) for Joy and I realized that Warren's paper showed a better way.
Reimplementing Joy in Prolog was as simple as typing in descriptions of the basic relations, which are then already executable.
Something curious happened next. I went to reimplement the type inference code and when I had finished I realized that I had just reimplemented the interpreter. In other words, in Prolog, the interpreter and the type inferencer are the same thing.
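In Prolog that coincidence falls out of the relational semantics; the flavor of it can be approximated even in Python with one stepping loop parameterized over two domains (the word set and domains below are invented for illustration, and this is only an analogy to the Prolog situation):

```python
# One interpreter loop, two domains: running a program over values
# evaluates it; running the same loop over type names infers its result
# type. The "interpreter" and the "inferencer" share all their machinery.

VALUE_DOMAIN = {'+': lambda a, b: a + b,  'lit': lambda x: x}
TYPE_DOMAIN  = {'+': lambda a, b: 'int', 'lit': lambda x: 'int'}

def run(program, domain):
    stack = []
    for instr in program:
        if instr == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(domain['+'](a, b))
        else:  # literal
            stack.append(domain['lit'](instr))
    return stack

prog = [1, 2, '+', 4, '+']
print(run(prog, VALUE_DOMAIN))  # [7]     -- interpretation
print(run(prog, TYPE_DOMAIN))   # ['int'] -- inference, same loop
```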
(Also, ten pages of Python code became one page of Prolog.)