Thanks for that pdf, reading it right now. Having a bit of Forth experience probably helps tremendously in reading it.
I noticed the 'pipeline' vs 'function call' equivalence a couple of months ago and I'm still working out all the implications of it.
It may be that stack-oriented languages and functional languages are a lot closer to each other conceptually than you'd normally think they are, and I think the pdf you linked here illustrates that nicely.
There is also some equivalence between a process pipeline and either of these two (for instance 'cat file.txt | grep x | wc').
Even though it uses a series of inter-process communications, the effect is much the same. In Factor you could code up words for 'cat', 'grep' and 'wc', and then it would probably look something like this:
Not sure if that's exactly correct, but it seems a trivial expansion of the example in the pdf; more complicated pipelines would simply require more steps, but the principle remains.
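To make the pipeline/function-call equivalence concrete, here's a minimal Python sketch. The names `cat`, `grep` and `wc` are hypothetical stand-ins for the shell tools, written as composable functions, so `cat file | grep x | wc` becomes a nested call:

```python
# Sketch of the pipeline-as-function-composition idea.
# cat, grep and wc here are invented stand-ins, not the real commands.

def cat(lines):
    # stand-in for `cat`: just yields its input lines
    for line in lines:
        yield line

def grep(pattern, lines):
    # stand-in for `grep pattern`: keeps matching lines
    for line in lines:
        if pattern in line:
            yield line

def wc(lines):
    # stand-in for `wc -l`: counts lines
    return sum(1 for _ in lines)

data = ["foo x", "bar", "baz x"]

# the pipeline `cat file | grep x | wc` as nested function calls:
print(wc(grep("x", cat(data))))  # -> 2
```

Because each stage is a generator, the data streams through one line at a time, just as it does between processes in a real pipe.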
stack-oriented languages and functional languages are a lot closer to each other conceptually than you'd normally think they are
Yes. In fact, here’s something to think about.
Imagine a dialect of lisp where every function takes a known number of arguments. You could hard-code this per function (for example, saying that + always takes exactly 2), use keyword args, make every function take a quoted list, or do it some other way.
In this lisp dialect, parentheses would be superfluous (except to specify quoted lists).
If you wrote it backwards, it would be a forth.
For example, let’s say * and + take exactly 2 args each.
(* 3 (+ 1 2)) ; lisp
* 3 + 1 2 ; trivial variation if arity is always known
2 1 + 3 * ; valid in almost every forth dialect, RPN calculators, etc.
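The arity trick is easy to demonstrate with a few lines of Python (the `ops` table and token syntax here are invented for illustration): since every operator's arity is known, one stack evaluator handles the postfix form directly, and the prefix form is evaluated simply by reading its tokens backwards.

```python
# Sketch: with fixed arity, the parenthesis-free prefix form and the
# postfix (forth/RPN) form are the same program read in opposite directions.
import operator

# each word maps to (arity, function); anything else is a number
ops = {"+": (2, operator.add), "*": (2, operator.mul)}

def eval_rpn(tokens):
    # classic forth-style evaluation with an explicit value stack
    stack = []
    for tok in tokens:
        if tok in ops:
            arity, fn = ops[tok]
            args = [stack.pop() for _ in range(arity)]
            stack.append(fn(*args[::-1]))  # restore left-to-right arg order
        else:
            stack.append(int(tok))
    return stack[0]

def eval_prefix(tokens):
    # the "lisp without parentheses": arity is known, so just run the
    # token stream backwards through the same RPN evaluator
    return eval_rpn(reversed(list(tokens)))

print(eval_prefix("* 3 + 1 2".split()))  # (* 3 (+ 1 2)) -> 9
print(eval_rpn("2 1 + 3 *".split()))     # -> 9
```

Both calls print 9, which is exactly the point: once arity is fixed, the parentheses carry no information and the two notations are mirror images.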
Likewise, it would be ugly, but we could imagine a shell-like language that used dash-flags for all args such that you never had to make a pipe explicit:
cat --file=foo sort -n head
Instead of:
cat foo | sort -n | head
If you wrote this backwards, it would basically be an ugly forth dialect with keyword args.
Basically, if you don’t use assignment, you can chain things up however you please. If your syntax chains in one direction, you’re arguably writing some kind of bastardized lisp; if it goes the other way, it’s arguably a forth dialect. This is why I capitalize Perl and Python but not forth and lisp ;).
there is one huge advantage to the 'forth' way of doing things: you can start processing left-to-right as you read in the data. This makes it possible to stick a forth 'processor' on the other end of a wire and parse/run the code as it gets pushed down the wire, without needing backing store the size of every possible input.
For a 'lisp'-like structure you'd need, at a minimum, enough storage to get all the way to the innermost expression before you can start to interpret the data.
Normally that's not much of a problem, but if you were to make a fabric consisting of little processors stuck together with comms wires, the forth way of doing it could be an advantage.
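That streaming property can be sketched in a few lines of Python (the token stream and word set are invented for illustration): the evaluator consumes tokens one at a time as they "arrive", and the only storage it ever needs is the value stack, no matter how long the program on the wire is.

```python
# Sketch: a forth-style processor can consume an unbounded postfix token
# stream, keeping only the value stack in memory (not the whole program).

def wire():
    # simulated wire: a long stream computing 0 + 1 + 2 + ... + 100000
    yield "0"
    for i in range(1, 100001):
        yield str(i)
        yield "+"

def run(stream):
    stack = []
    max_depth = 0
    for tok in stream:          # tokens are processed as they arrive
        if tok == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(int(tok))
        max_depth = max(max_depth, len(stack))
    return stack[0], max_depth

total, depth = run(wire())
print(total, depth)  # 5000050000 2 -- storage never exceeds 2 stack slots
```

A prefix/nested form of the same computation would need to buffer the whole expression before anything could be evaluated; here the stack never grows beyond two entries.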
No way. This language has been around for all this time, and it's only being submitted now? Any idea why it took so long? Are there any other submissions about Factor on HN?
Factor supports machines without SSE2 if you compile it yourself; in that case it will use x87 floating point. The binaries are built to use SSE2 for floating point, since most machines have it these days.
SSE2 is great for more than just SIMD. It gives you additional registers, allowing the compiler to generate more parallel scalar operations. Visual C++ with /arch:SSE2 will happily interleave x87 instructions with SSE2 instructions, which looks strange but works well.
You may also want to use SSE2 for saturated scalar additions or prefetches.
Very few people have x86 processors that don't support SSE2 these days. In our tests at IMVU, I think fewer than 1% of our customers' machines lacked SSE2 support, and it was only that high because of the Athlon XP.
In short: when starting a project targeting x86 today, enable SSE2 by default.
Under what circumstances does Visual C++ still emit x87 instructions? I'd think that other than legacy support, 80-bit long double arithmetic, and the 32-bit ABI (float return values are returned on the x87 stack), there's no reason to use x87.
The optimizer will choose when and how to make use of the SSE and SSE2 instructions when /arch is specified. SSE and SSE2 instructions will be used for some scalar floating-point computations, when it is determined that it is faster to use the SSE/SSE2 instructions and registers rather than the x87 floating-point register stack. As a result, your code will actually use a mixture of both x87 and SSE/SSE2 for floating-point computations. Additionally, with /arch:SSE2, SSE2 instructions can be used for some 64-bit integer operations.
In addition to using the SSE and SSE2 instructions, the compiler will also use other instructions that are present on the processor revisions that support SSE and SSE2. An example is the CMOV instruction that first appeared in the Pentium Pro revision of the Intel processors.
Probably not; it's just that floating point on x86 without SSE is ridiculously awkward, and it's becoming hard to find platforms without SSE2, so there's no sense in making an effort to support the x87 or SSE1 instruction sets.
Unfortunately, enough people have older CPUs that we still have to support x87 code generation; it's just not used in the binary packages, so you have to build from source to get it.
Factor uses ahead-of-time compilation. When you load a source file at the REPL, all definitions in it are compiled immediately, and the compiled code is saved in the image. So when you download a pre-built binary package, all the code has already been compiled and auto-detection is not an option.
There are pros and cons to compiling at load time versus run time; one disadvantage of the former is less flexibility when it comes to using CPU-specific features.
http://docs.google.com/viewer?url=http%3A%2F%2Ffactorcode.or...
Good introduction to the language for beginners.