Still reading through; it's definitely piqued my interest. However, in the interest of consistency [as it translates to readability], two issues right away:
1) Get rid of the infix modifier to the filter. The statement is made that in ANI everything flows left to right. The infix operator, while providing the familiar 1+6->7 notation, conflicts with that flow statement. 1,6+->7, while not immediately familiar, is easy to understand once explained. Adding special-case rules for infix operation, however, muddies the water.
2) In the following provided code sample, the left-to-right rule is again not followed, which leads to increased difficulty in [human] parsing:
multiPrint= [string\ s, int\ times] {
\\[std.gen] <- \times s ->std.out;
};
"Hello, World!\n", 10 ->multiPrint;
Instead, to be consistent and follow the previous left to right flow rule, use the following syntax:
multiPrint= [string\ s, int\ times] {
\times -> \\[std.gen];
s ->std.out;
};
"Hello, World!\n", 10 ->multiPrint;
In general, if there is an everything-flows-left-to-right rule, there should not be a <- operator.
Honestly, people shouldn't make claims like that. It makes me discount the language almost immediately, because I know I can't really trust what's written about it on its site.
Furthermore, there's tons and tons of parallel C code, probably more than in any other language. Almost any decent C library is reentrant. Many major C projects (I'm mostly thinking of server software and operating system kernels) are threaded.
Actually, it's entirely parallel, unless you only have one core. In a typical OS (like Linux), the scheduler runs on each core and selects which task to run next on that core.
People should also understand generalization, though.
That there's "tons and tons of parallel C code" doesn't matter.
Parallelization is built into this language. It is not built in, as a first-class language feature, in C.
A lot of C projects, such as servers, are indeed threaded, but threads are also so last century.
Different paradigms (like Erlang's) are better to parallelize, easier to write, more performant, and much, much less confusing to debug than threads. Heck, even no-side-effects functional programming is better to parallelize than C with threads.
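To make that concrete, here is a minimal sketch of the message-passing style, written in Go (which comes up later in this thread) rather than Erlang: the workers share nothing and communicate only over channels, so there is no lock ordering to reason about.

package main

import "fmt"

// square receives work over one channel and sends results over another;
// there is no shared mutable state and no explicit locking.
func square(in <-chan int, out chan<- int) {
    for n := range in {
        out <- n * n
    }
}

func main() {
    in := make(chan int)
    out := make(chan int)
    for i := 0; i < 4; i++ { // four workers, free to run on four cores
        go square(in, out)
    }
    go func() {
        for i := 1; i <= 10; i++ {
            in <- i
        }
        close(in)
    }()
    for i := 0; i < 10; i++ {
        fmt.Println(<-out)
    }
}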
Sorry, I missed it at the time. BTW, I wanted to avoid a double submit and have been looking for some "search" box without success; it is probably obvious once you know where it is. Where is it?
"Faster than C ... as most C programs being single threaded and ANI being inherently multithreaded."
In South Park terms, it is a Chewbacca argument.
Another example would be: "C is smarter than Python because most programs in C are smarter than the ones in Python." (C and Python chosen randomly, as I have no idea about Python :)
The compiled binary result of an ANIC program cannot run on Windows natively. Thus, where on other systems my dependency list is simply the native binary, on Windows, Cygwin either needs to be packaged with the release or installed manually by the user.
It's a blurry question as to where you say a VM/compatibility shim means you're no longer running the application natively. The same can be asked of Java, Flash, and a lot of interpreted/bytecode languages. The distinction to me in this case is that the code is meant to be compiled to native code (which it isn't here), and Cygwin is not a platform commonly available on a Windows system.
[Edit:] The asker of the parent question should not be downvoted; it is a reasonable question, and as I've stated, it's open to interpretation.
Distributing even a simple unmodified GPL library with your code is a huge PITA. Your app may not become GPL'd, but you still need to distribute a copy of the GPL with all its attendant "paperwork" and 3 year ftp servers and original sources, yada yada.
Cygwin can see every Windows file trivially via /cygdrive. Programs compiled in Cygwin are mostly native code that, at the bottom, relies on Windows DLLs. This is a pretty significant difference from an emulator or a virtual machine (someone mentioned Parallels below).
Frankly there isn't a language in existence that's "easy to read" and I'm kind of sick of seeing that claim. There's no way you can read anyone's code without knowing at least something about the language.
That said, I really like the ideas this language is offering and I'm definitely interested in giving it a try.
I find Applescript quite readable, especially for its purpose, but it's a pain to write, because, unless you use it often, you never know which subset of English is valid.
It's the assignment part; it says where the result should go. In this case, back to the "COBOL" variable. But, it's optional if you're assigning back to the source: ADD 1 TO COBOL means the same thing as ADD 1 TO COBOL GIVING COBOL. But, you can also do ADD 1 TO COBOL GIVING FOOBAR to assign it somewhere else.
(Yes, I just spent the last 10 minutes reading the COBOL pages on Wikipedia and Wikibooks.)
Allegedly in the 80s software vendors sometimes advertised that their products were programmable/extensible in plain English. By which they meant BASIC.
Yes, but the GP was commenting that they'd never code in that, given the "ugliness". I'd love to reply to both posts as a set, but alas, it doesn't offer that.
>I'd argue that Python is very easy to read for someone new to programming
I'd agree that programming languages do have various levels of "readability", and that Python is better than quite a few...
However I think the poster has a point.
Someone familiar with programming concepts but new to Python could probably read that... but they could just as easily read the equivalent statement in most languages.
Someone completely new, and reading only that statement?
I'm not convinced....
It seems obvious to you because you know it already.
As a newbie, I could see that as assigning an entire list to animal and then printing it... or even other things. I wouldn't be sure what the : was about.
Additionally, I think a similar for loop in any language will be basically just as readable to a complete newbie... in fact, some may find a C-style for loop more readable.
For example, do you really think a newbie will understand this:
def __gen(exp):
    for x in exp:
        yield x**2

g = __gen(iter(range(10)))
print g.next()
Sure, it's pretty easy to understand once explained ...but so are most languages. It's obviously more readable than a Perl one-liner, and Python has properties that cause it to generally be more readable than many other languages....but I think the point about programming languages overall being "not so readable" stands.
This makes sense though, as generally what people seem to mean by "readable" is "looks like English/forms a narrative"...and those often don't fit with what a computer program is (though sometimes they do).
That's not really true, since most languages are somewhat similar, with somewhat similar constructs. Anyone who can read C can read a multitude of other languages pretty well and figure out, just from the source code itself, the meaning of any unknown constructs.
I don't know... I kind of like it. It's different, especially where the '\' is concerned, but the syntax as a whole has one thing I really really really like:
Much of the language doesn't use the shift key, especially in the number row. And where it does, it's frequently on easier-to-hit keys like '<' or '{'. There are still parentheses, but they appear to be used far less frequently than in, say, C.
APL, k, and similar languages are terse. No need for loops, verbose variable/function names, etc. It's far more productive.
Plus, since all actions are based on arrays, the compiler can very easily target vector processors (including SSE) and make cache-efficient memory allocations. And since the language is small, the interpreter is usually small enough to fit in L1 cache.
Oh, they make sense, I have nothing against the language itself. It's just a special kind of ugly to anyone who's not fluent in it. I dare say it's worse than even regex.
If you can get past the syntax it's a lot nicer. They remind me of unix command-line tools, but on a much finer-grained level and without all the useless parsing and un-parsing.
I think it would make an excellent backend for a visual programming language. I've developed a simple data flow-based system that I use for home automation, and could see using something like this to replace my current "interpreter." Imagine something like Yahoo Pipes or Pure Data compiled directly into parallel code.
Granted, tasks which have a natural sequence can be a lot more difficult in dataflow-driven languages, so use whatever works for a given situation.
Actually, that is something we've discussed on the mailing list (I've been on the list since the last time this was posted to HN), and it's something I'm personally interested in building once the compiler and runtime are far enough along. Dataflow-based visual programming is something I've been very interested in for a while now and is even partially related to my startup stuff, so it's something close to my heart.
I think what you're trying to say is "it doesn't look like my favorite programming language". Never mind that it is also designed around completely different paradigms than the ones you are accustomed to.
Looks ugly and hacky. But OTOH it is mind-bogglingly fast on multicore processors (i.e., anything bigger than an iPhone). I think it's just become the basis of my next project.
It looks like make(chan int, 2) creates a channel with a maximum depth of two elements. I was curious how channels work, so I looked up the relevant part of the spec (http://golang.org/doc/go_spec.html#Channel_types). In case anyone else was wondering, specifying the length of a channel allows it to be written and read asynchronously until the channel is full. A zero-length channel will block until both a sender and receiver access the channel.
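To see the difference in action, here is a quick sketch (the values are just illustrative): a buffered channel accepts sends without a waiting receiver until it fills up, while a zero-length channel blocks each send until someone is ready to receive.

package main

import "fmt"

func main() {
    buffered := make(chan int, 2)
    buffered <- 1 // does not block: the buffer has room
    buffered <- 2 // does not block: the buffer is now full
    // a third send here would block until something reads from the channel

    unbuffered := make(chan int)     // zero-length channel
    go func() { unbuffered <- 42 }() // this send blocks until main receives
    fmt.Println(<-unbuffered, <-buffered, <-buffered)
}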
I think the concept of a data-flow programming language is very interesting and I'd love to see an active, open-source data-flow programming language like Anic (which doesn't currently exist). What makes me a bit uneasy about Anic is that it only has 3 committers, with the last update at the end of September, and no working compiler.
The developer needs two things: (a) financial support so he can work on it full time, and (b) an active community of contributors. Maybe some organization could sponsor him, or he could perhaps do a Kickstarter fundraiser or simply ask around for donations... I just hope it gets some traction and a compiler; for certain problems Anic could be a very interesting and natural approach.
>makes me a bit uneasy about Anic is that it only has 3 committers
Sadly, it's much worse than that.
Only one of those 3 committers, Adrian/Ultimus, is actually actively committing. I know this because I'm one of those three committers - I was given commit access because I helped answer questions and edit the wiki pages a little (because I knew enough about dataflow languages prior to encountering ANI that I was quickly able to understand the concepts and code). But I have yet to actually commit any code.
I was working on a simple x86 code generator (basically walk the AST, using maximal munch instruction tiling), but it's nowhere near working and I've been too horribly busy with paying projects to finish it :(
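For anyone curious what "maximal munch" tiling means (this is not ANI's actual code generator, just a rough sketch with made-up node kinds and tiles): at each AST node you greedily match the largest instruction pattern that covers it, emit that instruction, and recurse into whatever subtrees the pattern left uncovered.

package main

import "fmt"

// Node is a toy expression AST: "add" and "const" are the only kinds here.
type Node struct {
    Op          string
    Left, Right *Node
    Val         int
}

// munch emits pseudo-x86 for the subtree rooted at n and returns the
// register holding its value. Larger tiles are tried before smaller ones.
func munch(n *Node, next *int) string {
    reg := fmt.Sprintf("r%d", *next)
    *next++
    switch {
    case n.Op == "add" && n.Right != nil && n.Right.Op == "const":
        // biggest tile: add(x, const) maps to a single lea instruction
        x := munch(n.Left, next)
        fmt.Printf("  lea %s, [%s+%d]\n", reg, x, n.Right.Val)
    case n.Op == "add":
        x, y := munch(n.Left, next), munch(n.Right, next)
        fmt.Printf("  mov %s, %s\n  add %s, %s\n", reg, x, reg, y)
    case n.Op == "const":
        fmt.Printf("  mov %s, %d\n", reg, n.Val)
    }
    return reg
}

func main() {
    // (1 + 2) + 3 -- both adds match the add(x, const) tile
    tree := &Node{Op: "add",
        Left:  &Node{Op: "add", Left: &Node{Op: "const", Val: 1}, Right: &Node{Op: "const", Val: 2}},
        Right: &Node{Op: "const", Val: 3}}
    n := 0
    munch(tree, &n)
}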
I'm curious how a language with no compiler is "faster than C", since the speed of C has nothing to do with the language and everything to do with the compiler implementations, which have been so heavily worked on over the years.
I can believe it - I've made my own language and compiler whose results are faster in the niche area it targets than any C compiler I've ever seen - I'm just curious how this assertion is backed up.
Nowhere in the comments is it shown to be a false claim.
"Faster than C" is doable, but needs qualifiers, e.g.:
CUDA on modern GPU hardware is faster than desktop C.
JITed Java was shown to be faster than C in several cases, because the JIT compiler could make adaptive optimizations based on the actual runtime environment.
And any competent language that takes advantage of multiple cores easily is faster than most equivalent C programs (even most multi-threaded ones).
Those were my thoughts as well. I like the idea of a language where multithreading is the default, but all those backslashes remind me of * in C.
My suggestion is to get an editor to look at that tutorial. It's fantastic that so much effort was put into presenting the language, but reading it is rather painful.
At best it's written like an oral lecture; at worst it's wordy, condescending, overuses italics for emphasis, leads the reader by the nose for no apparent reason, and has obvious exaggerations that aren't clarified.
The grammar is all fine and so is the general structure, but I suspect a good writer could help a lot with the remaining details.
While I partially agree with what you're saying (VHDL is hard to write well; thinking in parallel is hard), I'm not sure I can completely agree. In its current textual form, you are probably right, but with a visual programming frontend, I think a dataflow language like ANI may actually be easier to program in than writing a standard sequential program in a traditional language.
Why do I think this? Because visual dataflow languages have been very successful as programming tools for non-programmers in niche areas. E.g., in music production: Puredata, MAX/MSP, Syntmaker, Reaktor; in 3D modeling (at least Blender has a dataflow-esque language for describing render passes, and I've seen other visualization/graphics programs use dataflow-like visual programming languages); in game development tools (I've seen at least three commercial engines which use some form of dataflow-esque visual programming language for describing shaders, AI, and probably other things); and the scientific community has LabVIEW. I'm sure there are others too (not exactly non-programmers, but the defense/aerospace industries have SCADE).
Of course, coming up with a visual representation and GUI that is intuitive yet sophisticated enough to write real programs in would still be a difficult task, and I certainly agree that for a textual programming language, your comment is probably correct.
Agreed. Dataflow is hard to program. A classic Verilog example:
always @(posedge clk) begin
    b <= a;
    a <= b;
end
You know how SW developers like the teaser of swapping words... This is how HW people do it (and of course, if you had used a regular assignment "=" instead of the non-blocking one "<=", the swap would fail).
And just like a year ago, there's nearly 100 comments about whether or not the syntax is ugly, whether or not you find it readable because you can't break out of your Algol mindset, whether or not anything can truly be "faster than C", whether or not C is easy to parallelize, the url spells "panic", etc etc, and yet there's only 1 or 2 people that bother to look at the commit log and browse the source tree to find out that it's total fucking vapor.
Pretty embarrassing for HN to be so reliably trolled.
I respect the author's dedication, but I think he has his priorities completely wrong (and I told him so last year): how can you commit something like "cleanup and performance boost for color-coded output" when there is not even a proof of concept that the language is implementable? He's worked a lot on the parsing and the front-end, but I wish he would get something compiled, anything, and just show that his idea can work, instead of debating things that are ridiculously minor for the moment, like syntax or an interactive environment. Chancho's post captures my thoughts exactly.
For example, why not compile or manually translate some examples to C as a prototype? What can be done in assembly can mostly be done in C as well, if you're willing to sacrifice some performance.
Well, FWIW, the binary that does exist (which, as far as I can tell, does parsing, type inference/type checking, and a bunch of semantic checking, plus walks the AST to generate code, except it doesn't actually generate code yet) is so far insanely fast, even in verbose mode (where it prints a load of shit to stdout).
As for proof of concepts, on the mailing list he has stated that ANI is more or less a natural progression from previous unreleased projects.
Having said that, I agree that focusing on getting something working before trying to make it fast is the correct way to do it...
(To make my comment clearer: there is no working compiler, no code generation, not even a proof of concept of what the compiled code would look like. It could become more interesting, but so far it's just an imaginary language making wild promises. IIRC the author also wants to code the whole stdlib in assembly for speed and because it's so 'radically different' from C...)
There are things which aren't algorithms, but which can embody the same strategy as an algorithm, like sorting networks. Instead of a series of steps, a sorting network specifies a bunch of data flows, wherein things can happen in parallel.
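For instance, here is a 4-input sorting network sketched in Go (the function names are just for illustration): each layer is a pair of compare-exchanges on disjoint wires, so the two exchanges within a layer could run at the same time.

package main

import "fmt"

// exchange is one compare-exchange element: the smaller value ends up at i.
func exchange(v []int, i, j int) {
    if v[i] > v[j] {
        v[i], v[j] = v[j], v[i]
    }
}

func sort4(v []int) {
    exchange(v, 0, 1) // layer 1: independent of the next exchange
    exchange(v, 2, 3)
    exchange(v, 0, 2) // layer 2: also independent of each other
    exchange(v, 1, 3)
    exchange(v, 1, 2) // layer 3
}

func main() {
    v := []int{3, 1, 4, 2}
    sort4(v)
    fmt.Println(v) // [1 2 3 4]
}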
Interesting, if he knows what he's doing. This doesn't fill me with hope however:
Building main executable...
src/types.cpp: In member function 'TypeStatus::operator uintptr_t() const':
src/types.cpp:1514: error: cast from 'Type*' to 'unsigned int' loses precision