Anic -- Faster than C, Safer than Java, Simpler than *sh (code.google.com)
187 points by jhrobert on Nov 12, 2010 | hide | past | favorite | 93 comments



gotta love the url http://code.google.com/p/anic/


Still reading through it; it's definitely piqued my interest. However, in the interest of consistency [as it translates to readability], two issues right away:

1) Get rid of the infix modifier to the filter. The statement is made that in ANI everything flows left to right. The infix operator, while providing the familiar 1+6->7 notation, conflicts with that flow statement. 1,6+->7, while not immediately familiar, is easy to understand once explained. Adding special-case rules for infix operations, however, muddies the water.

2) In the following code sample provided, the left-to-right rule is again not followed, which leads to increased difficulty in [human] parsing:

multiPrint= [string\ s, int\ times] {

        \\[std.gen] <- \times s ->std.out;
};

"Hello, World!\n", 10 ->multiPrint;

Instead, to be consistent and follow the previous left to right flow rule, use the following syntax:

multiPrint= [string\ s, int\ times] {

        \times -> \\[std.gen];

        s ->std.out;
};

"Hello, World!\n", 10 ->multiPrint;

In general, if there is an "everything flows left to right" rule, there should not be a <- operator.

imho


s/peeked/piqued


Thanks.


Faster than C – this is explained in the FAQ: most C programs are single-threaded, while ANI is inherently multithreaded.

ANIC was posted to HN most of a year ago: http://news.ycombinator.com/item?id=1042122 with much commentary.


People shouldn't make claims like that, honestly. It makes me discount the language almost immediately, because I know I can't really trust what's written about it on its site.

Furthermore, there's tons and tons of parallel C code, probably more than in any other language. Almost any decent C library is reentrant, and many major C projects (mostly thinking of server software and operating system kernels) are threaded.


Hell, many operating systems' thread schedulers are written in C..


Ironically, the task of scheduling threads isn't one that is particularly parallel, but I think the point stands.


Actually, it's entirely parallel, unless you only have one core. In a typical OS (like Linux), the scheduler runs on each core and selects which task to run next on that core.


People should also understand generalization, though.

That there's "tons and tons of parallel C code" doesn't matter.

Parallelism is built into this language. It is not built in, as a first-class language feature, in C.

A lot of C projects, such as servers, are indeed threaded, but threads are also so last century.

Different paradigms (like Erlang's) are easier to parallelize, easier to write, more performant, and much, much less confusing to debug than threads. Heck, even side-effect-free functional programming is easier to parallelize than C with threads.
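
For instance (a rough sketch in Go rather than Erlang, since that's what I have handy; the point is the paradigm, not the language): workers own their data and communicate by messages instead of locking shared memory:

  package main

  import "fmt"

  // each worker owns its input and reports its result as a message;
  // no shared memory, no locks
  func square(n int, out chan<- int) {
      out <- n * n
  }

  func main() {
      out := make(chan int)
      for i := 1; i <= 4; i++ {
          go square(i, out) // one lightweight task per item
      }
      sum := 0
      for i := 0; i < 4; i++ {
          sum += <-out // collect results in whatever order they arrive
      }
      fmt.Println(sum) // 30
  }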


Sorry, I missed it at the time. BTW, I wanted to avoid a double submit, and I looked for a "search" box without success. It's probably obvious once you know where it is; where is it?




Don't worry about it. Obviously generated some fresh interest today.


Look at the links in the page footer (Lists, RSS, Search, ...).


Thanks, it's obvious now! ;)


"Faster than C ... as most C programs being single threaded and ANI being inherently multithreaded."

In South Park terms, it's a Chewbacca argument. Another example would be: "C is smarter than Python because most programs in C are smarter than ones in Python." (C and Python chosen randomly, as I have no idea about Python :)


Maybe it's just me, but I don't believe you can claim Windows support by Cygwin proxy.


Why not?


The compiled binary of an ANIC program cannot run on Windows natively. Thus, where on other systems my dependency list is simply the native binary, on Windows Cygwin either needs to be packaged with the release or installed manually by the user.

It's a blurry question as to when a VM/compatibility shim means you're no longer running the application natively. The same can be asked of Java, Flash, and a lot of interrupted/bytecode languages. The distinction to me in this case is that the code is meant to be compiled to native code (which it isn't here), and Cygwin is not a common platform available on a Windows system.

[Edit:] The asker of the parent question should not be downvoted; it's a reasonable question, and as I've stated, it's open to interruption.


What about .NET? By your logic, applications written for the .NET CLR don't support Windows (XP) either.


Well, they don't. That XP users are likely to have something installed that has dragged in .NET doesn't change that fact.


Perhaps you meant interpreted/interpretation instead of interrupted/interruption?


Cygwin is GPL.

Distributing even a simple unmodified GPL library with your code is a huge PITA. Your app may not become GPL'd, but you still need to distribute a copy of the GPL with all its attendant "paperwork", three-year FTP servers, original sources, yada yada.


That's like saying a windows app supports OSX by installing a Windows XP VM in Parallels.


No it's not. Virtualization and a compatibility DLL are orders of magnitude apart, both conceptually and in terms of performance.


So every Windows-only company can claim full Linux support... via Wine.


Unfortunately not. Wine isn't that good yet; e.g., .NET stuff often doesn't run.


Cygwin can see every Windows file trivially via /cygdrive. Programs compiled in Cygwin are mostly native code that, at the bottom, relies on Windows DLLs. This is a pretty significant difference from an emulator or a virtual machine (someone mentioned Parallels below).


wine is not an emulator.


Am I the only one thinking this code looks ugly and hacky? I would never code in this.


Frankly there isn't a language in existence that's "easy to read" and I'm kind of sick of seeing that claim. There's no way you can read anyone's code without knowing at least something about the language.

That said, I really like the ideas this language is offering and I'm definitely interested in giving it a try.


I'd argue that Python is very easy to read for someone new to programming, particularly someone familiar with mathematical notation. Something like:

    for animal in ['dog', 'cat', 'bear']:
        print animal
strikes me as intuitive and obvious, both absolutely and also relative to the equivalents in other languages.


That is not necessarily a sign of a good language, though.

ADD 1 TO COBOL GIVING COBOL

is also readable, and yet (thankfully!) we've mostly abandoned COBOL.

And no, I'm not saying Python is bad, or like COBOL - just that readability and quality of a language are not necessarily related.


I don't think that "englishy" (COBOL, applescript) code is readable, and I bet many programmers agree.

Python is readable because its syntax is very close to many people's version of pseudocode. This is why non-Python programmers find it easy to read.


> Python is readable because its syntax is very close to many people's version of pseudocode

Or at least that is what Python developers seem to believe


I find Applescript quite readable, especially for its purpose, but it's a pain to write, because, unless you use it often, you never know which subset of English is valid.


I don't know that I agree; I'm really not sure what "GIVING COBOL" means.


It's the assignment part; it says where the result should go. In this case, back to the "COBOL" variable. But, it's optional if you're assigning back to the source: ADD 1 TO COBOL means the same thing as ADD 1 TO COBOL GIVING COBOL. But, you can also do ADD 1 TO COBOL GIVING FOOBAR to assign it somewhere else.

(Yes, I just spent the last 10 minutes reading the COBOL pages on Wikipedia and Wikibooks.)


It's a COBOL joke. (And a bad one at that)

ADD 1 TO X GIVING Y is COBOL's way of saying y=x+1. ADD 1 TO COBOL GIVING COBOL is COBOL=COBOL+1. Or COBOL++ :)


Really? I've never coded in COBOL and it was pretty obvious straight away that it was an assignment.


Allegedly in the 80s software vendors sometimes advertised that their products were programmable/extensible in plain English. By which they meant BASIC.


I wasn't responding to a comment on language quality, I was responding to a comment on language readability.


Yes, but the GP was commenting that they'd never code in that, given the "ugliness". I'd love to reply to both posts as a set, but alas, it doesn't offer that.


>I'd argue that Python is very easy to read for someone new to programming

I'd agree that programming languages do have various levels of "readability", and that Python is better than quite a few...

However I think the poster has a point.

Someone familiar with programming concepts and new to Python could probably read that... but so could they read the equivalent statement in most languages.

Someone completely new, and reading only that statement?

I'm not convinced....

It seems obvious to you because you know it already.

As a newbie, I could see that as assigning an entire list to animal and then printing it... or even other things. I wouldn't be sure what the : was about.

Additionally, I think a similar for loop in any language will be basically just as readable to a complete newbie... in fact, some may find a C-style for loop more readable.

For example, do you really think a newbie will understand this:

    def __gen(exp):
        for x in exp:
            yield x**2
    g = __gen(iter(range(10)))
    print g.next()
Sure, it's pretty easy to understand once explained... but so are most languages. It's obviously more readable than a Perl one-liner, and Python has properties that make it generally more readable than many other languages... but I think the point about programming languages overall being "not so readable" stands.

This makes sense, though, as what people generally seem to mean by "readable" is "looks like English/forms a narrative"... and that often doesn't fit what a computer program is (though sometimes it does).


That's not really true, since most languages are somewhat similar, with somewhat similar constructs. Anyone who can read C can read a multitude of other languages pretty well and figure out, just from the source code itself, the meaning of any unknown constructs.


I don't know... I kind of like it. It's different, especially where the '\' is concerned, but the syntax as a whole has one thing I really really really like:

Much of the language doesn't use the shift key, especially in the number row. And where it does, it's frequently on easier-to-hit keys like '<' or '{'. There are still parentheses, but they appear to be used far less frequently than in, say, C.

Besides. You want ugly? Try K: http://en.wikipedia.org/wiki/K_(programming_language) . Or, heck, anything APL influenced.


APL, K, and similar languages are terse: no need for loops, verbose variable/function names, etc. It's far more productive.

Plus, since all operations work on arrays, the compiler can very easily target vector processors (including SSE) and make cache-efficient memory allocations. And since the language is small, the interpreter is usually small enough to fit in L1 cache.


Oh, they make sense, I have nothing against the language itself. It's just a special kind of ugly to anyone who's not fluent in it. I dare say it's worse than even regex.


If you can get past the syntax it's a lot nicer. They remind me of unix command-line tools, but on a much finer-grained level and without all the useless parsing and un-parsing.


I think it would make an excellent backend for a visual programming language. I've developed a simple data flow-based system that I use for home automation, and could see using something like this to replace my current "interpreter." Imagine something like Yahoo Pipes or Pure Data compiled directly into parallel code.

Granted, tasks which have a natural sequence can be a lot more difficult in dataflow-driven languages, so use whatever works for a given situation.


Actually, that is something we've discussed on the mailing list (I've been on the list since the last time this was posted to HN), and it's something I'm personally interested in building once the compiler and runtime are far enough along. Dataflow-based visual programming is something I've been very interested in for a while now, and it's even partially related to my startup stuff, so it's close to my heart.


I've been writing a lot of Unix system tools that deal with streams in Node.js. This might replace some of them.


I think what you're trying to say is "it doesn't look like my favorite programming language". Never mind that it is also designed around completely different paradigms from those you are accustomed to.


Classic Blub programmer reaction; extra bonus for dismissing a language just because of unfamiliarity/dislike of the syntax.


It's not uglier than Perl.

Can't be.


Looks ugly and hacky. But OTOH it is mind-bogglingly fast on multicore processors (i.e., anything bigger than an iPhone). I think it's just become the basis of my next project.


The latch concept is probably worth exploring further. Does anyone know of a language where native variables are FIFO pipes? I.e.

  x = 1   // x now contains 1
  x = 2   // x now contains 1 and 2
  y = x   // x now contains 2, y now contains 1
  ...
This sort of thing. Not as an add-on construct, but as a native part of the language.


Go channels are like that (I'm a little rusty, but it works pretty much like this):

  c := make(chan int, 2)
  c <- 1   // c contains 1
  c <- 2   // c contains 1 followed by 2
  y := <-c // y == 1


It looks like make(chan int, 2) creates a channel with a maximum depth of two elements. I was curious how channels work, so I looked up the relevant part of the spec (http://golang.org/doc/go_spec.html#Channel_types). In case anyone else was wondering, specifying the length of a channel allows it to be written and read asynchronously until the channel is full. A zero-length channel will block until both a sender and receiver access the channel.
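
A minimal sketch of the difference (my own example, not taken from the spec):

  package main

  import "fmt"

  func main() {
      c := make(chan int, 2) // buffered: two sends succeed with no receiver yet
      c <- 1
      c <- 2
      // a third send, c <- 3, would block here until something receives

      u := make(chan int)     // zero-length: a send blocks until a receive is ready
      go func() { u <- 42 }() // so the send has to come from another goroutine
      fmt.Println(<-u, <-c, <-c) // prints: 42 1 2
  }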


Not "native variables", but any concurrency package built on CSP does this. For languages where it is more built in, consider Erlang or Occam.


VHDL, a hardware description language, can be used in the dataflow style and models the parallelism found in hardware.


Easy enough with Perl arrays using push and shift.


With the added bonus that Perl makes it obvious you're talking about an array.


Go channels?



I think the concept of a data-flow programming language is very interesting, and I'd love to see an active, open-source data-flow programming language like Anic (which doesn't currently exist). What makes me a bit uneasy about Anic is that it only has 3 committers, with the last update at the end of September, and no working compiler.

The developer needs two things: (a) financial support so he can work on it full time, and (b) an active community of contributors. Maybe some organization could sponsor him, or perhaps he could run a Kickstarter fundraiser or simply ask around for donations... I just hope it gets some traction and a compiler; for certain problems Anic could be a very interesting and natural approach.

PS: The first response in this thread contains a few well-articulated reasons for being interested in Anic: http://groups.google.com/group/ani-compiler/browse_thread/th...


makes me a bit uneasy about Anic is that it only has 3 committers

Sadly, it's much worse than that.

Only one of those 3 committers, Adrian/Ultimus, is actually actively committing. I know this because I'm one of the three: I was given commit access because I helped answer questions and edit the wiki pages a little (I knew enough about dataflow languages prior to encountering ANI that I was quickly able to understand the concepts and code). But I have yet to actually commit any code.

I was working on a simple x86 code generator (basically walking the AST, using maximal-munch instruction tiling), but it's not near working and I've been too horribly busy with paying projects to finish it :(


I'm curious how a language with no compiler is "faster than C", since the speed of C has nothing to do with the language and everything to do with the compiler implementations, which have been so heavily worked on over the years.

I can believe it - I've made my own language and compiler whose output is faster, in the niche area it targets, than that of any C compiler I've ever seen - I'm just curious how this assertion is backed up.


nm, I see in the comments... so it's a false claim then. nice. :/


Nowhere in the comments is it shown to be a false claim.

"Faster than C" is doable, but needs qualifiers, e.g:

CUDA on modern GPU hardware is faster than desktop C.

JITed Java has been shown to be faster than C in several cases, because the JIT compiler can make adaptive optimizations based on the actual runtime environment.

And any competent language that takes advantage of multiple cores easily beats most equivalent C programs (even most multi-threaded ones).


That is the fugliest thing I've ever seen.

Looks like it's well written, planned and engineered tho.


Those were my thoughts as well. I like the idea of a language where multithreading is the default, but all those backslashes remind me of * in C.


Yeah, it could do with different characters for its syntax.


My suggestion is to get an editor to look at that tutorial. It's fantastic that so much effort was put into presenting the language, but reading it is rather painful.

At best it's written like an oral lecture; at worst it's wordy, condescending, overuses italics for emphasis, leads the reader by the nose for no apparent reason, and makes obvious exaggerations that aren't clarified.

The grammar is all fine and so is the general structure, but I suspect a good writer could help a lot with the remaining details.


Looks like a computationally executed VHDL.

Interesting, but not necessarily useful. And seeing how hard it is to "write" good VHDL logic, I'm not sure how many could handle coding in ANI.


While I partially agree with what you're saying (VHDL is hard to write well; thinking in parallel is hard), I'm not sure I can completely agree. In its current textual form, you are probably right, but with a visual programming frontend, I think a dataflow language like ANI may actually be easier to program in than a traditional sequential language.

Why do I think this? Because visual dataflow languages have been very successful as programming tools for non-programmers in niche areas. E.g., in music production: Pure Data, MAX/MSP, SynthMaker, Reaktor. In 3D modeling, at least Blender has a dataflow-esque language for describing render passes, and I've seen other visualization/graphics programs use dataflow-like visual programming languages. In game development tools, I've seen at least three commercial engines which use some form of dataflow-esque visual programming language for describing shaders, AI, and probably other things. The scientific community has LabVIEW. I'm sure there are others too (not exactly non-programmers, but the defense/aerospace industries have SCADE).

Of course, coming up with a visual representation and GUI that is intuitive yet sophisticated enough to write real programs in would still be a difficult task, and I certainly agree that for a textual programming language, your comment is probably correct.


Agreed. Dataflow is hard to program. A classic Verilog example:

  always @(posedge clk) begin
    b <= a;
    a <= b;
  end
You know how SW developers like the teaser of swapping two values... This is how HW people do it (and of course, if you had used a regular blocking assignment "=" instead of the non-blocking "<=", the swap would fail).


AFAIK, this thing doesn't work any more than it did a year ago...


And just like a year ago, there are nearly 100 comments about whether or not the syntax is ugly, whether you find it readable because you can't break out of your Algol mindset, whether anything can truly be "faster than C", whether C is easy to parallelize, how the URL spells "panic", etc. etc., and yet only 1 or 2 people have bothered to look at the commit log and browse the source tree to find out that it's total fucking vapor.

Pretty embarrassing for HN to be so reliably trolled.


The commit log shows plenty of activity: http://code.google.com/p/anic/source/list


I respect the author's dedication, but I think he has his priorities completely wrong (and I told him so last year): how can you commit something like "cleanup and performance boost for color-coded output" when there is not even a proof of concept that the language is implementable? He's worked a lot on the parsing and the front end, but I wish he would get something compiled, anything, just to show that his idea can work, instead of debating things that are ridiculously minor for the moment, like syntax or an interactive environment. Chancho's post is exactly my thoughts.

For example, why not compile, or even manually translate, some examples to C as a prototype? What can be done in assembly can mostly be done in C as well, if you're willing to sacrifice some performance.
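
To show the kind of hand translation I mean (sketched in Go here rather than C, purely for brevity; this is just my reading of what the tutorial's multiPrint does, not anything the project ships):

  package main

  import "fmt"

  // one plausible reading of the multiPrint example: emit the string
  // once per generated "tick", here collapsed into a plain loop
  func multiPrint(s string, times int) {
      for i := 0; i < times; i++ {
          fmt.Print(s)
      }
  }

  func main() {
      multiPrint("Hello, World!\n", 10)
  }

Even a handful of translations like this, done mechanically, would demonstrate that the semantics are implementable.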


Well, FWIW, the binary that does exist (which does, as far as I can tell, parsing, type inference/type checking, and a bunch of semantic checking, plus walking the AST to generate code - except it doesn't actually generate code yet) is so far insanely fast, even in verbose mode (where it prints a load of shit to stdout).

As for proof of concepts, on the mailing list he has stated that ANI is more or less a natural progression from previous unreleased projects.

Having said that, I agree that focusing on getting something working before trying to make it fast is the correct way to do it...


(To make my comment clearer: there is no working compiler, no code generation, not even a proof of concept of what the compiled code would look like. It could become more interesting, but so far it's just an imaginary language making wild promises. IIRC, the author also wants to code the whole stdlib in assembly, for speed and because it's so 'radically different' from C...)


It looks tantalizing, and I love shell scripts, and I've spent countless hours wishing for pipes in my programming language.

But: when it says, "ANI is designed to abstract away from the idea of an "algorithm" altogether," I go, "quackery."


There are things which aren't algorithms but which can embody the same strategy as an algorithm, like sorting networks. Instead of a series of steps, a sorting network specifies a bunch of data flows, wherein things can happen in parallel.
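
E.g., a 4-input sorting network is just a fixed wiring of compare-exchange boxes (a sketch in Go, written sequentially here for clarity; the independent comparisons are where the parallelism lives):

  package main

  import "fmt"

  // compare-exchange: put a pair of wires in order
  func cmpSwap(a []int, i, j int) {
      if a[i] > a[j] {
          a[i], a[j] = a[j], a[i]
      }
  }

  func main() {
      a := []int{3, 1, 4, 2}
      cmpSwap(a, 0, 1) // these two comparisons are independent...
      cmpSwap(a, 2, 3) // ...and could run in parallel
      cmpSwap(a, 0, 2)
      cmpSwap(a, 1, 3)
      cmpSwap(a, 1, 2)
      fmt.Println(a) // [1 2 3 4]
  }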


At first glance I always think it's "simpler than shit.", but then I read the article, and no, for my poor brain it is not simpler than shit.


This latching and piping seems intuitive if you have done some monadic programming in Haskell.


I'm not really sure of the accuracy of any of its claims, but it certainly looks interesting.


what problem does this solve?


The syntax hurts my eyes.


Interesting, if he knows what he's doing. This doesn't fill me with hope, however:

  Building main executable...
  src/types.cpp: In member function 'TypeStatus::operator uintptr_t() const':
  src/types.cpp:1514: error: cast from 'Type*' to 'unsigned int' loses precision


It's easily fixed though: either compile on a 32-bit machine, or change that line to cast to uintptr_t instead of unsigned int.



