I was surprised by my almost-panicky reaction to seeing:
Identifiers Can Have Blanks
open_window_with_attributes(...)
becomes:
open window with attributes (...)
I think I actually felt that wrongness in my stomach. Like a more intense version of seeing our corporate network shared drive's files with spaces and parens in them.
I had a similar reaction, and I'm not sure that it's a "damn kids, get off my lawn" reaction. Specifying an unambiguous grammar may be difficult - which implies parsing may become a problem.
An implementation exists, so the author has something working, but I'm wondering how robust the parsing is. I haven't seen many code examples (only short fragments on the page), so I don't know what potential issues, if any, there are. But this is the sort of thing that could significantly complicate adding new language features that require additional syntax.
edit: I'm perusing the source for the compiler, which is of course written in Zinc. This code from the main driver of the compiler perhaps gives a better feel for how it may look in practice:
while i < argc
    def arg = argv[i]
    if is equal (arg, "-debug")
        debug = true
    elsif is equal (arg, "-v")
        version = true
    elsif is equal (arg, "-u")
        unicode = true
    elsif is equal (arg, "-o") && i < argc-1
        out filename = new string (bundle, argv[++i])
        to OS name (out filename)
    elsif is equal (arg, "-I") && i < argc-1
        append (include path, new string (bundle, argv[++i]))
    else
        filename = new string (bundle, arg)
    end
    ++i
end
From an aesthetic point of view, it doesn't look that bad. In this example, I think "is equal", "out filename", "to OS name" and "include path" are all identifiers. But I'm still wondering what kinds of parsing and lexing issues may arise.
I already have a hard time parsing this code. The main problem I see is that to read the code I have to know every single keyword in the language.
For example I was wondering if "new" is a keyword. If it is, then "new string ()" might be something interesting, otherwise it's just a function call.
Similarly this raises a question of whether I can write the following code:
if end of line (str)
This might or might not be permitted because "end" is a keyword. If it is permitted, then the result looks pretty damn ambiguous to me. If it's not, then I have to name my identifier differently.
I thought of the keyword-recognition issue too, but then I dismissed it: syntax highlighting makes it a non-issue. The ambiguity of using keywords in identifiers is a valid concern, though.
Your complaint is that you can't parse the code with 100% certainty without some basic knowledge of the language? That hardly seems like a complaint at all. The same is true of any language that isn't explicitly identical to one you already know.
No, his complaint is that you can't parse the code without knowing all the keywords in the language.
He wrote every single keyword == all the keywords.
That's not basic knowledge of a language.
Maybe I'm odd, but when I start on a new language I don't learn all the syntax first; I usually start mucking around with variable declarations, iterators, simple stuff like that, just to get a feel for it. I'm guessing it's not that odd, as most tutorials also follow that approach.
I don't see how what you said disagrees with what I said aside from a minor semantic quibble. You can't parse any other language with certainty without knowing all the keywords either. For example, in Ruby, you might see the identifier "continue" by itself. Is this a method call, variable access or keyword? What about "private"? No way to know if you don't know all the keywords. This is precisely cognate with nene's objection that you can't tell whether "new" is a keyword in Zinc if you don't know the language's keywords.
You can form a rough guess of what the various tokens are in a snippet of Ruby code without knowing all the keywords, and you can do the same with Zinc code. Any additional difficulty is most likely because you're less familiar with Zinc, not because it has the nearly universal property of needing to know the full grammar to correctly parse arbitrary programs.
And I agree, playing around with a language is a great way to learn. But if you play around without reading about the things you're doing first, you should expect not to always know what you're doing. That's a huge part of the learning process.
While I didn't panic, I find myself having quite a negative reaction to a language in which "Identifiers can have blanks" is listed among the main features.
EDIT: Also, I see quite the opportunity for wrong parsing, not on the machine side but on the human side. Blanks already have a function in other programming languages: they are there to separate symbols. By giving them this double meaning, you bring context into the parsing of any piece of code, which I think could be a pretty painful exercise.
Other version: don't design a language feature because it makes code easier to type if it doesn't also make it easier to read.
(I know the author thinks it's easier to read, but I'm not yet convinced of that.)
I don't see why parsing would be a problem; identifiers (and their pieces) always start with a letter (so no "var 1"), are alphanumeric, and cannot be reserved words.
Meaning, parse word by word until you hit a key word or a significant character (,:". etc). You can't have "varb function(arg)" or its equivalent in any language I know, because it doesn't make sense - there's no operation on the varb, it's just "there". Similarly, "x y z = q r t" is unambiguous, because there's no stop to parsing either "x y z" or "q r t".
I think I'd like it. Hitting shift all the time, or reaching for "_" is a PITA and significantly slows my typing. It's especially annoying when you realize that identifiers with blanks could be leveraged into most languages with almost zero change to the parser, as long as it requires an end-of-statement terminator or ends on newlines.
> Meaning, parse word by word until you hit a key word or a significant character (,:". etc).
If keywords are allowable in identifiers (such as "end of file"), then your algorithm is not sophisticated enough. When you encounter a token that is the same token as a keyword, you need to use context to determine if it is actually a keyword or part of an identifier.
This may be a serious problem if the grammar has "<identifier> <keyword>" in it. That is, "X keyword" could be the identifier "X keyword" or it could be the identifier "X" followed by "keyword." There's a reason that most programming languages require that identifiers are a single token.
> When you encounter a token that is the same token as a keyword, you need to use context to determine if it is actually a keyword or part of an identifier.
You're presuming here that a space delimits tokens. In this language, that may not be the case. The lexer may create a single token from "a b c".
Big "if" (why shouldn't it disallow them?), and completely resolved by modifying your naming scheme in those situations: EndOfFile is unambiguous, as is end_of_file, ifSuccess, etc.
It's unusual as most programming languages allow keywords to appear in identifiers (for example, new_thing is a legal C++ identifier). Further, if I understand the language correctly, the literal "end_of_file" becomes the same identifier as "end of file". And the stated purpose of allowing white space in identifiers is to avoid camel case and underscores.
I don't think that's the case. I think the example was just to show how you can write with spaces instead of underscores. I could be wrong though, I haven't tried the language.
The documentation doesn't state one way or the other, but it does include underscores as part of identifiers, and doesn't mention any stripping. Only that spaces are ignored entirely.
That strikes me as giving the lie to the "Ruby-like syntax" claim; ask a Ruby programmer what that line means and you will not get the correct answer for Zinc.
Actually the connection with Ruby is tenuous anyhow; Ruby and assembler just don't go together. An assembler should produce a very clear one-to-one correspondence of instruction to machine language opcode, pretty much by definition. A high-level language can turn a simple statement into arbitrarily-complicated run-time code, pretty much by definition. Neither of these are criticisms by any means, it's just what they are. There isn't much syntax cross-talk to be had there.
I said "run-time code", not bytecodes. I'm talking about what actually executes. I've seen "bytecodes" that qualify as high-level languages by this standard, like CPython bytecode. Is that even so surprising? Single bytecodes for OO languages can translate to a lot of work to resolve.
And, what about Forth? It's a fairly low-level language by this standard. It has convenient ways to link together a lot of little functions, but one word does not dispatch on types and expand operator overloading and do the other things that can result in one line of C++ producing half a kilobyte of code, to say nothing of the functions that half-a-kilobyte may be invoking. Nor do I see why you think that's related to the syntax point.
I really have no idea what points you or your upmodders think you've won.
Actually, bytecodes for languages like Smalltalk can get you to the point of controlling all of your runtime state down to the level of bits. (Squeak actually runs bit-identical on something like 50 environments!)
As for precisely what runtime instructions are executed, most of the time, we can consider this to be an implementation detail. In the case of superscalar processors, you can't necessarily tell me what order your assembly language instructions are executed.
> And, what about Forth? It's a fairly low-level language by this standard.
It bridges the gap between high-level and low level. It's a clear piece of evidence that there isn't such a huge gulf as you claim.
> but one word does not dispatch on types and expand operator overloading and do the other things that can result in one line of C++ producing half a kilobyte of code
There are high level languages that don't do this either. Actually, I know of a specialized declarative Smalltalk that has gotten the entire image down to 45k. A Smalltalk VM is basically little more than a 256 branch switch statement, plus message dispatch, plus GC.
The gulf isn't nearly as large as you imagine. Rather, there are a number of "high level" languages that are actually pretty minimal.
You've really missed my point. I pretty much defined high level and low level by how they expand out from simple instructions. You can't cite examples to prove that wrong; by that definition you've simply classified your examples wrong. That this is not a universal definition doesn't bother me one little bit, because there is no universal definition of any non-trivial software engineering term.
If you want to redefine a commonly used term, you're doing something wrong. What you need to do is define a new term. Replace "High Level Languages" with "Abstract Languages" and nobody would have a problem with what you said. It might not mean anything, but at least it's clear. However, when you redefine an existing term, you can be wrong, and people will call you on it.
Although I agree that it isn't really possible to have a "Ruby-like" syntax for a low level language since a lot of the syntax depends on Ruby being dynamic, it still seems like valid ambition, as long as you know that limitation.
I would love to have a form of C / C++ with iterators and blocks and without all the curly braces and assorted cruft, like 5 different ASCII symbols being used in 20 different contexts (actually, Ruby does that too; when will language designers start using a few additional symbols to reduce cognitive load?).
"I did this because I hate uppercase characters in the middle of identifiers and I'm too lazy to type shift to get the '_'. In addition, I find it more readable."
Haha! I'm on Dvorak too and didn't understand at all why JohnnyCache thought hitting "-" was so much harder than Space! I guess I've been completely converted for too long.
As much as your criticisms may be valid, I think he has given sufficient justification:
"I did this because I hate uppercase characters in the middle of identifiers and I'm too lazy to type shift to get the '_'. In addition, I find it more readable."
This kind of "Because I said so" reasoning is valid in pretty much any hobbyist-type situation as far as I'm concerned. If you don't like it, fork it.
The Zinc paper is one of my all-time favorite implementation papers, right up there with Dybvig's thesis, the Rabbit and Orbit papers on Scheme, Reppy's thesis on Concurrent ML, and SPJ's Tagless paper.
The Zinc Experiment is Leroy at his best: compiler-hacking lore meets programming language research (no hand-waving past performance issues, and a critical eye towards foundations).
What is wrong with 64-bit integers? Maybe they've been indicted for war crimes or something, given the number of languages that appear and don't support them. And what about interfacing with C? I can count the languages on one hand that have a simple and efficient C interface! (I have a list of other things almost always ignored by languages for no good reason... efficiency, friendly license, lack of macros or ability to extend the language...)
I will try to ignore the shallow (but horrifying) issue of identifiers including spaces.
The real question to ask here is: what is wrong with the current portable assembler, C? C has occupied this niche for a long time, and quite successfully - I believe all current mainstream kernels are written in C (or possibly a limited subset of C++).
If you want a 'portable assembler', a modern C compiler is, in my opinion, a good choice:
- a solid specification: detailing the behaviour of operations, what is defined, implementation, or undefined behaviour.
- access to platform specific features through builtins and intrinsics
- ability to use inline asm if you really want to (or need to) - a small sketch of this and the builtins point follows the list
- easy integration with existing libraries
- minimal dependencies on a runtime library (pretty much none in freestanding implementations)
- most compilers have ways to give good control of both what code is generated and structure layout.
The modern C ecosystem also provides (mostly good) tools around the language.
Admittedly, most of these tools don't depend on the code being written in C, but I suspect any new language would take a while to get properly integrated. If you want to use a low level language, you really want to have access to these tools or equivalent.
A new language trying to compete in this space would have to offer something fairly substantial to get me to switch - and a strange syntax like Zinc's is not going to help. From the documentation at least, Zinc currently seems to be missing: an equivalent to volatile; asm; any way to access a CAS-like instruction; 64-bit types; floats; a way to interface to C code; and clear documentation about behaviour in corner cases (what happens if you left-shift a 32-bit value by 40?). The only thing it seems to bring to the table to compensate is the ability to inherit structures.
I agree with you. I just wanted to list the one complaint I do have about C: missed optimization opportunities due to lax aliasing rules.
Consider the following C translation unit:
void foo(const int *i);
void bar();

int baz() {
    int i = 1;
    foo(&i);
    return i + 1;
}

int quux() {
    int i;
    foo(&i);
    i = 1;
    bar();
    return i + 1;
}
You'd like to think that both baz() and quux() could compile the return statements to a constant "return 2." After all, foo() is taking a pointer to a CONST int. But alas, this is not the case, because foo() could cast away the const. So in truth, both functions are forced to reload the integer from the stack, add 1 to it, and then return that! You can't use any values you had loaded in registers (or in this case, you can't evaluate the expression at compile time).
My example is contrived, but you can easily construct examples that fit the same pattern and are real.
I've heard that Fortran still beats C in optimization in some cases; I would expect that the above is one major reason why. C99's "restrict" addresses some of the difference but cannot help you with the above.
The main problems with C are inability to control memory layout in fine detail, and lack of control over the calling sequence - you can't portably get a tail call. Have a look at the C-- work by Simon Peyton Jones and Norman Ramsey and others for more details.
I guarantee that I would confuse the types "byte" (uint8_t) and "octet" (int8_t). The typical distinction between a byte and an octet has to do with the number of bits in the representation (a byte usually has 8, an octet always has 8). I don't know of any convention for bytes being unsigned and octets being signed.
You're right that with "byte" there isn't an official size specification, although the de facto size is 8 bits, unlike with "octet", which was specifically defined as 8 bits (for interoperability between different systems).
Regarding the question of signed/unsigned - I'll try to explain:
byte - unsigned
On page 37 of the C99 standard: "A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to 2^CHAR_BIT - 1"
i.e. according to the C99 standard, a byte is unsigned.
octet - signed
Think of an octet in two ways: the concept of something that is exactly 8-bits on the one hand, and on the other hand, the technical representation of this concept.
When you read the literature you'll notice that an octet refers simply to the size of something (8 bits), not its signedness.
For example, octets arguably arose in the networking world, and NDR (Network Data Representation) refers to octets in a sign-neutral way.
On page 256 of the C99 standard: "The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's-complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits."
Now, how would you go about representing the concept of an "octet" (which is sign-neutral)? If you use an unsigned 8-bit integer, you can't represent the sign of the (conceptual) octet, while a signed 8-bit type can.
I guess I'm old.