What is "uniquely American" about the word apple? Both the word and fruit come from elsewhere and still abound in those places: the word came into use in English over a thousand years ago, and the first apple tree to grow in the Americas would have been planted by a European settler some centuries later. There are cultural associations with apples in America, but do these attend to the word or the culture? And are they necessary for an understanding of the word 'apple' - whether by someone learning English as a second language, or as their first but in another part of the world?
Perhaps you mean that 'apple' has a different meaning in the United States than, say, in Wales, because its web of implications looks different in one place than another. Following that thought, though, the same would be true of two American speakers, who surely have their own idiosyncratic webs. It's an interesting idea. But are words not 'synonyms' that have the same referent, only because two speakers have different relationships to that referent? Is a word partly its evocation? Or can we look at its evocation separately from a stricter 'meaning' it shares between speakers? (Surely it shares something, or language would lose its point.)
Incidentally, synonym is not the word to be nullifying here. A synonym is a like word, something that may equal the original but usually differs in degree, amount, tone, allusion, or other effect. Anyone using a thesaurus without a dictionary is sure to embarrass themselves sooner or later: differences in meaning between like words are common, and it is no revelation to say that one speaker will have different associations with a word than another, particularly if they come from different cultures.
Absolutely. Red and blue states. The east and west coast. Socioeconomic class. Gender, race, age, education. You and I.
But those are the differences. What we have in common also pertains. Almost everyone is American, exposed to the same media climate, and, most importantly, speaks English.
And that is why we communicate. We connect and overcome our differences using what we have in common to get things done.
"Apple" is only the tip of the iceberg. It will mean different things to different people. But what we share between us culturally is the American "apple" and the English "apple". If we compare that with the Japanese "ringo" and the Japanese word "ringo" there will be differences. To say "apple" = "ringo" is only equating symbols and mere entry points, to which not all else automatically follows.
> evocation separately from a stricter 'meaning'
There is evocation, and there is meaning, at all times. There is also context, and the intent of the speaker. There is even body language and tone. Even this is a simplification, but it is far more accurate than what they taught most of us at school, which is something like "language = grammar + vocabulary". This model does not translate mechanically even though theoretically it's supposed to. What we've now found is that what is missing is not technology or algorithms or processing power, but rather, most of the picture. That's why it still takes a good human translator to translate it all. Computers still cannot infer intent, transfer emotions, or cross cultural lines without embarrassing themselves.
(Thank you for a thoughtful and stimulating response.)
This doesn't quite hit the mark. The example given in Klisp will work with an arbitrary number of arguments. What looks to be the second of two named arguments, e, is actually the dynamic environment from which $and? is called.
Special Forms in Lisp [0] by Kent Pitman (1980) is about FEXPRs vs. MACROs:
> It is widely held among members of the MIT Lisp community that FEXPR, NLAMBDA, and related concepts could be omitted from the Lisp language with no loss of generality and little loss of expressive power, and that doing so would make a general improvement in the quality and reliability of program-manipulating programs.
> There are those who advocate the use of FEXPR's, in the interpreter for implementing control structure because they interface better with certain kinds of debugging packages such as TRACE and single-stepping packages. Many of these people, however, will admit that calls to FEXPR's used as control structure are bound to confuse compilers and macro packages, and that it is probably a good idea, given that FEXPR's do exist, to require compatible MACRO definitions be provided by the user in any environment in which FEXPR's will be used. This would mean that a person could create a FEXPR named IF, provided he also created an IF MACRO which described its behavior; the FEXPR definition could shadow [12] the MACRO definition in the interpreter, but programs other than the interpreter could appeal to the MACRO definition for a description of the FEXPR's functionality.
But, of course, if you just write the macro IF, it's pointless to then write a FEXPR. This is because the interpreter can use the IF macro just fine. Nobody is going to write every operator twice in a large code base, first as a FEXPR operator and then a macro operator. It's extra development work, plus extra work to validate that the two behave the same way and are maintained in sync.
Kent is writing very theoretically there and being very generous to the idea.
Single stepping through macro-expanded code is perfectly possible. There is no debugging disadvantage between stepping through a macro-expanded control flow operator, versus one which is interpreted. In both cases, the single-stepping interpreter can know the source code location where the argument expressions came from and jump the cursor there, providing visual stepping.
Not to mention that compiled code can be stepped through in a source code view; countless programmers have been doing this in C and similar languages for decades. Given that we can write an if statement in C, compile it, and step through it in gdb, the position that we benefit from an FEXPR to do the same thing in a Lisp interpreter is rather untenable.
but not a macro in the sense Common Lisp knows them... FEXPRs are first class objects. FEXPRs can be passed around as values of variables, whereas macros cannot
The point isn't that eval or fexprs are good to use all over the place. It's interesting as an exercise in language design to see what is possible when they are available.
And to the fact that fexprs operate on second class data. It's still a win that they are first class objects. It means you can dynamically pick which fexpr (or applicative operator) to call on a set of arguments, which like you said, can be selectively eval'd.
As far as the FEXPR itself is concerned, exactly that is possible with (some-fexpr env arg1 arg2 ...) as with (some-function env 'arg1 'arg2). It's just that you have syntactic sugar there in not having to quote the arguments to suppress their evaluation.
The enabler of interesting semantics is not the FEXPR but the env: that the environment is available to the program itself, reified as an object. We can write code which somehow receives this env as an argument and then use it in eval. (Then it's basically an afterthought that we can put such code into functions, hook them to operator names, and have the interpreter dispatch them for us, and automatically pass them the environment.)
Given access to the environment, we can explore questions like, "what if we dynamically build a piece of code, say, based on some external inputs, and then evaluate it in the environment where it can see the local variables of the current function?"
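That exploration can be sketched in miniature. Below is a toy evaluator in Python, not Klisp or any real Kernel implementation; the forms and names ($and?, $incr) are made up for illustration. The point it demonstrates: a fexpr receives its operands unevaluated along with the caller's environment, so it can evaluate them selectively, or build new code at run time and evaluate it where the caller's variables are visible.

```python
# Toy sketch (assumed forms, not Klisp): an expression is an int,
# a variable name (str), or a list (operator followed by operands).

FUNCTIONS = {"+": lambda a, b: a + b, "<": lambda a, b: a < b}

def evaluate(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]                    # variable lookup
    op, *operands = expr
    if op in FEXPRS:
        # fexpr: operands passed UNevaluated, plus the dynamic environment
        return FEXPRS[op](env, *operands)
    # ordinary application: evaluate operands first
    args = [evaluate(a, env) for a in operands]
    return FUNCTIONS[op](*args)

def and_fexpr(env, *forms):
    # ($and? f1 f2 ...): any number of operands, evaluated left to
    # right, stopping early on the first false result
    result = True
    for form in forms:
        result = evaluate(form, env)
        if not result:
            return False
    return result

def incr_fexpr(env, name):
    # ($incr x): builds the form (+ x 1) at run time and evaluates it
    # in the caller's environment, where x is visible
    return evaluate(["+", name, 1], env)

FEXPRS = {"$and?": and_fexpr, "$incr": incr_fexpr}

env = {"x": 5}
print(evaluate(["$and?", ["<", 1, "x"], ["+", "x", 1]], env))  # -> 6
print(evaluate(["$incr", "x"], env))                           # -> 6
```

Note that and_fexpr and incr_fexpr are ordinary functions whose real power comes from env being reified and passed in; hooking them to operator names in the dispatch table is the afterthought.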
Ultimately, this sort of thing is entertaining bunk, which could be why it disappeared: the evaluation-semantic equivalent of Escherian impossible waterfalls and such puns and ironies. (I just coined a term: trompe d'eval.)
Or maybe the ancient Lispers were wrong; was there a tiny baby hiding in the bath water? Was it really just chauvinism (our main program is research into better compilers, and whatever gets in our path is to be pushed aside)?
Possibly, the Algol people and lexical scoping had an influence: lexical scopes encapsulate and protect. You don't want to reveal run-time access to the environment, which breaks the doctrines of lexical scoping, allowing a function to peek into or mutate another's environment, if only it receives that environment as an object. That would have been repugnant to the Wirths and Dijkstras of that heyday.
We have a less powerful version of this in the lexical closure, which binds a specific piece of code to a specific environment, without revealing that environment as an object. The closure is reified; the environment isn't, being considered something lower-level that remains hidden under the hood (and subject to a myriad implementation strategies which make it hard to model as a cohesive object).
Great response. I admit I am interested in fexprs because of the syntactic sugar; not having to quote arguments means code can look more like words at the top level, and smaller functions can deal with how they are interpreted.
As far as the search for compilers is concerned, I think what is considered powerful notation should be kept around, even if it's tough to compile at the moment.
If you have a compiled Lisp with an interpreter also, adding fexprs creates the interesting possibility the fexprs themselves may be compiled.
Suppose the Lisp is bootstrapped in some other language, like C or assembler. The special operators in the interpreter are written in C. If you write the IF operator in C, and that operator itself needs an if operator, it uses the C if statement or ternary operator. (Obvious, right? No level confusion.)
If you add FEXPRS, they are interpreted code themselves: interpreted code controlling the interpretation of code. If you write an IF FEXPR and it needs an if operator, and you use IF, then you get infinite regress/recursion: while trying to interpret IF, the IF FEXPR calls itself, and then runs into the same situation, calling itself again, ...
If the Lisp has a compiler and macros, then you can write an IF macro, and compile that FEXPR. Then, when the interpreter evaluates an IF form, it now dispatches a compiled function. When that function needs IF, it's just running the compiled code, and not recursing any more; the IF FEXPR is only for interpreted code.
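The level separation can be illustrated with a toy interpreter (Python standing in for the C or assembler host language; the forms here are made up for illustration): the built-in $if special form picks a branch using the host language's own conditional, so there is no regress.

```python
# Toy interpreter "bootstrapped in a host language" (Python standing
# in for C): expressions are ints, variable names (str), or lists.
def evaluate(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, *rest = expr
    if op == "$if":
        # The HOST language's conditional decides which branch to
        # evaluate: no level confusion, no regress.
        test, then, alt = rest
        return evaluate(then, env) if evaluate(test, env) else evaluate(alt, env)
    if op == "<":
        return evaluate(rest[0], env) < evaluate(rest[1], env)
    raise ValueError(f"unknown operator: {op}")

print(evaluate(["$if", ["<", "x", 10], 1, 0], {"x": 1}))  # -> 1
```

By contrast, an IF fexpr written in the interpreted language whose own body began with ($if ...) would re-enter this dispatch on every call and recurse forever; compiling that fexpr is what breaks the regress.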
FEXPR's can do some "impossible things", and if you want to do those things fast, compiled FEXPR's could be useful.
In fact, that ought to work not just in Racket, but in any Scheme implementation that conforms to R5RS or later (or even R4RS plus appendix).
The Klisp authors picked a pretty bad example for demonstrating the power of fexprs. You can think of them as being first-class macros in a way [0].
Joe Marshall demonstrated that fexprs can be divided into two distinct classes: safe and unsafe [1]. He showed that all safe fexprs could be implemented as macros with no loss of expressiveness. (An unsafe fexpr is one that relies on metacircular fixpoints (whatever that means)).
[0]: That's not exactly true. Macros are syntactic transformers, whereas fexprs are procedures that can syntactically modify and selectively evaluate their arguments in a given environment. Despite this semantic difference, there's a very large overlap in their use-cases.
There are plenty of English compounds that don't fit that description: forthright, downright, forthcoming; wisecrack (but "wise woman"); blackboard, greenback, greengrocer, yellowbelly; drive-by, drive-in; tapout, all out, balls-out, blackout, checkout. A couple common vulgarities do exactly what Sitzpinkler and Sockenfalter do, though admittedly the English -er is no longer "male-specific." (It once was.)
I think an article on German compounds would have been worthwhile on its own. To develop interest in it, however, the author has found it necessary--as many others have--to write of one language's richness as a factor of another's lack. English is actually quite pithy on some of the concepts he claims "require a mouthful when translated" from the German: "that which is in the process of becoming" (das Werdende) is (with varying nuance) nascent, incipient, inchoate, germinal, budding, springing, arising, dawning, crepuscular, in embryo, in the bud, in the gristle, forming, fashioning, styling, or becoming. To give rise to a noun to describe that thing which is nascent, incipient, etc., does not require the ham-fisted (there's a good, playful compound!) stroke which Duncan has used: he's playing it up! If you want a single word for it, bud, germ, etc., have their figurative uses as well. And if he wanted a dictionary definition for das Werdende, he could have dropped the relative pronoun and two prepositions and written, "a nascent thing." (In actual writing, thing would usually be replaced by a more precise identifier, and the phrase would be the richer for it.) There are some rare words, like inchoant, which also do the trick, and whose inception (or kick-off, throw-off, lift-off) looks rather like that of das Werdende.
Yes, it's not so prevalent indeed. It's likely that the ancestor of modern English relied more on forging compounds before it started importing a vast part of its vocabulary from Latin through French. Worth noting that many of the words of Latin or Greek origin are in turn compounds in those languages. Just a few random examples:
- exit: out - go
- prospect: forward - look
- decide: down - cut
- alarm: to the weapons
Such etymologies might still be somewhat intuitive to native Romance-language speakers, although in most cases the brain just treats them as opaque units of meaning. Sometimes even an obvious one goes unnoticed: alarm (Italian: allarme) comes from all'arme (Italian for "to the weapons"), which shares the same pronunciation, yet the connection is not immediately obvious to a native speaker until you point it out; after that, an a-ha moment follows.
Thus I'm curious to know how it works in German, where so many compound words are accessible without being hidden by arcane etymology and the general rule is still productive: does it require some effort to actually break a compound into smaller parts once it has become so common as to be de facto a new word of its own?
> Does it require some effort to actually break a compound in smaller parts once it became so common to be de facto a new word of its own?
No, in general it's quite easy. You just have to pretend it's early Latin before the innovation of spaces between words ;) And then there are semi-consistently applied rules about certain suffixes (like -s, -er, -en, and others) which sometimes go between the subwords to indicate that one of them is in a declensional position. For example:
Volksempfänger = Empfänger (des) Volk(es) = "(radio) receiver of the people"
Forming compounds is a bit trickier; everyone "knows" the rules (for applying joining suffixes) though they may not be able to articulate why -- the new compounds just pop out.
> does it require some effort to actually break a compound in smaller parts once it became so common to be de facto a new word of its own ?
Older, well-established compounds often deviate a bit from their original meaning over time; those are as opaque as allarme. "Creative" insults (where English also demonstrates a rich pool of compounds), for example, will quickly erode from their literal meaning to "generic insult"; only the rough position in the universal coordinate system for expletives (general magnitude, position in the spectrum between evil and stupid) remains.
For unknown compounds, there are some that follow standard patterns of known compounds ("Studentenvereinigung" - student union), and new ones can be read as fluently as an expression with blanks ("Studierendenvereinigung" - also student union, but properly gender-neutral or "gegendert"; "Crowdfundinggründervereinigung" - a union of founders who are using crowd funding, hypothetical but easy to parse for anyone who has ever heard of Kickstarter).
Other compounds, however, can be rarely used yet still very much unparsable for much of the population. Take for example the "Backpfeifengesicht" that is cited here a lot: the sub-compound "Backpfeife" (a slap in the face) is strictly regional dialect (or even just local slang, and, to make it worse, from a time long past) and has been shortened from "Backenpfeife", which would still only make sense if you know what it is. I suspect that it was originally coined as an insider slang term deliberately misleading to outsiders. For those cases, you learn to quickly give up on extracting meaning when it does not work and learn it like any other new word, a meaningless GUID that could be anything, slowly narrowed down each time you encounter it in context. Some parsing may still happen; in the case of Backpfeifengesicht one might, for example, infer that it is not about the face but about a person or type of person, because ending in "-gesicht" is also used in other compounds -- for example the "Freibiergesicht", someone who will only grace you with his company when there is free beer (regional, typically used as a very low-magnitude friendly provocation).
I think what he's basically saying is that when English became a fusional language, forged by the collision of two alien languages -- one Germanic, one Latin; each with its own morphological building system -- everything just turned into a big jumble, and people forgot all these (once) nuanced rules for compounding / deriving words -- leaving us with the comparatively limited ruleset we have now.
At least that's my basic interpretation of the coldly functional, atonal clusterfuck that is modern English.
Apathy and disagreement are different things. So are extrapolation and confirmation.
No one is in a position to "confirm" this statement or prove it irrefutably wrong. More or less tenable arguments may be made, more or less well. I think Hawking does well here, behind the poorer commentary, but I'm unwilling to engage in or let lie propaganda for any end, particularly on a forum where we are to default to respect for others' ability to engage in thoughtful dialogue. The source for this article is not easily dismissed, but the article's title is misleading - a straight fib - and a misleading headline that suggests one reading over another while pretending to fact is propagandistic.
Yeah, and that goes both ways. You can't wave away the apathy that exists by disagreeing on other grounds.
> This article is not easily to be dismissed, but its title is misleading - a straight fib - and a misleading headline to suggest one reading over another is propagandistic.
There is not much leeway in how to read Hawking's actual comments, and as for the title, there is the word "confirm":
> 1. to state or show that (something) is true or correct
> 2. to tell someone that something has definitely happened or is going to happen : to make (something) definite or official
> 3. to make (something) stronger or more certain : to cause (someone) to believe (something) more strongly
Notice the last meaning. So this is just a high-ranking comment with nothing to add, splitting hairs about the title. What Hawking said is not easy to dismiss, but that doesn't stop people from trying, now does it?
edit: yep, it's already getting buried. One of the smarter people on the planet is talking about one of the more serious things in life, but let's talk about gadgets some more.
Phonetic alphabets have strong precedents - and have had strong backers - in English as well. See the Shavian alphabet[1] for a relatively recent example. Others were drawn up by the likes of Benjamin Franklin and Sir Isaac Pitman, famed for his shorthand.
One of the greatest impediments to these systems was that, while pronunciation may be picked up from spelling in a so-called phonemic system (assuming a bog-standard means of determining stress, which English doesn't have), spelling does not always follow from pronunciation - not unless you happen to speak precisely the English that the standards body prefers. English phonology, particularly when it comes to vowels, varies so wildly from place to place that any new, purportedly simpler system will be met with resistance from the majority of speakers to whom the system seems to be wrong, messy, and arbitrary. As a simple and common example, that of 'Mary, merry, and marry': do you collapse these three vowels, as many English speakers do, thus frustrating those who would like to make the distinction in print; or do you retain the three and in doing so create three new arbitrary-seeming spellings for words that to many speakers are homophones? Do you, as in Shavian, retain an R at the ends of "star" and "mother," or do you bow to common British pronunciations and excise these in spelling as in speech - something another phonetician, Henry Sweet[2], would have recommended?
(We see something similar in the promotion of and resistance to new programming languages: fixing some problems is not always enough to attract adopters away from a language which is at least doing the job. Imperfections stand out in novel tools, even if other improvements have been made and the imperfections are nothing new.)
Shorthand authors were well aware of the difficulties in teaching a "phonography," or phoneme-based system, to a diverse student body. This is one of the reasons that many English shorthand systems do away with almost all vowels, except perhaps to indicate where the vowel occurs and what broad family of like sounds it belongs to. Of course, eliminating vowels is one of the best ways to promote speed in writing; but there was also the problem that, where writing a vowel was desired (e.g. to differentiate 'tarp' from 'trap'), not all students would agree on which symbol ought to be used for each vowel sound - the sound varying from person to person, even within the same county or town. These shorthand systems were left ambiguous on purpose, so that teaching speed might be improved alongside writing speed. No need for a student to learn whether the first vowel in 'father' is indeed the same as the one in 'bought' - and from whose mouth, anyway?
It is possible that in an IPA-based system, English spelling may differ from person to person as widely as pronunciation does. Surely, however, that would make things more difficult for second-language learners than having to contend with one standard that happens to be riddled with inconsistencies? (Or perhaps not?)
Thanks for the heads up. They say there is both a youth element and an income element and claim the effect is present even compensating for youth. However, there are exactly zero statistics (alpha, p, confidence, nothing) to justify the conclusion.
What use do you want us to make of that comment? Be upfront if you actually have something to say. Do you mean that Austria has only just changed its official position, relative to whatever scale matters to you personally, and should not be let off just yet from this long chain of acrid remarks?
Going by your time scale, WWII was still on three days ago. Then again, there are people who might come onto this forum and in all honesty say that 1943 was only yesterday to them, and that you mistakenly said that this morning was yesterday afternoon. Do we revise our comments yet again, in light of this fact?
By many measures, 24 years are scarcely any. Which measure do you want to make use of, and what consequences does it have? You haven't said anything.
It means that to me 1991 is figuratively speaking yesterday. I remember it clear as day and it does not seem long ago at all.
I realize that for lots of people on HN the early 1990s seem like forever ago, and probably quite a few of them weren't even born back then, but for me personally it's an eyeblink.
Yes, there are some who remember WWII vividly. I'm not one of them, because I was born well after that, but I can see how what you determine to be a 'long time' very much depends on your own personal time-line, and that was the full extent of the meaning of that comment.
To me it means, yes, that they have just changed their official position, and they definitely should not be let off just yet - not because it took them that long, but mostly because of developments in Austria since then. The legacy of Haider is alive and well.
Just like NL should not be let off the hook either, we have our own version of that problem to contend with. (And, for that matter, our own version of the Anschluss even though you'll never hear about it outside of NL, we had a very large chunk of the Dutch openly collaborating with the invaders and a political party (NSB https://en.wikipedia.org/wiki/National_Socialist_Movement_in...) with substantial following.)
I'm not sure how this would pose a problem. I'm not the parent commenter, but I don't typically go to Google to search for sites I frequent. It would be such an odd thing to do—granted that I know the URL—that I would be forced to consider what I was doing even more vividly than if I had to type the URL out.
Unless you mean the search bar?
browser.search.suggest.enabled = false
Or, if you mean that you'd like to keep autocomplete on for searches, just leave the default.
On the topic of Cummings, I have occasionally felt like I was faced with an incomplete set of instructions when reading certain of his poems. Of course, there's no particular reason to liken them to computer instructions; I've never seen anything in the poems to suggest a connection. Anything to link one art with another is a stretch - if we even accept that coding is an art.
(I think code can be, though it isn't as a rule. In the same way, writing can be artful but is typically workaday and unremarkable. This post, for an example.)
J is in fact a much more recent invention than many of us would suppose[0]. It is worth adding, however, that it would not have saved G, except in English; our pronunciation of J, while close to the French and a couple of others (with an added /d/ at the front), is unique among languages that were written in the Roman alphabet at the time.
In any case, English suffers quite a few more overloaded consonants than C and G. Most occurrences of /z/ (a frequent sound in English) are marked with an S (codes), and a great many /t/ sounds are written D (typed).