"Quashing in a criminal defamation case is a difficult prospect. This is because – to simplify – under Section 499 of the IPC, a prima facie offence of defamation is made out with the existence of a defamatory imputation, which has been made with the intention or knowledge that it will cause harm. This is, evidently, a very low threshold. Section 499 also contains a set of exceptions to the rule (such as statements that are true and in the public interest, statements made in good faith about public questions, and so on) – but here’s the rub: these exceptions only kick in at the stage of trial, by which time the legal process has (in all likelihood) dragged on for years. What we essentially have, therefore, is one of those situations where the cost of censorship is low (instituting prima facie credible criminal proceedings), but the cost of speech is high (a tedious, time-consuming, and expensive trial, with the possibility of imprisonment). Long-standing readers will recall that this structure of criminal defamation law – and the chilling effect that it causes – was part of the unsuccessful 2016 challenge to the constitutionality of Section 499."
This was enlightening. Thanks for posting the link because I never would have found this page.
Notably, Jimmy Wales also posted a statement on that page. The tl;dr seems to be that they intend to exhaust all legal options in India, but that non-compliance in the short term is not an option if they wish to retain the right to appeal in India's court system. I don't know anything about India's courts myself, but I copied his statement below:
> Comment from Jimbo Wales
> Hi everyone, I spoke to the team at the WMF yesterday afternoon in a quick meeting of the board. Although I've been around Internet legal issues for a long time, it's important to note that I am not a lawyer and that I am not here speaking for the WMF nor the board as a whole. I'm speaking personally as a Wikipedian. As you might expect, it's pretty limited as to what people are able to say at this point, and unwise to give too many details. However, I can tell you that I went into the call initially very skeptical of the idea of even temporarily taking down this page and I was persuaded very quickly by a single fact that changed my mind: if we did not comply with this order, we would lose the possibility to appeal and the consequences would be dire in terms of achieving our ultimate goals here. For those who are concerned that this is somehow the WMF giving in on the principles that we all hold so dear, don't worry. I heard from the WMF quite strong moral and legal support for doing the right thing here - and that includes going through the process in the right way. Prior to the call, I thought that the consequence would just be a block of Wikipedia by the Indian government. While that's never a good thing, it's always been something we're prepared to accept in order to stand for freedom of expression. We were blocked in Turkey for 3 years or so, and fought all the way to the Supreme Court and won. Nothing has changed about our principles. The difference in this case is that the short term legal requirements in order to not wreck the long term chance of victory made this a necessary step. My understanding is that the WMF has consulted with fellow traveler human rights and freedom of expression groups who have supported that we should do everything we can to win this battle for the long run, as opposed to petulantly refusing to do something today.
I hope these words are reassuring to those who may have had some concerns!--Jimbo Wales (talk) 09:13, 21 October 2024 (UTC)
A lot fewer devices than Apple, but with changing a device's entire operating system a lot more can go wrong. I wasn't on the team at the time, but maybe someone else can chime in with more details.
> I appreciate your story, but this comment bothered me, because it's something people repeat a lot and it's actually not true. There's no good evidence that adults have more difficulty acquiring language than children. There were some older studies that claimed to show such, but as has become all too familiar these days, their methods were spurious and there have been some replication issues.
According to a 2018 paper [1], the ability to acquire new languages declines steeply after age 17.
[1] Hartshorne, Joshua K., Joshua B. Tenenbaum and Steven Pinker. 2018. A critical period for second language acquisition: evidence from 2/3 million English speakers. Cognition 177:263-277. https://l3atbc-public.s3.amazonaws.com/pub_pdfs/JK_Hartshorn...
It doesn't sound like the person you're replying to understands that the code returned is largely synthesized with OpenAI's Codex. It is not simply a "snippet selection" mechanism: it has "learned" (to a limited degree) patterns in code, and can generate those patterns even when they don't exist verbatim in the training set.
Author here. I wish I had made it clearer that the intent of the post was "this is an interesting and surprising thing you can achieve in C" and not "this is a good idea for a real software project" or "this is a reason to use C instead of C++/Rust/Go".
Macros of this sort are indeed used to define stacks, queues, deques, and other generic data structures in the NetBSD (and presumably other BSDs) source code. So the pattern is used in real software projects.
> Author here. I wish I had made it clearer that the intent of the post was "this is an interesting and surprising thing you can achieve in C" and not "this is a good idea for a real software project" or "this is a reason to use C instead of C++/Rust/Go".
Thanks, because in my experience edgy first-year comp. sci. students will see that article and be like "see! we don't need anything more than C!"
Build Your Own Lisp (http://buildyourownlisp.com/) is as good a starting point as any. You learn to implement a high-level dynamic programming language in C, which is a common choice of implementation language for interpreters (e.g., CPython, Ruby). I really can't recommend the book enough.
The problem with making spelling more phonetic is that not everyone pronounces English the same. Even a single country like the United States or the United Kingdom contains a range of accents and dialects. A phonetic spelling is always going to be non-phonetic for some speakers.
"A phonetic spelling is always going to be non-phonetic for some speakers."
Only if there is a single "correct" (or canonical) way to spell each word.
If every speaker uses a phonetic alphabet to spell their words the way they speak them then there will be no problem reading them back phonetically.
Now, understanding these phonetically spelled words might be a problem for some readers, but that's no different from the difficulty they'd have understanding someone speaking in an unfamiliar accent.
Ugh, no. You'd gain the very minor benefit of being able to read in the author's dialect but at huge cost in reading speed. I cringe whenever an author forces an accent or dialect into their writing rather than simply noting that a character speaks with a particular accent. It forces me to read at the speed I can vocalize, which is about half my normal reading speed.
I can see this being useful for specific cases but not as a general practice.
There's even plenty of precedent for the same word having different spellings based on region, like with color/colour. I imagine it would be similarly normal to learn that "tomahto" is the British spelling of "tomato" (or whatever it would end up being).
We have that in south slavic languages. For example this word meaning time (or weather) spoken and written differently based on dialect/language: vreme, vrime, vrijeme. [1]
Maybe a regional spelling reform could lead to English starting to break apart into several different languages?
I read this book about a year ago and later wrote another Lisp [1] from scratch but based on the design in the book. It's still one of my favorite computer science books. He covers quite a range of topics (C programming, traditional Lisp stuff, some other functional programming topics like currying) and the quality is superb. One thing the Lisp interpreter described in the book lacks (if I recall correctly) is a proper garbage collector, which can be an interesting extension to the project if you're up for a challenge.
I agree, except make sure not to skip the SHRDLU [1] one, which isn't actually Hofstadter's writing at all, but a demo of the SHRDLU system which is fascinating and should not be missed.