
At the top of the article:

> This blog post is expressing personal experiences and opinions and doesn’t reflect any official policies of SourceHut.


If the system is implemented correctly then Kagi cryptographically can't link a particular search to a particular user.


A problem with "zero knowledge" proofs is that Kagi needs to verify that the user has paid for the service, which requires the server to have some knowledge about the client at some point.


You have to generate the tokens while signed in, but once you have the tokens, you can use them without your searches being associated with your account (and that unlinkability is cryptographically provable).
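
Here's a rough sketch of one way such unlinkable tokens can be built, using textbook Chaum-style RSA blind signatures with toy numbers. This is only an illustration of the general idea, not Kagi's actual scheme, and all of the values here are made up for the demo:

  import hashlib
  import secrets
  from math import gcd

  # Toy RSA key (textbook numbers); a real deployment uses ~2048-bit keys.
  p, q = 61, 53
  n = p * q            # public modulus (3233)
  e = 17               # public exponent
  d = 2753             # private exponent, held only by the server

  def h(msg):
      # Hash a token into the RSA group (demo-only encoding, not secure).
      return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

  # Client, while signed in: create a token and blind it with a random factor.
  token = secrets.token_bytes(16)
  while True:
      r = secrets.randbelow(n - 2) + 2
      if gcd(r, n) == 1:
          break
  blinded = (h(token) * pow(r, e, n)) % n    # the server only ever sees this

  # Server: checks that the account is paid up, then signs the blinded value.
  blind_sig = pow(blinded, d, n)

  # Client: unblinds. (token, sig) is now valid but unlinkable to the account.
  sig = (blind_sig * pow(r, -1, n)) % n      # pow(r, -1, n) needs Python 3.8+

  # Server, later, on an anonymous search: verify and accept the token.
  assert pow(sig, e, n) == h(token), "invalid token"
  print("token accepted; the server can't tell which account blinded it")

The server signs a blinded value it can't correlate with the (token, sig) pair it later sees, so paying and searching end up cryptographically decoupled.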


[deleted by author]


If the method works as described (which is a question of cryptography, and one that can be verified), there's no way to track you.

Your claim is a bit like saying "it's impossible to encrypt mail, the government wouldn't allow it." But PGP still exists.


In case anyone else does this and is new to Rust: you can use `cargo check` to run type checking, borrow checking, and everything else without doing any codegen.


This thread is no longer about Rust, or about checking whether the code compiles; it's about how you work with compilation times that are longer than a coffee break.


Isn't Google Takeout pretty easy?


Google Takeout strips out a lot of metadata and virtually all of the organization you had. There are a couple of open source projects trying to replace or augment the Google Photos takeout so it's usable, but as-is it isn't a viable option for large libraries.


Granted, after centuries of written works, we still don't have "the way" to take handwritten notes. Everyone's brain works differently, and people have totally different ways of taking notes.

I don't think this is 100% of the reason computers don't have a universal note-taking system, but it is definitely a part of it.


I feel like this is different because Amazon does tell you what happens, and they can technically argue that you are responsible for understanding what you are buying (though the ethics of hiding this information are very questionable). The EULA, on the other hand, is specific language that is legally binding.


This uses [lalrpop](https://github.com/lalrpop/lalrpop) under the hood I believe, which is a pretty awesome parser generator library for Rust.


Nice. Long gone are the days (20 yrs ago) when I had to make a JS transpiler with lex+yacc. The pain.


Had to? Or got to? Sounds great, please tell us more.


I wanted to add lambdas to JS. This was back in 2001, I think, so I took the compiler from my compilers class, changed it to parse JS (plus my extra features, like the ()=>{} syntax) and generate an AST in XML (instead of emitting bytecode), then I made a Python script that took the XML and generated minified JS from it.


Neat! I did something similar, not quite as big, but I was on a version of Python (Jython) that didn't yet have generator expressions, so I made a generator expression evaluator, gg("x for x in it"), that returned a running generator. I was about a week away from shipping an ETL tool to the customer, and this allowed me to hit that deadline.

I love the AST in XML approach.
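
In case anyone is wondering how a gg() helper like that might work, here's a minimal sketch of the idea: rewrite the string into an ordinary generator function and exec it. It only handles the simple "EXPR for NAME in ITERABLE" form and the names are hypothetical; the real tool on Jython surely handled more:

  import re

  def gg(expr, namespace):
      # Rewrite "x for x in it" into a plain generator function, since the
      # runtime has no native generator-expression syntax.
      m = re.match(r"(?P<body>.+?)\s+for\s+(?P<var>\w+)\s+in\s+(?P<it>.+)$", expr)
      if m is None:
          raise ValueError("unsupported generator expression: %r" % expr)
      src = (
          "def _gen():\n"
          "    for %(var)s in %(it)s:\n"
          "        yield %(body)s\n" % m.groupdict()
      )
      scope = dict(namespace)     # the names the expression is allowed to see
      exec(src, scope)
      return scope["_gen"]()      # return the running generator

  squares = gg("x * x for x in it", {"it": range(5)})
  print(list(squares))            # [0, 1, 4, 9, 16]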


> LALRPOP in fact uses LR(1) by default (though you can opt for LALR(1)), and really I hope to eventually move to something general that can handle all CFGs (like GLL, GLR, LL(*), etc).

Seems overkill for a language whose grammar is LL(1)?


Although the CPython implementation contains an LL(1) parser (which is being deprecated in favor of a new PEG parser), the grammar contains bits that are not context-free, which requires a pre-parsing step before the input is fed into the LL(1) parser. That structure isn't particularly good, and it'd be beneficial for a new implementation to use something else.
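
The non-context-free bit is mostly indentation: the tokenizer runs first and synthesizes INDENT/DEDENT tokens from leading whitespace, so the LL(1) parser downstream only ever sees a context-free token stream. You can watch that pre-parsing step with the standard library (my own quick illustration):

  import io
  import tokenize

  src = "if x:\n    y = 1\nz = 2\n"
  for tok in tokenize.generate_tokens(io.StringIO(src).readline):
      print(tokenize.tok_name[tok.type], repr(tok.string))
  # The INDENT and DEDENT tokens in the output are synthesized from whitespace
  # by the tokenizer, so the grammar itself never has to count indentation.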


> whose grammar is LL(1)?

Just curious, how did you know this?


The Python 3.9 (most recent) release notes:

> Python 3.9 uses a new parser, based on PEG instead of LL(1). The new parser’s performance is roughly comparable to that of the old parser, but the PEG formalism is more flexible than LL(1) when it comes to designing new language features. We’ll start using this flexibility in Python 3.10 and later.

https://docs.python.org/3/whatsnew/3.9.html#new-parser


That Python can be described by an LL(1) grammar is frequently mentioned in discussions, but I don't know if it's formally documented anywhere.

More specifically, the grammar [0] is in EBNF form and is ELL(1). That means that if each EBNF production is converted to a set of BNF productions in an LL(1)-compliant way, the BNF grammar as a whole is LL(1). It seems that the Python tools themselves do not check this [1], but I have verified it myself.

However, as another commenter mentioned, the grammar doesn't exactly describe Python, but a superset of it. The compiler needs to perform further checks after parsing that "should" be part of the parser itself. A more advanced parser would allow these checks to be performed in the correct place - this is probably what RustPython does when using LR(1), and was one reason why CPython recently replaced its LL(1) parser with one based on PEG.

[0] https://docs.python.org/3.8/reference/grammar.html

[1] https://discuss.python.org/t/should-there-be-a-check-whether...
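
A concrete example of those post-parse checks (my own illustration, not from the linked docs): under the LL(1) grammar the left-hand side of an assignment is just another testlist, so something like `f(x) = 1` gets past the parser and is only rejected by a later validation pass:

  try:
      compile("f(x) = 1", "<demo>", "exec")
  except SyntaxError as err:
      # With the old parser this is raised while validating the AST, after the
      # grammar has already accepted the assignment (wording varies by version).
      print(err.msg)   # e.g. "cannot assign to function call"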


Parsing a superset of the language and then using extra checks to detect bad syntax accepted by the superset is a well-known and very effective way of being error tolerant and implementing decent error recovery.


It is a great strategy. One of my favorite examples of this is Rust. When talking about what syntax "await" should use, we decided on a postfix syntax: "foo.await". But there was concern that because JavaScript and other languages use "await foo", it would be confusing.

Solution?

  async fn foo() {
      
  }
  
  fn main() {
      let x = async { await foo() };
  }

  error: incorrect use of `await`
   --> src/main.rs:6:13
    |
  6 |       let x = async { await foo() };
    |                       ^^^^^^^^^^^ help: `await` is a postfix operation: `foo().await`

Do exactly that: parse the incorrect form and emit a good diagnostic.


> This is what I'm talking about. How do you KNOW that $other_headphones sound better? The Apple model isn't even available yet.

I feel like this part of the discussion is missing. A lot of people seem to love the HomePod for its great audio quality at the price, and I have heard similar things about the HomePod Mini. Apple seems to be getting decent at making high-quality audio equipment (per dollar), so I feel like the quality of these headphones could be similar to the competition's.

Apple for sure has a history of overcharging for certain products (remember, you can get a monitor stand for $1k), but the M1 MacBook Air is probably the best $1,000 computer by a long shot.

