> how is it so much cheaper

Porkbun doesn't make money when you buy a domain name, but they may make money when you do not renew it:

> At about 21 days into the Auto-Renew Grace Period, the expired domain will be submitted to third-party auction services.

https://kb.porkbun.com/article/37-what-happens-after-a-domai...

Other registrars, like GoDaddy, do this too.


You can quickly go offline via dev tools. In Chrome, it's very simple[0].

[0] https://developer.chrome.com/docs/devtools/network/reference...


Workaround:

1. Install ServiceWorker.

2. Save data to LocalStorage/IndexedDB/ServiceWorker Cache/ServiceWorker Memory.

3. Wait for devtools to be closed, re-enabling internet access, then send the data from the ServiceWorker.


I'd create a fresh browser profile just for this, download it, then point it to use an HTTP/SOCKS proxy that will never exist.

Work around that.


> Work around that.

Easy. I use HTTP/3.

No, really, HTTP and SOCKS proxies cannot carry QUIC traffic, so browsers don't even try. They just send it right through.

If you block UDP, I guess I can still try DNS for exfil. HTTP proxies don't support DNS, and browsers need to be explicitly configured to proxy DNS through SOCKS, if the SOCKS proxy even supports it. Chances are, DNS exfil will work.

Now, if you were to do what I do to disable network access, then I'd have no chance: a network namespace in a jail with zero network interfaces (not even loopback).


I'm going to need a bug tracker link for that; it seems too dumb to be true. Surely they would just not use HTTP/3 if they can't do it through the configured proxy. I wouldn't bet my life on it, though; I have seen dumber bugs.

edit: tested this the old-fashioned way with Firefox 116.0.3 on Ubuntu and nginx 1.25.1. Firefox does connect over HTTP/3 and CORRECTLY DOESN'T CONNECT AT ALL with a (bad) proxy configured. You are spreading FUD.

My Chrome 115.0.5790.170 doesn't seem to use HTTP/3 at all.


> Surely they would just not use HTTP/3 if they can't do it through the configured proxy.

That's what I thought, at first. But, back when Chrome introduced QUIC, this was a known phenomenon in proxy-restricted but not UDP-restricted setups. I doubt I'd be able to find a bug report for it, given Google's nature, but there are a few reports[1][2][3] by proxy vendors asking for QUIC to be disabled, or else traffic will go straight through even when Chrome is configured to use a proxy.

And here's[4] a user report with the same observation, with Chrome connecting directly to its mothership without going through the configured proxy. The user reports successful blocking upon disabling QUIC.

1: https://www.currentware.com/support/disable-quic/

2: https://support.umbrella.com/hc/en-us/articles/360051232032-...

3: https://support.forcepoint.com/customerhub/s/article/0000154...

4: https://superuser.com/questions/1688524/why-google-com-doesn...

> You are spreading FUD.

I assure you, I had no such intention. I was just reporting from memory, of years ago, back when I was in college and had to deal with a proxy-restricted network, and QUIC was being rolled out.

> My Chrome 115.0.5790.170 doesn't seem to use HTTP/3 at all.

Maybe your Chrome has HTTP/3 blocked for some reason. Or, more likely, Chrome supports a different draft of the HTTP/3 spec than the server you're testing against. It has a history of doing that too.


Your links point to people and services using intercepting proxies, not configuring one in their browsers. Techniques intercepting TCP traffic and redirecting it to a proxy will not work when the traffic is UDP. This is not a browser issue.

In this thread we were talking about the user willingly configuring a proxy in the browser or OS.


> Your links point to people and services using intercepting proxies, not configuring one in their browsers.

Sorry, I didn't look too closely through any of those links, because I never used those specific products myself.

But I do clearly remember this being an issue for me back in the day. So I dug further. You'll be happy to see this bug report[1] and this commit[2]. Note these words from a Chromium dev: "This code was written when we discovered a problem with QUIC bypassing proxies."

1: Issue 389684: QUIC bypasses proxy settings | https://bugs.chromium.org/p/chromium/issues/detail?id=389684

2: Issue 217783003: Do not use QUIC for requests that are through a proxy. | https://codereview.chromium.org/217783003


Thanks! This is scary; however, you'll agree that a bug in Chrome closed 6 years ago is a far cry from your claim that "HTTP and SOCKS proxies cannot carry QUIC traffic, so browsers don't even try. They just send it right through" in the present tense. Still, thank you for finding this reference; it is a sign that setting a proxy in your system is not as secure as a firewall/netns.


> however you'll agree ... present tense

Yeah, I agree. I should've checked the validity before posting it, instead of just going by memory from years ago.

> ... finding this reference ...

It was a lot more effort than I'd have liked to put into my original comment, but hey, I was ticked off by your accusation of spreading FUD. It also didn't help that search engines today aren't what they used to be.


Another explanation is that there are those who considered and thoughtfully weighed the ramifications, but came to a different conclusion. It is unfair to assume the decision process was indifferent to harm, or simply ignorant.

For example, perhaps the lesser-evil argument played a role in the decision process: would a world where deep fakes are ubiquitous and well known by the public be better than a world where deep fakes have a potent impact because they are generated rarely and strategically by a handful of (nefarious) state sponsors?


there's also the issue that most of the AI catastrophizing is a pretty clear slippery-slope argument:

if we build ai AND THEN we give it a stupid goal to optimize AND THEN we give it unlimited control over its environment, something bad will happen.

the conclusion is always "building AI is wrong" and not "giving AI unrestricted control of critical systems is wrong"


The massive flaw in your argument is your failure to define "we".

Replace the word "we" with "a psychotic group of terrorists" in your post and see how it reads.


If you're talking about some group of evildoers that deploy AI in a critical system to do evil… the issue is why they have control of the critical system in the first place. Surely they could jump straight to their evil plot without the AI at all.


Your question is equivalent to "if you have access to the chessboard anyway, why use Stockfish, just play the moves yourself."


Or "board of directors beholden to share-holders".


I completely agree that's a valid argument. I just think it is rational for someone to come to a different conclusion, given identical priors.


If it wasn’t clear, I agree with your parent comment


My main takeaway from Bostrom's Superintelligence is that a super intelligent AI cannot be contained. So, the slippery slope argument, often derided as a bad form of logic, kind of holds up here.


> the pipelined nature of PRQL really maps much better to how people should think about queries

I disagree. Database engines take SQL and transform it into an execution plan that takes into consideration database metadata (size, storage, index analytics, etc.). Queries should be thought of with a _set based_ instead of _procedural_ approach to maximize the benefits of this abstraction - diving into the implementation details to guide the execution plan formation only when necessary.

Also, the pipeline approach could be achieved with common table expressions (CTEs), right?
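
For instance, a rough sketch of the same pipeline idea with CTEs (the orders table and its columns here are hypothetical, just to show the shape):

    WITH recent AS (            -- step 1: restrict to recent orders
      SELECT customer_id, status, amount
      FROM   orders
      WHERE  order_date >= '2023-01-01'
    ),
    shipped AS (                -- step 2: keep only shipped ones
      SELECT customer_id, amount
      FROM   recent
      WHERE  status = 'shipped'
    )
    SELECT   customer_id,       -- step 3: aggregate per customer
             SUM(amount) AS total_spent
    FROM     shipped
    GROUP BY customer_id
    ORDER BY total_spent DESC;
Each CTE reads as one step of the pipeline, and the planner is still free to collapse it all into a single set-based plan.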

That said, I think PRQL looks promising because it is a solid attempt to make RDBMS development more approachable. I also like that `from` comes before `select`: it is far more readable. A solid and modern IDE experience for PRQL could be a "killer app".


I disagree. I find it extremely hard to reason about large queries as set transformations, whereas it is much easier to break it down to "first this, then that". And this is long before I've even started writing my first line of SQL.

So let me write it procedurally and have the optimization engine fix it for me, just like how it fixes my SQL.

Even SQL queries are often better understood procedurally. Take this one [1]:

    SELECT article, dealer, price
    FROM   shop s1
    WHERE  price=(SELECT MAX(s2.price)
                  FROM shop s2
                  WHERE s1.article = s2.article)
    ORDER BY article;
That inner WHERE clause doesn't make sense, in my opinion, unless you think of it procedurally: for each row in s1, do a search for the highest price amongst all items that share its article number.

[1] https://dev.mysql.com/doc/refman/8.0/en/example-maximum-colu...


Completely agree, and thanks for putting it better than I could have, with an excellent example. Correlated subqueries like the one you give, or similarly lateral joins in Postgres, are fundamentally treated like for loops by DB engines anyway.

Semi-related, but the example you give is also why I love Postgres' "DISTINCT ON" functionality (I don't know if other DBs have something similar) - it makes it so much easier to reason about these "give me the 'first' one from each group" type queries without having to resort to correlated subqueries.
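
For anyone who hasn't seen it, a rough sketch against the shop table from the example above (Postgres-specific syntax):

    -- one row per article: the "first" row in (price DESC) order,
    -- i.e. the dealer with the highest price for that article
    SELECT DISTINCT ON (article) article, dealer, price
    FROM     shop
    ORDER BY article, price DESC;
Note it picks exactly one row per article even when two dealers tie on price, which is a slightly different contract than the MAX subquery.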


queries like these are best suited for window functions, which MySQL supports as of 8.0:

  SELECT article, dealer, price FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY article ORDER BY price DESC) as rnk
    FROM   shop s1
  ) sub 
  WHERE sub.rnk=1
  ORDER BY article; 
This query will be a single pass over the table, without loops/joins.


This is the "set based" approach for the MAX: there does not exist a bigger element:

  SELECT article, dealer, price
  FROM   shop s1
  WHERE  NOT EXISTS (SELECT 1 FROM shop s2
                     WHERE s2.price > s1.price AND
                           s2.article = s1.article)
  ORDER BY article;


Unpopular opinion.

The example should have been rewritten, uncorrelated, with a CTE aliased as 'article_max_price', as if it were a computed property, and then joined with WHERE price = amp.price.
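
Something roughly like this (a sketch only; the CTE and alias names are the ones suggested above, and shop is the table from the earlier example):

    WITH article_max_price AS (
      SELECT   article, MAX(price) AS price    -- the computed "max price per article"
      FROM     shop
      GROUP BY article
    )
    SELECT   s.article, s.dealer, s.price
    FROM     shop s
    JOIN     article_max_price amp
             ON  amp.article = s.article
             AND amp.price   = s.price         -- where price = amp.price
    ORDER BY s.article;
Like the NOT EXISTS version, this still returns every dealer tied at the max price for an article.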


Yeah, it's explicitly disallowed by ICANN to register a domain with this unicode character (along with numerous other characters):

https://www.verisign.com/assets/icannrestricted/idn-icann-re...


Thanks for sharing - I didn't know about OpenNIC. It looks like an alternative to ICANN where the key distinguishing trait is that TLDs and domain names are awarded with a landrush model (whoever claims it first owns it.) If it gains popularity, I wonder how it will avoid squatting and after-market (i.e., after-registration) trading.


There are many root server alternatives.

https://en.wikipedia.org/wiki/Alternative_DNS_root


> Less jobs is not necessarily a good thing ?

The rollout of the lightbulb significantly reduced the economic potential of candlemakers, but it improved the quality of life for society overall. So, whether technological advancement is a "good thing" depends on which group you are in: the one with obsolete skills or the one that benefits from cheaper and/or better goods and services. Either way, the overall economy benefits from creative destruction[0].

[0] https://en.m.wikipedia.org/wiki/Creative_destruction


I understand that you are being poetic, but just in case someone reads this as fact: you are describing a dedicated circuit, which is what telephones used. The internet works on packet switching, so there are numerous little breaks between sender and receiver as your data is routed along a "connection".


No, I'm talking about the physical layer of the OSI model, and including the mechanical connections between those physical interfaces. You're talking about the link layer.

Unless your backbone/computer has a wireless hop, a literal, uninterrupted, physical chain of physical electronic devices, physically connected to one another with wires/cables, goes from my keyboard to yours. This is literal, not poetic. I'm not saying a galvanic connection. I'm saying a physical connection where, if nothing was bolted down, and high tension cables were used, I could pull and you would feel it.


AT&T had wireless microwave towers for phones and TV, so I imagine there was a period near the end of its life when some dial-up connections weren't physically connected:

https://99percentinvisible.org/article/vintage-skynet-atts-a...


Working for a Midwest dialup isp in the early 2000s, we definitely served some of our smaller POPs with PTP wireless backbones, thanks in part to vast expanses of flat land with fairly tall structures dotted throughout.


Yes, and if the comment implied a purely electrical connection, that is likely not the case either, as there are electrical-to-optical (and vice versa) transitions throughout.


> Most waiters/waitresses are payed below minimum wage and make up the difference with tips.

It's slightly more complicated: hourly wage is the greater of ($3 + tips) or (standard minimum wage). So, the effective minimum wage is still the standard minimum wage.


> ...dns verification proves you temporarily control name resolution relative to a viewer.

> Both are trivially hacked, multiple ways.

I'm genuinely curious how it is trivial to "control [authoritative] name resolution relative to a viewer".


Find out what the CA uses for its DNS resolver. Attack it with cache poisoning, or BGP spoofing, or compromise the account controlling the target domain's nameserver records, or trick some other system into making a record you want.

The BGP attack requires knowledge of internet routing and the DNS attack requires knowledge of DNS server exploits, but either of them can be executed with very minimal network access that any consumer can get. Target the nameserver account admin with a phishing attack, account reset attack, lateral password bruteforce, etc.

You'd be surprised how incredibly stupid the admins of some of the largest computer networks are. It's really not hard to get access to some accounts. It should require more than just a username and password to hijack a domain, but usually it doesn't.

In any case, if all you want is a valid cert, you can do it a number of ways that nobody will notice. Again, this only has to work once, on any one of 130+ different organizations. Not all of them have stellar security.

And I'm not even talking about social engineering either the CA, Nameserver, or Registrar's support people, which I consider cheating because it's so much easier.


It's not so much that it's trivial (though it seems like it always is, because social engineering never stops working) as that once the attacker has authed, they can generally delete whatever extra file or record they made and stay authed, potentially hiding the attack.

Whereas, if that required a signature from a private key, with a counter or other log of use in the TPM, it'd be caught by an audit without having to notice the symptoms.

In security design I've been involved with, there's a lot more scrutiny given to each use of a privileged key than there is to making sure that all website logging lists each file in the directory at each request, or to logging the full public state of your DNS every minute. Requiring a signed request makes the attacker come in through the front door.

