Hacker News | noam_k's comments

I'm not sure what exactly you're referring to, but one avenue to implement AI is genetic programming, where programs are manipulated to reach a goal.

Lisp languages are great for these manipulations, since the AST being manipulated is the same data structure (a list) as everything else. In other words, genetic programming can lean into Lisp's "code is data" paradigm.
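To make "code is data" concrete, here's a toy sketch (mine, in Python, with nested lists standing in for Lisp's): the candidate program is just a list, so mutation is plain list surgery, and a crude hill climb can evolve it toward a target function.

```python
import random

# Toy sketch (mine): a Lisp-style expression tree as nested lists,
# the same structure genetic programming mutates directly.
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(expr):
    """Return a copy with each operator randomly swapped ~30% of the time."""
    if not isinstance(expr, list):
        return expr
    op, left, right = expr
    if random.random() < 0.3:
        op = random.choice(list(OPS))
    return [op, mutate(left), mutate(right)]

def error(expr):
    # Squared error against the target f(x) = x*x + x on sample points.
    return sum((evaluate(expr, x) - (x * x + x)) ** 2 for x in range(-3, 4))

start = ['-', 'x', ['*', 'x', 'x']]   # x - x**2, deliberately wrong
best = start
for _ in range(300):                  # crude hill climb over mutated trees
    cand = mutate(best)
    if error(cand) <= error(best):
        best = cand
print(best, error(best))  # usually lands on ['+', 'x', ['*', 'x', 'x']] with error 0
```

A real GP system would add crossover and subtree replacement, but the point stands: in a Lisp the evaluator and the mutator work on the exact same list structure the language itself is written in.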

As others mentioned, today everything is based on neural networks, so people aren't learning these other techniques.


I'm referring to the fundamental idea in AI of knowledge representation. Lisp is ideal for chapters 1 through 4 of AIMA, and TensorFlow has shown that neural networks can be handled well with a domain-specific language, which Lisp is known to be great for.

In fact, the first edition of AIMA even had a NN and Perceptron implementation in Common Lisp. (https://github.com/aimacode/aima-lisp/blob/master/learning/a...)


That would be cool.

I read somewhere that a black hole with the mass of the Moon would absorb about as much cosmic background radiation as it emits in Hawking radiation. That's a fine line between "the black hole evaporates before we can examine it" and "oops, we got eaten by a black hole".
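For what it's worth, the numbers roughly check out. Hawking temperature is T = hbar * c^3 / (8 * pi * G * M * k_B); a quick check (standard constants, my arithmetic) puts a moon-mass hole a bit below the CMB temperature, with break-even around 0.6 lunar masses:

```python
import math

# Back-of-envelope check (standard constants, my arithmetic):
# Hawking temperature T = hbar * c**3 / (8 * pi * G * M * k_B)
hbar, c, G, kB = 1.0545718e-34, 2.998e8, 6.674e-11, 1.380649e-23
M_moon = 7.342e22   # kg
T_cmb = 2.725       # K, today's cosmic microwave background

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * kB)

T = hawking_temperature(M_moon)
print(T)  # ~1.7 K: slightly colder than the CMB, so it absorbs a bit more than it emits

# Mass at which Hawking emission exactly balances CMB absorption:
M_even = hbar * c**3 / (8 * math.pi * G * kB * T_cmb)
print(M_even / M_moon)  # ~0.61 lunar masses
```

So a moon-mass hole today would actually grow, very slowly; the exact break-even point also shifts as the CMB keeps cooling.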


If it's in a stable orbit in the solar system, it wouldn't be able to "eat" us. Black holes gravitate exactly the same as any other mass, so it would have the same gravitational effect on Earth as any object of the same mass.

What makes black holes special is that you can get much closer to their center of mass than you can with normal objects. When you're that close - inside the radius that a normal-density object of that mass would have - you experience gravity at a much higher strength than normal.

Put another way, even if our Moon were a black hole of the same mass, very little would change except that it would no longer reflect sunlight. Ocean tides on Earth would remain the same. You wouldn't want to try to land on it, though...


There was a movie (Moonfall) where the Moon was a high-tech 'megastructure' with a white dwarf inside. I wonder if it would be theoretically possible to set up such a mini Dyson sphere around a mini black hole.


A black hole, or neutron star, would make much more sense in that scenario than a white dwarf.

A white dwarf smaller than the moon seems unlikely, if not impossible. If it were that small, unless it was in the (fast) process of collapsing to a neutron star, it wouldn't have enough mass to remain that compact.

A neutron star or black hole would work fine, because both can easily have radii much smaller than the Moon's.

Here's an article about that - https://www.fandom.com/articles/moonfall-real-life-astrophys... :

> “There are just so many things wrong with [the idea of a white dwarf inside the moon],” says Romer. “Now, a white dwarf is a very compact object. But, you know — people have heard of neutron stars — neutron stars are ultra-compact objects, they’re a few tens of kilometres across. White dwarfs are actually about the size of a normal star.”

You can come up with scenarios where white dwarfs are much smaller than a star, but smaller than the Moon is iffy at best.

As for the Dyson sphere idea, the biggest problem with it in this scaled-down scenario is stability. You can't exactly support it with struts or something.

On that subject, I highly recommend the video "dyson spheres are a joke": https://www.youtube.com/watch?v=fLzEX1TPBFM , by astrophysicist Angela Collier. But you need to either watch all 53 minutes, or skip to near the end, to find out just how literal the title is.


If you set it up at the right radius, it would have 1g gravity at the surface, like a little mini-world. It wouldn't be able to hold an atmosphere though, so it would need pressurized buildings on it.
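A quick check of that radius (my numbers, assuming a moon-mass hole): the shell's surface gravity is g = G * M / r^2, so the 1g radius is r = sqrt(G * M / g).

```python
import math

# Quick check (my numbers): assume a moon-mass hole, M = 7.342e22 kg.
# Shell surface gravity is g = G * M / r**2, so the 1 g radius is
# r = sqrt(G * M / g).
G, M, g = 6.674e-11, 7.342e22, 9.81

r = math.sqrt(G * M / g)
print(r / 1000)  # ~707 km, well inside the real Moon's 1737 km radius
```

So the 1g shell would be a sphere under half the Moon's current radius, which is why the real Moon's surface gravity is only about 0.17g.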


Somebody write a sci-fi with this please, just make sure to describe how trash disposal works.


If you're interested in black holes and trash disposal, check out the 1978 short story, "The Nothing Spot": https://vintage.failed-dam.org/nothing.htm


Hey, it's not like it's an analog of "Yeah, let's just throw some more mass at the newly forming black hole in our neighbourhood," said every human who has ever thrown things into a fire, forever...


Black holes aren't cosmic vacuum cleaners. They're just super super super compact objects.

I've actually posted this a few times:

If you suddenly transformed the Moon into a black hole of the same mass, it would continue to orbit the Earth in the same spot. It wouldn't suck up the Earth or anything. The ocean tides would continue as normal under the influence of the black-hole-moon's gravity, which would be the same as long as it orbited at the same distance. You wouldn't see a moon in the sky, but if you pointed a good telescope at where it was, you'd see gravitational lensing. The hole itself would be a speck about a fifth of a millimeter across, far smaller than a BB.
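The size claim is easy to check - the Schwarzschild radius is r_s = 2 * G * M / c^2 (standard constants, my arithmetic):

```python
# Sanity check of the size (standard constants, moon mass):
# Schwarzschild radius r_s = 2 * G * M / c**2
G, c, M_moon = 6.674e-11, 2.998e8, 7.342e22

r_s = 2 * G * M_moon / c**2
print(r_s * 1000)  # ~0.11 mm radius, i.e. a speck about 0.2 mm across
```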


Sorry, but I have to link the "Hole Lotta Trouble" episode of Pocoyo https://www.youtube.com/watch?v=HL_0OL7vZ44


Yes, you really do.


I'm surprised the article doesn't mention OpenASIP [0], which not only helps you define the architecture, but also provides RTL synthesis and a working (if not always useful) compiler.

[0] http://openasip.org/


Can anyone explain why the ID of the div is modified?


Because the -hidden variant is styled "display: none".


I think the issue here is that the server would have to store a copy of the register per peer, since it can't calculate which one is the most recent. Using FHE allows the server to hold a single copy.

In other words, the server could forward without storing if all parties were always online (at the same time).


The server stores the encrypted blob and its hash/etag.

Before uploading, the client checks the hash/etag of the blob it originally fetched. If the blob on the server has a different one, the client downloads it, decrypts it, patches the new data onto the existing data, encrypts, and re-uploads.

What's the catch?

AES is hardware-accelerated on most devices, so even with all those extra ops it will be significantly faster than any homomorphic encryption available today.
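The compare-and-swap flow being described can be sketched like this (all names are mine, and the XOR "cipher" is just a placeholder for AES - never use it for real encryption):

```python
import hashlib

# Sketch of the etag compare-and-swap sync flow. The XOR "cipher" below
# is a stand-in for real AES; it only illustrates where encryption sits.
KEY = b"shared-client-secret"

def encrypt(plain: bytes) -> bytes:
    return bytes(ch ^ KEY[i % len(KEY)] for i, ch in enumerate(plain))

decrypt = encrypt  # XOR is its own inverse

def etag(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# The "server": stores one opaque blob plus its etag, nothing else.
server = {"blob": encrypt(b"hello")}
server["etag"] = etag(server["blob"])

def client_upload(local_plain: bytes, last_seen_etag: str):
    if server["etag"] != last_seen_etag:
        # Someone else wrote first: fetch, decrypt, merge, re-encrypt.
        remote_plain = decrypt(server["blob"])
        local_plain = remote_plain + b"|" + local_plain   # toy merge
    server["blob"] = encrypt(local_plain)
    server["etag"] = etag(server["blob"])

client_upload(b"world", "stale-etag")   # conflicts, so it merges first
print(decrypt(server["blob"]))          # b'hello|world'
```

Note that the server only ever sees ciphertext and hashes; the cost is that a conflicting client must download and merge the whole blob, which is the round trip FHE is trying to avoid.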


I was wondering the same thing. FHE is cool tech, but this seems like a bad application of it, since it will undoubtedly be less efficient.

FHE is useful when computing on data from various sources that all mutually want to keep some information secret - for example, Apple's use of FHE to categorize photos [1]. In this case, all the server is really doing is "compressing", for lack of a better word, the change sets, so that each offline client doesn't need to sync every message, since they are already merged by the server.

If all you want is to keep a synchronizing server in the dark, but all clients can be trusted with the unencrypted data, traditional key exchange and symmetric encryption should suffice.

[1]: https://machinelearning.apple.com/research/homomorphic-encry...


So it's "just" a storage optimization?


This sounds like the "only following orders" argument.

_If_ developers _collectively_ were to quit jobs that don't line up with their morals and ethics, we _might_ see a change. I'm not saying this is an easy decision to make, and I definitely don't want to judge someone who decides to take a higher paying job, but there's potential here to shift the direction AI is taking.


Have you seen the job market recently?

I mean, I do agree with the "only following orders" part. But I guess we humans are nuanced. We aren't purely logical beings, but emotional ones.

You're saying it's not easy to leave a job? That's quite the understatement.

Imagine a family where the father has to worry that food might not be on his young daughter's table because he was "logical". I wouldn't want to be logical in a scenario where my loved ones suffer for some greater good that might never come to fruition anyway. (Stopping AI, in my pessimistic view, won't happen regardless.)


Perhaps the union could take a stand.


You may want to look at Lua[0]. It's often used as an embedded scripting language in larger projects (and games), has good performance, is memory safe, and is extensible in the same manner as Python (write your performance bottleneck in C/C++).

I don't remember all the specifics, but there are some odd footguns to look out for (e.g. arrays are 1-indexed, and variables are global by default unless declared local).

[0] https://www.lua.org/


I happen to be taking a Team Lead course, and forming habits came up yesterday. 21 days weren't mentioned explicitly; the time frame was "a few weeks". We were given 6 criteria for forming a habit:

1. Tangible - you need to pick a tangible, observable action. If you're trying to fix part of your behavior, you can't pick "I'll pay more attention" as the habit; instead you should write a note or say some phrase.

2. Up to me - don't form a habit that requires outside factors. If you want to start jogging, don't ask your neighbor to jog with you. Each time he's not available, you'll have an excuse not to jog.

3. Swallow the frog - don't push it off. This isn't a well-defined criterion; the idea is to minimize excuses (like #2).

4. Daily - a habit needs to be formed by taking action every day.

5. Trigger - your action needs a trigger. This can be an internal (feeling hungry), external (a timer on your phone), or contextual (every morning, every time you walk into a conference room).

6. New - it's very hard to form a habit if you've already tried and failed. Pick an action that you haven't already tried.

There was also an important note that changing behavior often requires multiple steps. The instructor gave the example of using dental floss. It's hard to go from nothing to flossing every day, so break it into:

1. Every time you go into the bathroom in the evening, pick up the dental floss, and put it down.

2. After picking up the floss becomes a habit, cut a piece of floss, and throw it out.

3. After cutting the floss becomes a habit, floss a few teeth.

And so on.


Is this a real issue? SCTP runs over IP, so unless you're talking about firewalls and such, the support should be there.

Edit: a quick search showed that NAT traversal is an issue (of course!)


Yes, this is called protocol ossification [1], or ossification for short. Other transport-layer protocol rollouts, such as MPTCP's, have been stymied by ossification. QUIC specifically went with UDP to prevent ossification, yet if you hang out in networking forums you'll still find netops who want to block QUIC if they can.

[1]: https://en.m.wikipedia.org/wiki/Protocol_ossification


Because from an enterprise security perspective, it breaks a lot of tools. You can’t decrypt, IDS/IPS signatures don’t work, and you lose visibility to what is going on in your network.


Yes I know why netops want to block QUIC but that just shows the tension between the folks who want to build new functionality and the folks who are in charge of enterprise security. I get it, I've held SRE-like roles in the past myself. When you're in charge of security and maintenance, you have no positive incentive to allow innovation. New functionality gives you nothing. You never get called into a meeting and congratulated for new functionality you help unlock. You only get called in if something goes wrong, and so you have every incentive to monitor, lock down, and steer traffic as best as you can so things don't go wrong on your watch.

IMO it's a structural problem that blocks a lot of innovation. The same thing happens when a popular open source project that's author led switches to an external maintainer. When the incentives to block innovation are stronger than the incentives to allow it, you get ossification.


Possibly SRE shouldn't even exist - not only because of the structural issues you mention, but...

If your approach to security is that only square tiles are allowed because your security framework is a square grid, and points just break your security model, then maybe it was never a valid thing to model in the first place.

I'm not saying security shouldn't exist, but - to use an analogy - the approach should be entirely different: we rely on security guards more than fences, not because fences don't provide some security, but because an agent can make the proper decision. A lot of these enterprise models are more akin to fences with a doorman than to a professional with a piece and training...


Agreed. I also think rotations, where engineers and ops/security swap off from time-to-time and are actually rated on their output in both roles would be useful to break down the adversarial nature of this relationship.


Wrapping everything in UDP breaks the same tools but it's more obnoxious for everyone involved.


> Other transport layer protocol rollouts have been stymied by ossification such as MPTCP

AFAIU, Apple has flexed their muscle to improve MPTCP support on networks. I've never seen numbers, though, regarding success and usage rates. Google has published a lot of data for QUIC. It would be nice to be able to compare QUIC and MPTCP. (Maybe the data is out there?) I wouldn't presume MPTCP is less well supported by networks than QUIC. For one thing, it mostly looks like vanilla TCP to routers, including wrt NAT. And while I'd assume SCTP is definitely more problematic, it might not be as bad as we think, at least relative to QUIC and MPTCP.

I suspect the real thing holding back MPTCP is kernel support. QUIC is, for now, handled purely in user land, whereas MPTCP requires kernel support if you don't want to break application process security models (i.e. grant raw socket access). Mature MPTCP support in the Linux kernel has only been around for a few years, and I don't know if Windows even supports it, yet.


It would be nice to generate some sort of report card here. Maybe I should try.


Every home user is behind a NAT. While you can send any protocol between datacenter servers, IPv4 home users are stuck with TCP or UDP.


Hole punching is perhaps why UDP is the de facto choice?
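Roughly, yes - hole punching relies on both peers sending an outbound UDP packet first, so each NAT then admits the matching inbound traffic. A localhost sketch of that shape (no real NAT involved; in practice a rendezvous server first tells each peer the other's public address and port):

```python
import socket

# Localhost sketch of the hole-punching shape. There is no NAT on
# loopback; this only illustrates the "both sides send first" handshake.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(2)
b.settimeout(2)

# Both peers send first; a NAT that saw the outbound packet would then
# allow the matching inbound one through.
a.sendto(b"punch from a", b.getsockname())
b.sendto(b"punch from b", a.getsockname())

got_a = a.recvfrom(1024)[0]
got_b = b.recvfrom(1024)[0]
print(got_a, got_b)
```

The same trick is much harder with TCP because the connection handshake itself has state the NAT tracks, which is one reason NAT traversal stacks (STUN/ICE, and QUIC itself) favor UDP.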


But then you can't tell the difference between 0.12 and 0.00012.

Unless you're suggesting using the strings "0" and "00012", at which point you could just use a byte string with the UTF-8 encoding of the value.

