Can someone in the know give a little summary of what we’re looking at here? What’s the purpose? How effective is the code/system at accomplishing its purpose? Etc…
Automated Mathematician is a historically significant step in the evolution of classic AI based on evaluating symbols and rules. This branch of AI seems to have hit a dead end although one can never be certain of such things.
Obviously stuff like LLMs produces much more impressive results as of now, that's a given. OTOH who knows - neural networks have also had a long-ish period when OCR seemed to be the pinnacle of what they can deliver before they exploded via Deep Learning/Transformers/LLMs and what not.
Indeed, AFAIK neural networks have caused at least two AI winters before finally breaking through thanks to a few good new ideas and the fact that the needs of computer games incidentally led to the development of a big industry of specialized, programmable, high-performance dot product calculators.
Speaking of winters: there's a good article about Cyc, a successor to Automated Mathematician. Cyc was the last big project in symbolic AI: https://yuxi.ml/cyc
Eurisko demonstrated superhuman ability at strategy games in the early 1980s, and even reused strategies from the VLSI place-and-route task when planning fleet placement in games. That is knowledge transfer between tasks.
I wonder what the PGP signing concept does to thwart people who want to profit and don't care about the public good. It seems like anyone who attends a signing party can sell their key to the highest bidder, leading to bots and spammers all over again.
In the flat trust model we currently use most places, it's on each person to block each spammer, bot, etc. The cost of creating a new bot account is low so it's cheap to make them come back.
On a web of trust, if you have a negative interaction with a bot, you revoke trust in one of the humans in the chain of trust that caused you to come in contact with that bot. You've now effectively blocked all bots they've ever made or ever will make... At least until they recycle their identity and come to another key signing party.
Once you have the web in place, though — a series of "this key belongs to a human" attestations — you can layer metadata on top of it, like "this human is a skilled biologist" or "this human is a security expert". If you use those attestations to determine what content you're exposed to, then a malicious human can't merely show up at a key signing party to bootstrap a new identity; they also have to rebuild their reputation to the point where you, or somebody you trust, becomes interested in their content again.
Nothing can be done to prevent bad people from burning their identities for profit, but we can collectively make it not economical to do so by practicing some trust hygiene.
Key signing establishes a graph upon which more effective trust management becomes possible. On its own, it's likely insufficient.
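The revocation idea above can be sketched in a few lines. This is a toy model (all names and the single-introducer simplification are mine, not how PGP actually stores signatures): each key records who introduced it, and revoking one signer transitively distrusts every identity they ever vouched for.

```python
# Toy web-of-trust sketch: revoking one signer poisons their whole subtree.
class WebOfTrust:
    def __init__(self):
        self.signed_by = {}   # key -> the key that introduced (signed) it
        self.revoked = set()  # keys we've explicitly stopped trusting

    def sign(self, signer, new_key):
        # Simplification: each key has exactly one introducer.
        self.signed_by[new_key] = signer

    def revoke(self, key):
        self.revoked.add(key)

    def is_trusted(self, key):
        # Walk the chain of introductions; any revoked ancestor poisons it.
        while key is not None:
            if key in self.revoked:
                return False
            key = self.signed_by.get(key)  # None once we reach a root key
        return True

wot = WebOfTrust()
wot.sign("alice", "mallory")   # Alice vouched for Mallory at a signing party
wot.sign("mallory", "bot-1")   # Mallory mass-signs bot keys
wot.sign("mallory", "bot-2")
wot.revoke("mallory")          # one revocation after a bad interaction
print(wot.is_trusted("bot-1")) # False: every key Mallory signed is distrusted
print(wot.is_trusted("alice")) # True: Alice herself is unaffected
```

Real PGP keys can carry many signatures and trust is scored, not binary, but the economics the parent describes fall out even of this toy version: one revocation blocks an unbounded number of bot keys.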
It takes a lot to deliver value at velocity with a team of engineers that couldn't give a damn about the product and just want to get a paycheck, move up the ladder, etc.
LinkedIn is not a fun problem.
The UI, the design, the dark patterns - all of it sucks.
It's a job. Nobody particularly wants to be there. There's nothing sacred about the product. Engineers don't worship it.
It isn't a place you'd take a pay cut for the opportunity to work there.
Eh … the argument will likely be that things created by Thing at the behest of Author are owned by the Author. It'll take a few cases going through the courts, or an Act of Congress, to solidify this stuff.
Just like we settled on photographers having copyright on the works created by their camera. The same arguments seem to apply.
The US Copyright Office has published a piece that argues otherwise, but a) unless they pass regulation their opinion doesn't really matter, and b) there is way too much money resting on the assumption code can be copyrighted despite AI involvement.
It's not settled. The monkey selfie copyright dispute ruled that a monkey that pressed the button to take a selfie does not and cannot own the copyright to that photo, and neither does the photographer whose camera it was. How that extends to AI-generated code is for the courts to decide, but there are some parallels to that case.
But with the monkey there are two levels of separation from the artist: the human makes the creative decision to hand the camera to a monkey, who presses the trigger, and the camera makes the picture. Compared to the single layer of separation of a photographer choosing framing and camera parameters, pressing the trigger and the camera taking the picture. Or the zero levels of separation when the artist paints the picture.
A programmer writing code would be like the painter, and a programmer writing a prompt for Claude looks a lot like the photographer. The prompt is the creative work that makes it copyrightable, just like the artistic choices of the photographer make the photo copyrightable.
You could argue that the prompt is more like a technical description than a creative work. But then the same should probably be true of the code itself, and consequently copyright should not apply to code at all.
The copyright office's argument is that the AI is more like a freelancer than a machine like a camera. You might equate that freelancer to the monkey, who is also a bit freelancer-like. But I have my doubts that holds up in court; monkeys are a lot more sentient than AIs.
There is case law establishing that commissioning a work from another entity doesn't give you co-authorship; the entity doing the work and making the creative decisions is the one that gets copyright.
In order for you to have co-authorship of the commissioned work, you have to be involved at pretty much instruction-level detail alongside the real author. The opinion cites many cases showing that's not how LLM prompts work.
The monkey selfie case is also relevant because it solidifies that non-persons cannot claim copyright. That means the LLM cannot claim copyright, and therefore has no copyright to pass on to the LLM operator.
The law is whatever it needs to be to satisfy monied interests, with the degree of acceptable adaptation being a function of the unity of those interests and the political ascendancy of those in favor.
Overwhelmingly this is in favor of treating ai as a tool like Photoshop.
Even those against AI disagree among themselves on various matters, and will overwhelmingly want a cut rather than a different interpretation.
But, as someone who’s agile and adaptable, I can do any job. That doesn’t mean I can do them all simultaneously. It doesn’t mean I can be the full-time loan officer and the full-time app developer.
Can I do your job? Yep. Can I also, at the same time, be the engineer that optimizes the IT systems? No - one of these jobs will suffer.
Give me the chance to understand your job, and I’ll replace as much of it as possible with code to do the same thing. But what it won’t do is have good judgement. It will make decisions on actual data - accurate data, erroneous data, it doesn’t care.
I think this is an interesting place to put “AI” - can it take input in the form of data and historical decisions, and come to a new decision from recent data? The same decision a human would?
Aside: it has become an interesting personal experiment to stop being obviously ironic and see how people read what I’ve written. The voting is telling.
I didn't "vote" but there aren't many clues in your comment suggesting it's satire. It is a mainstream presentation of a mainstream point of view, how would anybody distinguish between you believing it earnestly and you mocking it?
I see — so they could offer macOS for the iPhone Pro and iPad Pro in coming years, with a subscription? Or via a paid upgrade? I mean, it's more possible now than ever.
I almost added a line about my friends from the South exaggerating their "é" because they are afraid of sounding like Parisians. In reality, who cares? It's just that statements like "it does not matter" are really unhelpful to people who are not native speakers.