I fed it yesterday's and it did essentially a perfect run, even though it wasn't certain about 1 of the words in the 3rd guess.
Today it made 1 error, but only because it correctly identified the palindromes category; there were 5 candidate palindromes, it picked the wrong one, and that was its first guess. After that it swiftly found all the categories.
I also fed today's same prompt to o1-mini and it just outright sucked; it even repeated an incorrect guess immediately after being told it was wrong.
There's definitely quite a leap between the mini and the full o1.
Bloomberg's Austin Carr says "Any employees who do join Microsoft can’t simply replicate the work they were doing on OpenAI properties like GPT-5 without inviting a nightmare of claims over trade-secret theft."
But he doesn't really explain why. I wonder if there's any substance to it in light of Satya's comments about having the rights to "continue the innovation".
IANAL, but regardless of whether they're actually infringing, even the appearance of impropriety would give OpenAI cause to sue the employees directly. Most people simply aren't equipped, financially or emotionally, to defend against a well-resourced corp, so it'd be a bloodbath of epic proportions. Even if Microsoft paid for everyone's legal fees, if the lawsuits don't get thrown out quickly the stress alone would grind the entire division to a halt.
If Microsoft does end up poaching the entire team, everyone will be walking on eggshells with legal breathing down their neck.
OpenAI won't sue anybody. They'll be lucky to still exist by Jan. 1st. And if they do, it will be with substantially diminished stature unless they can reverse this whole shitshow (unlikely).
Let's see. It all depends on how high the temperature goes from here. I give it 70/30 in favor of OpenAI still operating on Jan. 1st 2024, with a lesser chance of them being defunct but surviving as a legal entity.
For every hour that the new CEO doesn't come out to start repairing their reputation that chance goes up.
I do see this useful in cases where you might want to keep something private from the server-side and are not too concerned about server-side tampering.
Then you could use basic auth to protect access from the outside and a tool like this to protect access from the inside.
A nice next step would be tailscale managing an SSH key that's allowed to interact with a git(hub) repository.
That way I wouldn't have to create multiple keys or set up the same key on different machines, and I'd still be able to interact with a repo from all of them.
It'd be really nice to just use git transparently and have tailscale take over the git SSH connection and authenticate using tailscale access controls.
At least for personal projects or small teams that'd be quite convenient.
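For reference, the per-machine setup the comment hopes tailscale could replace is usually a stanza like this in `~/.ssh/config` (key path hypothetical):

```
# ~/.ssh/config
Host github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_github
    IdentitiesOnly yes
```

Multiply this (and the key upload step on GitHub) by every machine, and the appeal of a single identity layer handling it is clear.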
Depending on which part of those things you find painful, you might want to look into ssh certificates? They're pretty easy to work with, much easier than most kinds of certificate systems.
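To illustrate the suggestion above, here's a minimal sketch of SSH user certificates with `ssh-keygen`. File names and the principal name ("alice") are made up for the example; the flags are standard OpenSSH:

```shell
# 1. Create a CA keypair (done once, stored somewhere safe).
ssh-keygen -t ed25519 -f ca_key -N "" -C "example CA"

# 2. Create a normal user keypair on the client machine.
ssh-keygen -t ed25519 -f id_ed25519 -N "" -C "alice key"

# 3. Sign the user's public key with the CA, producing id_ed25519-cert.pub.
#    -I is the certificate identity, -n the allowed principal(s),
#    -V limits validity to 52 weeks.
ssh-keygen -s ca_key -I alice-laptop -n alice -V +52w id_ed25519.pub

# Servers then trust the CA instead of individual keys, via sshd_config:
#   TrustedUserCAKeys /etc/ssh/ca_key.pub
```

The win is that adding a new machine means signing one key, not touching every server's `authorized_keys` (though GitHub itself doesn't accept user certificates on personal accounts, which limits this for the git use case above).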
> co-pilot can write 90% of the code without me, just translating my explanation into python.
I fear Copilot may encourage this type of pseudo-code comment.
The most valuable thing the AI doesn't know is WHY the code should do what it does.
Months later, we'll get to debug code that "nobody" wrote and find no hints of why it should behave that way, only comments that restate what the code already says.
Seems we're replacing programming with reverse-engineering generated code.
I can understand where you're coming from, but if a developer commits Copilot code without understanding it, that's not really Copilot's fault.
That dev could have done the exact same thing with Stack Overflow snippets and created the same situation.
Sure, it's easier to make mistakes when Copilot suggestions are so readily available, but it's just a tool that needs to be wielded properly, like any other.
It feels like an evolution of your typical IDE niceties that modify characters as you type.
I still remember when people were worried autocomplete would lead to code mistakes and variable mix-ups.
Now, the one argument against this is if we become shielded from the full inputs and outputs of the tool.
It would work badly, but you could have a "copilot(code_fragment, args, ...)" call that generates and executes a snippet blindly, hoping it's correct. That's when it stops being a hammer and starts being a boss looking over your shoulder, telling you what to do.
Fortunately, I think we have a while before AI can reliably spit out useful AST programs. But it could happen eventually.
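To make the "blind execution" pattern above concrete, here's a toy Python sketch. `generate_snippet` is a stand-in for a code-generating model call, not a real API; the point is that the snippet runs with no review step between generation and execution:

```python
def generate_snippet(description: str) -> str:
    # Stand-in for a model call: pretend it returned this code
    # for the request "add two numbers".
    return "result = a + b"


def copilot_exec(description: str, **args):
    """Generate a snippet and execute it blindly, hoping it's correct."""
    snippet = generate_snippet(description)
    scope = dict(args)
    # No human review between generation and execution: this is the
    # step that turns the tool from a hammer into a boss.
    exec(snippet, {}, scope)
    return scope.get("result")


print(copilot_exec("add two numbers", a=2, b=3))  # prints 5
```

With a real model behind `generate_snippet`, nothing here would catch a subtly wrong snippet, which is exactly the concern raised above.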