Hacker News | tucif's comments

I fed it yesterday's and it did essentially a perfect run, even though it wasn't certain about 1 of the words in the 3rd guess.

Today it made one error, but only because it correctly identified the palindromes: there were five candidates and it picked the wrong one on its first guess. After that it swiftly found all the categories.

I also fed the same prompt for today's puzzle to o1-mini and it just outright sucked; it even repeated an incorrect guess immediately after being told it was wrong. There's definitely quite a leap between the mini and the full o1.


They made ChatGPT with voice available to all free users.

Really wonder why they went ahead with a launch today with 90% of staff ready to jump ship.


Trying to look like it's business as usual and project an appearance of stability. Only it didn't work.


Bloomberg's Austin Carr says "Any employees who do join Microsoft can’t simply replicate the work they were doing on OpenAI properties like GPT-5 without inviting a nightmare of claims over trade-secret theft."

But he doesn't really explain why. I wonder if there's any substance to it in light of Satya's comments about having the rights to "continue the innovation".

https://www.bloomberg.com/news/newsletters/2023-11-21/micros...


IANAL but regardless of whether they’re actually infringing, even the appearance of impropriety would give OpenAI cause to sue the employees directly. Most people simply aren’t equipped financially and emotionally to defend against a well-resourced corp, so it’d be a bloodbath of epic proportions. Even if Microsoft paid for everyone’s legal fees, if the lawsuits don’t get thrown out quickly the stress alone would grind the entire division to a halt.

If Microsoft does end up poaching the entire team, everyone will be walking on eggshells with legal breathing down their neck.


OpenAI won't sue anybody. They'll be lucky to still exist by Jan. 1st, and if they do it will be with substantially diminished stature unless they can reverse this whole shitshow (unlikely).


Why? The nonprofit will certainly continue existing in name, if nothing else.


Let's see. It all depends on how high the temperature goes from here. I give it 70/30 in favor of OpenAI still being operational on Jan. 1st 2024, with a lesser chance of them being defunct but still existing as a legal entity.

For every hour that the new CEO doesn't come out to start repairing their reputation, that chance goes up.


He doesn't cite any sources either. The contract details remain murky.


I do see this being useful in cases where you might want to keep something private from the server side and are not too concerned about server-side tampering.

Then you could use basic auth to protect access from the outside and a tool like this to protect access from the inside.
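As a rough sketch of that layering (assuming nginx as the reverse proxy; the paths and port here are hypothetical):

```nginx
# Outside layer: HTTP basic auth at the reverse proxy
location / {
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd
    proxy_pass           http://127.0.0.1:3000;  # the app itself; the
                                                 # inside layer (client-side
                                                 # encryption) happens there
}
```

Basic auth keeps outsiders away from the app entirely, while the client-side tool keeps the payload opaque to whoever runs the server.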


Is it really doomed because competition showed up? I'd be surprised if they didn't get quite a few paying users right away.

I get that competing with an open self-hosted alternative is a tough sell, but is this really different from other pay vs self-host scenarios?


Try them both and see for yourself: see how generating 1000 images via SD feels vs 1000 images on DALL-E 2.

I’d bet you’ll hit 1000 generations much faster on one than on the other.


Recently switched to macos for development and I'm really thankful to have found out about Shortcat and Raycast.

Combined with Tridactyl plugin on firefox, I can keep my hands on the keyboard for almost every task across the OS.


Yes, love Raycast. ctrl + opt + space and I can immediately jump to my Zoom and Google meetings.


A nice next step would be tailscale managing an ssh key that's allowed to interact with a git(hub) repository, so that I wouldn't have to create multiple keys or set up the same key on different machines and could still interact with a repo from all of them.

It'd be really nice just using git transparently and having tailscale take over the git ssh connection and authenticate using tailscale access controls.

At least for personal projects or small teams that'd be quite convenient.


Depending on which part of those things you find painful, you might want to look into ssh certificates? They're pretty easy to work with, much easier than most kinds of certificate systems.
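A minimal sketch of the ssh-certificate flow (all filenames, identities, and principals here are made up for illustration):

```shell
# One-time: create a certificate authority key pair
ssh-keygen -t ed25519 -f ca -N "" -C "my-ssh-ca"

# Per machine: create a key and have the CA sign it
ssh-keygen -t ed25519 -f id_ed25519 -N "" -C "alice@laptop"
ssh-keygen -s ca -I alice@laptop -n alice -V +52w id_ed25519.pub

# The signed certificate lands next to the key as id_ed25519-cert.pub;
# any server that trusts the CA (TrustedUserCAKeys in sshd_config)
# accepts it without per-key setup
ls id_ed25519-cert.pub
```

Servers only need the CA's public key configured, so you never copy individual public keys around; the catch for the git use case is that, as far as I know, GitHub itself only honors user certificates on enterprise plans.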


> co-pilot can write 90% of the code without me, just translating my explanation into python.

I fear copilot may encourage this type of pseudo-code comment. The most valuable thing the AI doesn't know is WHY the code should do what it does.

Months later, we'll get to debug code that "nobody" wrote and find no hints of why it should behave that way, only comments stating what the code also says.

Seems we're replacing programming with reverse-engineering generated code.


I can understand where you're coming from, but if a developer commits Copilot code without understanding it, that's not really Copilot's fault.

That dev could have done the exact same thing with Stack Overflow snippets and created the same situation.

Sure, it's easier to make mistakes when Copilot suggestions are so readily available, but it's just a tool that needs to be wielded properly like any other.

It feels like an evolution of your typical IDE niceties that modify characters as you type.

I still remember when people were worried autocomplete would lead to code mistakes and variable mix-ups.

Now, the one argument against this is if we become shielded from the full inputs and outputs of the tool.

It would work badly, but you could have a "copilot(code_fragment, args, ...)" that generates and executes a snippet blindly, hoping it's correct. That's when it stops being a hammer and starts being a boss looking over your shoulder and telling you what to do.
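As a toy illustration of that "blind" call (the model call is stubbed out; "fake_completion" and the returned snippet are invented for the sketch):

```python
def fake_completion(prompt: str) -> str:
    """Stand-in for a real code-generation API call."""
    # Pretend this is the snippet the model returned for the prompt.
    return "result = sum(args)"

def copilot(prompt: str, *args):
    """Generate a snippet and run it without anyone reading it."""
    code = fake_completion(prompt)
    scope = {"args": args}
    exec(code, scope)  # nothing verifies what the model produced
    return scope["result"]

print(copilot("add these numbers", 1, 2, 3))  # prints 6
```

The point stands: once the snippet is executed sight unseen, the tool has effectively become the author.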

Fortunately, I think we have a while before AI can reliably spit out useful AST programs. But it could happen eventually.


hm, that is a conundrum to debug code nobody wrote

on the other hand, if an improved AI comes out in a couple of years, we can feed it the same pseudo-code and enjoy an improved output.

I would rather have a docstring explaining what the code should be doing

I've had co-pilot write its own comments too, my favorite one was, "this is a kind of a hack but it works", very professional indeed!


Really nice list; The Rudiments of Wisdom looks great for what I’m after, thanks.


It was the end page of the 'Observer' colour magazine (Sunday paper in the UK) when I was a child; it made perfect 'bog' reading. I love the book!


Data visualization books seem perfect for this purpose, thanks!

