About the inverse feature: it's not clear to me how code using it evaluates, or how much faster it is. But I find it confusing when reading the sub_min[1] function, and it reminds me how much I value readability. I'm not convinced that feature should be exposed just because interaction nets allow you to flow time backwards. Better free parallelism is already a great improvement!
There are many. One major difference is that HVM/Bend focus entirely on functional programming, whereas Vine supports both imperative and functional patterns, in a cohesive manner.
Well, there have been several iterations of HVM, each extremely different from the last.
IVM is architecturally similar to HVM-64 (which I was the lead developer of). The biggest difference is how they handle IO. IVM uses its extrinsic system, where all side effects are mediated through an IO handle, which provides a number of useful properties and is much simpler to implement and use. Interactions with side effects are small and low-cost, and can happen in parallel with the rest of the program.
HVM-64 had built-in net definitions that had side-effects when expanded, which was very messy to use in practice. HVM2 has a monadic IO interface, which requires stopping the whole program on every single IO call. (And also requires writing things monadically.)
Using extrinsics for IO handles in IVM creates a very nice API for IO in Vine; side-effect-ful functions simply take a mutable reference to the IO handle. It's also very easy to support multiple 'threads' of parallel IO effects – simply duplicate the IO handles.
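To make the pattern concrete, here's a rough Rust analogy (the struct, field, and function names are all hypothetical; Vine's actual syntax and IVM's extrinsic mechanism differ): side-effect-ful functions thread a mutable IO handle, and duplicating the handle gives you independent streams of effects.

```rust
use std::io::{self, Write};
use std::thread;

// Hypothetical IO handle: all side effects go through it.
// (In IVM the handle is an extrinsic; here it's just a plain struct.)
#[derive(Clone)]
struct IoHandle {
    writes: usize, // count of effects performed through this handle
}

impl IoHandle {
    fn new() -> Self {
        IoHandle { writes: 0 }
    }

    fn println(&mut self, s: &str) {
        // In IVM this would be a small, low-cost extrinsic interaction;
        // here it's a direct write to stdout.
        self.writes += 1;
        let mut out = io::stdout().lock();
        writeln!(out, "{}", s).unwrap();
    }
}

// Side-effect-ful functions simply take a mutable reference to the handle.
fn greet(io: &mut IoHandle, name: &str) {
    io.println(&format!("hello, {}", name));
}

fn main() {
    let mut io = IoHandle::new();
    greet(&mut io, "world");

    // "Duplicating" the handle yields a second, independent thread of
    // IO effects that can run in parallel with the first.
    let mut io2 = io.clone();
    let t = thread::spawn(move || greet(&mut io2, "parallel"));
    greet(&mut io, "main");
    t.join().unwrap();
}
```

The point of the sketch is the shape of the API, not the mechanics: effects are ordinary function calls on a handle you own, and parallelism falls out of duplicating that handle rather than out of a global monadic sequencing of the whole program.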
I've been looking at that particular rabbit hole since a professor of mine mentioned it in 2003 or something. Once or twice a year, I'll read about some theorem or something and think it can be applied to Collatz somehow and dive back in.
I've actually proved it several times... except for some detail I glossed over because it didn't seem important, but that tanks the proof.
Someday I'll have to publish my "book of lemmas that don't prove the Collatz conjecture."
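For anyone who hasn't fallen into the rabbit hole yet: the conjecture says that iterating "halve if even, 3n+1 if odd" from any positive integer eventually reaches 1. A minimal sketch:

```rust
// Collatz step: n/2 if n is even, 3n+1 if n is odd.
// The conjecture: every starting value n >= 1 eventually reaches 1.
fn collatz_steps(mut n: u64) -> u64 {
    let mut steps = 0;
    while n != 1 {
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
        steps += 1;
    }
    steps
}

fn main() {
    // 27 is the classic small-but-stubborn example:
    // it climbs as high as 9232 before collapsing to 1.
    println!("{}", collatz_steps(27)); // 111 steps
}
```

Deceptively easy to state, which is exactly why it keeps pulling people back in.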
"Example GPTs are available today for ChatGPT Plus and Enterprise users to try out including Canva and Zapier AI Actions." and yet as a paying ChatGPT Plus customer, neither the Canva nor the Zapier AI Actions link work for me, I get a "GPT inaccessible or not found" error for Canva or Zapier.
> Example GPTs are available today for ChatGPT Plus
or
> Starting today, no more hopping between models; everything you need is in one place.
Neither of which is true. I'm a paying user and I have access to neither. They do this _all the time_. They announce something "available immediately" and it trickles out a week or more later. If they want to do gradual rollouts (which is smart), then they should say as much.
I (Plus subscriber, EU) tried https://chat.openai.com/gpts/editor (as linked from https://help.openai.com/en/articles/8554407-gpts-faq#h_86549... ) a minute ago and got a "You do not currently have access to this feature" toast notification on an orange background at the top of the page, along with "chat.openai.com" in the URL bar.
(Chrome on Android, but besides a responsive layout, I haven't noticed discrepancies with the desktop Chromium site/interface; sadly the native Android app still shows no signs of code interpreter mode.)
The announced (and a few days ago leaked) Omni model also doesn't show up in the model selector. And that despite its expanded context looking very promising for REPL-feedback-augmented code generation.