discarded1023's comments | Hacker News

Where do Hughes's Arrows fit in?

In this particular case, Hughes's arrows are a practical implementation of a profunctor categorical structure. They are roughly a generalization of ordinary arrows (function types, or more accurately relations).

In the article, the author points out that the selective applicative doesn't seem to work correctly (in a categorical sense) for functions, but that when you generalize to profunctors a near-semiring structure appears and does work for the SApplicative.
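
Concretely, in Haskell vocabulary (a from-memory sketch of the standard classes, not the article's code): every arrow is a profunctor, because you can pre- and post-compose it with pure functions.

  import Prelude hiding (id, (.))
  import Control.Category (Category (..))

  -- Hughes's Arrow interface (essentially Control.Arrow, minimised).
  class Category a => Arrow a where
    arr   :: (b -> c) -> a b c
    first :: a b c -> a (b, d) (c, d)

  -- The profunctor interface: contravariant in the input, covariant in the output.
  class Profunctor p where
    dimap :: (a' -> a) -> (b -> b') -> p a b -> p a' b'

  -- Every Arrow yields a Profunctor by pre-/post-composing with pure functions.
  dimapArrow :: Arrow p => (a' -> a) -> (b -> b') -> p a b -> p a' b'
  dimapArrow f g h = arr g . h . arr f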

I am pretty sure I’m reading TFA correctly here, but I’ll check when off mobile and edit if I still can.


You'd like to know your fault tolerance is reliable and possibly even correct.


Not if proving so is more expensive than not doing it. Reliability is only a means, not the end. Also, the human parts of the business would need to be simplified in order to model them, and if they deviate from the model, that could invalidate it.


Agree on the economics. I’m not arguing for full formal proofs; I’m arguing for low-cost enforcement of invariants (ADTs/state machines/exhaustiveness) that makes refactors safer and prevents silent invalid states. Human processes will always drift, so you enforce what you can at the system boundary and rely on reconciliation/observability for the rest.
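
To make that concrete, a minimal Haskell sketch (my toy example, not any particular production schema): the lifecycle is an ADT, so invalid states like "shipped but never paid" are unrepresentable, and exhaustiveness checking flags any transition a refactor forgets.

  {-# OPTIONS_GHC -Wincomplete-patterns #-}

  -- The state machine as an ADT: only these four states can ever exist.
  data Order = Draft | Placed | Paid | Shipped
    deriving (Show, Eq)

  -- A total transition function; if a refactor adds a state and forgets to
  -- handle it here, -Wincomplete-patterns makes the compiler complain.
  ship :: Order -> Either String Order
  ship Paid    = Right Shipped
  ship Draft   = Left "cannot ship a draft order"
  ship Placed  = Left "cannot ship before payment"
  ship Shipped = Left "already shipped"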


In the banking subdomain of credit/debit/fleet/stored-value card processing, once you account for regulation and format evolution over time, the services provided by banks/ISOs/VARs effectively exhibit FP traits regardless of the language(s) used to implement them.

Savvy processors recognize the immutability of each API version published to Merchants, along with how long each must be supported, and employ FP techniques both in design and implementation of their Merchant gateways.

Of course, each bank's mainframes "on the rails" do not change unless absolutely necessary (and many times not even then).


You can also argue that debugging time is expensive and that static checks reduce it. This is all the more true when the bugs are concurrency errors.
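
For the concurrency case, a small illustration of the kind of static check I mean (a toy sketch using GHC's stm library, nothing project-specific): the STM type forbids arbitrary IO inside a transaction, so a whole class of race/lock bugs fails to compile instead of needing to be debugged.

  import Control.Concurrent.STM

  -- Because this runs in STM, it cannot perform arbitrary IO mid-transaction,
  -- and touching a TVar outside a transaction is a type error.
  transfer :: TVar Int -> TVar Int -> Int -> STM ()
  transfer from to amount = do
    balance <- readTVar from
    check (balance >= amount)          -- blocks/retries atomically, no manual locks
    modifyTVar' from (subtract amount)
    modifyTVar' to   (+ amount)

  main :: IO ()
  main = do
    a <- newTVarIO 100
    b <- newTVarIO 0
    atomically (transfer a b 40)
    readTVarIO b >>= print             -- 40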


A fantastic long read on this issue from a Glaswegian perspective (2022): https://www.lrb.co.uk/the-paper/v44/n18/ian-jack/chasing-ste...


Luca Cardelli worked on this stuff a while back [1]. Perhaps "systems biology" [2] might provide an entry to the literature.

[1] https://en.wikipedia.org/wiki/Luca_Cardelli

[2] https://en.wikipedia.org/wiki/Systems_biology


Hi Simurgh! This seems like a very ambitious project. I wonder if you've had a look at what others have done in this space. I initially liked the look of Ehud Shapiro's stuff [1] but I'm not sure he has the right take on the French political philosophy he draws on (or perhaps things have moved on now from the sources he cited). But perhaps his protocols are neutral enough to serve your purposes too?

[1] https://www.weizmann.ac.il/math/shapiro/home


Apologies for the very slow reply; I had mistakenly assumed this thread was inactive.

Hi discarded1023! Thank you for the kind words and for the excellent pointer to Professor Ehud Shapiro's work.

I've had a chance to look through his website. His work on designing protocols for large-scale collective decision-making is clearly very relevant and fascinating.

To your question, I think his protocols are very much aligned in spirit. The main difference in purpose seems to be that his models often focus on a group making a specific decision or recommendation. My project's immediate goal is slightly broader: to create a continuous, large-scale "map" of public opinion across many topics simultaneously. The underlying principles of fairness and structured debate, however, are very much the same.

Your note about his use of French political philosophy is also very insightful, and I'll keep that in mind as I explore his work further.

Thank you again for a very thoughtful and helpful pointer!


The author is right to note that Haskell can optimise across module (abstraction) boundaries. However I remember that, in my childhood, Debray [1] did a lot of work on link-time optimisation (late 1980s). And of course there's the classic Self work that fed into the JVM [2], and the whole-program compilers that are receiving renewed attention now; MLton [3] is a classic of the genre, with "supercompilation" being the active area of research AIUI. So at least sometimes abstraction boundaries are transparent to optimisers.

On the other hand the classic data abstraction story (signatures/interfaces for structures/modules) naturally allows for selecting or optimising implementations depending on uses. There was some great work done in the early 2000s on that (see [4]) and I'm sure the state-of-the-art has moved on since [5].
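
To make the Haskell claim concrete, a toy two-module sketch (module names and code are mine): with an INLINABLE pragma, GHC keeps the unfolding in the interface file and can inline or specialise the function at call sites in other modules when compiling with -O.

  -- file Numeric/Sum.hs: the "abstraction", exporting an overloaded function.
  module Numeric.Sum (sumSquares) where

  sumSquares :: Num a => [a] -> a
  sumSquares = foldr (\x acc -> x * x + acc) 0
  {-# INLINABLE sumSquares #-}   -- expose the unfolding across the boundary

  -- file Main.hs: a client in another module; with -O, GHC can specialise
  -- sumSquares at Int here, erasing the Num dictionary.
  module Main (main) where

  import Numeric.Sum (sumSquares)

  main :: IO ()
  main = print (sumSquares [1 .. 1000 :: Int])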

[1] https://dblp.org/pid/d/SKDebray.html

[2] https://en.wikipedia.org/wiki/Self_(programming_language)

[3] http://mlton.org/

[4] https://dblp.org/pid/59/4501.html

[5] https://en.wikipedia.org/wiki/Interprocedural_optimization


hell yea


Thanks for the link. Is there anything new in these notes? They are cleanly presented but look like the greatest hits up to about 1982. Is there anything in there about reasoning about domains? e.g. what Andy Pitts made hay out of in the 1990s.


Domain theory has slowed to a trickle, with almost no new results since the late 1990s. Most domain theorists have retired, or moved on to other things. As an aside, Andy Pitts was made a Fellow of the Royal Society a few days ago!


Fantastic news and well deserved; even when Andy Pitts goes categorical his papers are very readable.

I got told a while ago that Streicher's "sequential" domains had solved the full abstraction problem for PCF [1] ... was it that or something else that killed off the work on game semantics?

It seems that Jon Sterling, author of the tool used to express the thoughts at the link, has made recent progress in domain theory [2] but perhaps the "synthetic" qualifier means it's not the real thing?

[1] Streicher's notes/book on domain theory sketches the construction but does not take it anywhere; I wonder what the reasoning principles are.

[2] see e.g. https://www.jonmsterling.com/jms-0064/index.xml


Andy Pitts' writing is extremely clear, whatever he writes about. This clarity is not easy to achieve and shows mastery!

The full abstraction problem for PCF was solved in the mid 1990s by Abramsky/Jagadeesan/Malacaria [1], Hyland/Ong [2] and Nickau [3]. All three solutions appeared simultaneously. This was a paradigm shift, because all three used interactive rather than functional models of computation. (There was also later work on domain-theoretic full abstraction, e.g. O'Hearn and Riecke [4], but I can't recall the details. Maybe Streicher's work was in this direction?) The beauty of interactive models like games is that they can naturally encode more complex behaviour, including parallelism.

[1] S. Abramsky, R. Jagadeesan, P. Malacaria, Full Abstraction for PCF.

[2] J.M. E. Hyland, C.-H. L. Ong, On Full Abstraction for PCF: I, II, and III.

[3] H. Nickau, Hereditarily sequential functionals.

[4] P. O'Hearn, J. G. Riecke, Kripke logical relations and PCF.


Yes, AIUI Streicher's work was in the vein of your [4]. (I got a vague pointer to him a while back; I don't know who's responsible for the meat of the development.)

Game semantics is expressive but AFAIK it has not (yet) provided new tools for reasoning about programs. I wonder why those tools have (apparently) not been developed, or do they just add (not very useful?) information to the old LCF story ala Scott? Has its moment passed?

By parallelism I think you mean concurrency. (Scott's domains have a bit too much parallelism, as shown by Plotkin in his classic paper on LCF; those parallel elements are at the root of the failure of Scott's models to be fully abstract.) And Scott's big idea -- that computation aligns with his notion of continuity -- conflicts with fairness, which is essential for showing liveness. For this reason I never saw the point in powerdomains, excepting the Hoare (safety) powerdomain.
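
(The classic witness, for anyone following along, is Plotkin's parallel-or: it is monotone and continuous, so it lives in Scott's model, but no PCF term defines it, because a term must commit to inspecting one argument first. The naive Haskell transcription shows the gap.)

  -- Type-checks fine, but operationally the first clause forces its first
  -- argument, so `por undefined True` diverges even though the model says it
  -- should be True. That forced sequential commitment is exactly what Scott's
  -- model fails to capture, hence the failure of full abstraction.
  por :: Bool -> Bool -> Bool
  por True  _     = True
  por _     True  = True
  por False False = False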

As these notes show, models, even adequate models, are a dime a dozen. It's formulating adequate reasoning principles that is tough. And that's what Andy Pitts brought to classic domain theory in the 1990s.


I can't speak to the point of doing this, but (IIRC/IIUC) you're talking about paths while they're talking about the entire computation tree, i.e., a term in their calculus represents all solutions, and computing normal forms makes them easy to read off (?). Perhaps there's some meat in how they handle the equivalent of `bagof/3`/`setof/3`.
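
(Very roughly, and this is my framing rather than theirs: the "whole tree" view is closer to Haskell's list monad, where a query already denotes all of its solutions and a bagof/setof-style collection is just reading off the list.)

  import Control.Monad (guard)

  -- All Pythagorean triples up to n, as one value rather than a sequence of
  -- backtracking steps; collecting the solutions is just using the list.
  pythag :: Int -> [(Int, Int, Int)]
  pythag n = do
    a <- [1 .. n]
    b <- [a .. n]
    c <- [b .. n]
    guard (a * a + b * b == c * c)
    pure (a, b, c)

  -- pythag 20 == [(3,4,5),(5,12,13),(6,8,10),(8,15,17),(9,12,15),(12,16,20)]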


It's a nice write up but I'm not sure what the contribution is. I would have expected more engagement with Peter Van Roy's work, in particular his and Seif Haridi's epic book CTM [1] where the logic variable got properly unpacked more than 15 years previously. Or at least a citation.

Has anyone looked into how to decouple logic variables from backtracking? i.e., is there a good reason to unbind a variable apart from the Prolog discipline? (Without unbinding we get single-assignment variables where initialisation is decoupled from declaration, which I feel can often be simulated with laziness ala Haskell, but see CTM.)
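
(To pin down what I mean by a single-assignment variable decoupled from backtracking: a write-once cell that starts unbound, is bound at most once, is never unbound, and whose readers block until a binding arrives -- roughly Oz's dataflow variable. A minimal Haskell sketch, names mine:)

  import Control.Concurrent (forkIO)
  import Control.Concurrent.MVar

  -- A write-once "dataflow" variable: declaration and initialisation are
  -- decoupled, there is no unbinding, and reads block until a value exists.
  newtype IVar a = IVar (MVar a)

  newIVar :: IO (IVar a)
  newIVar = IVar <$> newEmptyMVar

  bind :: IVar a -> a -> IO ()
  bind (IVar m) x = do
    ok <- tryPutMVar m x
    if ok then pure () else error "IVar already bound"

  readIVar :: IVar a -> IO a
  readIVar (IVar m) = readMVar m       -- blocks until bound; does not consume

  main :: IO ()
  main = do
    v <- newIVar
    _ <- forkIO (bind v (42 :: Int))   -- bound later, elsewhere
    readIVar v >>= print               -- prints 42 once the binding arrives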

[1] https://webperso.info.ucl.ac.be/~pvr/book.html


They cite one of the works of Gert Smolka (Smolka and Panangaden, 1985), who was the original creator of Oz. But I agree that Van Roy and Haridi did a lot of research on functional logic programming back in the 1990s that seems to be ignored here.

When I worked on program semantics, I had the impression that PLT communities in the US and EU tend to ignore each other. This also translates to education. CTM is an epic book, and very readable, but I don't think it's well known or used much on the other side of the pond.


Yes, I was unaware of Smolka's early work and it does look very nice. (Much of Smolka's work seems underappreciated.) The paper under discussion's evaluation of this work seems weak, given that Smolka's early effort has a very similar set of features to what they want.

I concur about the research silo-ing. However in this case we have Europeans ignoring Europeans, and on that CTM page you will find a grab from a review in the Journal of Functional Programming. Several authors of this paper have published in the JFP, leading me to conclude that the JFP is a write-only journal, like much of CS literature.


Could it be that they're not aware of that which they are not referencing?

I notice that they reference Icon. They don't reference jq (well, there are no publications on jq to reference).


@JoeDaDude -- on macOS using recent Chrome/Firefox, the puzzles linked from the top-level page die with fatal JavaScript errors. It'd be great to see them in action.

http://logicmazes.com/alice.html

  alice.html:353 Uncaught TypeError: Cannot read properties of undefined (reading 'play')
      at playSound (alice.html:353:30)
      at finalize2 (alice.html:347:1)
      at <anonymous>:1:1

