A key performance attraction of Scryer Prolog is its space efficiency for representing lists of characters, yielding a representation that is 24 times (!) more compact than a naive implementation would use.
With Scryer Prolog and other recent systems that implement this representation, such as Trealla Prolog, we can easily process many GBs of text with DCGs, arguably realizing the full potential of Prolog's originally intended use case for the first time. Trealla Prolog goes even further and allows overhead-free processing of files, using the system call mmap(2) to map files into virtual memory, delegating the mapping to the operating system instead of the Prolog system.
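For example, here is a small sketch of the kind of processing this enables, assuming library(pio) and library(dcgs) as shipped with Scryer Prolog; the predicate name and the local definition of the ... //0 non-terminal are mine:

:- use_module(library(dcgs)).
:- use_module(library(pio)).

% ... //0 describes an arbitrary (possibly empty) sequence of characters.
... --> [] | [_], ... .

% True iff the file File contains the string "ISO" somewhere.
file_mentions_iso(File) :-
    phrase_from_file((..., "ISO", ...), File).

Thanks to the compact representation of lists of characters, such a predicate scales to very large files, and in Trealla Prolog the file is in addition mapped to memory as described above.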
The linked benchmarks do not test these aspects at all, and in addition use a version of Scryer Prolog that was completely outdated already at the time the benchmarks were made: The benchmarks use Scryer Prolog v0.8.127, which was tagged in August 2020, more than 3 years (!) before the benchmarks were posted. The linked benchmarks thus ignore more than 3 years of development of a system that was at that time 7 years old. Newer versions of Scryer Prolog perform much better due to many improvements that have since been applied. More than 1700 commits were applied between these dates.
In the face of the 24-fold reduction of memory use that the above-mentioned efficient string representation enables, small factors of difference in speed between different systems are in my opinion barely worth mentioning at all in any direction.
And yes, in addition to this great space efficiency, the strong ISO conformance of Scryer Prolog is also a major attraction especially when using it in highly regulated areas. For example, here is a recently envisaged application of Scryer Prolog in the context of machine protection systems (MPS) of giant particle accelerators, where adherence to industry standards is of great importance for warranty reasons among others:

https://github.com/mthom/scryer-prolog/discussions/2441

As another example, a medical application of Scryer Prolog, in the highly regulated domain of oncology trial design:

https://github.com/mthom/scryer-prolog/discussions/2332

Here is an overview of syntactic ISO conformance of different Prolog systems:

https://www.complang.tuwien.ac.at/ulrich/iso-prolog/conformi...
>> The linked benchmarks do not test these aspects at all, and in addition use a version of Scryer Prolog that was completely outdated already at the time the benchmarks were made: The benchmarks use Scryer Prolog v0.8.127, which was tagged in August 2020, more than 3 years (!) before the benchmarks were posted. The linked benchmarks thus ignore more than 3 years of development of a system that was at that time 7 years old. Newer versions of Scryer Prolog perform much better due to many improvements that have since been applied. More than 1700 commits were applied between these dates.
In the SWI-Prolog discourse thread linked above this is pointed out to Jan Wielemaker who clarifies it was a mistake. He then repeats the benchmark comparing a newer version of Scryer to SWI and finds that Scryer has improved significantly:
Updated Scryer Prolog to 0.9.3. They made serious progress. Congrats! The queens_clpfd.pl and the sieve.pl benchmarks have been added. The ISO predicates number/1 and retractall/1 have been added. I had to make more changes to get the code loaded. Creating a module with the programs and some support predicates somehow did not work anymore (predicates became invisible). Loading a file programs.pl from a directory holding a subdirectory programs silently loaded nothing until I added the .pl suffix. The sieve bar is cut at 20, but the actual value is 359.
> adherence to industry standards is of great importance for warranty reasons among others
This is mostly a nice talking point rather than an actual thing, right? Scryer's license contains the usual all-caps NO WARRANTY and NO FITNESS FOR A PARTICULAR PURPOSE wording. Also, the links you provided describe these applications without references to warranties and standards and regulation. The users in these super-sensitive domains don't seem as sensitive about them as you claim.
> the links you provided describe these applications without references to warranties and standards and regulation.
This is not true. For example, quoting from page 2 of the paper that is linked to in a discussion I posted, An Executable Specification of Oncology Dose-Escalation Protocols with Prolog, available from https://arxiv.org/abs/2402.08334:
"Standards are of great importance in the medical sector and play a significant role in procurement decisions, resolution of legal disputes, warranty questions, and the preparation of teaching material. It is to be expected that the use of an ISO-standardized programming language will enable the broadest possible adoption of our approach in such a safety-critical application area. For these reasons, we are using Scryer Prolog for our application. Scryer Prolog is a modern Prolog system written in Rust that aims for strict conformance to the Prolog ISO standard and satisfies all syntactic conformity tests given in https://www.complang.tuwien.ac.at/ulrich/iso-prolog/conformi...."
Regarding warranty guarantees of Scryer Prolog, may I suggest you contact its author if you need to negotiate arrangements that are not catered for by the only licence terms you currently have access to?
One important advantage you get from the strict syntactic conformance of Scryer Prolog is that it reliably tells you what is Prolog syntax and what is not. In this way, you can use it as a free reference system to learn what Prolog is. The conformance makes it easier to switch to other conforming systems, such as SICStus Prolog which also offers different licences and commercial support, when you need to.
> The users in these super-sensitive domains don't seem as sensitive about them as you claim.
I am at a loss about this phrasing and also about the content of this text. Apart from the facts that I did not use the wording "super-sensitive", and that the importance of standards is explicitly stated in the paper I quoted above, is there even the slightest doubt about the great importance of standards when building and operating giant particle accelerators or devising dose-escalation trials in clinical oncology?
I acknowledge that you also included your nice talking point in a paper you published on arXiv. Citing yourself doesn't convince me any more of the credibility of this argument.
> is there even the slightest doubt about the importance of standards when building and operating giant particle accelerators
The particle accelerator application is a checker for existing JSON config files. The accelerator is already running with those files. The proposed project is in an early stage. The checker will add more assurance, which is nice. The checker's author does not talk about the importance of warranties or standards. The checker could just as well be implemented in some non-ISO dialect as long as that dialect has a reliable specification and implementation.
So yes, there is the slightest doubt.
Edit: BTW, your oncology paper heavily uses CLP(Z), which does not have an ISO standard, so your argument is... The base language must be standardized, but arbitrary nonstandard extensions are OK? Please clarify as I've probably misunderstood.
CLP(FD/Z) is a candidate for inclusion in the Prolog standard: Several Prolog systems provide it with notable commonalities in features, it fits perfectly into the existing language, and it follows the logic of the standard including its error system. It can even be implemented within Prolog, provided a few basic features are present in a Prolog system. For instance, the CLP(Z) system I provide, and which is used in the paper, already runs with few modifications in several different Prolog systems, including SICStus, Scryer and Trealla. CLP(FD/Z) is an admissible extension of the existing standard:
5.5 Extensions
A processor may support, as an implementation specific
feature, any construct that is implicitly or explicitly
undefined in this part of ISO/IEC 13211.
This is completely different from modifications of the standard that do not fit at all into the standard. For instance, interpreting double-quoted strings differently from what the standard prescribes is not an extension in the sense the standard defines it, but a modification of the standard.
In addition, Scryer Prolog has an execution mode where all its extensions are turned off. This is called a strictly conforming mode, and is also prescribed by the standard:
5 Compliance
5.1 Prolog processor
A conforming Prolog processor shall:
...
e) Offer a strictly conforming mode which shall reject
the use of an implementation specific feature in Prolog
text or while executing a goal.
In Scryer Prolog, the strictly conforming mode is the default execution mode.
Regarding the other points you mention: Even though it may sound easy to say "as long as that dialect has a reliable specification and implementation", I know no such system that exists, and what I see from systems that do not adhere to the Prolog standard makes me doubt that such a thing is possible. The systems that do not follow the standard often have elementary syntactic problems, such as reading a Prolog term that they themselves emit into a different Prolog term, a recipe for disaster and unacceptable in every domain I know.
> For instance, interpreting double-quoted strings differently from what the standard prescribes is not an extension in the sense the standard defines it, but a modification of the standard.
Agreed, but this is also minor, as you can and should set the double_quotes flag; otherwise your program doesn't have portable semantics even among ISO Prolog systems.
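For instance, a minimal sketch of making the intended meaning explicit at the top of a source file, using the standard double_quotes flag:

:- set_prolog_flag(double_quotes, chars).   % or codes, or atom

With such a directive, the meaning of double-quoted strings in the remainder of the file no longer depends on a system's default setting.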
> Even though it may sound easy to say "as long as that dialect has a reliable specification and implementation", I know no such system that exists, and what I see from systems that do not adhere to the Prolog standard makes me doubt that such a thing is possible.
Of course it is possible to program against the quirks of a given implementation. That's what you yourself are doing with your CLP libraries. As you note, your main target has different quirks from other targets.
More broadly, Scryer itself demonstrates that it's possible to program against a programming language that doesn't have an ISO standard but does have a good enough specification and an implementation that adheres to that specification.
> The systems that do not follow the standard often have elementary syntactic problems, such as reading a Prolog term that they themselves emit into a different Prolog term, a recipe for disaster and unacceptable in every domain I know.
You're painting with a very broad brush here. What implementations, and what kinds of terms? If your examples involve infix dot, that would be the kind of term nobody uses and nobody should use in modern Prolog, as you well know. Some of these syntactic problems only appear if you go looking for them. Minor syntactic annoyances will be caught in testing.
I agree that such things are bad, but they are knowable, controllable, and quite probably much less relevant in practice than you suggest.
Very very tangentially: The company I work for is very serious about its software supply chains. If we want to use external software for development, we must apply for permission. For that permission, actual programmers and lawyers trawl through the code and licenses and documentation. Scryer's license file lists one copyright holder, many source files have no copyright headers at all, and many other source files have copyright headers that name a different copyright holder. Our lawyers would not allow us to touch such a system. If you're serious about promoting Scryer as a serious Prolog for serious use, you might want to consider cleaning this up.
Codd's seminal paper, A Relational Model of Data for Large Shared Data Banks, states that a language based on applied predicate calculus "would provide a yard-stick of linguistic power for all other proposed data languages". Quoting from https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf:
"1.5 Some linguistic aspects
The adoption of a relational model of data, as described above,
permits the development of a universal data sub-language based on an
applied predicate calculus. A first-order predicate calculus suffices
if the collection of relations is in normal form. Such a language
would provide a yard-stick of linguistic power for all other proposed
data languages, and would itself be a strong candidate for embedding
(with appropriate syntactic modification) in a variety of host
languages (programming, command- or problem-oriented)."
Languages based on predicate calculus indeed seem extremely suitable for reasoning about relational data. Datalog is a well-known example. It is more directly based on predicate logic, and much simpler than SQL.
Regarding relational algebra in particular: It is interesting that important and frequently needed relations on graphs cannot be expressed in relational algebra. The transitive closure of a relation is a well-known example, and as you nicely show in your article this relation can be easily and very naturally expressed in two lines of Datalog. For example, we can easily express reachability in a graph:
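For instance, assuming an EDB relation edge/2 that holds the edges of the graph (the article may use different relation names), reachable(X, Y) is true iff Y can be reached from X:

reachable(X, Y) :- edge(X, Y).
reachable(X, Y) :- edge(X, Z), reachable(Z, Y).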
One can show that Datalog with two very conservative and simple extensions (allowing negation of extensional database relations, and assuming a total order on the domain elements) captures the complexity class P, so can be used to decide exactly those properties of databases (and hence graphs) that are evaluable in polynomial time, a major result from descriptive complexity theory.
An example of such a property is CONNECTIVITY ("Is the graph connected?"), which can be easily expressed with Datalog on ordered databases, where we assume 3 built-in predicates (such as first/1, succ/2 and last/1) to express an ordering of domain elements:
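For instance, here is a sketch that treats the graph as undirected and again assumes an EDB relation edge/2; the relation names reach, connected_upto and connected are mine. reach(X) holds iff X is reachable from the first domain element, and connected_upto(X) holds iff all elements up to X in the given order are reachable. Note that this particular property needs only the ordering, not negation:

reach(X) :- first(X).
reach(Y) :- reach(X), edge(X, Y).
reach(Y) :- reach(X), edge(Y, X).

connected_upto(X) :- first(X), reach(X).
connected_upto(Y) :- connected_upto(X), succ(X, Y), reach(Y).

connected :- last(X), connected_upto(X).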
If such an ordering is not available via built-in predicates, then we can easily define it ourselves for any given concrete database by adding suitable facts. Also negated EDB relations can be easily defined for any database as concrete additional relations.
Yes, you're right that one cannot express Datalog semantics (and also transitive closure semantics) with just one "query", as queries cannot be recursive.
If you view each rule as a query, however, looping over rules does capture Datalog semantics. Furthermore, by optimizing over rules using the relational algebra, one can derive algorithms "equivalent" to traditional graph algorithms.
(I don't think you would disagree with me; just want to clarify for other people who might be reading.)
Very interesting! Let's consider for example the definition of sum_list/2 which is shown in Fig. 1, Fig. 2 and Fig. 3:
%% sum_list(+Number_List, ?Result)
% Unifies Result with the sum of the numbers in Number_List;
% calls error/1 if Number_List is not a list of numbers.
sum_list(Number_List, Result) :-
    sum_list(Number_List, 0, Result).
% sum_list(+Number_List, +Accumulator, ?Result)
sum_list([], A, A).            % At end: unify with accumulator.
sum_list([H|T], A, R) :-       % Accumulate first and recur.
    number(H),
    !,
    B is A + H,
    sum_list(Rest, B, R).
sum_list(_, _A, _R) :-         % Catch ill-formed arguments.
    error('first arg to sum_list/2 not a list of numbers').
Compiling it with Scryer Prolog, I get a warning:
$ scryer_prolog sum_list.pl
Warning: singleton variables T, Rest at line 4 of sum_list.pl
true.
A singleton variable often indicates a mistake in the code. And indeed, the sample code uses Rest where it apparently meant to use T (or vice versa). So, I change the second clause of sum_list/3 to:
sum_list([H|T], A, R) :-
    number(H),
    !,
    B is A + H,
    sum_list(T, B, R).
And now we're ready to use the predicate! Let's ask Prolog the most general query: Which answers are there in general? So, a query where all arguments are logic variables:
?- sum_list(Ls, R).
Ls = [], R = 0
; error(existence_error(procedure,error/1),error/1).
The existence error is due to the use of the non-standard predicate error/1 in the code sample. The predicate apparently meant to throw an exception telling us:
first arg to sum_list/2 not a list of numbers
But the query did not restrict the first argument at all, so it may just as well be a list of numbers! The predicate probably meant to say that the argument is not sufficiently instantiated. In that case, it should have thrown an instantiation error. The standard predicate (is)/2 throws such an instantiation error for us in these cases. Also, it throws type errors for us! A type error is categorically different from an instantiation error: From a logical perspective, a type error can be replaced by silent failure, but an instantiation error cannot.
We can therefore write the second clause as:
sum_list([H|T], A, R) :-
    B is A + H,
    sum_list(T, B, R).
and also remove the third clause entirely. We now get:
?- sum_list(Ls, R).
Ls = [], R = 0
; error(instantiation_error,(is)/2).
From a logical perspective, that's OK: The predicate tells us that too little is known to make any statement, and a more specific query may yield solutions.
Also, this version correctly distinguishes between type and instantiation errors, and we now get for example:
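With an argument that is not a number, a type error is reported (shown here in the same style as the answers above; the exact formatting may differ between versions):

?- sum_list([a,2,3], S).
error(type_error(evaluable,a/0),(is)/2).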
As I see it, a key attraction of logic programming is that we are able to reason logically about our code. This holds as long as certain logical properties are preserved. The paper hints at such properties for example with the concept of steadfastness, which it defines in Section 5.1: A predicate "must work correctly if its output variable already happens to be instantiated to the output value". How can we tell though which variables are output variables, and also why even distinguish a particular variable as "output variable"? Should this not hold for all variables?
A particularly important logical property is called monotonicity: Generalizing a query (or program) can at most add solutions, never remove them. With monotonic predicates, debugging is very nice: For instance, if a predicate unexpectedly fails, then we can generalize it by removing goals, and if the remaining fragment still fails unexpectedly, then there must be a mistake in that fragment. Scryer Prolog provides library(debug) for this approach of declarative debugging:
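For example, with library(debug) we can generalize away individual goals by prefixing them with (*)/1. A minimal sketch of the idea, writing *(Goal) with explicit parentheses:

?- X = a, X = b, integer(X).
false.

?- use_module(library(debug)).
true.

?- X = a, X = b, *(integer(X)).
false.

The generalized query still fails, so the mistake must already be in the remaining fragment, in this case the two incompatible unifications.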
Higher-order predicates such as maplist/N and foldl/N retain logical properties of the predicates that occur as arguments.
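For example, a version of the predicate based on CLP(Z) and foldl/4 could look as follows. This is a sketch with names of my own choosing (list_sum/2 and the helper sum_/3), assuming library(clpz) and foldl/4 from library(lists):

:- use_module(library(clpz)).
:- use_module(library(lists)).

list_sum(Ls, Sum) :-
    foldl(sum_, Ls, 0, Sum).

sum_(L, S0, S) :- S #= S0 + L.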
The most general query now works as expected:
?- list_sum(Is, Sum).
Is = [], Sum = 0
; Is = [Sum], clpz:(Sum in inf..sup)
; Is = [_A,_B], clpz:(_A+_B#=Sum)
; Is = [_A,_B,_D], clpz:(_A+_B#=_C), clpz:(_C+_D#=Sum)
; ... .
The predicate does not terminate, as expected, because we expect solutions for lists of all lengths:
?- list_sum(Is, Sum), false.
loops.
And other cases are simply specific instances of the most general query:
?- list_sum([1,2,3], Sum).
Sum = 6.
Note that I have changed the predicate name from sum_list/2 to list_sum/2, because the list is the first argument, and the sum is the second argument. So, I am now using "sum" no longer as a verb, but as a noun, because that seems more appropriate for code that is declarative, not imperative: We describe what is true, not what must be done, and our code works in all directions and also with different execution strategies. integers_sum/2 may be an even better name in this case.
One other naming convention I like to use is to append an "s" for logic variables that stand for lists, such as "Is" for a list of integers.
Great post, showing how these coding guidelines are showing their age. So many boilerplate comments, which clearly are not pulling their weight. A simple coding guideline like "eliminate all singleton variable warnings" would serve better, and it can be enforced automatically to boot.
One thing we have learned is that if a coding guideline is mandatory, it just has to be automatically checkable and enforceable.
Not in this case. Here the author claims that if there are infinitely many solutions, the predicate should give infinitely many solutions. It's similar to how you want functions, in languages that use functions, to fail predictably. E.g., you want to receive an ENOENT error if you try to open a file that doesn't exist. Or you want to block forever if you join a thread that runs an infinite loop.
Do you want any of those behaviors in your program? Rarely, and sometimes not at all. But you want your program to fail in a way that indicates you coded it in a particular manner that should result in such an error.
The article suggests using Boolean logic, so let's apply it: One way to solve this is to introduce a Boolean variable for each of A,...,G, and to use 1 to denote that a statement is true, and also to denote that the corresponding person tells the truth. It then remains to relate the truth of each statement to the truthfulness of the person making the statement.
In Prolog, we can express these relations with CLP(B), constraint logic programming over Boolean variables:
Yielding 4 solutions that satisfy all constraints:
G = 1, E = 1, C = 0, D = 1, A = 0, B = 0, F = 1
; G = 1, E = 1, C = 0, D = 1, A = 0, B = 1, F = 0
; G = 1, E = 1, C = 1, D = 0, A = 0, B = 0, F = 1
; G = 1, E = 1, C = 1, D = 0, A = 0, B = 1, F = 0.
From this, it is clear that there are 3 engineers, in all possible situations consistent with the description.
If we omit the labeling/1 goal which enumerates all solutions, then we get a symbolic representation of all remaining constraints:
G = 1, E = 1, A = 0, clpb:sat(C=:=D#B#F), clpb:sat(C=\=D).
From this, it is clear that there are at least 2 engineers in every solution: A (as stated in the description of the puzzle), and either C or D (but not both).
The thing I find especially interesting about these sorts of puzzles is the translation from the word problem to the logical formalism. It seems like a separate domain from solving the problem itself.
With Prolog, we can often very naturally map such puzzles to programs, or at least to declarative descriptions that can be easily interpreted by Prolog programs.
For instance, in this concrete case, with a suitable operator definition for the operator says, we can write:
:- op(800, xfy, says).
solution([A,B,C,D,E,F,G]) :-
    G = salesman,
    E = salesman,
    C says D = engineer,
    A = engineer,
    A says B says C says D says E says F says G = engineer.
It is then left to interpret the statements; with a suitable interpretation of says, we get for example:
?- solution(S).
S = [engineer,engineer,engineer,salesman,salesman,salesman,salesman]
; S = [engineer,salesman,engineer,salesman,salesman,engineer,salesman]
; S = [engineer,engineer,salesman,engineer,salesman,salesman,salesman]
; S = [engineer,salesman,salesman,engineer,salesman,engineer,salesman]
; false.
This is also how Prolog terms are represented on the heap in the Warren Abstract Machine (WAM). For instance, taking the example of the article, if we have an expression such as the Prolog term +(*(a,b), c), written using operator notation as:
expr(E) :-
    E = a*b + c.
Then we get a flattened representation on the global stack of the virtual machine. In Scryer Prolog, we can inspect the WAM instructions with:
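For instance, assuming library(diag) and its wam_instructions/2 predicate are available (the exact listing differs between versions):

?- use_module(library(diag)).
true.

?- wam_instructions(expr/1, Is).

The resulting instruction list shows how the term is built on the heap.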
Note how both compound terms are linearized, and appear on the heap as: functor, followed by arguments, each occupying exactly one memory cell of the WAM. The arguments can point to other memory cells. The heap is an array of such cells, all of the same concrete (as opposed to abstract, i.e., WAM-level) type. For example, Scryer Prolog uses 8 bytes for each cell, making cell access and modification very efficient on 64-bit architectures.
It works well so far. One of the few limitations I have noticed pertains to the formatting of tables. For instance, consider the table used in library(format) to describe control sequences:
It contains several entries that span multiple lines, yet are meant to denote only a single row of the table, such as:
% | `~Nr` | where N is an integer between 2 and 36: format the |
% | | next argument, which must be an integer, in radix N. |
% | | The characters "a" to "z" are used for radices 10 to 36. |
% | | If N is omitted, it defaults to 8 (octal). |
It appears that Djot requires writing the entire entry on one long line; otherwise it gets formatted as shown currently at:
From a readability perspective in the source file itself, a very long line is suboptimal. Maybe there is a way to retain optimal readability in the source file, and still get the intended rendering?
This is a rather mundane and unsurprising line, tbh. Finite automata are not Turing complete. Real computers are not Turing complete either, since they do not have infinite memory or infinite precision. I'm not sure it will lead to arbitrary-precision machines, since the lack of real-world Turing completeness hasn't stopped us so far and likely won't.
Is there a formal, mathematical way of saying "real-world computers may not have infinite memory, but have more than enough, so they can be treated as Turing-complete for a subset of programs that are well-behaving - i.e. don't end up hitting the memory limit"?
And in general, there surely is a way of formally saying "this is theoretically X, but effectively Y, for the [hand-waves] kind of inputs"?
Not really: the formal mathematics in complexity theory is concerned with asymptotics; constants like “for N < 10 trillion, a desktop computer is good enough” aren’t very interesting from a mathematical perspective.
That said, some simple intuition is the following: PSPACE is a subset of EXP is a subset of EXPSPACE
(We think these are all strict separations but technically that’s not fully proven)
If you use the shorthand intuition that we can handle polynomial scaling but can’t handle exponential scaling, this means that we hit a time barrier (EXP-complete problems) before we hit a space barrier (EXPSPACE-complete problems)
Another bit of intuition: you can store a very big number in very few bits, because binary holds an exponentially large number in linearly many bits. But you can’t loop over that number in a human’s lifespan: merely counting to 2^64 at one step per nanosecond would take roughly 585 years.
Edit:
> they can be treated as Turing-complete for a subset of programs that are well-behaving - i.e. don't end up hitting the memory limit
Just to be clear, it’s a matter of input size and not the programs themselves. Technically you could say we haven’t “solved” sorting on real hardware because nobody can sort the first 2^1000 digits of pi. But realistically we don’t care to do so.
Space complexity helps characterize this: real-world computers can (arguably) emulate a linear-bounded automaton, so anything in DSPACE(O(n)) is fair game if you can wait long enough.
For the arguably part: I am assuming that the machine can access all of the input at once, so it is reasonable to expect available memory to be a multiple of the input, so you get O(n) memory.
Computers can do side effects so their state is effectively the universe. Which is still not infinite, but for all practical purposes the distinction doesn’t matter.
I'm not a PL person or someone who focuses on computability, but I think you'd refer to it as "bounded," "pseudo," or "quasi." To give an example, you might call a transform on an image quasi-equivariant if it introduces some aliasing (obviously not destroying the image). See quasimorphism[0] as an example.
Usually people just let it go unless someone brings up something that specifically implies there is not, or might not be, a shared mutual understanding (as done here). Maybe it is shared, maybe not, and maybe someone reading it doesn't understand the difference. I mean, we're humans. We compress a lot of information into language that is not directly expressed in our words. (This is also why it is often hard to talk to people on the internet, since there's a wide audience with vastly different priors. Relevant XKCD[1].)
An important distinction is that computers can do arbitrary many iterations of algorithms, while most neural networks have to operate on some fixed size, so the practical limits are very different.
It won't, it just means this is another pointless theoretical study seeking to interpret Transformers in a framework that has no explanatory value for AI.
As I see it, the result is rather establishing a very fundamental property pertaining to the expressive power of a mechanism, and it can be useful also in practice.
For instance, I have many potential applications of Turing complete formalisms, because I am interested in results of arbitrary computations. The result obtained in the article means that I can use a Neural Network to obtain this, under the conditions outlined in the article, and in the way shown in the article.
This may simplify software architectures, especially in situations where Neural Networks are already applied, and additional mechanisms would otherwise be needed to cover arbitrary computations.
Something being Turing complete just means that, in principle, it could be used to solve any computational problem. But this may require infinite memory or infinite time.
The paper showed that Transformer with positional encodings and rational activation functions is Turing complete.
Rational activation functions with arbitrary precision make sure that you are in the smaller countable infinities, where floats run into that cardinality of the continuum problem.
While all nets that use attention are feed forward and thus effectively DAGs, they add in positional encodings to move it from well-founded to well ordered.
While those constraints allow the authors to make their claims in this paper, they also have serious implications for real-world use: rational activation functions are not arbitrarily precise in physically realizable machines in finite time, and you will need to find a well-ordering of your data, or a way to force one onto it, which is not a trivial task.
So while interesting, just as it was interesting when someone demonstrated sendmail configurations were Turing complete, it probably isn't as practical as you seem to think of it.
As attention is really runtime re-weighting and as feed forward networks are similar to DAGs it is not surprising to me that someone found a way to prove this, but just as I am not going to use the C preprocessor as a universal computation tool as it is also TC, I wouldn't hold your breath waiting for attention to be a universal computation tool either.
I'm not going to engage with this directly, but for any other readers passing through - this is nonsense. One more drop in the flood of uninformed AI noise that's been drowning out the signal.
Choose the sources you trust very carefully, and look to the people actually working on real-world AI systems, not the storytellers and hangers-on.
Their proof depends on arbitrary precision and they explicitly state that the finite case is not TC.
But if you are talking about arbitrary-precision floats, or the computable subset of the reals, it is equivalent.
The computable reals are just the concatenation of the natural numbers/ints
So it is a countable infinity and thus has the cardinality aleph-nought.
That adds to the time complexity, while the unbounded memory requirement comes from the definition of a Turing machine, which is roughly a finite state machine plus an infinite tape.
As the reals are uncomputable almost everywhere, you would need an activation function that only produces computable reals; as the two are equivalent, rational activation functions are simpler for the proof.
Solaris in particular seems more relevant than ever with the rise of ChatGPT and other generative AI services that do not understand what their outputs mean to us, and often produce eerie simulacra of life.
The final scenes of Solaris show this situation brilliantly in that their content matches the way it is shown: The scenes themselves mirror the depicted content with perplexing compositions, zooms and transitions, almost as if they were themselves created by an entity that does not understand the content or medium:
I really didn't enjoy the ending of Solaris and, for that matter, The Little Prince. You don't need to have a punchy ending for me not to feel like I wasted my time with your movie. It's okay to let the journey stand on its own rather than throwing in a climax that feels haphazard and spontaneous.
The original Solaris novel had a different ending, a disillusioned reflection by Kelvin, which I found much better. It was probably not dramatic enough for a movie, too analytical.
Personally, I always found that the novel ends on a comparatively hopeful note, especially given the circumstances, the final word being cudów ("of miracles").
Aside from this, perhaps another reason why the first part of the movie looks and feels more promising than the second one is more trivial. As far as I know they experienced a budget shortage at some point of production.
It's one of his most accessible movies, and usually resonates with tech people for obvious reasons. His best movies imo are Mirror, Ivan's Childhood, Andrei Rublev, Stalker and then Solaris, Nostalgia and Sacrifice.
Now, the Solaris book by Lem is far superior to Tarkovsky's rendition.
Edit re: slowness: it's a lot less slow than his final few works, which I also adore. They're all great; I just think the rest of his catalog gets less good word of mouth simply because of drastically less popular exposure.
It's definitely a lot slower paced than our current ADD society is used to. Even the movies of an esteemed director like Kubrick would seem unbearably slow to many.
I think Kubrick's movies are fantastic, deep, atmospheric, thought provoking. True art.
I have never made it all the way through one without falling asleep. 2001, Blade Runner, Dr. Strangelove.... Even movies loosely associated with Kubrick like A.I. cause me to nod off. I have to stop when I'm nodding off and come back later fresh in order to finish.
Maybe it would be different if I had seen them in theaters.
> "Maybe it would be different if I had seen them in theaters."
There are still chances to see 2001 in cinemas! It gets re-released somewhat regularly for anniversaries and Kubrick retrospectives, etc. I was born long after 2001 came out, but I've seen it on cinema screens many times. It was so far ahead of its time and looks so incredibly good in 70mm that it's mind-boggling to consider that it was actually made in the 1960s!
I've never fallen asleep while watching 2001. It's definitely slow, but riveting for me, because (like Tarkovsky's film) there's a lot of meaning in the scenes, and they give me a lot to think about.
Gladiator was shot by Ridley Scott, just like Blade Runner. Kubrick (who I love) is so legendary, he really should just get credit for every great movie in existence ))
Wait, you hated Solaris? It was the first Tarkovsky movie that I watched and I really connected with its themes of the "real" vs "memories". It's hard for me to imagine anyone hating Solaris...
I still have not seen Mirror, though a few of my film friends have told me to check it out for years.
I started watching it tonight, because of this thread. It was my second attempt. I quit about 40 minutes in and started to browse IMDB user reviews in the hope of understanding it. The 1-star reviews resonated a lot with me. Hate is a strong word, but it's definitely not my cup of tea.
I found the first part with all the meetings and philosophical debates on earth to be boring and hard to follow too, but if you stop there you’re missing out on the payoff of all that exposition. It really picks up once he gets to the space station.
But I found the opening shot of the plants in the stream to be revelatory, a tiny bit of film making that changed the way I viewed life.
I did not feel engaged by the plot. The dialogue felt abstract and kind of went over my head. Someone mentioned that the film is "very Russian" and perhaps that is part of it. I have extremely limited exposure to Russian culture, but it feels quite introspective to me. I noticed the dialogue sounding a little like Putin, just sound-wise. Lots of "mmm" sounds, which in my language people make when they are considering their next words. Somehow I see a connection there (that probably doesn't exist, if I am being rational).
You might like the original book Solaris better. It has a unique atmosphere, the isolation and mystery of the alien planet, and it explores a different idea of alien life than we usually get in movies. This is hard to communicate in a movie; Tarkovsky did well, but it does not hold up as well as the book.