I remember when that went into Python. It seemed too cute. There was a lot of that around the Python 3.4, 3.5 era.
The case where all items are the same type is reasonable, but if implicit type conversions are invoked, it gets really complicated. Whatever the types are, Python will probably do something. Just not necessarily what you want.
I could see having this if there was a compile-time constraint that all variables be the same type. That would disallow the bad cases.
In defense of Python here: reading a==b==c as (a ==_string b) ==_bool c would be an extremely confusing way to write (a == b) xnor c. A principle here could be "operators that look the same shouldn't do different things when they're written close together."
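For concreteness, here is a minimal sketch of the two readings, with made-up values for a, b, c (Python itself uses the chained reading):

a, b, c = "x", "x", True
chained = a == b == c        # means (a == b) and (b == c): "x" == True is False, so False
left_assoc = (a == b) == c   # boolean result compared to c: True == True, so True
print(chained, left_assoc)   # False True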
Ah. I'm thinking of [1] from 2016, when there was a plan to add short-circuit evaluation to chained comparisons. That would avoid evaluating all the terms when not necessary. But that was deferred.
> Whatever the types are, Python will probably do something. Just not necessarily what you want.
Are you sure you are not confusing it with PHP? Python is not statically typed, but it is strongly typed. There is no implicit type conversion unless it's safe to do so (such as between a float and an int).
For example, print(1 > "foo") will raise a TypeError, and print("1" == 1) will print False.
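A quick interactive check of those claims, assuming any recent Python 3:

print(1 == 1.0)       # True  -- int/float comparison is well-defined
print("1" == 1)       # False -- == never coerces str to int
try:
    print(1 > "foo")  # ordering across unrelated types
except TypeError as e:
    print("TypeError:", e)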
In Python, "==" calls "__eq__", a class function. The left argument to "==" determines which class is involved. The right argument is just passed into the selected class member function. There's an asymmetry there. Operators are not necessarily commutative.
This can lead to some strange operator semantics in mixed-mode expressions.
Whether chained expressions left-associate or right-associate matters.
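A small illustration of that dispatch order, using made-up classes rather than anything from a real library:

class Loose:
    def __eq__(self, other):
        return True            # claims equality with anything

class Strict:
    def __eq__(self, other):
        return NotImplemented  # always defers to the other operand

print(Loose() == 3)   # True  -- the left operand's __eq__ answers
print(3 == Loose())   # True  -- int.__eq__ returns NotImplemented, so Loose's reflected __eq__ answers
print(Strict() == 3)  # False -- both sides defer, so Python falls back to identity comparison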
numpy.array operations do a lot of implicit conversions. You can use plain Python lists in many operations that expect a numpy.array.
Mixing them works. Mostly.
python3
Python 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> a1 = numpy.array([1,2,3])
>>> a2 = [1,2,3]
>>> a1 == a2
array([ True, True, True])
>>> a2 == a1
array([ True, True, True])
>>> a1 == a2 == a1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
>>> a2 == a2 == a2
True
>>> a1 == a1 == a1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
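The failure is easier to see once the chain is expanded by hand. a1 == a2 == a1 behaves roughly like the sketch below (the middle operand is evaluated once; this is an illustration, not numpy internals):

import numpy
a1 = numpy.array([1, 2, 3])
a2 = [1, 2, 3]
pairwise = a1 == a2                        # array([ True,  True,  True])
try:
    result = bool(pairwise) and (a2 == a1)  # "and" needs the truth value of the array
except ValueError as e:
    print(e)                               # the same "truth value ... is ambiguous" message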