The main reason this happens in Python is that creating actual datatypes is incredibly clunky (by Python standards) because of the tedious "def __init__(self, x): self.x = x". The solution here is to have a very lightweight syntax for more specific types, e.g. Scala's "case class".
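To make the complaint concrete, here is the boilerplate a small value class needs in plain Python before it behaves like a real datatype (class and field names are illustrative):

```python
class Point:
    """A 2D point, written out the long way."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return f"Point(x={self.x!r}, y={self.y!r})"

    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        return (self.x, self.y) == (other.x, other.y)
```

Scala's one-liner `case class Point(x: Int, y: Int)` gives you all of this (plus hashing and pattern matching) for free.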
I'd also argue for using thrift, protobuf or even WS-* to put a little more strong typing into what goes over the network. Such schemata won't catch everything (they have to have a lowest-common-denominator notion of type) but distributed bugs are the hardest bugs to track down; anything that helps you spot a bad network request earlier is well worth having.
An article about the "attrs" library was posted here a couple weeks ago. Really highlighted the tedium of Python objects while offering a neat solution.
Regarding protobuf, I'm a bit disappointed with the direction of version 3. Fields can no longer be marked as required - everything is optional; i.e. almost every protobuf message needs to be wrapped in some sort of validator to ensure that necessary fields are present. I understand the arguments, but I did enjoy letting protobuf do the bulk of the work of making sure fields were present.
You should be very careful about marking fields as required. If at some point you wish to stop writing or sending a required field, it will be problematic to change the field to an optional field – old readers will consider messages without this field to be incomplete and may reject or drop them unintentionally. You should consider writing application-specific custom validation routines for your buffers instead.
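A minimal sketch of such an application-specific validation routine. It prefers protobuf's own `HasField` presence check when the message provides one, and falls back to plain attribute lookup so the sketch stays library-free; the field names are made up for illustration:

```python
# Fields this application treats as required, even though the
# schema marks everything optional (names are hypothetical).
REQUIRED_FIELDS = ("user_id", "timestamp")

def validate(msg, required=REQUIRED_FIELDS):
    """Raise ValueError if any application-required field is missing.

    A real protobuf message exposes msg.HasField(name) for
    presence checks; for other objects we fall back to
    attribute truthiness.
    """
    has_field = getattr(msg, "HasField", None)
    missing = [
        name for name in required
        if not (has_field(name) if has_field else getattr(msg, name, None))
    ]
    if missing:
        raise ValueError("missing required fields: %s" % ", ".join(missing))
    return msg
```

The upside over schema-level `required` is exactly the flexibility described above: when a field stops being required, you change one application-side list instead of the wire schema that every old reader has baked in.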
They're a tradeoff. Sometimes you really are confident enough that this attribute will be required forever that the saving of not having to write custom validation is worth it.
Named tuples assign meaning to each position in a tuple and allow for more readable, self-documenting code. They can be used wherever regular tuples are used, and they add the ability to access fields by name instead of position index.
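For example, with the standard library's `collections.namedtuple`:

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])

p = Point(x=11, y=22)
p.x + p.y    # access fields by name -> 33
p[0] + p[1]  # access by position still works -> 33
x, y = p     # unpacks like a regular tuple
```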
At one company I worked at, we used Avro to transfer data over the network. It's strongly typed with schemas, and it has both a compact binary form for transfer over the network and a text-based form for storage on disk that looks like JSON except field order matters (the schema and data are stored in separate files).
aeruder already posted the awesome glyphobet post on attrs; I agree with everything in there. The Python object protocol is great, but difficult to use for small classes. If you are not doing some kind of schema validation on REST endpoints, you're doing it wrong, I would say. But JSONSchema is also really sucky; writing more JSON to validate JSON is not my idea of simplicity. Will have to look at the alternatives at some point.
> The main reason this happens in Python is that creating actual datatypes is incredibly clunky
It's not clunky, it's outright impossible. Datatypes are inhabited by compound values (data constructors applied to arguments), but Python simply doesn't have compound values. All it has is object identities, which are primitive and indecomposable values no matter how compound the object is.