Do any of your Python processes open a subprocess that then spawns another Python process as a child (with both getting logged)? If so, have you found a good way to associate the parent and child log messages?
If you just write to stdout, everything makes sense reading the file top to bottom, but logging JSON objects into a database gave me problems: if the parent process echoes the child's stdout, you lose a lot of context about the child; you can also double-log the child's messages (because the parent is echoing them); and you can't easily associate the parent and child objects. I came up with a hacky solution, but I wasn't logging these events to a centralized server at the time.
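One workaround (a sketch, assuming you control both scripts; the `RUN_ID` variable name and `child.py` are my own inventions) is to generate a correlation ID in the parent and pass it to children through the environment, so every JSON record from either process carries the same ID and the two sides can be joined in the database:

```python
import json
import logging
import os
import subprocess
import sys
import uuid

# RUN_ID is a made-up variable name; any unique token works.
# A child process inherits it, so parent and child share one ID.
RUN_ID = os.environ.get("RUN_ID") or uuid.uuid4().hex

class JsonFormatter(logging.Formatter):
    """Emit each record as a JSON object tagged with the shared run ID."""
    def format(self, record):
        return json.dumps({
            "run_id": RUN_ID,
            "pid": os.getpid(),
            "name": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
log = logging.getLogger("parent")

log.info("spawning child")
# "child.py" is hypothetical; it would set up the same formatter and
# pick up RUN_ID from its environment, tagging its records identically.
subprocess.run(
    [sys.executable, "child.py"],
    env={**os.environ, "RUN_ID": RUN_ID},
)
```

The child logs directly to the sink instead of being echoed by the parent, which avoids the double-logging problem, and querying on `run_id` groups the whole process tree.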
(I haven't used gcloud's dashboard, so I might be making wrong assumptions)
To answer your question: it sounds like you have something that works fine for logging messages. Personally, I split logging into two categories: 1) logs (serial events that need context to be useful) and 2) events (metrics or exceptions). I wrote 1) to traditional log files and 2) to an Elasticsearch or statsd database. Ideally, I wanted to use the same mechanism for both and peel off the relevant data into separate databases.
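A sketch of that "same mechanism, two sinks" idea using the stdlib `logging` module (the sink choices here are placeholders; in practice category 2 would ship to Elasticsearch or statsd rather than an in-memory list):

```python
import logging

class EventFilter(logging.Filter):
    """Route records: 'events' are metrics or exceptions, the rest are logs."""
    def __init__(self, want_events):
        super().__init__()
        self.want_events = want_events

    def filter(self, record):
        is_event = bool(getattr(record, "event", False)) or record.exc_info is not None
        return is_event if self.want_events else not is_event

log = logging.getLogger("app")
log.setLevel(logging.INFO)

# 1) Serial, context-heavy logs -> a traditional log file.
file_handler = logging.FileHandler("app.log")
file_handler.addFilter(EventFilter(want_events=False))
log.addHandler(file_handler)

# 2) Events -> in real life a handler that ships to Elasticsearch/statsd;
#    a list-backed handler stands in for it here.
class ListHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

event_handler = ListHandler()
event_handler.addFilter(EventFilter(want_events=True))
log.addHandler(event_handler)

log.info("step 3 of the import finished")                   # file only
log.info("import.duration_ms=842", extra={"event": True})   # events only
```

Everything goes through one logger call site; the filters peel off the metric/exception records into the second sink.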
A metrics database like Elasticsearch will let you query things like, "What modules give the most errors?" "Has this function been called more often this week than last?" "Is this process taking longer when using the newly released version compared to the old one?" etc.
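For instance, the "which modules give the most errors?" question maps to a terms aggregation. A sketch of the request body (the index and field names are made up, and this only builds the dict rather than hitting a cluster):

```python
# Hypothetical index/field names; you would send this with a client call
# along the lines of: es.search(index="app-logs", body=query)
query = {
    "query": {"term": {"level": "ERROR"}},  # only error records
    "aggs": {
        "errors_by_module": {
            # bucket errors by module, top 10 buckets
            "terms": {"field": "module", "size": 10}
        }
    },
    "size": 0,  # we want the aggregation buckets, not the individual hits
}
```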