Reading one paper a week is 520 papers a decade, and there's your "Oh wait, I've seen a solution to this problem before..." superpower as a senior dev. Not your only one, but one that's easy to acquire.
Based on my experience reading academic papers, I'd suggest you're often better off skimming three papers in a week than reading one closely.
I would often do this in grad school:
* Go search for papers that broadly had to do with some structure or other mathematical gadget I was interested in at the time,
* Read the abstracts of those papers to find the ones that looked most interesting (this step is easy to batch; see the sketch after this list),
* Take the most interesting papers, and read the statements of the theorems,
* Finally, devote a little more attention to those papers that had interesting theorems that seemed to fall within the domain of what I was working on.
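For anyone who wants to automate the first couple of steps, here's a minimal sketch that pulls titles and abstracts from the arXiv API with nothing but the standard library. The search term ("operad") and result count are placeholders I made up, not part of the workflow above; swap in whatever structure or gadget you're chasing.

```python
# Sketch: batch the "search, then skim abstracts" steps via the arXiv API.
# The query term and max_results below are arbitrary placeholders.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by arXiv's feed

def fetch_abstracts(query: str, max_results: int = 25):
    """Return (title, abstract) pairs for papers matching `query`."""
    url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode({
        "search_query": f"all:{query}",
        "start": 0,
        "max_results": max_results,
    })
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    return [
        (entry.findtext(f"{ATOM}title", "").strip(),
         entry.findtext(f"{ATOM}summary", "").strip())
        for entry in feed.findall(f"{ATOM}entry")
    ]

if __name__ == "__main__":
    for title, abstract in fetch_abstracts("operad"):
        print(title)
        print(abstract[:300], "...\n")
```

From there the human part takes over: skim the printed abstracts, pick the handful worth opening, and only then go hunting for theorem statements.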
I did this with math papers, but there's no particular reason you can't generalize this to other fields. CS in particular can use almost the exact same methods. For less mathematical fields, you'd need to make some substitutions, such as "section headers and key topic sentences" for "statements of theorems," but you can make it work there, too.
Doing this, in a decade you end up reading 1560 abstracts (3 a week × 52 weeks × 10 years), which is probably more useful in terms of "Oh, wait, I've seen this before"-type insights than reading 520 entire papers.
I absolutely agree with this. A lot of what’s in papers is boilerplate - stuff to say “yes, I’ve read the other relevant works. Yes, I understand how qualitative research works. No, my study is not based on the opinions of my four closest friends”. In my field I’d usually skim the abstract, and maybe from there read the description of the apparatus, conclusions, and further work if the study looked particularly interesting. But you can get 60-90% of a paper’s value from the abstract.
I mostly agree with this sentiment - with the added note that there is significant value in 'going deep' on a small subset of those papers. In my opinion, the best bang for the buck there is to focus on the well-known, impactful papers in that domain. I think there are big benefits to really digging into what makes a particular solution work, and how the authors 'prove out' the full idea in the paper.
Honestly, I don't think I could keep up with one paper a day. I had a reading course which was 3 per week (read in detail), and that was enough work for me. It could take 45 minutes to read a paper.
I was an on-and-off-again ACM member for 40 years, and one of the better publications was ACM Computing Surveys: https://dl.acm.org/journal/csur -- even older issues are pretty high value, and there are tons of references to follow.
You can go to Semantic Sanity and set up feeds of papers, which you can seed with example papers. I've found some great (i.e., highly relevant to my projects) ML papers this way.
It can depend on your research interests, but Google Scholar is my go-to first dip into any topic. Then it's a bit of rabbit-holing by looking at cited sources and reading them, or reading other papers that were part of the same journal/conference.
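Google Scholar itself has no public API, but if you want to script the rabbit-holing step, here's a rough sketch against the Semantic Scholar Graph API. The endpoint, field names, and the example paper ID are my assumptions from its public docs, not anything from the comment above; treat it as a starting point rather than a finished tool.

```python
# Sketch: "rabbit-hole" from one paper to the works it cites, via the
# Semantic Scholar Graph API. Endpoint/fields are assumptions from the docs.
import json
import urllib.request

API = "https://api.semanticscholar.org/graph/v1/paper"

def list_references(paper_id: str, limit: int = 20):
    """Print title + abstract of works cited by `paper_id` (e.g. a DOI or 'arXiv:1706.03762')."""
    url = f"{API}/{paper_id}/references?fields=title,abstract&limit={limit}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    for item in data.get("data", []):
        cited = item.get("citedPaper") or {}
        print(cited.get("title"))
        if cited.get("abstract"):
            print(cited["abstract"][:200], "...\n")

if __name__ == "__main__":
    # Hypothetical example ID: the "Attention Is All You Need" arXiv paper.
    list_references("arXiv:1706.03762")
```

Skimming the cited abstracts this way is basically the same move as following the reference list by hand, just faster to triage.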