Python's facilities for calling subprocesses are pretty inconvenient compared to bash, IMO. It defaults to binary output instead of UTF-8, so I almost always have to set an option for that. I wind up having to define threads, with their awkward syntax, just to run programs in the background and do anything with their output in real time. The APIs for checking the exit code vs raising an error are pretty non-obvious and I have to look them up every time. And I always wind up writing some boilerplate to strip whitespace from the end of each line and filter out empty lines, like p.stdout.rstrip().split('\n'), which can be subtly incorrect depending on what program I'm invoking.
"subprocess.run" appeared in Python 3.5, and it's pretty nice - for example, you pass "check=True" to raise on a nonzero exit code, and omit it if you want to check the exit code yourself. And to get text output you pass "text=True" (or encoding="utf-8" if you are unsure what the system encoding is).
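A minimal sketch of that pattern (using echo as a stand-in for whatever command you're actually running):

```python
import subprocess

# check=True raises CalledProcessError on a nonzero exit code;
# text=True decodes stdout/stderr to str instead of bytes
result = subprocess.run(["echo", "hello"],
                        capture_output=True, text=True, check=True)
print(result.stdout)  # "hello\n"
```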
As for your boilerplate, "p.stdout.splitlines()" sounds like what you want - it's what you normally use to parse process output line by line, and it handles the trailing newline (and \r\n) correctly.
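A quick comparison of the two approaches on plain strings:

```python
out = "line1\nline2\n"
# rstrip().split('\n') works here, but splitlines() also copes
# with \r\n line endings and with empty output
print(out.splitlines())         # ['line1', 'line2']
print("a\r\nb\n".splitlines())  # ['a', 'b']
print("".splitlines())          # []
```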
The background process is the hardest part, but for the most common case you don't need any threads:
import subprocess

proc = subprocess.Popen(["slow-app", "arg"], stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    print("slow-app said:", line.rstrip())
print("slow-app finished, exit code", proc.wait())
Sadly, if you need to parse multiple streams, threads are often the easiest.
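For that case, a rough sketch with one reader thread per stream (a python one-liner stands in for "slow-app" here, and the pump helper is just an illustrative name):

```python
import subprocess
import sys
import threading

def pump(stream, label, sink):
    # read lines from one stream until the process closes it
    for line in stream:
        sink.append((label, line.rstrip()))

# writes one line to stdout and one to stderr
cmd = [sys.executable, "-c",
       "import sys; print('hello'); print('oops', file=sys.stderr)"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE, text=True)
lines = []
threads = [threading.Thread(target=pump, args=(proc.stdout, "out", lines)),
           threading.Thread(target=pump, args=(proc.stderr, "err", lines))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("exit code", proc.wait())
```

Reading each pipe from its own thread avoids the classic deadlock where the child blocks writing to one full pipe while the parent blocks reading the other.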