Below is a short script that downloads the image files and assembles them into a PDF. No browser required.
The script uses an HTTP/1.1 feature called pipelining: sending multiple requests over one TCP connection without waiting for each response. Proponents of HTTP/2 and HTTP/3 want people to believe it has problems because it does not fit their commercialised web business model.
As the script below demonstrates, it works fine.
It is simply a feature that does not suit the online ad industry-funded business model, with its gigantic corporate browsers, bloated, conglomerated web pages and incessant data collection.
Here, only 2 TCP connections are used to retrieve 141 images.
Most servers are less restrictive and allow more than 100 requests per TCP connection.
Pipelining works great, and it is much more efficient than browsers, which open hundreds of connections.
IMHO.
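Stripped of the custom tools, the core idea can be sketched with plain printf and nc. The host and image paths here are placeholders, not the author's setup:

```shell
# Emit several HTTP/1.1 requests back to back, without waiting
# for any response in between: this is pipelining.
# Connection: keep-alive asks the server not to close the
# socket after answering each request.
reqs() {
  for p in /img/1.jpg /img/2.jpg /img/3.jpg; do
    printf 'GET %s HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\n' "$p"
  done
}

# All three requests travel over a single TCP connection, and
# the server answers them in order on the same socket:
#   reqs | nc example.com 80 > responses.bin
reqs
```

The server's responses come back concatenated in request order on the one connection, which is why the response stream can then be split back into individual files, as the script does below.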
(
# Connection header value used in the generated requests
export Connection=keep-alive
x1=http://www.minimizedistraction.com/img/vrg_google_doc_final_vrs03-
# x2: print the URLs for images $1 through $2
x2(){ seq -f "$x1%g.jpg" $1 $2;};
# x3: turn the URLs into pipelined HTTP/1.1 requests (yy025) and
# send them all over a single TCP connection (nc)
x3(){ yy025|nc -vvn 173.236.175.199 80;};
x2 1 100|x3;
x2 101 200|x3;
# yy056 filters the raw response stream (see the link at the end);
# the hex-dump pipeline then splits the byte stream wherever a JPEG
# end-of-image marker (ffd9) meets a start-of-image marker (ffd8),
# leaving one image per line
)|exec yy056|exec od -An -tx1 -vw99999|exec tr -d '\40'|exec sed 's/ffd9ffd8/ffd9\
ffd8/g'|exec sed -n /ffd8/p|exec split -l1;
# convert each hex line back to binary, then collect into a PDF
for x in x??;do xxd -p -r < $x > $x.jpg;rm $x;done;
convert x??.jpg 1.pdf 2>/dev/null;rm x??.jpg
ls -l ./1.pdf
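To see what the od|sed|split stage is doing, here is the same trick run on synthetic data. The file names and the part_ prefix are illustrative:

```shell
# Two tiny fake "JPEGs" (start marker, two payload bytes, end
# marker) glued into one stream, as they would arrive off the wire.
printf '\377\330\001\002\377\331\377\330\003\004\377\331' > glued.bin

# Hex-dump as one long line, drop the spaces, then break the
# stream wherever an end-of-image marker (ffd9) is immediately
# followed by a start-of-image marker (ffd8). Note this is a
# heuristic: ffd9ffd8 can in principle also occur inside image data.
od -An -tx1 -vw99999 glued.bin | tr -d '\40' | sed 's/ffd9ffd8/ffd9\
ffd8/g' | sed -n /ffd8/p | split -l1 - part_

# Back from hex to binary: one file per image.
for f in part_??; do xxd -p -r < "$f" > "$f.bin"; rm "$f"; done
ls part_*.bin
```

Each part_*.bin is a complete ffd8...ffd9 image, ready for ImageMagick's convert.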
More details on yy025 and yy056 here: https://news.ycombinator.com/item?id=27769701