In my experience, scientists just use the tools they know how to use and have available. So maybe they just have a 2.7 setup on their personal/work computers, or maybe some tool/library they depend on still uses 2.7?
for x in $(curl -s https://alpha.wallhaven.cc/random | pcregrep -o1 "https://alpha.wallhaven.cc/wallpaper/(\d+)" | sort | uniq) ; do wget "https://wallpapers.wallhaven.cc/wallpapers/full/wallhaven-$x.jpg" ; done
Hardcore bash users would probably call that 'easy'. I have someone like that on my team - holy crap, some of the bash stuff they can come up with.
Hardcore bash sure sounds like fun, but when things start getting too big or messy I usually find a Python script with some `subprocess` [0] tricks to be way easier on the eyes.
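For example, the one-liner above translates into a short Python script where the scraping logic becomes a testable function and `subprocess` handles the shelling-out. This is just a sketch - the URL patterns are copied from the one-liner, and whether the listing page still uses that markup is an assumption:

```python
import re
import subprocess

def wallpaper_ids(html):
    """Extract unique wallpaper IDs from the listing HTML.

    Assumes links look like https://alpha.wallhaven.cc/wallpaper/<id>,
    as in the bash one-liner above.
    """
    return sorted(set(re.findall(r"https://alpha\.wallhaven\.cc/wallpaper/(\d+)", html)))

def fetch(url):
    """Shell out to curl, mirroring the one-liner; check=True raises on failure."""
    result = subprocess.run(["curl", "-s", url], capture_output=True, text=True, check=True)
    return result.stdout

def download_random_wallpapers():
    # equivalent of the for-loop in the bash version, but each step is debuggable
    for wid in wallpaper_ids(fetch("https://alpha.wallhaven.cc/random")):
        subprocess.run(
            ["wget", f"https://wallpapers.wallhaven.cc/wallpapers/full/wallhaven-{wid}.jpg"],
            check=True,
        )
```

The nice part is that `wallpaper_ids()` can be unit-tested against a saved HTML file without hitting the network, which is where bash one-liners tend to fall over.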
I hope none of that stuff makes it into your main codebase? If there is even a remote chance of it requiring maintenance by someone other than the person who wrote it, that is - which is probably a 99.99% chance, unless it's temporary/throwaway code.
I guess wget has a useful spidering function that could probably page through the website's search-results pages, downloading all the preview and full-size images. You'd have to do the login bit as a separate call and fetch the first search page's URL yourself.
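Something along these lines might work - the recursive-retrieval flags are standard wget, but the login endpoint, credentials, and search URL are all placeholders you'd have to figure out from the site yourself:

```shell
#!/usr/bin/env bash
# Sketch: log in with curl, then let wget spider the search results.
BASE="https://alpha.wallhaven.cc"
COOKIES="cookies.txt"

# Do the login bit as a separate call, saving the session cookie for wget.
# (hypothetical endpoint and form fields - check the actual login form)
# curl -c "$COOKIES" -d "username=me&password=secret" "$BASE/auth/login"

# Recursively follow links two levels deep (results page -> wallpaper page
# -> image), keeping only jpgs, reusing the login cookie.
WGET_ARGS=(--recursive --level=2 --no-parent --accept '*.jpg'
           --load-cookies "$COOKIES"
           --no-directories --directory-prefix wallpapers
           "$BASE/search?q=landscape&page=1")   # assumed search URL

# Dry run: print the command instead of hammering the site.
echo "wget ${WGET_ARGS[*]}"
```

You'd still need your own loop over `page=1`, `page=2`, ... since wget won't invent the pagination URLs for you unless they're linked from the pages it crawls.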