Author here; I think I understand where you might be coming from. I find the functional nature of R combined with pipes incredibly powerful and elegant to work with.
OTOH, in a pipeline you're mutating/summarising/joining a data frame, and it's really difficult to look at the code and keep track of what state the data is in. I try my best to write in a way that makes the state of the data clear (hence the tables I spread throughout the post), but I do acknowledge it can be inscrutable.
A "pipe" is simply a composition of functions. The Tidyverse adds a different syntax for function composition, using the pipe operator, which I don't particularly like. My general objection to the Tidyverse is that it tries to reinvent everything, but the end result is a language that is less practical and less transparent than standard R.
The following code essentially redoes what the code up to the first conf_interval block does there. Which one is clearer may be debatable, but it's shorter by a factor of two and faster by a factor of ten (45 seconds vs. 4 for me).
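As a side note for readers who haven't seen it spelled out: the "a pipe is simply a composition of functions" point is easy to demonstrate. This is a toy Python sketch, purely illustrative; the `pipe` helper is made up for the example and is not from any library (and is not the R code referenced above):

```python
from functools import reduce

# Toy pipe: pipe(x, f, g, h) == h(g(f(x))) -- left-to-right composition.
def pipe(value, *funcs):
    return reduce(lambda acc, f: f(acc), funcs, value)

# The same computation, spelled nested vs. piped:
nested = sum(map(abs, filter(None, [-2, 0, 3])))  # 5
piped = pipe(
    [-2, 0, 3],
    lambda xs: filter(None, xs),  # drop falsy (zero) values
    lambda xs: map(abs, xs),
    sum,
)
```

Both spellings compute the same thing; the pipe form just reads top-to-bottom in the order the transformations happen.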
I am old, so I do not like the tidyverse either -- I can concede it is a matter of personal preference though. (Personally I do not agree with the lattice vs ggplot comment, for example.)
The direct peering to the router is likely going to have a bad time, but the route advertisement interval I mention in the article will coalesce all of those updates together. Downstream peers would only see one update every 30 seconds (or so).
That’s only true if they can be coalesced. Even with RPKI, an intermediate transit router can path-length-flap 100,000 routes every 30-second interval.
Depending on the RA interval alone is negligence, and if you encountered a small ISP that isn’t dampening your updates directly, their peering session is at risk with any of the major transit providers.
Route dampening guardrails were super common 7 years ago, and there isn’t any technological development that fixes what they did, so I highly doubt they fell out of favor.
See my adjacent comment to yours. I would like to see why you think dampening is out of favor. Interval batching is not an equivalent protection. If you were playing BGP battleships you were likely playing at a rate where a single prefix was not updating more than once per minute.
That wouldn’t land in the dampening levels that were normally configured, which I encountered with all of the transit providers.
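For readers unfamiliar with how dampening differs from interval batching: the RFC 2439-style mechanism charges each flap a penalty that decays exponentially, and suppresses the route while the penalty sits above a threshold. A minimal sketch; the numbers below are illustrative defaults, not any particular provider's configuration:

```python
# Route flap dampening, RFC 2439 style: each flap adds a fixed
# penalty, the penalty decays exponentially with a half-life, and a
# route is suppressed while its penalty is above a threshold.
# All parameter values here are illustrative, not a real config.
PENALTY_PER_FLAP = 1000.0
HALF_LIFE_S = 900.0       # penalty halves every 15 minutes
SUPPRESS_LIMIT = 2000.0   # suppress while penalty exceeds this
REUSE_LIMIT = 750.0       # re-advertise once it decays below this

def decayed(penalty, elapsed_s):
    # Exponential decay: after one half-life the penalty has halved.
    return penalty * 0.5 ** (elapsed_s / HALF_LIFE_S)

penalty = PENALTY_PER_FLAP                       # a flap just happened
after_half_life = decayed(penalty, HALF_LIFE_S)  # 500.0
```

The key contrast with interval batching: batching only spaces out how often updates are sent, while dampening carries state about a prefix's history and can keep it suppressed long after the flapping stops.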
First off, the HTTP site 301s to the HTTPS site, so HTTPS is still the likely trigger.
Second, I see that whatever client he's using is specifying a very old TLS 1.0. If it's not MTU (which others have mentioned), then my guess would be a firewall with a policy specifying a minimum TLS version, dropping this connection on the floor.
Certainly weird that wireshark shows TLSv1 while curl shows TLSv1.3. That shouldn't happen unless something interfered with the Client Hello. (or the wireshark version is outdated)
If a TLS handshake is aborted partway through, Wireshark will label it “TLSv1”. It actually retroactively labels the 1.0 TLS packets as 1.3 after a successful TLS 1.3 handshake finishes.
This makes sense because a TLSv1.3 handshake actually starts as 1.0 on the wire and then, IIRC, only upgrades to 1.3 with the Server Hello response to the Client Hello.
The following links document this behavior, in case you or your organization’s security team is nervous TLSv1 is actually being used:
Oh, indeed, that's quite surprising. A TLSv1.3 Client Hello always contains the supported_versions extension, which should allow wireshark to label it correctly, regardless of whether or not the handshake actually finishes. Though, tbf, it does say TLSv1 and not TLSv1.0. I wonder how it would look had TLSv1.3 been named TLSv2.0 after all...
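To make concrete what the dissector actually sees on the wire: a TLS 1.3 Client Hello is typically sent with a record-layer version field of 0x0301 (which reads as TLS 1.0), while the real version (0x0304) travels in the supported_versions extension inside the hello. A hand-built byte sketch, not a real capture:

```python
# First bytes of a typical TLS 1.3 Client Hello record (hand-built
# sketch, not a real capture). The record-layer version field says
# 0x0301 -- "TLS 1.0" -- for middlebox compatibility; the actual
# version (0x0304, TLS 1.3) is carried in the supported_versions
# extension inside the hello body.
record_header = bytes([
    0x16,        # content type: handshake
    0x03, 0x01,  # legacy record version: reads as TLS 1.0 on the wire
    0x01, 0x40,  # record length (arbitrary for this sketch)
])
record_version = (record_header[1], record_header[2])
looks_like_tls10 = record_version == (0x03, 0x01)
```

So a dissector that only gets as far as this record header, with the handshake aborted before the Server Hello, has nothing but "TLS 1.0-ish" bytes to go on.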
Right - the old rule of thumb used to be an Ethernet collision domain should have a maximum of five segments, with four repeaters, and three populated mixing segments (i.e. buses like 10BASE5 or 10BASE2), the other two segments being link segments (i.e. point-to-point like 10BASE-T or 10BASE-FL).
I think the idea was that anything more than this was likely to lead to long enough round-trip times between devices at the far ends of the Ethernet that the CSMA/CD algorithm couldn't be guaranteed to work.
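The arithmetic behind that guarantee is simple: at 10 Mb/s the slot time is 512 bit times (one 64-byte minimum frame), and the worst-case round trip across all segments and repeaters has to fit inside it. A back-of-the-envelope sketch:

```python
# CSMA/CD budget for classic 10 Mb/s Ethernet: a collision must be
# detected before the sender finishes a minimum-size (64-byte) frame,
# i.e. within one slot time of 512 bit times.
BIT_RATE = 10_000_000        # bits/s for 10BASE-* Ethernet
SLOT_TIME_BITS = 512         # 64 bytes * 8 bits
slot_time_s = SLOT_TIME_BITS / BIT_RATE  # 51.2 microseconds
# The 5-4-3 rule caps the number of segments and repeaters so that
# worst-case round-trip propagation plus repeater delays stays
# inside this 51.2 us budget.
```

If the round trip exceeds the slot time, a station can finish transmitting a minimum-size frame before the collision signal gets back to it, and CSMA/CD silently fails to detect the collision.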