Anyone remember the promise of ATM networking in the 90s? It was telecom-grade, circuit-switched networking that would handle voice, video and data down one pipe. Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes. You called a computer as if it were a telephone (or maybe that was Datakit?) and ATM handed the user a byte stream like TCP. Imagine never needing an IP stack or setting traffic priority because the network already handles the QoS. Was it simple to deploy? No. Was it cheap? Nooohooohooohooo. Was Ethernet any of those? YES AND YES. ATM was superior but lost to the simpler and cheaper Ethernet, which was pretty crappy in its early days (thinnet, thicknet, terminators, vampire taps, AUI, etc.) but good enough.
The funny part is this had the unintended consequence of needing to reinvent the wheel once you get to the point where you need telecom-scale infrastructure. Ethernet had to adapt to deterministic real-time needs, so various hacks and standards have been developed to paper over these deficiencies, which is what TSN is: reinventing ATM's determinism. On top of that we also now have OTN, yet another protocol layered over the various others to mux everything down a big fat pipe to the other end, which lets Ethernet (and IP, ATM, etc.) ride deterministically between data centers.
Without getting too deep into the telco detail, I think the lesson was that hard real-time is both much harder to achieve and not actually needed. People will happily chat over nondeterministic Zoom and Discord.
It's both psychological and slightly paradoxical. Once you let go of saying "the system MUST GUARANTEE this property", you get a much cheaper, better, more versatile and higher bandwidth system that ends up meeting the property anyway.
What you need is more than enough bandwidth.
Think of the difference between a highway with few cars versus a highway filled to the brim with cars. In the latter case traffic slows to a crawl even for ambulances.
It seems like it was just cheaper and easier to build more bandwidth than it was to add traffic priority handling to internet connectivity.
That was a selling point ("hey, we guarantee this circuit"), but it was also very expensive and labor-intensive.
Just dumping your bits into the internet and letting the network figure it out outsourced a lot of that complexity to every hop along the network you didn't own. But because everyone cares about their own networks, they would (in theory) make sure each hop was healthy, so you didn't need to hand-hold your circuit or route completely end to end.
Seeing that the tech would never be good enough, they sold off the whole thing for cheap. Years later, they bought it back for way, way more money because they desperately needed to get into the cell phone business that was clearly headed to the moon.
I totally understand the pride they had in the reliability of their system, but it turns out that dropped calls just aren't that big of a deal when you can quickly redial and reconnect.
Those old phones had a long range. It was hard to make small ones because the old AT&T towers were much farther apart, up to 40km. Meanwhile, their competitors focused on smaller coverage areas (e.g. 2km or less for PCS) and better tech (CDMA), and it seemed to pay off.
It wasn't until phones shrank and service got cheaper that consumer adoption took off. Businesses and early adopters will pay even if the product is inconvenient and costly to use, as long as the benefit exceeds the cost.
Well, not "happily". (Doesn't every video conference do the "hold on, can you hear me? I have wifi issues" dance every other day?) But it works on a good day.
In my club, however, where people don't have frequent video meetings, there is always somebody with trouble whenever there's a virtual club meeting ... often the same person.
ATM was nifty if you had a requirement of establishing voice-style, i.e. billable, connections. No thanks. It was an interesting technology but hopelessly hobbled by the desire to emulate a voice call that fit into a standard invoice line.
That approach of course didn’t age well when voice almost became a niche application.
I think standards are important, and I'm sad that no one bothers anymore, but stuff like this and the inclusion of interlace in digital video for that little 3 year window when it might have mattered does really sour one on the process.
BTW, I searched Kagi for "tolerable latency without echo cancellation in France" and saw your comment. Wow. I didn't realize web crawlers were that current these days.
There's likely an element of the "layering TCP on TCP" problem going on, too.
The classic popular treatment of the subject is: https://www.wired.com/1996/10/atm-3/
When I pointed out in a previous post how much X.400 sucked, even that never got anywhere near X.25. X.25 is the absolute zero of any networking scale; the scale starts with X.25 at -273°C and goes up from there.
These days I think all of AT&T's flavors of DSL, including their IPTV-supporting VDSL, are considered 'legacy', but for the longest time their "IP-DSL" was the future, and for 15+ years they've been trying to shed this ATM-based DSL.
However, actually building a functional routing infrastructure that supported QoS was pretty intractable. That was one of several nails in ATM's coffin (I worked a little on the PNNI routing proposal).
Edit: I should have admitted that yes, loss does have a relationship to queue depth, but that doesn't result in infinite queues here. It does mean that we have to know the link delay and the target bandwidth and have per-flow queue accounting, which isn't a whole lot better really. Some work was done with statistical queue methods that had simpler hardware controllers, but the whole thing was indeed a mess.
For Linux heads, it was doubly annoying, as ATM was not directly supported in the kernel. You had to download a separate patch to compile the necessary modules, then install and run three separate system daemons, all with the correct arguments for our network, just to get a working network device. And of course you had to download all the necessary packages with another computer, since you couldn’t get online yet. This was the early 2000s, so WiFi was not really common yet.
Even once you got online, one of the daemons would randomly crash every so often and you'd have to restart to get back online. It was such a pain.
What year was that? I could see a college/university network department dominated by old-school telecom guys deciding to use ATM to connect buildings, but it's kind of insane to think anyone at any point in time thought it was reasonable to push it to individual endpoints.
Once those requirements dropped (partially because people just started to accept weird echo), the replacement became MPLS and whatever you can send IP over, where Ethernet sometimes shows up as packaging around the IP frame but has little relation to Ethernet otherwise.
And this sort of checks out: most of the complaints about the internet architecture are about someone putting smart middle boxes in a load-bearing capacity, and then it becomes hard to deploy new edge devices.
I love this. Ethernet is such shit. What do you mean the only way to handle a high speed to lower speed link transition is to just drop a bunch of packets? Or sending PAUSE frames which works so poorly everyone disables flow control.
To handle a speed transition without dropping packets, the switch or router at the congestion point needs to be able to buffer the whole receive window. It can hold the packets and then dribble them out over the lower speed link. The server won’t send more packets until the client consumes the window and sends an ACK.
But in practice the receive window for an Internet scale link (say 1 gigabit at 20 ms latency) is several megabytes. If the receive window was smaller than that, the server would spend too much time waiting for ACKs to be able to saturate the link. It’s impractical to have several MB of buffer in front of every speed transition.
Instead what happens is that some switch or router buffer will overflow and drop packets. The packet loss will cause the receive window, and transfer rate, to collapse. The server will then send packets with a small window so it goes through. Then the window will slowly grow until there’s packet loss again. Rinse and repeat. That’s what causes the saw-tooth pattern you see on the linked page.
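For a rough sense of scale, here is a minimal back-of-the-envelope sketch (in Python, using the hypothetical 1 Gbit / 20 ms figures from above rather than measurements of any real link) of the bandwidth-delay product a buffer would have to absorb to never drop a packet:

    # Bandwidth-delay product: how much data can be "in flight" before the
    # first ACK can possibly come back. That's the buffer a switch would need
    # to hold a full receive window without dropping anything.
    def bandwidth_delay_product(bits_per_second, rtt_seconds):
        return bits_per_second * rtt_seconds / 8  # bytes

    link_speed = 1_000_000_000   # 1 Gbit/s (hypothetical figure from the comment)
    rtt = 0.020                  # 20 ms round trip

    bdp = bandwidth_delay_product(link_speed, rtt)
    print(f"{bdp / 1_000_000:.1f} MB")   # -> 2.5 MB, per flow, at every speed transition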
We were expecting to get some sort of unbelievably fast internet experience, but it was awful as the internet gateway was 1 Gb or something similar.
We just wanted our own stuff. We did not want to coordinate with a proprietary vendor to network or be charged by the byte to do so.
I worked on a network that used RSVP ( https://en.wikipedia.org/wiki/Resource_Reservation_Protocol ) to emulate the old circuit-switched topology. It was kinda amazing to see how it could carve guaranteed-bandwidth paths through the network fabric.
Of course, it also never really worked with dynamic routing and brought in tons of complexity with stuck states. In our network, it eventually was just removed entirely in favor of 1gbit links with VLANs for priority/normal traffic.
It was complete garbage.
Another lab of theirs made a Winsock that would use ATM SVCs instead of TCP and proudly put out a brochure extolling their achievement: "Web protocol without having to use TCP". Because clearly it was TCP hindering adoption of the Web /s
The Bellhead vs. Nethead was a real thing back then. To paraphrase an old saying about IBM, Telcos think if they piss on something, it improves the flavor.
One of the jobs I applied for out of college was to lead Schengen's central police database (think stolen car reports, arrest warrants etc.) which would federate national databases. For some unfathomable reason, they chose X.400 as the messaging bus for that replication, and endured massive delays and cost overruns for that reason. I guess I dodged a bullet by not going there.
Maybe X.500 - also known as LDAP, and widely deployed across enterprises in the form of Active Directory.
Of course, the biggest--and weirdest--success of the ITU standards is that the OSI model is still frequently the way networking stacks are described in educational materials, despite the fact that it bears no relation to how any of the networking stack was developed or is used. If you really dig into how the OSI model is supposed to work, one of the layers described only matters for teletypes--which were already a dying, if not dead, technology when the model was developed in the first place.
On the ITU side, they have made improvements, including allowing a plain fully qualified domain name as the subject of a certificate, as an alternative to a sequence of sets of attributes.
Now I'm young enough not to have seen teletypes in an actual production use setting, but I've never heard anyone suggesting the presentation layer was for teletypes. That's just Google-level FUD.
X.500 was stripped down to form LDAP
No, LDAP was a student project from UMich that somehow gained mindshare because (a) it wasn't ISO, and (b) it cleverly had an 'L' in front of it. It's now more complex and heavyweight than the original DAP, but people think it isn't because of that original clever bit of marketing. The other difference from that era, and even the early internet era, is how much is no longer standardised at all, but decided by global monopolies. Back then it was a given that Everything would at least need to interoperate at the national level. But we may be returning to that.
ISPs and providers (fiber or not) that started out as cable companies (Comcast, Charter) are hitching to SCTE/CableLabs standards and equipment from their traditional vendors (Commscope, wherever Cisco's Scientific Atlanta business lines ended up)
In the US there is little need for interoperability since networks don't have to be unbundled or support any kind of competition, outside of the requirement that cable networks allow a customer to bring their own CPE (and only for copper networks; when they move to fiber all bets are off).
The only benefit interoperability brings is pricing--if a vendor can sell their platform to many ISPs, they get an economy of scale. This doesn't mean standards win; even in the cable world, DOCSIS 4.0 has two flavors, with Comcast being just about the only company that has picked one flavor and the rest of the industry picking the other.
Have you ever tried to implement an ITU standard from just reading the specs? It's hard. Firstly you have to spend a lot of money just to buy the specs. Then you find the spec is written by somebody who has a proprietary product, and is tiptoeing along a line that reveals enough information to keep the standards body happy (ie, has enough info to make it worthwhile to purchase the specification), and not revealing the secret sauce in their implementation.
I've done it, and it's an absolute nightmare. The IETF RFCs are a breath of fresh air in comparison. Not only can you read the source, there are example implementations!
And if you think that didn't lead to a better outcome, you're kidding yourself. The ITU process naturally leads to a small number of large engineering orgs publishing just enough information so they can interoperate, while keeping enough hidden so the investment discourages the rise of smaller competitors. The result is, even now I can (and do) run my own email server. If the overly complicated bureaucratic ITU standards had won the day, I'm sure email would have been run by a small number of CompuServe like rent seeking parasites for decades.
UPDATE: some say that's because XMPP was too encompassing of a standard (if a format allows you to do too much it loses usefulness, like saying a binary file format can store anything). IMO that's not the reason; they could have just supported their own subset. They scrapped interoperability purely for competitive reasons, IMO.
I don't think that's IETF policy. Individual IETF working groups decide whether to request publication of an RFC, and the availability of open source implementations is a strong argument in favour of publication, but not a hard requirement.
If the IETF standards are sometimes useful, it's more a matter of culture than of policy.
However, I think DER is good (and is better than BER, PER, etc in my opinion). (I did make up a variant with a few additional types, though.)
OID is also a good idea, although I had thought they should add another arc for being based on various kind of other identifiers (telephone numbers, domain names, etc) together with a date for which that identifier is valid (to avoid issues with reassigned identifiers) as well as possibility of automatic delegation for some types (so that e.g. if you register an account on another system then you can get a free OID from it too; there is a bit of difficulty in some cases but it might be possible). (I have written a file about how to do this, although I did not publish it yet.)
Besides LDAP and X.509, you've got old standards that were very successful for a while. I'm perhaps a little bit too young for this, but I vaguely remember X.25 practically dominated large-scale networking, and for a while inter-network TCP/IP was often run over X.25. X.25 eventually disappeared because it was replaced by newer technology, but it didn't lose to any contemporary standard.
And if you're looking for new technology, CTAP (X.1278) is a part of the WebAuthn standard, which does seem to be winning.
I'm pretty sure there are other X-standards common in the telco industry, but even if we just look at the software industry, some ITU-T standards won out. This is not to say they weren't complex or that we didn't have simpler alternatives, but sometimes the complex standard does win out. The "worse is better" story is not always true.
The OP article is definitely wrong about this:
> “Of all the things OSI has produced, one could point to X.400 as being the most successful,
There are many OSI standards that are more successful than X.400, by the sheer virtue of X.400 being an objective failure. But even putting that aside, there are X-family standards that are truly successful and ubiquitous. X.500 and X.509 are strong contenders, but the real winner is ASN.1 (the X.680/690 family, originally X.208/X.209).
ASN.1 is everywhere: it's obviously present in other ITU-T based standards like LDAP, X.509, CTAP and X.400, but it's also been widely adopted outside of ITU-T in the cryptography world: PKCS standards (used for RSA, DSA, ECDSA, DH and ECDH key storage and signatures), Kerberos, S/MIME, TLS. It's also common in some non-cryptographic protocols like SNMP and EMV (chip-and-PIN and contactless payment for credit cards). Even if you're using JOSE or COSE or SSH (which are not based on ASN.1), ASN.1-based PKCS standards are often still used for storing the keys. And this is completely ignoring all the telco standards. ASN.1 is everywhere.
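To make the ubiquity concrete, here is a toy sketch (mine, not from the comment above) of the tag-length-value encoding that all of those formats share. It hand-decodes a made-up DER SEQUENCE of two INTEGERs, the same shape as the r and s values in a DER-encoded ECDSA signature; it only handles short-form lengths, so it's an illustration rather than a real parser:

    # A made-up DER blob: SEQUENCE { INTEGER 5, INTEGER 1000 }
    DER_BLOB = bytes.fromhex("3007020105020203e8")

    def parse_tlv(data, offset=0):
        """Parse one short-form DER tag-length-value triple at `offset`.
        Returns (tag, value_bytes, next_offset)."""
        tag = data[offset]
        length = data[offset + 1]
        assert length < 0x80, "long-form lengths not handled in this sketch"
        start = offset + 2
        return tag, data[start:start + length], start + length

    tag, body, _ = parse_tlv(DER_BLOB)
    assert tag == 0x30                      # 0x30 = constructed SEQUENCE
    values, pos = [], 0
    while pos < len(body):
        t, v, pos = parse_tlv(body, pos)
        assert t == 0x02                    # 0x02 = INTEGER
        values.append(int.from_bytes(v, "big"))

    print(values)   # [5, 1000]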
I’ve never been a fan
Using ITU voice codecs!
Another was NIH in considerably important places.
Yet another was that ITU standards promoted use of compilers generating serialization code from schema, and that required having that compiler. One common issue I found out from trying to rescue some old Unix OSI code was that the most popular option in use at many universities was apparently total crap.
In comparison, you could plop a grad student with telnet to experiment with SMTP. Nobody cared that it was shitty, because it was not supposed to be used long. And then nobody wanted to invest in better.
(Presentation and Session are currently taught in terms of CSS and cookies in HTML and HTTP, respectively. When the web stack became Officially Part of the Officiously Official Network Stack is quite beyond me, and rather implies that you must confound the Web and the Internet in order to get the Correct Layering.)
https://computer.rip/2021-03-27-the-actual-osi-model.html - The Actual OSI Model
> I have said before that I believe that teaching modern students the OSI model as an approach to networking is a fundamental mistake that makes the concepts less clear rather than more. The major reason for this is simple: the OSI model was prescriptive of a specific network stack designed alongside it, and that network stack is not the one we use today. In fact, the TCP/IP stack we use today was intentionally designed differently from the OSI model for practical reasons.
> The OSI model is not some "ideal" model of networking, it is not a "gold standard" or even a "useful reference." It's the architecture of a specific network stack that failed to gain significant real-world adoption.