• cm0002@lemmy.world · 5 months ago

    Does anyone else have the thought that maybe it’s time to just replace these 30+ year old protocols? Seems like the entire networking stack is held together with string and duct tape, and it’s unnecessarily complicated.

    A lot of the decisions made some sense in the 80s and 90s, but they seem ridiculous in this day and age lmao

    • NaibofTabr@infosec.pub · edited · 5 months ago

      Seems like the entire networking stack is held together with string and duct tape, and it’s unnecessarily complicated.

      The more you learn about network technology, the more you realize how cobbled together it all is. Old temporary fixes become permanent standards as new fixes are written on top of them. Apache, for a long time the most widely used web server, famously got its name from being “a patchy” server. It’s amazing that any of it works at all. It’s even more amazing that it’s been developed to the point where people with no technical training can use it.

      The open nature of IP is what allows such a varied conglomerate of devices to share information with each other, but it also allows for very haphazard connections. The first modems were just an abuse of the existing voice phone network. The internet is a functional example of building the airplane while you’re flying it. We try to revise the standards as we go, but we can’t shut the whole thing down and rebuild it from scratch. There are no green fields.

      It has always been so. It must be so. It will continue to be so.

      (the flexibility of it all is really amazing though - in 2009 phreakmonkey was able to connect a laptop to the internet with a 1964 Livermore Data Systems Model A acoustic coupler modem and access Wikipedia!)

    • words_number@programming.dev · 5 months ago

      Some ancient protocols do get replaced gradually though. Look at HTTP/3 not using TCP anymore. I mean, at least it’s something.

        • words_number@programming.dev · 5 months ago

          Nope, it uses a protocol on top of UDP called QUIC. If you count underlying protocols further down the stack, obviously all of them are really old.
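
          For the curious, here’s a minimal Python sketch of the raw UDP exchange QUIC is built on (the payload is deliberately fake, and example.com:443/udp may simply never answer). QUIC reimplements streams, encryption, and retransmission on top of connectionless datagrams like this one, which is how HTTP/3 sidesteps TCP entirely:

            import socket

            # QUIC rides on plain UDP datagrams; everything TCP normally
            # provides (ordering, retransmission, congestion control) is
            # reimplemented by QUIC in userspace on top of them.
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(2.0)

            # Not a real QUIC Initial packet; a real handshake would put
            # a TLS 1.3 ClientHello inside the very first datagram.
            sock.sendto(b"not-a-real-quic-packet", ("example.com", 443))

            try:
                data, addr = sock.recvfrom(65535)
                print(f"got {len(data)} bytes back from {addr}")
            except socket.timeout:
                print("no reply (expected for a bogus packet)")
            finally:
                sock.close()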

    • ryannathans@aussie.zone · 5 months ago

      Wait till you hear about when IPv6 was first introduced (the 90s) and how 50% of the internet still doesn’t work with it.

      Businesses don’t want to change shit that “works”, so you still have stuff like the original KAME project code floating around from the 90s.

    • Dangdoggo@kbin.social · 5 months ago

      I definitely would love to see a rework of the network stack at large, but idk how you’d do it without an insane amount of cooperation among tech giants, which seems sort of impossible.

    • Railing5132@lemmy.world · 5 months ago

      I may be waaaay off here, but the internet as it exists is pretty much built on DNS, isn’t it? I mean, the whole idea of ARPANET back in the 60s and 70s was to build a robust, redundant, self-healing network that could survive nuclear armageddon, and except when humans f it up (intentionally or otherwise), it generally does what it says on the tin.

      Now, there are arguments to be made about securing the protocol, but if you rip and replace the routing protocols, I think you’d have to call it something other than the Internet.

      • Inktvip@lemm.ee · 5 months ago

        Making a typo in the BGP config is the internet’s version of nuclear Armageddon

    • somnuz@lemm.ee · 5 months ago

      The same unfortunately goes for a big chunk of law on a global scale… Constant progress, new possibilities and technologies, and change in general are outpacing some dusty, constantly abused old solutions. With every second that goes by, every “somehow still holding” relic comes under more pressure. As a species we can have some really great ideas, but long-term planning and future-proofing are still not our strongest suit.

  • zepplenzap@lemmy.one · 5 months ago

    Am I the only one who can’t think of a time DNS has caused a production outage on a platform I worked on?

    Lots of other problems over the years, but never DNS.

    • bamboo@lemmy.blahaj.zone · 5 months ago

      I have a coworker who always forgets TTL is a thing and never plans ahead. On multiple occasions they’ve moved a database, updated DNS to reflect the change, and then been confused about why everything is broken for 10-20 minutes.

      I really wish they’d learned the first time, but every once in a while they come to me to troubleshoot the same issue.
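
      The failure mode is pure caching: every resolver between the app and the authoritative server is allowed to keep serving the old answer until the TTL runs out. A toy sketch in Python of a stub resolver doing exactly that (the cache itself is hypothetical, but real resolvers, OS caches, and client libraries behave the same way):

        import time
        import socket

        # Toy stub-resolver cache that honors TTL.
        _cache = {}  # hostname -> (expires_at, address)

        def resolve(name, ttl=1200):
            now = time.time()
            if name in _cache and now < _cache[name][0]:
                return _cache[name][1]  # cached answer, fresh or stale
            addr = socket.gethostbyname(name)  # real lookup
            _cache[name] = (now + ttl, addr)
            return addr

        # If the record changes upstream right after the first call,
        # every lookup for the next `ttl` seconds still returns the old
        # address: the "why is everything broken for 20 minutes" window.
        print(resolve("example.com"))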

        • synae[he/him]@lemmy.sdf.org · edited · 5 months ago

          While planning your change (or the project requiring it), check the relevant DNS TTLs (see edit). Figure out the point in the future when you want to make the actual change (time T), and set the TTL to 60 seconds at T-(TTL*2) or earlier. Then, when it comes time to make your DNS change, the TTL is reasonable and you can verify your change within a few minutes instead of wondering for hours.

          Edit: literally check all host names involved. They are all suspect.
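
          One way to do that audit, sketched with the dnspython library (the host list is a stand-in for whatever names your change actually touches; `dig` works just as well). Note that a cached answer shows the remaining TTL, so query early:

            import dns.resolver  # pip install dnspython

            # Stand-ins for the real hosts involved in your change;
            # per the edit above, check every one of them.
            names = ["example.com", "www.example.com"]

            for name in names:
                answer = dns.resolver.resolve(name, "A")
                ttl = answer.rrset.ttl  # seconds this record may be cached
                # Rule of thumb from above: drop the TTL to 60s at
                # T - (TTL * 2) or earlier, where T is the cutover time.
                print(f"{name}: TTL {ttl}s, lower it {2 * ttl}s before T")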

          • bamboo@lemmy.blahaj.zone · 5 months ago

            This. For example, if you have a DNS entry for your DB and the TTL is set to 1 hour, then an hour before you intend to make the change, lower the TTL of the record to a minute. This tells all clients to cache the record for only a minute and to re-resolve every minute. After that hour, make the necessary changes to the record; within a minute, the clients should all be using the new record. Once you’ve confirmed that everything is good, you can raise the TTL back to 1 hour.

            This approach does require some more planning and two or three DNS updates, but it minimizes downtime. The reason you may want a high TTL in the first place: if you have thousands of clients and you know the record won’t change often, a low TTL means constant unnecessary lookups, and since most providers charge per thousand or million lookups, that adds up quickly. A larger TTL also minimizes the impact of losing your DNS servers.
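
            As a concrete sketch of that lower-then-restore dance, assuming Route 53 via boto3 (the zone ID, record name, and addresses here are made up; any DNS provider with an API supports the same flow):

              import time
              import boto3  # assumes AWS credentials are configured

              route53 = boto3.client("route53")
              ZONE_ID = "Z0000000EXAMPLE"    # hypothetical hosted zone
              NAME = "db.example.internal."  # hypothetical DB record

              def upsert(value, ttl):
                  # Create or update the A record with the given TTL.
                  route53.change_resource_record_sets(
                      HostedZoneId=ZONE_ID,
                      ChangeBatch={"Changes": [{
                          "Action": "UPSERT",
                          "ResourceRecordSet": {
                              "Name": NAME, "Type": "A", "TTL": ttl,
                              "ResourceRecords": [{"Value": value}],
                          },
                      }]},
                  )

              upsert("10.0.0.10", ttl=60)    # 1. drop TTL on the old record
              time.sleep(3600)               # 2. wait out the old 1h TTL
              upsert("10.0.0.20", ttl=60)    # 3. cut over to the new host
              # ...verify the new host is healthy, then:
              upsert("10.0.0.20", ttl=3600)  # 4. restore the long TTL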

        • bamboo@lemmy.blahaj.zone · 5 months ago

          ??? is when the underpants gnomes send you a massive bill because you’re paying per 1k lookups. They profit, you don’t.

        • Tankton@lemm.ee · 5 months ago

          “Yes boss, we need another 20 DNS servers.” “idk why DNS traffic is so heavy these days.”

  • tempest@lemmy.ca · 5 months ago

    Actually, while for me it’s sometimes DNS, when I see an internet-wide outage it’s usually BGP.

  • Omega_Haxors@lemmy.ml · 5 months ago

    I’m not going to get old at the beach

    There’s no way it doesn’t hold logically

    I got old