The much maligned “Trusted Computing” idea requires that the party you are supposed to trust actually deserves that trust, and Google is DEFINITELY NOT worthy of it. This is a naked power grab to destroy the open web for Google’s ad profits, no matter the consequences. It would put heavy surveillance in Google’s hands, eliminate ad-blocking, break any and all accessibility features, and obliterate any competing platform. This is very much opposed to what the web is.

  • Adora 🏳️‍⚧️@beehaw.org · 32 points · 1 year ago

    I’m a non-techie and don’t understand half of this, but from what I do understand, this is a goddamn nightmare. The world is seriously going to shit.

    • JVT038@feddit.nl · 53 points · 1 year ago

      My ELI5 version:

      Basically, the ‘Web Environment Integrity’ proposal is a new technique that verifies whether a visitor of a website is actually a human or a bot.

      Currently, there are captchas where you need to select all the crosswalks, cars, bicycles, etc., which check whether you’re a bot, but these can sometimes be bypassed by the bots themselves.

      This new ‘Web Environment Integrity’ thing goes as follows:

      1. You visit a website
      2. Website wants to know whether you’re a human or a bot.
      3. Your browser (or the ‘client’) will request an ‘environment attestation’ from an ‘attester’. This means that your browser (such as Firefox or Chrome) will request approval from some third party (like Google or something), and that third party (referred to as the ‘attester’) will send your browser a signed message which basically says ‘This user is a bot’ or ‘This user is a human being’.
      4. Your browser receives this message and will then send it to the website, together with the ‘attester public key’. The website can use that public key to verify that the message really came from an attester it trusts, and then check whether the attester says that you’re a human or not (rough sketch of this exchange below).
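
      Here’s the rough sketch mentioned above of what steps 3 and 4 could look like from the website’s front-end code. Only the `getEnvironmentIntegrity` call roughly matches what the draft explainer shows; the endpoint, the content binding handling, and the token format here are made up for illustration:

      ```typescript
      // Rough sketch only: the API name comes from the draft explainer;
      // everything else (endpoint, token handling) is hypothetical.
      async function proveEnvironmentIntegrity(): Promise<Response> {
        // Step 3: the browser asks the attester for an environment attestation.
        // The "content binding" ties the token to this specific request.
        const contentBinding = crypto.randomUUID();
        const token: string = await (navigator as any).getEnvironmentIntegrity(contentBinding);

        // Step 4: hand the signed token back to the website, which checks it
        // against the attester's public key before deciding whether to trust us.
        return fetch("/verify-environment", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ token, contentBinding }),
        });
      }
      ```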

      I hope this clears things up and if I misinterpreted the GitHub explainer, please correct me.

      The reason people (rightfully) worry about this is that it gives attesters A LOT of power. If Google decides they don’t like you, they won’t tell the website that you’re a human. Or maybe, if Google doesn’t like the website you’re trying to visit, they won’t cooperate with attesting at all. Lots of things can go wrong here.

        • Pigeon@beehaw.org · 18 points · 1 year ago

          It sounds like VPNs would also get flagged as bots? Or could easily be treated as such.

          • floofloof@lemmy.ca · 24 points · edited · 1 year ago

            They could get rid of ad blockers, anonymity, Tor, VPNs, Firefox, torrenting sites, independently hosted websites, open-source servers and non-Google Linux clients all in one go. It would be a corporate dream come true.

            Or we could stop using their tools and services and fork the internet run for people away from the internet run for profit. It doesn’t need to be big or slick; it just needs to be there.

            • Senex@reddthat.com · 12 points · 1 year ago

              I like the idea of Internet 2.0. Kinda like what we are doing here on Lemmy. Corporate ruins it, we build it anew!

            • Tau@sopuli.xyz · 3 points · 1 year ago

              There are even alternative root servers, so we could escape from the TLD hell as well.

      • HarkMahlberg@kbin.social · 19 points · 1 year ago

        Your final paragraph is the real kicker. Google would love nothing more than to be the ONLY trusted Attester and for Chrome to be the ONLY browser that receives the “Human” flag.

        • will6789@feddit.uk · 11 points · 1 year ago

          And I’m sure Google definitely wouldn’t require your copy of Chrome to be free of any Ad-Blocking or Anti-Tracking extensions to get that “Human” flag /s

        • jarfil@beehaw.org · 2 points · 1 year ago

          Too late.

          Microsoft, Apple, and most hardware manufacturers have been the ONLY trusted attester on their own hardware for years already.

          And Microsoft on most other PCs, too.

      • jarfil@beehaw.org · 2 points · 1 year ago

        1. You open an app…

        The rest already works like that.

        You can replace Google with Apple, Microsoft, any other hardware manufacturer, or any company’s hardware attestation software.

    • ricecake@beehaw.org · 7 points · 1 year ago

      So, a lot of the replies are highlighting how this is “nightmare fuel”.
      I’ll try to provide insight into the “not nightmare” parts.

      The proposal is for how to share this information between parties, and they call out that they’re specifically envisioning it being between the operating system and the website. This makes it browser agnostic in principle.

      Most security exploits happen either because the user’s computer is compromised, or because a sensitive resource, like a bank, can’t tell whether it’s actually talking to the user.
      This provides a mechanism where the website can tell that the computer it’s talking to is actually the one running the user’s browser, and not just some intermediary, and it can also tell if that end computer is compromised without having direct access to it.
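
      To make that concrete: on the website’s side this boils down to ordinary signature verification. The site never inspects your machine; it only checks that the token it received was signed by an attester it trusts. A minimal sketch, assuming the token is a signed JWT and using the `jose` library (the actual token format and claims aren’t pinned down by the proposal):

      ```typescript
      import { importSPKI, jwtVerify } from "jose";

      // Hypothetical server-side check. The token format, signing algorithm,
      // and the "environment_ok" claim are assumptions, not part of the proposal.
      async function verifyAttestation(token: string, attesterPublicKeyPem: string): Promise<boolean> {
        const attesterKey = await importSPKI(attesterPublicKeyPem, "ES256");
        try {
          const { payload } = await jwtVerify(token, attesterKey);
          // If the signature is valid, trust whatever verdict the attester embedded.
          return payload["environment_ok"] === true;
        } catch {
          return false; // bad signature, expired token, wrong issuer, etc.
        }
      }
      ```

      Note that the site only needs the attester’s public key; it learns nothing about your machine beyond what the attester chooses to put in the token.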

      The people who are claiming that this provides a mechanism for user tracking, or that it leaks your browsing history to attesters, are perhaps overreacting a bit.

      I work in the software security sector, specifically with device management systems intended to ensure that websites are only accessed by machines managed by the company, and that those machines meet the company’s configuration guidelines for accessing its secure resources.

      This is basically a generalization of functionality already built into macOS, Windows, Android, and iPhones.

      Could this be used for no good? Sure. Probably will be.
      But that doesn’t mean there aren’t legitimate uses for something like this, or that the authors are openly evil.
      This is a draft of a proposal, under discussion before even preliminary conversations with the wider browser community have happened.