I can see some minor benefits - I use it for the odd bit of mundane writing and some of the image creation stuff is interesting, and I knew that a lot of people use it for coding etc - but mostly it seems to be about making more cash for corporations and stuffing the internet with bots and fake content. Am I missing something here? Are there any genuine benefits?

  • SorteKanin@feddit.dk
    4 months ago

    Much like automated machinery, it could in theory free the workers to do more important, valuable work and leave the menial stuff for the machine/AI. In theory this should make everyone richer as the companies can produce stuff cheaper and so more of the profits can go to worker salaries.

    Unfortunately, what happens is that the extra productivity doesn’t go to the workers; it just lets the owners of the companies take more of the money with fewer expenses, usually firing the human worker rather than giving them a more useful position.

    So yea I’m not sure myself tbh

    • SinningStromgald@lemmy.world

      No no, you found the actual “use” for AI as far as businesses go. They don’t care about the human cost of adopting AI and firing large swaths of workers, just the profits.

      Which is why governments should be moving quickly to highly regulate AI and its uses. But governments are slow, plodding things full of old people who get confused by toasters.

      As always capitalism kills.

        • Jojo@lemm.ee

          Trouble is the best way to regulate it isn’t clear. If the new tool can do the job at least as well and cheaper, just disallowing it is less beneficial to society. You can tax its use until it is only a little cheaper, but then you have to get people to approve of taxes. Et cetera

    • TheMurphy@lemmy.world

      This already happened with the industrial revolution. It did make the rich awfully rich, but let’s be honest. People are way better off today too.

      It’s not perfect, but it does help in the long run. Also, there’s a big difference in which country you’re in.

      Capitalist-socialism will be way better off than hard-core capitalism, because the mindset and systems are already in place to let it benefit the people more.

      • deafboy@lemmy.world

        Yes, that way the government will be able to make sure it benefits the right people. And we will call it the national socialism… wait… no!

    • doctorcrimson@lemmy.today

      The question wasn’t “in theory, are there any genuine benefits”; it was whether there are any right now.

  • gorysubparbagel@lemmy.world

    Most email spam detection and antimalware use ML. There are also use cases in medicine, such as trying to predict early whether someone has a condition.
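    Spam filtering is a classification problem at heart. As a toy illustration, here is a naive Bayes classifier over word counts, with entirely invented training messages (a sketch of the idea only, not how any real product's model works):

```python
import math
from collections import Counter

# Entirely made-up training messages (illustrative only; real filters
# train on millions of examples).
spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting at noon", "lunch with the team", "project update at noon"]

def word_counts(messages):
    counts = Counter()
    for m in messages:
        counts.update(m.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts, total):
    # Laplace smoothing so unseen words don't zero out the probability.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in msg.split())

def is_spam(msg):
    return log_prob(msg, spam_counts, spam_total) > log_prob(msg, ham_counts, ham_total)

print(is_spam("free money"))            # prints True
print(is_spam("team meeting at noon"))  # prints False
```

    The same count-and-compare machinery, scaled up and with better features, is what sits behind a lot of unglamorous "AI" in production.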

    • Lemminary@lemmy.world

      It’s also being used in drug R&D to find compounds with similar properties, such as antimicrobial activity, afaik.

  • Bjornir@programming.dev

    Medical use is absolutely revolutionary. From GPs’ consultations to reading test results and radiology images, AI is already better than humans and will keep getting better.

    Computers are exceptionally good at storing large amounts of data, and with ML they are great at taking a lot of input and inferring a result from it. This is essentially diagnosing in a nutshell.
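    As a minimal sketch of that "lots of input, inferred result" idea, here is a nearest-neighbour classifier over a toy dataset invented purely for illustration (the numbers and labels are not clinical data):

```python
import math

# Hypothetical toy dataset, invented for illustration (not clinical data):
# (temperature in C, white-cell count, cough 0/1) -> diagnosis
cases = [
    ((36.8, 6.0, 0), "healthy"),
    ((37.0, 7.0, 0), "healthy"),
    ((39.2, 14.0, 1), "infection"),
    ((38.9, 13.0, 1), "infection"),
]

def diagnose(patient, k=3):
    # Nearest-neighbour inference: label a new patient by majority vote
    # among the k most similar past cases.
    dists = sorted((math.dist(patient, features), label) for features, label in cases)
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

print(diagnose((39.0, 13.5, 1)))  # prints "infection"
```

    Real diagnostic models are far more sophisticated, but the shape is the same: compare a new case against a huge store of past cases and infer the most likely label.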

    • yesman@lemmy.world

      I read that one ML model was so good at detecting TB from X-rays that they reverse-engineered the “black box” hoping for some insight doctors could use. It turns out the model was biased toward the age of the X-ray machine that took each photo, because TB is more common in developing countries, which tend to have older equipment. Womp womp.

    • fiddlestix@lemmy.worldOP

      I hadn’t considered this. It’s interesting stuff. My old doctor used to just Google stuff in front of me and then repeat the info as if I hadn’t been there for the last five minutes.

  • QuadratureSurfer@lemmy.world

    AI is a very broad topic. Unless you only want to talk about Large Language Models (like ChatGPT) or AI image generators (Midjourney), there are a lot of uses for AI that you seem not to be considering.

    It’s great for upscaling old videos (this falls under image-generating AI, since it can be used for colorizing, improving details, and adding in additional frames), so that you end up with something like: https://www.youtube.com/watch?v=hZ1OgQL9_Cw

    It’s useful for scanning an image for text and being able to copy it out (OCR).

    It’s excellent if you’re deaf, or sitting in a lobby with a muted live broadcast and want to see what is being said with closed captions (Speech to Text).

    Flying your own drone with object detection/avoidance.

    There’s a lot more, but basically, it’s great at taking mundane tasks where you’re stuck doing the same (or similar) thing over, and over, and over again, and automating it.

    • doctorcrimson@lemmy.today

      I think most of those are only labelled AI to generate tech hype, though? Like, sure, machine learning and maybe even LLMs can be and are used for those, but it isn’t a machine being given human-discernible input and pretending to give human output.

      • QuadratureSurfer@lemmy.world

        “AI” is the broadest umbrella term for any of these tools. That’s why I pointed out that OP really should be a bit more specific as to what they mean with their question.

        AI doesn’t have the same meaning that it had over 10 years ago when we used to use it exclusively for machines that could think for themselves.

  • Paragone@lemmy.world

    They are the greatest gift to solo-brainstorming that I’ve ever encountered.

    _ /\ _

      • boatswain@infosec.pub

        You’re confusing brainstorming with content generation. LLMs are great for brainstorming: they can quickly churn out dozens of ideas for my D&D campaign, which I then look through, discard the garbage, keep the good bits of, and riff off of before incorporating into my campaign. If I just used everything it suggested blindly, yeah, nightmare fuel. For brainstorming though, it’s fantastic.

        • Jojo@lemm.ee

          Exactly. It can generate those base-level ideas much faster and with higher fidelity than humans can without it, and that can serve us at the hobby level with D&D, or up at the business level with writers’ rooms and such.

          The important point is that you still need someone good at making the thing to look it over and finish what you’re making, or you end up with paintings with too many fingers or stories full of contradictions.

          • doctorcrimson@lemmy.today

            Any kid who uses it to craft their campaign is lazy and depriving themselves of a valuable experience; any professional who uses it to write a book, script, or study is wildly unethical; and both are creating a much, much worse product than a human working without reliance on these tools. That is the reality of a model that, even at 100% accuracy, would be exactly as flawed as human output, and we’re nowhere near that accuracy.

            • Jojo@lemm.ee

              But the point is that you don’t use it to make the campaign or write the book. You use it as a tool to help yourself make a campaign or write a book. Ignoring the potential of AI as a tool just because it can’t do the whole job for you is silly. That would be a bit like saying you’re a fool for using a sponge when washing, because it will never get everything by itself…

              • doctorcrimson@lemmy.today

                I get it now! You don’t use it for the thing you use it for but instead as a tool to create the thing that you’ve used it for for yourself because the magic was inside all of us but also the GPT all along. /sarcasm

                • Jojo@lemm.ee

                  “don’t feed the trolls,” they said, but did she ever listen?

                  No, I guess I didn’t…

        • doctorcrimson@lemmy.today

          I would retort that the exact opposite is true: content generation is the only thing LLMs are good at, because they often forget the context of their previous statements.

          • boatswain@infosec.pub

            I think we’re saying the same thing there: LLMs are great at spewing out a ton of content, which makes them a great tool for brainstorming. The content they create is not necessarily trustworthy or even good, but it can be great fuel for the creative process.

            • doctorcrimson@lemmy.today

              My stance is that spewing out a ton of flawed, unrelated content is not conducive to creating good content, and therefore LLMs are not useful for writing. That hasn’t changed.

  • 🇰 🔵 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net

    Anything that requires tons of iteration can be done way faster with AI. Finding new chemical formulas for medicine, as an example. It takes a “throw everything at the wall and see what sticks” approach, but it’s still more effective than a human.

        • Jojo@lemm.ee

          As long as everything gets thrown, it’s still brute force; the reason they use AI for it is that it can throw a lot more, a lot faster.

      • doctorcrimson@lemmy.today

        I think by broad definitions it can be, yes.

        Think about it. AI is just throwing a ton of sample data in and filtering out the results that are least correct.
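        That generate-and-filter loop can be sketched in a few lines; the scoring function below is a made-up stand-in for whatever model or measurement decides which results are "least correct":

```python
import random

random.seed(0)

# Hypothetical "correctness" measure: how close a candidate's 3 numeric
# properties are to a made-up target profile. In real screening this role
# is played by a trained model, a simulation, or a lab assay.
TARGET = [0.2, 0.8, 0.5]

def score(candidate):
    # Higher is better: negative squared distance from the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

# Throw a ton of random samples at the wall...
candidates = [[random.random() for _ in range(3)] for _ in range(10_000)]

# ...and filter out the results that are least correct.
survivors = sorted(candidates, key=score, reverse=True)[:5]

print(score(survivors[0]) >= score(survivors[-1]))  # prints True, by construction
```

        The interesting part in practice is the score function, not the loop; that is where the learned model lives.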

      • mounderfod@lemmy.sdf.org

        Presumably in order to determine whether, e.g., the chemical is worth looking at in the first place

  • Rooki@lemmy.world

    AI has some interesting use cases, but should not be trusted 100%.

    Like github copilot ( or any “code copilot”):

    • Good for repeating stuff but with minor changes
    • Can help with common easy coding errors
    • Code quality can take a big hit
    • For coding beginners, it can lead to a deficit of real understanding of your code
      ( and because of that could lead to bugs, security backdoors… )

    Like translations ( code or language ):

    • Good translation between the common/big languages ( English, German…)
    • Can extend a brief summary into a big wall of text ( and back )
    • A wrong translation can lead to someone else misunderstanding the text, so it misses the point
    • It removes the “human” part; depending on the context, machine output can usually be identified easily.

    Like classification of text/Images for moderation:

    • Helps identify bad-faith text/images
    • False positives can be annoying to deal with.

    But don’t do anything IMPORTANT with AI; only use it for fun, or when you can verify that the code/text the AI wrote is correct!

    • Lemminary@lemmy.world

      Adding to the language section, it’s also really good at guessing words if you give it a decent definition. I think this has other applications but it’s quite useful for people like me with the occasionally leaky brain.

    • fiddlestix@lemmy.worldOP

      Actually the summaries are good, but you have to know some of it anyway and then check to see if it’s just making stuff up. That’s been my experience.

  • AnarchistArtificer@slrpnk.net

    An interesting point that I saw about a trial at one of the small London Tube stations:

    • Most of the features involved a human who could come and assist or review the footage. The AI being able to flag wheelchair users was good because the station doesn’t have wheelchair access without assistance.

    • When they tried to make a heuristic for automatically flagging aggressive people, they found that people with their arms up tend to be aggressive. This flagging system led to the unexpected feature that if a Transport for London (TfL) staff member needed assistance (i.e. if medical assistance was necessary, or if someone was being aggressive towards them), the staff member could put their arms up to bring the attention onto them.

    That last one especially seems neat. It seems like the kind of use case where AI has the most power when it’s used as a tool to augment human systems, rather than taking humans out of stuff.

  • FellowEnt@sh.itjust.works

    It’s sped up my retouching workflows. I can automate things that a few years ago would’ve needed quite a lot of time spent with manual brush work.

    Also in the creative industries, it’s a massive time saver for conceptual work. Think storyboarding and scamping, first stage visuals that kind of thing.

    • TheMurphy@lemmy.world

      Very true. I learned how to code surprisingly fast.

      Even the mistakes the AI made were good, because I learned so much from seeing what changes it made to fix them.

      • doctorcrimson@lemmy.today

        Bullshit. Reading a book on a language is just as fast, and the book doesn’t randomly lie or make up entire documentation as an added bonus.

  • hubobes@sh.itjust.works

    Our software uses ML to detect tax fraud, and since tax offices are usually understaffed, they can now go after more cases. So yes?
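    A hypothetical illustration of the kind of signal such a system might look at (a toy z-score outlier check with invented numbers, not how any real tax software works):

```python
import statistics

# Hypothetical toy data (invented): taxpayer -> (declared income, spending)
returns = {
    "A": (50_000, 48_000),
    "B": (52_000, 50_000),
    "C": (49_000, 51_000),
    "D": (51_000, 47_000),
    "E": (20_000, 95_000),  # spends far more than declared income
}

# Flag returns whose spending-to-income ratio is a statistical outlier
# (a simple z-score cutoff standing in for a trained fraud model).
ratios = {tid: spend / income for tid, (income, spend) in returns.items()}
mean = statistics.mean(ratios.values())
sd = statistics.stdev(ratios.values())

flagged = [tid for tid, r in ratios.items() if (r - mean) / sd > 1.5]
print(flagged)  # prints ['E']
```

    The value for an understaffed office is triage: humans only review the flagged cases instead of every return.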

  • thorbot@lemmy.world

    I use it daily to generate basic Perl scripts that I can’t be bothered to write myself. It’s fantastic.

  • sunbeam60@lemmy.one

    Someone I know recently published an enormous study in Nature Communications where they used machine learning to pattern-match peptides that are clinically significant/bioactive (don’t forget, the vast majority of peptides are currently believed to be degradation products).

    Using mass spectrometry, they effectively shoot a sawed-off shotgun at a wall, then use machine learning to detect the pellets that may have interesting effects. This opens up new understanding of the role peptides play in the translational game, as well as the potential for a huge number of new treatments for a vast swathe of diseases.

    • Fried_out_Kombi@lemmy.world

      Sounds similar to some of the research my sister has done in her PhD so far. As I understand, she had a bunch of snapshots of proteins from a cryo electron microscope, but these snapshots are 2D. She used ML to construct 3D shapes of different types of proteins. And finding the shape of a protein is important because the shape defines the function. It’s crazy stuff that would be ludicrously difficult and time-consuming to try to do manually.

  • coolkicks@lemmy.world

    Lots of boring applications that are beneficial in focused use cases.

    Computer vision is great for optical character recognition: think scanning documents to digitize them, depositing checks from your phone, etc. There are also good computer vision use cases for scanning plants to see what they are, facial recognition for labeling the photos on your phone, and so on.

    There are also some decent opportunities in medical research, with protein analysis for the development of medicine and (again) computer vision to detect cancerous cells and read X-rays and MRIs.

    Today all the hype is about generative AI for content creation, which is enabled by Transformer technology, but it’s basically just version 2 (or maybe more) of Recurrent Neural Networks, or RNNs. Back in 2015 I remember the essay “The Unreasonable Effectiveness of Recurrent Neural Networks” being just as novel and exciting as ChatGPT.

    We’re still burdened with this comment from the essay’s first paragraph, though:

    “Within a few dozen minutes of training, my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense.”

    This will likely be a very difficult chasm to cross, because there is a lot more to human knowledge than predicting the next letter in a word or the next word in a sentence. We have knowledge domains where, as individuals, we may be brilliant, and others where we may be ignorant. Generative AI is trying to become a genius in all areas at once, and finds itself borrowing “knowledge” from Shakespearean literature to answer questions about modern philosophy, because the order of the words in the sentences is roughly similar given a noun it used 200 words ago.
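    The “next letter in a word” mechanic can be sketched with a toy character-level bigram model, a deliberately tiny stand-in for what RNNs and Transformers do at scale:

```python
import random
from collections import defaultdict

random.seed(42)

# Tiny corpus; a real model trains on billions of tokens, but the core
# mechanic is the same: predict the next symbol from what came before.
corpus = "the cat sat on the mat and the cat ran"

# For each character, record which characters have followed it (a bigram model).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(seed_char, length=20):
    out = [seed_char]
    for _ in range(length):
        # Sample the next character in proportion to how often it
        # followed the current one in the corpus.
        out.append(random.choice(follows.get(out[-1], [" "])))
    return "".join(out)

print(generate("t"))  # locally plausible, globally meaningless text
```

    The output looks word-like but carries no meaning, which is exactly the chasm being described: local plausibility without global coherence.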

    Enter Tiny Language Models. Using the technology from large language models, but hyper-focused on writing children’s stories, appears to show real progress through specialization, and could allow generative AI to stay focused and stop sounding incoherent when the details matter.

    This is relatively full circle, in my opinion: RNNs were designed to solve one problem well, then they unexpectedly generalized well, and the hunt was on for the premier generalized model. That hunt advanced the technology enormously, and now that technology is being used in Tiny Models, which again aim to solve specific use cases extraordinarily well.

    Still very TBD which use cases can be identified that add value, but recent advancements seem ripe to transition gen AI from a novelty into something truly game-changing.