This is starting to tick me off. Now you’ve got me all wound up!
For me, the article makes it seem like there’s some new announcement that the FBI has put out about a newly discovered vulnerability. Turns out, the announcement is about vulnerabilities we’ve known about for a long time.
I understand, that definitely makes sense then.
This isn’t an article about mistranslations.
This is an article focusing on how asking US election questions in Spanish will give you answers for the wrong country, or answers that are just plain wrong in most cases, compared to asking the same question in English.
One example is that, if someone in Puerto Rico were to ask ChatGPT 4/Claude/Gemini/Llama/Mixtral a US Election question, it would respond with information for Venezuela/Mexico/Spain instead.
Whisper isn’t a large language model.
It’s a speech-to-text (STT) model.
Rather than making it illegal to use, people need to use these tools responsibly. If any of these companies are using almost any kind of AI/machine learning they need to include a human in the loop that can verify that it’s working correctly. That way if it starts hallucinating things that were never said, it can be caught and corrected.
I’ve found that Whisper generally does a better job at translating/transcribing audio than other open source tools out there, so it’s not garbage… But it absolutely is a hazard if you’re trying to rely solely on it for official documents (or legal issues).
As far as promotion goes… It’s open source software, it’s not being sold.
As someone who uses Whisper fairly often, it’s obvious that they’ve trained off of a bunch of YouTube videos.
Most of the time it’s very accurate, but there have definitely been a few times in long transcription sessions where it will randomly hallucinate that someone said “Don’t forget to like and subscribe!” when nothing like that was said anywhere nearby.
Thanks! I definitely need to upgrade from the starter ship.
I think that’s kind of the point of this expedition and it plays well into the theme for me.
After a while your brain will just start to ignore them and look past them as they pop up.
I’m sure at some point you’ll be able to figure out how to evade the voices. At least by the very end… Right?
It wouldn’t bother me if you were still able to earn the mount in-game.
Looks like they’re also planning on releasing without any DRM:
For me, I use Whisper for transcribing/translating audio data. This has helped me double-check claims about a video’s translation (there’s a lot of disinformation going around on topics involving certain countries at war). There’s a rough sketch of that workflow right after this list.
Nvidia’s DLSS for gaming.
Different diffusion models for creating quick visual recaps of previous D&D sessions.
Tesseract OCR to quickly copy text out of an image (although I’m currently looking for a better option, since Tesseract is a bit older and, while it gets the text mostly right, there’s still a decent amount it gets wrong). There’s a sketch of that below too.
LLMs for brainstorming or in the place of some stack overflow questions when picking up a new programming language.
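For anyone curious, here’s a minimal sketch of the Whisper transcribe/translate workflow using the open source openai-whisper Python package (the file name and model size are just placeholders):

```python
# pip install openai-whisper  (also needs ffmpeg on your PATH)
import whisper

# "large-v3" is the most accurate model; smaller ones like "base" run fine on a laptop
model = whisper.load_model("large-v3")

# Transcribe in the original language
result = model.transcribe("interview.mp3")
print(result["text"])

# Or translate non-English speech straight into English
translated = model.transcribe("interview.mp3", task="translate")
print(translated["text"])
```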
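And the Tesseract step is just as short with the pytesseract wrapper (the image path is made up, and the tesseract binary itself has to be installed separately):

```python
# pip install pytesseract pillow  (requires the tesseract binary to be installed)
import pytesseract
from PIL import Image

# Pull whatever text Tesseract can find out of a screenshot
text = pytesseract.image_to_string(Image.open("screenshot.png"))
print(text)
```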
I also saw an interesting use case from a redditor:
I had about 80 VHS family home videos that I had converted to digital
I then ran the 1-4 hour videos through WhisperAI Large-v3 transcription and pasted those transcripts into a prompt which had a little bit of background information on my family like where we live and names of everyone who might show up in the videos, and then gave the prompt some examples of how I wanted the file names to look, for example:
1996 Summer - Jane’s birthday party - Joe’s Soccer game - Alaska cruise - Lanikai Beach
And then had Claude write me titles for all the home videos and give me a little word doc to put in each folder which catalogues all the events in each video. It came out so good I have been considering this as a side business
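I haven’t tried that exact pipeline myself, but it would look roughly like this with the anthropic Python SDK; the model name, file path, and prompt wording below are just illustrative:

```python
# pip install anthropic  (expects ANTHROPIC_API_KEY in the environment)
import anthropic

client = anthropic.Anthropic()

# Transcript produced earlier by Whisper, plus some family background for context
transcript = open("tape_014_transcript.txt").read()
prompt = (
    "Background on my family: we live in Hawaii; people who may show up include "
    "Jane, Joe, and Grandma.\n\n"
    "Here is a transcript of one home video:\n" + transcript + "\n\n"
    "Suggest a file name in this style: "
    "'1996 Summer - Jane's birthday party - Joe's Soccer game - Alaska cruise - Lanikai Beach'"
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    max_tokens=200,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```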
You’re right, whether it’s AI generated or not doesn’t matter.
This is a copyright infringement matter in which “Fair Use” will become a major factor. https://fairuse.stanford.edu/overview/fair-use/four-factors/
In this case, if the courts rule in favor of Alcon, there’s a danger that it expands how copyright law is judged, and future cases can use that ruling in their favor. It would make it a lot easier for plaintiffs to win by proving only that someone wanted an image that “looks like” theirs, even when the image itself wouldn’t normally be held to that level of scrutiny at face value.
You’re right that there are other factors at play here:
The “Hollywood talent pool market generally is less likely to deal with Alcon, or parts of the market may be, if they believe or are confused as to whether, Alcon has an affiliation with Tesla or Musk,” the complaint said.
They are absolutely concerned that Musk is trying to associate his product with Blade Runner and if the case hinges on the association rather than the image in question then I don’t see a problem with that.
But it’s very concerning that the image itself seems to be a major factor in this case, specifically that they are accusing “(WBD) of conspiring with Musk and Tesla to steal the image and infringe Alcon’s copyright”.
So you saying that anything AI generated that is similar to something else will get sued for copyright infringement makes no sense, unless you can already do that for hand drawn images.
Yes, you can already sue someone for copyright infringement over hand drawn images. The decision turns on a number of factors (as listed out at that fair use link), one of them being how closely your drawing resembles the copyrighted material. Here’s an article about a photographer who successfully sued a painter who plagiarized her work: https://boingboing.net/2024/05/17/photographer-wins-lawsuit-against-alleged-painter-who-plagiarized-her-work.html
Very small… Only 17 participants.
The game is also having one of its bigger sales at the moment (for Steam/PC), so it’s a good time to jump in.
Or you can find it slightly cheaper on sites like CDKeys.com: https://www.cdkeys.com/no-mans-sky-pc-steam-cd-key
@kameecoding@lemmy.world exactly this.
In the U.S. we have what’s known as legal “precedent”. If a court case makes a decision on something, it massively increases the chances that other courts will use that same decision in similar future cases.
You saw a single low quality post made with Generative AI and all of a sudden want to ban any image that has even a little bit of AI used in it?
Edit: I just double checked the updated community rules and saw that you left exceptions for AI background removers and AI Upscaling. Thank you for at least considering minor uses of AI.
I understand making a rule if it becomes a problem with a bunch of low quality posts spamming the place, but this just feels like a knee-jerk reaction because of your disdain for anything related to AI (although it’s worth pointing out that the DLSS option in No Man’s Sky uses AI models to improve quality/boost fps).
If you find a compelling example of AI content which wouldn’t be widely detested, maybe we could make an exemption if the community agrees with you.
Alright, here’s an example of some really cool AI generated content: https://www.youtube.com/watch?v=FMRi6pNAoag
The way I see it, generative AI is going through some early phases similar to how cameras were treated in the past. For a long time, photographers weren’t regarded as artists simply because it was so easy to just go out and take a picture of anything. But even photography can range from the random cell phone pictures that anyone can take to prize winning photographers who capture an amazing moment or spend a lot of time getting the composition just right for the shot they want. They might even wait in a specific spot for hours just to get the picture when the sun is at the right position.
Likewise with AI generated content, it’s easy to quickly create a lot of junk. But there are artists out there that use AI as another tool in their belt in addition to everything else that they have learned.
The producers think the image was likely generated—“even possibly by Musk himself”—by “asking an AI image generation engine to make ‘an image from the K surveying ruined Las Vegas sequence of Blade Runner 2049,’ or some closely equivalent input direction,” the lawsuit said.
Personally, I hope this lawsuit fails. The movie industry already follows similar practices to what Musk has done here: if a studio approaches a certain musician and the price to include their music in the show is too high, they’ll go to a different artist and ask them to create a song that sounds like the one they originally wanted.
If this lawsuit succeeds, it’s going to open the door for studios to sue anyone who makes art that’s remotely close to their copyrighted work. All they’ll need to do is claim that it “might have been created by AI with a prompt specifying our work” without actually having any proof.
According to the complaint, Elon Musk’s image infringes on the copyright of an image from Blade Runner 2049. [The two images were embedded here for comparison.]
Ackshully… It should be: “AaaS”.