You can get whatever result you want if you’re able to define what “better” means.
Why publish books of it, then?
The whole point of poetry is that it’s an original expression of another human.
Who are you to decide what the “point” of poetry is?
Maybe the point of poetry is to make the reader feel something. If AI-generated poetry can do that just as well as human-generated poetry, then it’s just as good when judged in that manner.
I do get the sense sometimes that the more extreme anti-AI screeds I’ve come across have the feel of narcissistic rage about them. The recognition of AI art threatens things that we’ve told ourselves are “special” about us.
Indeed, there are whole categories of art, such as “found art” or the abstract style that involves throwing splats of paint at things, that can’t really convey the intent of the artist, because the artist wasn’t involved in specifying how it looked in the first place. The artist is more like the “first viewer” of those particular pieces: they do or find a thing and then decide “that means something” after the fact.
It’s entirely possible to do that with something AI-generated. Algorithmic art goes way back; lots of people find renderings of the Mandelbrot set beautiful.
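Just as a toy illustration (my own sketch, not any particular artwork): the membership test behind those Mandelbrot images is only a few lines, and it’s enough to render a crude ASCII view of the set:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to stay bounded under z -> z*z + c."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # escaped: definitely not in the set
            return False
    return True

# Render a tiny ASCII view of the set, real axis -2..1, imaginary -1..1.
for im in range(10, -11, -2):
    row = "".join(
        "#" if in_mandelbrot(complex(re / 20, im / 10)) else "."
        for re in range(-40, 21, 2)
    )
    print(row)
```

Everything aesthetically interesting about the result emerges from iterating that one formula; nobody hand-specified the shape.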
Yeah, I like a light-hearted approach to life but that one particular “joke” should be shot on sight. I’m convinced it plays an actual role in why we haven’t seen much serious discussion of sending a probe there.
That’s not how synthetic data generation generally works. It uses AI to process data sources, generating well-formed training data based on existing data that’s not so useful directly. Not to generate it entirely from its own imagination.
The comments assuming otherwise are ironic because it’s misinformation that people keep telling each other.
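To make the distinction concrete, here’s a minimal sketch of that kind of pipeline in Python. The `clean_and_reformat` function is a placeholder I made up — in a real pipeline that step would be a call to a model — but the shape is the point: every synthetic example is derived from, and vetted against, an existing source document rather than dreamed up from nothing.

```python
def clean_and_reformat(raw: str) -> str:
    # Placeholder for the AI pass that rewrites messy source text into
    # well-formed training examples. A real pipeline would invoke a model
    # here; this stand-in just normalizes whitespace.
    return " ".join(raw.split())

def make_synthetic_dataset(sources: list[str]) -> list[dict]:
    """Every synthetic example stays grounded in an existing source doc."""
    dataset = []
    for doc in sources:
        rewritten = clean_and_reformat(doc)
        if rewritten:  # vetting step: drop empty or unusable results
            dataset.append({"source": doc, "text": rewritten})
    return dataset

print(make_synthetic_dataset(["  messy   raw\ttext  ", ""]))
```

Note that the empty source gets filtered out rather than “imagined” into an example — that’s the part people get wrong when they picture models training on their own unconstrained output.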
The “how will we know if it’s real” question has the same answer as it always has. Check if the source is reputable and find multiple reputable sources to see if they agree.
“Is there a photo of the thing” has never been a particularly great way of judging whether something is accurately described in the news. This is just people finding out something they should have already known.
If the concern is over the verifiability of the photos themselves, there are technical solutions that can be used for that problem.
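For example, provenance schemes attach a cryptographic signature to the image bytes at capture or publication time. Here’s a minimal HMAC-based sketch (a toy of my own — real systems such as C2PA use public-key certificate chains rather than a shared secret):

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

def sign_photo(image_bytes: bytes) -> str:
    """Produce a tag that travels with the photo as provenance metadata."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, tag: str) -> bool:
    """Check that the photo wasn't altered after it was signed."""
    return hmac.compare_digest(sign_photo(image_bytes), tag)

photo = b"...raw image data..."
tag = sign_photo(photo)
print(verify_photo(photo, tag))            # the unmodified photo verifies
print(verify_photo(photo + b"edit", tag))  # any tampering breaks the tag
```

With a public-key scheme anyone can verify without holding the signing key; the shared-secret toy above just shows the tamper-evidence idea.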
I’ve found my participation slowly declining here on the Fediverse, and ramping back up again on Reddit. I don’t think I’ll ever stop coming here entirely, there are plenty of neat links that come along to explore, but the main thing causing the decline is that IMO the communities here are a lot “bubblier.” It’s probably inherent in the simple fact that they’re small, and that they’re populated by a very self-selected fragment of social media, but the result is that if I “say the wrong thing” I get pummeled with downvotes and snide comments a lot more easily here. That makes it less interesting to comment at all. Some of Reddit’s communities are pretty insular too, but at least there are enough of them that I can find ones to my taste.
As a major example that comes to mind, all of the technology communities I’ve found here seem to be quite strongly anti-AI. I have an interest in AI, but when I click through to the comments on stories about AI topics it’s often nothing but rants about how awful it is. And if I say anything - even to correct a factual error - I get piled on. So lately I just sigh and move on.
Not necessarily. If they’re low on cash then cutting unnecessary costs is not unreasonable. What is Mozilla’s core goal? Perhaps the “advocacy” and “global programs” divisions weren’t all that relevant to it, and so their funding is better put elsewhere.
Entertainment.
If you think it’s supposed to be predictive, you’re perhaps confusing it with futurology, which is a more scientific field.
Fearing AI because of what you saw in “The Terminator” is like fearing sleeping pills because of what you saw in “Nightmare on Elm Street.”
IMO the best feature of democracy is not that it results in better selection of who gets to lead, because it doesn’t really - the vast majority of the electorate is not educated in the sorts of things they’d need to be educated in to make truly good decisions about this. The best feature is that every few years we “throw the bums out” and put a new batch of people in charge.
I used to be kind of ambivalent about term limits, I figured it was kind of suboptimal to have to get rid of a leader who’s doing well at some point. But with the size of the population of most democracies there’s really no constraint on the pool of perfectly adequate candidates to draw on. I’m starting to think that “one and done” might be an even better approach, at least for the highest levels. Make it so that there’s no motivation whatsoever to cling to power. Do the same with congressmen and senators, perhaps. Let them prove their capabilities with a political career in local politics, where it’s less important if someone ends up with some kind of corrupt fiefdom because the higher levels of government can keep them in check.
There are people who want AI, crypto, and IoT things. If there weren’t then there’d be no money to be made in selling it.
There have been many systems developed over the years for handling decentralized data storage, decentralized user identities, and decentralized decision-making. There are excellent options out there for all this stuff.
IMO the problem is that there’s a huge “not invented here” problem, combined with a popular “ew, I don’t want to be associated with that technology (or more accurately with the group behind that technology)” reflex that has nothing to do with the technology itself. So projects like the Fediverse keep reinventing the wheel over and over, and whenever a project manages to do something right it’s rare for the other projects to abandon their own implementations to borrow from the best.
Those jobs are also being replaced by AI. Modern AIs are trained on synthetic data, which is data that other AIs generate from source material specifically for training purposes. AIs reformat, rewrite, and vet the source material more reliably and efficiently than humans can.
AI models don’t actually contain the text they were trained on, except in very rare circumstances when they’ve been overfit on a particular text. (That’s considered an error in training, and much work has gone into ways to prevent it; it usually happens when a great many identical copies of the same data appear in the training set.) An AI model is far too small to contain its training data; there’s no way that data can be compressed that much.
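Some back-of-the-envelope arithmetic shows why (illustrative round numbers, not any specific model):

```python
# Illustrative figures: a 70-billion-parameter model stored at 2 bytes
# per parameter, trained on ~10 trillion tokens at roughly 4 bytes of
# text per token.
params = 70e9
bytes_per_param = 2
model_bytes = params * bytes_per_param     # 1.4e11 bytes, i.e. 140 GB

tokens = 10e12
bytes_per_token = 4
training_bytes = tokens * bytes_per_token  # 4e13 bytes, i.e. 40 TB

ratio = training_bytes / model_bytes
print(f"training text is ~{ratio:.0f}x larger than the model")
```

Even with generous assumptions, the training text is hundreds of times larger than the model itself — and ordinary text barely compresses below a few bits per character, so losslessly packing the whole corpus into the weights isn’t possible.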
Those, sadly, will also be replaced by machines over time, and there we will really lose something.
Not all of the things we lose will be sad, though. An AI researcher has the potential to be more thorough and less biased when it comes to digging up and interpreting resources.
If it’s not communicating anything, what’s the point?
The Darvaza gas crater is a hole in Turkmenistan that’s leaking natural gas and is on fire. I’m quite sure they don’t have a “poet laureate”, it’s literally just a hole in the ground.
But even if it was some metropolis, yeah, he’d be just some guy.