Explain?
Lots of negativity and whataboutism in this thread (which I don’t disagree with), but this is still a good move.
The Hungarian twitter community is very small, I’d be surprised if it were a censorship target. Do you have a source on this?
I don’t think this is a real issue in the age of bespoke design for applications. Only a minority of them use the OS widgets for their interface. You can argue that this is a bad thing, but then the context menus are just a tiny portion of the entire issue.
Manifest v3 is already supported in Firefox (they must support it to keep the extension ecosystem alive), but they implemented it without the user-hostile restrictions.
While I don’t disagree with the general idea, repealing Section 230 would introduce an uncontrollable risk into running any website with user-generated content and would essentially shut such sites down.
You are not any more secure with Google Authenticator for 2FA, are you?
These days a VPN doesn’t give you much extra benefit over TLS, which you already get on 99% of websites.
It really isn’t superior. It’s just the hivemind, annoyed with Plex being stagnant, not open source, etc., that claims it is. At best it reaches feature parity for some use cases. Don’t get me wrong, it’s neat, but it’s not as polished as Plex.
Firefox also implements manifest v3, just without the user-hostile restrictions.
The entire Macromedia suite was so good. I had so much fun and learned so much as a teen.
Theoretically, what would the utility of AI summaries in Google Search be, if not getting exact information?
So what, you keep an ungoogled-chromium around and use it occasionally for compatibility, if you really need to. Doesn’t mean you are obligated to use it as your daily driver.
FWIW they are cannibalizing ads right now with AI summaries, since people will navigate less to websites (in the world where they are useful, which they don’t seem to be at the moment).
Peer review, for all its flaws, is a good minimum bar before a paper is worth taking seriously.
In your original comment you said that model collapse can be easily avoided with this technique, which is notably different from it being mitigated. I’m not saying that these findings are not useful, just that you are overselling them a bit with this wording.
That paper is yet to be peer reviewed or released. I think you are jumping to conclusions with that statement. How much can you dilute the data until it breaks again?
OpenAI clearly already scraped the pre-LLM (aka actually useful) content from SO, this entire deal is happening after the fact to avoid litigation.
I think it’s a function of higher screen resolutions becoming available.