Eh, that’s a mixed bag. Sure, one could set up federated delete requests, so deleting a comment propagates to other instances, but it would be a bit of a lie: anyone could simply… update their instance to ignore incoming delete requests.
For now, not having a delete feature is more honest to the realities of the fediverse. There’ll never be a “true” delete, even if they eventually support one that’s “good enough”.
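For what it’s worth, the underlying protocol already has the vocabulary for this: ActivityPub defines a `Delete` activity an instance could broadcast when a user removes something. A rough sketch of what one looks like (the instance and IDs here are made up for illustration) — and note that nothing forces a receiving server to act on it:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Delete",
  "actor": "https://example.instance/u/alice",
  "object": "https://example.instance/comment/12345"
}
```

Honoring that activity is purely a matter of the remote server’s code, which is exactly why federated deletion can only ever be best-effort.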
I don’t necessarily disagree that we may figure out AGI, and even that LLM research may help us get there, but frankly, I don’t think an LLM will actually be any part of an AGI system.
Because fundamentally it doesn’t understand the words it’s writing. The more I play with and learn about it, the more it feels like a glorified autocomplete/autocorrect. I suspect problems like hallucination, “Waluigis”, and “jailbreaks” are fundamental to a language model trying to complete a story, as opposed to an actual intelligence acting with a purpose.
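To make the “glorified autocomplete” framing concrete: at its core, a language model just keeps picking a likely next token given the text so far. Here’s a toy sketch of that loop using made-up bigram counts in place of a trained network (the corpus and function names are invented for illustration):

```python
# Toy next-token predictor: always pick the most frequent follower word.
# The "model" is just bigram counts from a tiny corpus -- an LLM runs the
# same continuation loop, only with a neural network scoring the options.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat sat by the door".split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def complete(word, length=4):
    out = [word]
    for _ in range(length):
        if word not in followers:
            break
        # Pick the statistically most common continuation --
        # no understanding involved, just counting.
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))
```

It produces fluent-looking word sequences without any notion of cats or mats, which is the sense in which “it doesn’t understand the words it’s writing.”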