Anyone from the local llama communities experienced with Gemma models?
I’ve heard good things but I only use Mistral because it’s proved the most versatile.
I’ve been curious about Google Coral boards, but their memory is so tiny I’m not sure what kinds of models you can run on them.
If nothing else the atproto is pretty great, we’re starting to see a proper federated net start opening up around it.
Waiting for an 8x1B MoE
Ironically thanks in no small part to Facebook releasing Llama and kind of salting the earth for similar companies trying to create proprietary equivalents.
Nowadays you either have gigantic LLMs with hundreds of billions of parameters, like Claude and ChatGPT, or you have open models that are sub-200B.
I could be mistaken too; this has all only recently become interoperable, so there are some growing pains.
Isn’t that what Whitewind is doing?
Yes! Actually.
The full atproto has only been up and running with Bluesky for the last month or so, so people are finally starting to trickle out and set up their own services and hosts.
It’s actually very promising and hopeful.
This doesn’t seem to be that big an issue, as PDSes can communicate directly with one another, much like how ActivityPub works.
I wouldn’t lump Bluesky in the same pile as Threads anymore; atproto is fully up and running, and slowly but surely individually hosted data servers are trickling out to their own services.
There are even new services running on atproto completely independent of Bluesky now: https://whtwnd.com/about
There’s a really good write-up on how atproto federation works here: https://whtwnd.com/alexia.bsky.cyrneko.eu/3l727v7zlis2i
If you treat an AI like anything other than the rubber duck in Rubber Duck Programming you’re using it wrong.
What’s great about lawsuits like this is you really only have to prove intent, and there’s a record of them asking for similar imagery.
Like that person in a dream who keeps telling you to wake up
Alongside the EPA for constantly getting in the way while the FAA tried to slip his SpaceX flight licenses through with a wink and a nudge instead of properly following regulations, and the FAA for trying to keep a semblance of legality through the whole process.
Less like surrogates and more like The Muppets
no rocket as powerful as this one.
So I’m confused on this, because people still seem to be using Starship’s old estimate of 100 tons to LEO, while the SLS can put 145 tons to LEO.
Then six months ago Musk got on stage and updated the specs to say that Starship’s current design can only do 40-50 tons.
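Just putting the quoted numbers side by side (all figures are as stated in this thread, not verified against official specs), the revision amounts to roughly a 50-60% cut:

```python
# Payload-to-LEO figures as quoted in the thread (metric tons)
starship_old = 100       # early Starship estimate
starship_new = (40, 50)  # updated range from Musk's presentation
sls = 145                # SLS figure quoted above

# How big a cut is the new Starship range from the old estimate?
cut_low = 1 - starship_new[1] / starship_old   # vs. upper end of range
cut_high = 1 - starship_new[0] / starship_old  # vs. lower end of range
print(f"estimate cut by {cut_low:.0%}-{cut_high:.0%}")  # 50%-60%

# And the gap to the quoted SLS figure:
print(f"SLS lifts {sls / starship_new[1]:.1f}x the new upper estimate")
```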
This feels awfully familiar for anyone that’s seen early Tesla specs/presentations/promises and I can’t help but wonder as to the validity of everyone saying SpaceX is mostly insulated from Musk’s “influence.”
SpaceX launched about 429,125 kg of spacecraft upmass in Q1, followed by CASC with about 29,426 kg.
Smaller satellites (<1,200 kg) represented 96% of spacecraft launched in Q1 and 76% of total upmass.
So the way I’m personally reading this is that two-thirds of this is Starlink launches.
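A quick back-of-envelope on that two-thirds reading, using only the figures quoted above. The assumption that SpaceX’s small-satellite upmass is essentially all Starlink is mine, not something the report states:

```python
# Figures quoted above (Q1 spacecraft upmass, kg)
spacex_upmass = 429_125
casc_upmass = 29_426  # next-largest launcher

# Smaller satellites (<1,200 kg) were 76% of total upmass.
small_sat_upmass_share = 0.76

# SpaceX lofted roughly 14-15x the mass of the next provider,
# so (my assumption) most small-sat upmass is likely Starlink.
spacex_vs_casc = spacex_upmass / casc_upmass
print(f"SpaceX / CASC upmass ratio: {spacex_vs_casc:.1f}x")  # 14.6x

# Under that assumption, Starlink's share of total upmass approaches
# the small-sat share, which sits comfortably above two-thirds:
print(small_sat_upmass_share > 2 / 3)  # True
```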
I used to work for an algorithmic advertising company.
The gist is that if you get one big spender, it offsets the cost of losing a thousand or more other people, because those large contracts usually last past the official sale.
I still think it’s better to refer to LLMs as “stochastic lexical indexes” than as AI.
I would not say Johnny Harris is a reliable source