Sam Altman has been fired as CEO of OpenAI, the company announced on Friday.

“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the company said in its blog post.

EDITED TO ADD direct link to OpenAI board announcement:
https://openai.com/blog/openai-announces-leadership-transition

  • Margot Robbie@lemmy.world · 1 year ago

    Not exactly surprised here. Every time I’ve seen him on the news, it’s always been him fearmongering about the dangers of generative AI, while ChatGPT is burning through money and seems to get more and more restrictive with every iteration. You can’t run an organization that is built on top of lies.

    Actually open models (not open source, sadly) like specialized LLaMa 2 derivatives that can be run and fine-tuned locally seem to be the future, because there seems to be a diminishing return in training/inference power to usefulness, and specialized smaller models tuned for specific applications are much more flexible than a giant general one that can only be used on somebody else’s machine.
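
    For what it’s worth, running one of those local derivatives is already fairly simple. A rough, illustrative sketch using Hugging Face transformers (the checkpoint path and prompt are just placeholders, not a specific recommendation):

        # Rough sketch: load a locally downloaded, fine-tuned Llama 2 derivative
        # and run a single prompt. The checkpoint path is a placeholder.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_path = "./my-llama2-7b-finetune"  # hypothetical local checkpoint

        tokenizer = AutoTokenizer.from_pretrained(model_path)
        model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

        prompt = "Summarize the following support ticket in one sentence:"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=128)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))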

    • kromem@lemmy.world · 1 year ago

      “because there seems to be a diminishing return in training/inference power to usefulness”

      Be careful not to get caught up in the Goodhart’s Law dynamic playing out in the field right now.

      There are plenty of things GPT-4 trounces everything else on; they just tend to be things outside the now-standardized body of tests, which suggests the tests have become the target and are no longer effective measurements.

      This is perhaps most apparent in things like Orca, where we directly use the tests as the target, have GPT-4 generate synthetic data that improves Llama performance on the target, and then see large gains in smaller models on the tests.
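
      In code terms, that recipe looks roughly like the sketch below (purely illustrative; the prompts, file name, and data format are assumptions on my part, not the actual Orca pipeline):

          # Illustrative sketch of an Orca-style recipe: have GPT-4 write
          # step-by-step explanations for benchmark-like prompts, then save
          # them as instruction-tuning data for a smaller model.
          # All prompts and file names here are placeholders.
          import json
          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          benchmark_like_prompts = [
              "If a train covers 120 km in 1.5 hours, what is its average speed?",
              "Which weighs more: a kilogram of feathers or a kilogram of lead?",
          ]

          with open("synthetic_explanations.jsonl", "w") as f:
              for prompt in benchmark_like_prompts:
                  reply = client.chat.completions.create(
                      model="gpt-4",
                      messages=[
                          {"role": "system",
                           "content": "You are a careful tutor. Explain step by step."},
                          {"role": "user", "content": prompt},
                      ],
                  )
                  # Each line becomes one (instruction, response) pair for
                  # fine-tuning the smaller model toward the benchmark target.
                  f.write(json.dumps({
                      "instruction": prompt,
                      "response": reply.choices[0].message.content,
                  }) + "\n")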

      But those new models don’t necessarily match GPT-4 on more abstract capabilities, such as the recent approach of using analogy to solve problems.

      We are arguably becoming too myopic in how we are measuring the success of new models.