First two books in the series were “Fellowship of the King” and “The Two Trees” so…I’m not entirely convinced they were even very original stories…
One of the earliest pieces of media I can remember consuming was the mid-90s TV show Viper, where James played the main character. I remember very little about the show except James’s face and that he played his character cool as fuck.
I’ve been replaying Alan Wake and Control recently, and I have such a soft spot for his roles in them because I loved that stupid show when I was a kid.
I…don’t think that’s what the referenced paper was saying. First of all, Toner didn’t co-author the paper from her position as an OpenAI board member, but as a CSET director. Secondly, the paper didn’t intend to prescribe behaviors to private sector tech companies, but rather investigate “[how policymakers can] credibly reveal and assess intentions in the field of artificial intelligence” by exploring “costly signals…as a policy lever.”
The full quote:
By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur. Anthropic achieved this goal by leveraging installment costs, or fixed costs that cannot be offset over time. In the framework of this study, Anthropic enhanced the credibility of its commitments to AI safety by holding its model back from early release and absorbing potential future revenue losses. The motivation in this case was not to recoup those losses by gaining a wider market share, but rather to promote industry norms and contribute to shared expectations around responsible AI development and deployment.
Anthropic is being used here as an example of “private sector signaling,” which could theoretically manifest in countless ways. Nothing in the text seems to indicate that OpenAI should have behaved in exactly this same way; rather, the example is held up as a successful contrast to OpenAI’s allegedly failed use of the GPT-4 system card as a signal of its commitment to safety.
To more fully understand how private sector actors can send costly signals, it is worth considering two examples of leading AI companies going beyond public statements to signal their commitment to develop AI responsibly: OpenAI’s publication of a “system card” alongside the launch of its GPT-4 model, and Anthropic’s decision to delay the release of its chatbot, Claude.
Honestly, the paper seems really interesting to an AI layman like me, and it tackles a critically important subject: empowering policymakers to make informed determinations about regulating a technology that almost everyone except the subject-matter experts themselves will *not* fully understand.
We replaced our HP OfficeJet with a Brother this year. I don’t even know what we were thinking getting the HP 5 years ago or so; it was gross overkill for us. But of all the things it could do, what it did most consistently was print like shit and jam paper. Part of the problem was that we just print too infrequently, but having to replace overpriced cartridges from HP didn’t help. You also have to install apps for wireless printing (or if there’s a workaround, we didn’t bother with it).
The Brother is a color laser printer and it’s perfect for us. No apps needed, super quiet and hassle-free (there have been no paper jams or transmission errors), and the print quality is crisp as hell.
Google may be evil, but you can’t deny they still attract top talent.
A generation living too late to explore the Earth and too early to explore space, and also doomed to live so long in the era between a fledgling, pre-corporatized internet and a free and open post-corporatized internet (which I consider inevitable, eventually, because a capitalist, enshittified internet can’t sustain itself indefinitely…right?).
So many layers.