That's exactly why we complain about the Rs in strawberry. We can get funny-stupid human interpretations all day long. What we can't get is cold facts, and isn't that what was promised by AI (at least, before ChatGPT was released in 2022)?
Unfortunately, we fed this current iteration of AI with human behaviour (and not just any human behaviour: human behaviour on the Internet...)
> What we can't get is cold facts, and isn't that what was promised by AI (at least, before ChatGPT was released in 2022)?
Not that I know of. An entity dealing only in cold facts is not intelligent; it's a theorem prover: extremely narrow, rigid, and incapable of interpretation or insight, incapable even of bridging the smallest gap in knowledge. That's exactly what intelligence isn't.
Because I see this becoming bigger than me and a separate organization made sense. There is a super thin backend component right now too. There is the potential to also add in some extra features that require a server/db. I'm kind of inspired by the atuin model of things.
I've got a LONG list of features I'd like to implement over time.
You said factual. But what is factual for you and me may not be for someone else. There are a lot of recollections in the article where sama remembers one version, or doesn't remember at all, and the other party remembers something else. Combine that with the nature of the article and the legal stakes, considering the egos and sums involved. On top of all that, The New Yorker is known for fact checking so exhaustive it borders on paranoia.
I am just speculating but if @ronanfarrow is still checking the discussion here, it would be amazing to hear the actual reasons.
Unfortunately not right now; it's in the works. Polyphonic guitar-to-MIDI is a problem I have yet to understand, let alone solve, in this one. Jam Origin's MIDI Guitar handles it well; I still need to get there.
Everyone is surprised at the $300k/year figure, but that seems on the low end. My previous workplace spends tens of millions a year on GPU continuous integration tests.