I think those are likely the only useful or net-positive things for society that AI will do, at least for some time, until there's a fundamental advancement beyond LLMs. It can obviously do more than that now, like impersonate people for scams, induce psychosis in vulnerable people, shill and astroturf at a scale we haven't seen before, spam open source projects with terrible PRs and vulnerability reports, and quite a bit more.
reply
Why do people believe stuff like this? It's obviously untrue: AI is already solving open problems in mathematics.
reply
Getting back to a functional search engine is the most interesting part of this technology to me. Something that just gives links to the most relevant pages without a bunch of LLM editorializing on top of it.

But do current LLMs solve that, or do they still ultimately depend on making calls to other search indexes? It seems like they could theoretically be trained to semantically match URLs from their training set, but I think the models would have to be specifically architected for that, so I'm curious if anyone knows more about this.

I'd also be interested if there are any small open models working toward that.
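
The "semantically match URLs" idea can be sketched as a tiny retrieval loop: embed each indexed page, embed the query, and rank URLs by vector similarity. This is a toy illustration only; the `embed` here is a bag-of-words term-frequency vector standing in for a learned dense embedding, and the `index` URLs are made up. A model trained end-to-end to emit URLs (as the comment speculates) would work quite differently.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real system would use a learned dense embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(count * b[term] for term, count in a.items() if term in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical index mapping URLs to page text (stand-in for a crawl).
index = {
    "https://example.com/rust-intro": "getting started with the rust programming language",
    "https://example.com/cake": "a simple chocolate cake recipe with frosting",
    "https://example.com/borrowck": "understanding the rust borrow checker and lifetimes",
}

def search(query, k=2):
    # Rank every indexed URL by similarity to the query; return top k links.
    q = embed(query)
    ranked = sorted(index, key=lambda url: cosine(q, embed(index[url])), reverse=True)
    return ranked[:k]

print(search("rust lifetimes"))
# → ['https://example.com/borrowck', 'https://example.com/rust-intro']
```

The point of the sketch is that this kind of matching needs an index of page text at query time; a bare LLM only has whatever URL/content associations happened to survive training, which is why current products fall back to calling an external search index.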

reply
It's strange reading people whom I see as very intelligent and very interesting being so, so AI-skeptical, especially in this case, where Doctorow has interacted with other people who I assume are very smart and not prone to buzzword psychosis, and who see AI as an imminent existential threat à la sci-fi novels. A lot of very smart and capable people are split on this, although I think the split is heavily weighted toward people who see the tech as really freaking amazing/scary.
reply
The answer to your question is that society at large finds skepticism or pessimism more interesting, which is why we end up with dilettantes like this guy.
reply
Seeing how it sucks at languages, you may be right; even transcription may be dubious.
reply
how does it suck at languages?
reply