The current crop of LLM-backed chatbots does have a bit of that “old, good internet” flavor: a mostly unspoiled frontier where things are changing rapidly, potential seems unbounded, and the people molding the actual tech and discussing it are enthusiasts with a sort of sorcerer’s apprentice vibe. Not sure how long it can persist, since I’ve seen this story before and we all understand the incentive structures at play. Does anyone know if there are precedents for PBCs or B-Corp-type businesses being held accountable for betraying their stated values? Or is it just window dressing with no legal clout? Can they convert to a standard corporation on a whim and ditch the non-shareholder-maximization goals?
reply
No, they don't. They soak up tons of your most personal and sensitive information like a sponge, and you don't know what's done with it. In the "good old Internet", that did not happen. Also in the good old Internet, it wasn't the masses all dependent on a few central mega-corporations shaping the interaction, but a many-to-many affair, with people and organizations of different sizes running the sites where interaction took place.

Ok, I know I'm describing the past through rose-tinted glasses. After all, the Internet started as a DARPA project. But still, current reality is itself rather dystopian in many ways.

reply
> This is one of those “don’t be evil” like articles that companies remove when the going gets tough but I guess we should be thankful that things are looking rosy enough for Anthropic at the moment that they would release a blog like this.

Exactly this. Show me the incentive, and I'll show you the outcome, but at least I'm glad we're getting a bit more time ad-free.

reply
> I guess we should be thankful that things are looking rosy enough for Anthropic

Forgive me if I am not.

reply
Current LLMs often produce much, much worse results than searching manually.

If you need to search the internet for a topic that is full of unknown unknowns for you, they're a pretty decent way to get the lay of the land, but beyond that, off to Kagi (or Google) you go.

Even worse is that the results are inconsistent. I can ask Gemini five times at what temperature I should take a waterfowl out of the oven, and get five different answers, 10°C apart.

You cannot trust answers from an LLM.

reply
> I can ask Gemini five times at what temperature I should take a waterfowl out of the oven, and get five different answers, 10°C apart.

Are you sure? Both Gemini and ChatGPT gave me consistent answers 3 times in a row, even though the two models' answers differ slightly from each other.

Their answers are in line with this version:

https://blog.thermoworks.com/duck_roast/
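
If anyone wants to reproduce the repeatability test themselves, here's a minimal sketch using Google's generativeai Python SDK. The model name, the prompt wording, and the GEMINI_API_KEY environment variable are just illustrative assumptions, not anything the parent commenters specified:

    # Minimal sketch of the repeatability test: ask the same question
    # five times and eyeball the answers for consistency.
    # Assumes the google-generativeai package is installed and an API
    # key is set in GEMINI_API_KEY; model name and prompt are examples.
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")

    prompt = "At what internal temperature should I take a roast duck out of the oven?"
    answers = [model.generate_content(prompt).text for _ in range(5)]

    for i, answer in enumerate(answers, 1):
        print(f"--- sample {i} ---\n{answer}\n")

Sampling is nondeterministic by default, so some variation in wording is expected; the interesting question is whether the actual temperature recommendation moves between samples.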

reply
I created an account just to point out that this is simply not true. I just tried it! The answers were consistent across all 5 samples with both "Fast" mode and Pro. (I think the mode is really important to mention if you're going to post comments like this; I was thinking maybe it would be inconsistent with the Flash model.)
reply
Using something like Perplexity as an aggregator typically gets me better results, because I can click through to the sources.

It's not a perfect solution, though: it takes discipline and intuition to actually do that rather than blindly trust the summary.

reply
Did you actually ask the model this question or are you fully strawmanning?
reply