Seems short-sighted to commit to not running ads just as OpenAI plans to start running them. You win in the court of public opinion for a few months, then look like a hypocrite when your investors inevitably force you to run ads anyway. Sort of like how safetyism was convenient marketing until it became clear it was a revenue repellent, and they quietly walked it back.
The current crop of LLM-backed chatbots do have a bit of that “old, good internet” flavor. A mostly unspoiled frontier where things are changing rapidly, potential seems unbounded, and the people molding the actual tech and discussing it are enthusiasts with a sort of sorcerer’s apprentice vibe. Not sure how long it can persist, since I’ve seen this story before and we all understand the incentive structures at play. Does anyone know if there are precedents for PBCs or B-Corp-type businesses being held accountable for betraying their stated values? Or is it just window dressing with no legal clout? Can they convert to a standard corporation on a whim and ditch the non-shareholder-maximization goals?
There’s nothing old internet about these AI companies. The old internet was about giving freely and asking for nothing in return. These companies take everything and give back nothing, unless you’re willing to pay, that is.
I get the sentiment, but if you can't acknowledge that AI is useful and currently a lot better than search for a great many things, then it's hard to have a rational conversation.
Why do they need to acknowledge something outside of the point they're trying to make?
No, they don't. They soak up tons of your most personal and sensitive information like a sponge, and you don't know what's done with it. In the "good old Internet", that did not happen. Also in the good old Internet, it wasn't the masses all dependent on a few central mega-corporations shaping the interaction, but a many-to-many affair, with people and organizations of different sizes running the sites where interaction took place.
Ok, I know I'm describing the past through rose-tinted glasses. After all, the Internet started as a DARPA project. But still, current reality is itself rather dystopian in many ways.
And it's very timely and intentional, as Gemini is already repeatedly shoving product links in my face, while OpenAI has recently started testing ads. [0]
[0] https://openai.com/index/our-approach-to-advertising-and-exp...
> This is one of those “don’t be evil”-style articles that companies remove when the going gets tough, but I guess we should be thankful that things are looking rosy enough for Anthropic at the moment that they would release a blog post like this.
Exactly this. Show me the incentive, and I'll show you the outcome, but at least I'm glad we're getting a bit more time ad-free.
> I guess we should be thankful that things are looking rosy enough for Anthropic
Forgive me if I am not.
Current LLMs often produce much, much worse results than manually searching.
If you need to search the internet on a topic that is full of unknown unknowns for you, they're a pretty decent way to get the lay of the land, but beyond that, off to Kagi (or Google) you go.
Even worse is that the results are inconsistent. I can ask Gemini five times at what temperature I should take a waterfowl out of the oven, and get five different answers, 10°C apart.
You cannot trust answers from an LLM.
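If anyone wants to reproduce this, here's a rough sketch using the Gemini API (assumes the google-generativeai Python package and a key in GEMINI_API_KEY; the model name and prompt wording are just examples, not an exact record of what I typed):

    import os
    import google.generativeai as genai

    # Ask the same question N times and count distinct answers.
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # example model name

    PROMPT = ("At what internal temperature should I take a duck out of "
              "the oven? Reply with a single temperature in °C.")

    answers = []
    for i in range(5):
        resp = model.generate_content(
            PROMPT,
            # Default sampling; try temperature=0 to see how much of the
            # spread is sampling noise vs. genuine disagreement.
            generation_config=genai.GenerationConfig(temperature=1.0),
        )
        answers.append(resp.text.strip())
        print(f"run {i + 1}: {answers[-1]}")

    print("distinct answers:", len(set(answers)))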
> I can ask Gemini five times at what temperature I should take a waterfowl out of the oven, and get five different answers, 10°C apart.
Are you sure? Both Gemini and ChatGPT gave me consistent answers 3 times in a row, even though the two models' answers differ slightly from each other.
Their answers are in line with this version:
https://blog.thermoworks.com/duck_roast/
I created an account just to point out that this is simply not true. I just tried it! The answers were consistent across all 5 samples with both "Fast" mode and Pro (which mode you used is really important to mention if you're going to post comments like this; I was thinking the Flash model might be inconsistent, but it wasn't).
It obviously takes discipline, but using something like Perplexity as an aggregator typically gets me better results, because I can click through to the sources.
It's not a perfect solution, because you need the discipline/intuition to actually do that rather than blindly trusting the summary.
Did you actually ask the model this question, or are you fully strawmanning?