This is actually very cool. Not really replacing a browser, but it could enable an alternative way of browsing the web with a combination of deterministic search and prompts. It would probably work even better as a command line tool.
A natural next step could be doing things with multiple "tabs" at once, e.g.: tab 1 contains news outlet A's coverage of a story, tab 2 has outlet B's coverage, tab 3 has Wikipedia; summarize and provide references. I guess the problem at that point is whether the underlying model can support this type of workflow, which doesn't really seem to be the case even with SOTA models.
For me, a natural next step would be to turn this into a service -- rather than doing it in the browser, this acts as a proxy, strips away all the crud and serves your browser clean text. No need to install a new browser, just point the browser to the URL via the service.
But if we do it, we have to admit something hilarious: we will soon be using AI to convert text provided by the website creator into elaborate web experiences, which end users will strip away before consuming it in a form very close to what the creator wrote down in the first place (this is already happening with beautifully worded emails that start with "I hope this email finds you well").
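The proxy version of this idea can be sketched in surprisingly little code. Below is a minimal illustration using only Python's standard library; the endpoint shape (`/?url=...`), the naive tag-stripping, and all names are my own assumptions rather than anything the commenter specified, and a real service would add caching, sanitization, and the LLM pass on top:

```python
# Minimal sketch of the "clean-text proxy" idea: fetch the target page,
# strip markup, serve plain text. Purely illustrative.
from html.parser import HTMLParser
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen


class TextExtractor(HTMLParser):
    """Collects visible text, skipping script/style blocks."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def clean(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)


class ProxyHandler(BaseHTTPRequestHandler):
    # Request pages as /?url=https://example.com (no validation: sketch only)
    def do_GET(self):
        target = parse_qs(urlparse(self.path).query).get("url", [None])[0]
        if not target:
            self.send_error(400, "missing ?url= parameter")
            return
        with urlopen(target) as resp:
            body = clean(resp.read().decode("utf-8", "replace"))
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))


# To run: HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```

Pointing any existing browser (or curl) at the service then yields the stripped-down text, which is the "no new browser needed" property the comment is after.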
> tab 1 contains news outlet A's coverage of a story, tab 2 has outlet B's coverage, tab 3 has Wikipedia; summarize and provide references.
I think this is basically what https://ground.news/ does.
(I'm not affiliated with them; just saw them in the sponsorship section of a Kurzgesagt video the other day and figured they're doing the thing you described +/- UI differences.)
I am a ground news subscriber (joined with a Kurzgesagt ref link) and it does work that way (minus the Wikipedia summary). It's pretty good, and I particularly like their "blindspot" section showing news that is generally missing from a specific partisan news bubble.
Thank you.
I was thinking of showing multiple tabs/views at the same time, but only from the same source.
Maybe we could have one tab with the original content optimised for cli viewing, and another tab just doing fact checking (can ground it with google search or brave). Would be a fun experiment.
Interestingly, the original idea of what we call a "browser" nowadays – the "user agent" – was built on the premise that each user has specific needs and preferences. The user agent was designed to act on their behalf, negotiating data transfers and resolving conflicts between content author and user (content consumer) preferences according to "strengths" and various reconciliation mechanisms.
(The fact that browsers nowadays are usually expected to represent something "pixel-perfect" to everyone with similar devices is utterly against the original intention.)
Yet the original idea was (due to the state of technical possibilities) primarily about design and interactivity. The fact that we now have tools to extend this concept to core language and content processing is… huge.
It seems we're approaching the moment when our individual personal agent, when asked about a new page, will tell us:
Because its "browsing history" will also contain a notion of what we "know" from chats or what we had previously marked as "known".

It would have to have a pretty good model of my brain to help me make these decisions. Just as a random example, it would have to understand that an equation is the sort of thing I'm likely to look up even if I understand the meaning of it, just to double-check and get the particulars right. That's an obvious example; I think there must be other examples that are less obvious.
Or that I’m looking up a data point that I already actually know, just because I want to provide a citation.
But, it could be interesting.
When I was a child we knew that the North Star consisted of five suns. Now we know that it is only three suns, and through them we can see another two background stars that are not gravitationally bound to the three suns of the Polaris system.
Maybe in my grandchildren's lifetimes we'll know something else about the system.
Well, we should first establish some sort of contract for how to convey "I feel that I actually understand this particular piece of information, so when confronted with it in the future, you can mark it as such". My line of thought was more about a tutorial page that presents the same techniques as a course you finished a week prior, or a news page reporting on an event you just read about on a different news site a minute before … stuff like this … so you would potentially save the time spent skimming/reading/understanding only to realise there was no added value for you at that particular moment. Or, while scrolling through a comment section, hide comment parts repeating the same remark or joke.
Or (and this is actually doable absolutely without any "AI" at all):
(There is one page nearby that would be quite unusable for me, had I not a crude userscript to aid with this particular purpose. But I can imagine a digest of "What's new here?" / "Noteworthy responses?" would be way better.)

For the "I need to cite this source" case, you would naturally want the "verbatim" view without any amendments anyway. Also, before sharing / directing someone to the resource, looking at the "true form" would probably still be necessary.
I can definitely see a future in which we each have our own personal memetic firewall, keeping us safe and cozy in our personal little worldview bubbles.
Some people think the sunglasses in They Live let you see through the propaganda, others think that the sunglasses themselves are just a different kind of psyop.
So, you gonna “put on those sunglasses, or start chewing on that trashcan?” It’s a distinction without a difference!
https://www.youtube.com/watch?v=1Rr4mQiwxpA
> Well, there's nothing new of interest for you, frankly
For this to work like a user would want, the model would have to be sentient.
But you could try to get there with current models; it'd just be very untrustworthy, to the point of being pointless beyond a novelty.
Not any more "sentient" than existing LLMs even in the limited chat context span are already.
Naturally, »nothing new of interest for you« here is indeed just a proxy for »does not involve any significant concept that you haven't previously expressed knowledge about« (or however one puts it), which seems pretty doable, provided that the contract of "expressing knowledge about something" has been made beforehand.
Let's say that you have really grokked all the pages you have ever bookmarked (yes, a stretch; no "read it later" here) - then your personal model would be able to (again, figuratively) "make a qualified guess" about your knowledge. Or some kind of tag that you could add to any browsing history entry, or fragment, indicating "I understand this". Or set the agent up to quiz you when leaving a page (that would be brutal). Or … I think you get the gist by now.
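The "mark as known" contract described above can be made concrete with a toy sketch. Everything here (the class, the concept-set representation, the novelty score) is invented for illustration; a real agent would need actual concept extraction from pages, which is the hard part:

```python
# Illustrative sketch of a "mark as known" store: the agent keeps a set
# of concepts the user has explicitly claimed to understand (e.g. via a
# bookmark or an "I understand this" tag) and scores a new page by how
# much of it falls outside that set.
class KnowledgeStore:
    def __init__(self):
        self.known = set()

    def mark_known(self, concepts):
        """Called when the user tags a page or fragment as understood."""
        self.known.update(concepts)

    def novelty(self, page_concepts):
        """Fraction of the page's concepts the user hasn't marked as known."""
        page_concepts = set(page_concepts)
        if not page_concepts:
            return 0.0
        return len(page_concepts - self.known) / len(page_concepts)


store = KnowledgeStore()
store.mark_known({"http", "tls", "dns"})
print(store.novelty({"http", "quic"}))  # 0.5: only "quic" is new
```

A page scoring near zero is exactly the "nothing new of interest for you" case; the tutorial-after-the-course and already-read-news examples above would both land there.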
In your cleanup step, after cleaning obvious junk, I think you should do whatever Firefox's reader mode does to clean up further, and if that fails, bail out to the current output. That should reduce the number of tokens you send to the LLM even more.
You should also have some way for the LLM to indicate that there is no useful output, because perhaps the page is supposed to be an SPA. That would force you to execute JavaScript to render that particular page, though.
Just had a look, and there is quite a lot going into Firefox's reader mode.
https://github.com/mozilla/readability
For the vast majority of pages you'd actually want to read, isProbablyReaderable() will quickly return a fair boolean guess as to whether the page can be parsed or not.
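The real isProbablyReaderable() lives in Mozilla's JavaScript readability library, but the idea behind it is simple enough to sketch in a few lines. The following is a rough Python analogue, not the actual Mozilla code: it scores text density inside paragraph-ish tags, and while the 140-character and score-of-20 thresholds echo readability's documented defaults, treat the whole thing as an approximation:

```python
# Rough analogue of readability's "is this probably an article?" check:
# scan <p>/<pre> elements, ignore short ones, and accumulate a score
# proportional to how much long-form text they contain.
import re


def probably_readerable(html: str, min_score: float = 20.0,
                        min_content_length: int = 140) -> bool:
    score = 0.0
    for match in re.finditer(r"<(p|pre)\b[^>]*>(.*?)</\1>", html,
                             re.S | re.I):
        # Strip nested inline tags to measure visible text only.
        text = re.sub(r"<[^>]+>", "", match.group(2)).strip()
        if len(text) < min_content_length:
            continue
        score += (len(text) - min_content_length) ** 0.5
        if score > min_score:
            return True
    return False
```

Running a check like this before the full parse is what makes the bail-out cheap: pages that fail it can be sent through the current fallback path without paying for a reader-mode pass at all.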
> I was thinking of showing multiple tabs/views at the same time, but only from the same source.
I think the primary reason I use multiple tabs, but _especially_ multiple splits, is to show content from various sources. Obviously this is different from a terminal context, as I usually have Figma or API docs in one split and the dev server in the other.
Still, being able to have textual content from multiple sources visible or quickly accessible would probably be helpful for a number of users
Would really love to see more functionality built into this. Handling POST requests, enabling scripting, etc. could all be super powerful.
wonder if you can work on the DOM instead of HTML...
almost unrelated, but you can also compare spegel to https://www.brow.sh/
LLMs to generate SEO slop of the most utterly piss-poor quality, then another LLM to lossily "summarise" it back. Brave new world?