Am I the only one who doesn’t use any of this AI shit?
Everything is AI nowadays. Heck, AI uses you more than you use it if you’ve spent any amount of time online. It’s too late now though, unless you could you-know-what Sam Altman back in 2015.
Honestly: does anyone at this point think to themselves that using an OpenAI browser is a good idea? What does it even provide in terms of benefit over literally any alternative?
Nobody on this platform. But on the normie web there are probably some folks who think it’s a good idea.
So many of them that it is scary. And educating them will probably just elicit the stubborn response: “I don’t care, I like it, it’s convenient, and the errors won’t kill me” (at least if their attitude towards privacy is any indication).
The overwhelming majority of users will use whatever’s preinstalled on their platform. I dunno if OpenAI can go pay some cell phone manufacturer to preinstall their browser, but if they want market share, I’m pretty sure that’s the only realistic route to get it.
Not really; Chrome has overwhelming dominance on desktop despite not being preinstalled on any desktop operating system.
Often preinstalled on prebuilts and laptops though, along with the OEM’s bloatware.
Yes
does anyone at this point think to themselves that (…)
Yes.
Whatever the rest of this sentence would be, the answer is “yes”.
Furthermore, I’ve found the answer to this to be not just “yes” but “yes, most of them”. I think I’ll just give up.
In the marketing materials and demonstrations of Atlas, OpenAI’s team describes the browser as being able to act as your “agent”, performing tasks on your behalf.
But in reality, you are the agent for ChatGPT.
During setup, Atlas pushes very aggressively for you to turn on “memories” (where it tracks and stores everything you do and uses it to train an AI model about you) and to enable “Ask ChatGPT” on any website, where it’s following along with you as you browse the web. By keeping the ChatGPT sidebar open while you browse, and giving it permission to look over your shoulder, OpenAI can suddenly access all kinds of things on the internet that they could never get to on their own.
Those Google Docs files that your boss said to keep confidential. The things you type into a Facebook comment box but never hit “send” on. Exactly which ex’s Instagram you were creeping on. How much time you spent comparing different pairs of shoes during your lunch hour. All of those things would never show up in ChatGPT’s regular method of grabbing content off the internet. Even Google wouldn’t have access to that kind of data when you use their Chrome browser, and certainly not in a way that was connected to your actual identity.
But by acting as ChatGPT’s agent, you can hold open the door so that the AI can now see and access all kinds of data it could never get to on its own. As publishers and content owners start to put up more effective ways of blocking the AI platforms from exploiting their content without consent, having users act as agents on behalf of ChatGPT lets them get around these systems, because site owners are never going to block their actual audience.
And while ChatGPT is following you around, it can create a complete and comprehensive surveillance profile of you — your personality, your behaviors, your private documents, your unfinished thoughts, how long you lingered on that one page before hitting the back button — at a level that the search companies and social networks of the last generation couldn’t even dream of. We went from worrying about being tracked by cookies to letting an AI company control our web browser and watch everything we do. The amount of data they’re gathering is unfathomable.
During setup, Atlas pushes very aggressively for you to turn on “memories” (where it tracks and stores everything you do and uses it to train an AI model about you)
I wonder, do memories really train a model about the user? Or are they just shoved in the context window strategically? Possibly selected by a small performant model in the background based on relevance to the current context window?
Training millions of mini models on individual people would be really interesting, and I haven’t noticed anything saying that’s happening yet. Even though it seems like a logical idea.
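For what it’s worth, the context-window reading is the simpler one to picture. Here is a minimal, hypothetical sketch of that idea: stored memory snippets get ranked for relevance against the current message and the top few are prepended to the prompt, with no per-user training at all. The function names, the toy bag-of-words similarity, and the example memories are assumptions for illustration, not anything OpenAI has documented.

```python
# Hypothetical sketch of "memories are just injected into the context window".
# Nothing here reflects OpenAI's actual implementation; the names and the
# similarity measure are made up for illustration.
from collections import Counter
from math import sqrt


def relevance(memory: str, message: str) -> float:
    """Toy relevance score: cosine similarity over word counts."""
    wm, ws = Counter(memory.lower().split()), Counter(message.lower().split())
    dot = sum(wm[w] * ws[w] for w in wm)
    norm = sqrt(sum(v * v for v in wm.values())) * sqrt(sum(v * v for v in ws.values()))
    return dot / norm if norm else 0.0


def build_prompt(user_message: str, memories: list[str], top_k: int = 3) -> str:
    """Pick the memories most relevant to the current message and prepend them
    to the prompt -- no per-user model training involved."""
    ranked = sorted(memories, key=lambda m: relevance(m, user_message), reverse=True)
    memory_block = "\n".join(f"- {m}" for m in ranked[:top_k])
    return f"Known facts about the user:\n{memory_block}\n\nUser: {user_message}"


# Example usage with made-up memories:
memories = [
    "Prefers concise answers",
    "Is shopping for running shoes",
    "Works as a data analyst",
]
print(build_prompt("Which running shoes should I buy?", memories))
```

In a design like that, the only thing that ever changes is which text gets injected, which would also explain why memories can be edited or deleted instantly without retraining anything.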