In the marketing materials and demonstrations of Atlas, OpenAI’s team describes the browser as being able to act as your “agent”, performing tasks on your behalf.
But in reality, you are the agent for ChatGPT.
During setup, Atlas pushes very aggressively for you to turn on “memories” (where it tracks and stores everything you do and uses it to train an AI model about you) and to enable “Ask ChatGPT” on any website, where it’s following along with you as you browse the web. By keeping the ChatGPT sidebar open while you browse, and giving it permission to look over your shoulder, OpenAI can suddenly access all kinds of things on the internet that they could never get to on their own.
Those Google Docs files that your boss said to keep confidential. The things you type into a Facebook comment box but never hit “send” on. Exactly which ex’s Instagram you were creeping on. How much time you spent comparing different pairs of shoes during your lunch hour. All of those things would never show up in ChatGPT’s regular method of grabbing content off the internet. Even Google wouldn’t have access to that kind of data when you use their Chrome browser, and certainly not in a way that was connected to your actual identity.
But by acting as ChatGPT’s agent, you can hold open the door so that the AI can now see and access all kinds of data it could never get to on its own. As publishers and content owners start to put up more effective ways of blocking the AI platforms from exploiting their content without consent, having users act as agents on behalf of ChatGPT lets them get around these systems, because site owners are never going to block their actual audience.
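For context on what “more effective ways of blocking” usually means in practice: most publisher-side blocking keys off the crawler’s declared user agent, either in robots.txt or in server rules. A minimal sketch using OpenAI’s publicly documented crawler names (GPTBot for training crawls, ChatGPT-User for on-demand fetches) might look like this:

```
# robots.txt — sketch of a typical publisher opt-out.
# This relies entirely on the crawler honestly identifying itself.
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /
```

None of that applies when the fetch happens inside a logged-in user’s own browser: the request carries the user’s normal user agent, cookies, and session, so it is indistinguishable from the audience the site wants to keep.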
And while ChatGPT is following you around, it can create a complete and comprehensive surveillance profile of you — your personality, your behaviors, your private documents, your unfinished thoughts, how long you lingered on that one page before hitting the back button — at a level that the search companies and social networks of the last generation couldn’t even dream of. We went from worrying about being tracked by cookies to letting an AI company control our web browser and watch everything we do. The amount of data they’re gathering is unfathomable.
During setup, Atlas pushes very aggressively for you to turn on “memories” (where it tracks and stores everything you do and uses it to train an AI model about you)
I wonder: do memories really train a model about the user, or are they just shoved into the context window strategically? Possibly selected by a small, performant model in the background based on relevance to the current conversation?
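If it is retrieval, the mechanics don’t need to be exotic. Here’s a minimal sketch of that hypothesis, purely my speculation rather than anything OpenAI has documented: each saved memory is embedded once, and at conversation time the top few are picked by similarity to the current context and prepended to the prompt. The toy `embed()` below is a stand-in for a real embedding model so the example runs on its own.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" so the sketch runs without a model;
    # a real system would call an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Hypothetical store: save memories once, retrieve by relevance."""

    def __init__(self) -> None:
        self.memories: list[tuple[str, Counter]] = []

    def save(self, text: str) -> None:
        self.memories.append((text, embed(text)))

    def relevant(self, context: str, k: int = 3) -> list[str]:
        # Rank every stored memory against the current context and
        # return only the top k; these get prepended to the prompt.
        q = embed(context)
        ranked = sorted(self.memories, key=lambda m: cosine(q, m[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.save("User prefers concise answers")
store.save("User is shopping for running shoes")
store.save("User works in marketing")

# Only relevant memories enter the context window; nothing is trained.
print(store.relevant("help me compare these two pairs of shoes", k=1))
# -> ['User is shopping for running shoes']
```

That design would also explain why memory feels selective rather than like total recall: the per-request token budget only admits whatever scores as relevant.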
Training millions of mini models on individual people would be really interesting, and I haven’t seen anything saying that’s happening yet, even though it seems like a logical next step.