Let me share some fun Mozilla facts about their previous CEO, who stepped down to become “executive chairwoman” last week.
She received $6.9 million in 2022, $5 million in 2021, and $3 million in 2020.
Her replacement is an executive from Airbnb and eBay. We will find out how much each of them is earning in 2025, when the financial statements are released.
They fired 60 staff and are adding AI to their flagship program to earn more money.
Tell me this is a good thing.
Mozilla has long been the most ethical player in this space (while still producing SOTA ML). All of their datasets/models are open source and usually crowdsourced. Not to mention, their existing work is primarily in improving accessibility.
ALSO, the other half of this story is that Firefox is becoming the primary focus again. Everybody’s freaking out about the AI stuff but that’s because they’re only reading the headlines. The programs they’ve shut down are things like Hubs (Mozilla’s metaverse platform), the VPN, and the sensitive data scrubber (which was using a third party service anyway).
As a software developer, I am a huge supporter of Mozilla’s developer initiatives, from the Manifest V2 implementation to MDN. But it’s also important to be realistic: Mozilla has long had major money problems, and not the kind that giving them more money would fix.
The Lunduke shit again? The guy who takes offense at money being donated to support “politics”, i.e. abortion rights?
Take a look at the other trash he posts on his Reddit profile. That blog is not a trustworthy source, by any stretch, and it conveniently ignores that he’s not looking at Mozilla’s spending alone, but at three separate entities that exist under the umbrella of the Mozilla Foundation.
I don’t know anything about him, but the criticism that they donate money to other charities rather than focus on making Mozilla’s core projects sustainable is, IMO, correct.
I don’t think this is a money-making move. The previous CEO was absolutely overly focused on monetization, and this move is a step away from that. I should’ve addressed this more explicitly in the above comment, but even for the players who actively monetize, AI is a money incinerator.
Cloud AI is, but for local AI they only need to incinerate enough money to train it. And that’s nothing if they just end up using Mixtral or something.
I agree it’s probably not for money-making; that’s my point. It’s instead that their management doesn’t know how to spend money.
Ok. Mozilla was spreading itself too thin, spending resources trying to compete with multiple products against established brands that were already way ahead of them. They needed to focus down onto their core product rather than frivolously cast about.
And AI is the technology of the future, despite all the whinging and griping by commenters on the subject. It’s being incorporated into the other major browsers, and it’s a must-have if Firefox is to remain relevant. I’m sure you’ll be able to turn it off in the settings if you don’t want it, and if you’re really concerned about catching AI cooties, there’ll be niche forks compiled without it.
And the ever-increasing CEO wages and the hiring of an Airbnb/eBay executive as CEO? Their previous CEO’s salary alone could’ve covered every one of those fired employees.
That part’s not good. I was addressing the “They fired 60 staff and are adding AI to their flagship program to earn more money” part.
I know, I was more looking at the bigger picture.
Adding AI could be fine, but with the direction the leadership is going I can’t see it as good in this case.
They didn’t hire the Airbnb/eBay executive to be CEO; they’ve been at Mozilla for a while.
Also, you understand that people can work for companies without supporting their agendas, right?
I agree with you that Mozilla is spreading itself too thin. And don’t get me wrong, I love Firefox and am a long-time user. But they do need to understand their user base better.
They aren’t going to become a sustainable business by copying more popular browsers. It’s their differences from the mainstream that make them appealing as an alternative in the first place. I already don’t like them foisting Pocket on me, which 100% should have remained an extension. I don’t like the fact that Google is their default search engine, which goes against all their privacy messaging. I understand the reason is money, but that’s kind of the definition of being a sellout, isn’t it? Their core values should always come first.
Fact is, those employees weren’t fired for any good reason other than to hop on the latest tech trend. It’s this sort of corporate “profit before people” bullshit that will erode any goodwill that people still have towards Mozilla. I couldn’t give a fuck about a stupid AI-driven chatbot being added to Firefox, and neither, I imagine, could many of their current users. Honestly, I think “AI” has ruined the internet in a lot of ways already. It’s already had a massive negative impact on the quality of search results across all major search engines, because of all the low-quality LLM content that has been produced, and it’s only going to get worse. And you can’t trust a single thing that comes out of those models, so what is even the point of them?
Sorry in advance for the old man rant lol.
As far as I am aware, Mozilla so far is only thinking about integrating AI in relatively smart ways that leverage their limited resources well. (There were some rumours a while back about using AI locally to search your history and tabs, as well as on-device translation (it’s arguable whether that counts as AI, but branding is everything).)
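For what it’s worth, a local history search like the rumoured one could in principle be quite simple. This is only a toy sketch, not Mozilla’s actual design (a real feature would presumably use a small on-device embedding model rather than bag-of-words), and the example history titles are made up:

```python
# Toy sketch of offline, on-device history search: rank page titles by
# cosine similarity to a query. No cloud service involved, stdlib only.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts; a real feature would likely use embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing terms
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search_history(query, titles):
    """Rank page titles by similarity to the query, best match first."""
    q = vectorize(query)
    return sorted(titles, key=lambda t: cosine(q, vectorize(t)), reverse=True)

history = [
    "Mozilla Firefox release notes",
    "Lego castle building tips",
    "Firefox privacy settings guide",
]
print(search_history("firefox privacy", history)[0])  # the privacy guide ranks first
```

The point of a design like this is exactly what the rumour implies: everything runs on your machine, and nothing about your browsing ever leaves it.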
Then Mozilla, please, emphasize the results instead of saying “We’re adding AI!”
They had $400m in cash in 2022; they don’t have any sustainability issues.
I think the obvious worry being alluded to is the reason they had $400m in cash: their arrangement with Google. Their primary sustenance comes from an entity actively seeking their destruction.
Yeah because we’ve never seen tech fads before heralded as the next big thing. If I could roll my eyes any harder we could harness that for power generation.
The tools I want to see integrated into Firefox already exist. I’ve used them. It’s just a matter of putting them together.
You have no idea, any more than the rest of us. Like, please tell me you understand “____ is the technology of the future” has been said more times than it’s ever been true.
The idea of AI is a technology of the future, but what we have now is not AI, not really, and this iteration can be just as big a flop as any other technology of the moment.
LLMs are what everyone dunks on, and “image generators are coming for our jobs! Think of artists! It’s not real art if a cheating machine does it!” is also a common cry.
But do any of those people even know about the new class of antibiotics discovered by a neural network trained to find patterns in protein folding? Do any of them know about the accuracy of the diagnoses IBM Watson was able to make in cases of rare cancers, even when doctors missed them? What about improvements in weather prediction accuracy? Novel suggestions in materials science?
We are mimicking neural patterns, similar to the way our own minds work, to achieve pattern recognition and even extrapolate from them. And yeah, right now we’re brute forcing it, and we’re not even entirely sure how these relationships develop. It’s in its infancy, and growing fast.
This is the technology considered the holy grail of computing. We have been chasing this concept since the 1940s. There are a million sci-fi stories about it, and there were a million more attempts to make it work before one really stuck.
And now we’re at the beginning of it being practical and you think we’re just gonna go “eh it’s a wet fart like the Virtual Boy. Oh well, let’s make some new phones or something”?
No. This is literally the technology of the future. Within your lifetime (assuming you live a reasonable while longer) there will come a point when you won’t be able to buy a CPU without some type of neural engine in it.
And yes, people will do (and already are doing) horrific shit with it. It will fuck over a large portion of the white-collar economy, a portion of whom were told to go into the careers they did because they’d be safe from automation. “Get a degree and you’ll be safe!” they told us. Now they tell us, “You’d better work at two different Targets to make that payment; should have studied a trade!”
So the reason for the skepticism and animosity is almost certainly the fear of being replaced. But look at how far these AI models have come in the last month alone; we’re already in “this is changing the future” territory, and these things are just getting started.
Dude. Take a chill in the bathtub and touch grass. AI is never taking my job, since it’s physical labor; I removed myself from the computer industry 15 years ago. But as someone who studied AI and LISP (which was mired in the previous AI craze), it’s not actually wrong to have animosity and be skeptical about the current AI. We’re literally using the same techniques as we did 30 years ago. We’ve invented nothing new since the last AI fad. What is driving this craze is the brute-force approach of massively parallel processing, not actual innovation.
There’s been some minor refinement, so it’s not exactly identical, but to use a metaphor: we’re using more Lego bricks and different colours now to build our castles, but they’re all still Lego bricks. Nothing has fundamentally changed.
… and you should know by now that the tech industry is funded by a hype machine, so temper your expectations. Current machine learning techniques are limited and inefficient; it’s not really a solvable problem with the current approach.
TL;DR: LLMs are a super far cry from actually being “intelligent”, and calling them AI is the equivalent of calling a wheeled, self-balancing electric board a “hoverboard”.
Here’s one of the big issues: basically none of the AI is even happening on your CPU; it’s happening in the cloud.
And that wouldn’t be an issue if companies stopped shoving “AI” into everything not originally built for it.
And even that wouldn’t be as big of an issue if companies talked about the benefits of the new tech instead of just going “AI!!!1!!” and dropping the mic.
This shit is just analog computing though, right? Like, at its base, we’re just reproducing analog computation in a digital environment and then framing that in a million different ways, like we’ve been doing since the seventies. We’ve actually had this shit since the first computers, which were analog. The whole reason we moved to digital, though, is that the results were easier to break down and parse, and we had control over every step of the process to confirm it was correct, and it was going to be correct every time. A clearer sense of limitations and constraints, basically.
Now I’m not entirely against analog computing as a matter of fact, right, in fact I think it can be pretty cool if we recognize it for what it is, but at the same time I can’t help but think that the level of hype around it is fucking insane. Primarily because it’s not easily controllable or reproducible. Not in the sense that we’re gonna somehow invent a rogue AI that will kill us all, or whatever garbage, but in the sense that, while you can get easily reproducible results (such is the nature of computation), it is very hard to control what the output is of a given neural network. You can process loads of information extremely quickly, but, like, what use is that if I don’t know whether or not the solution is correct, or if it’s just a kind of ballpark figure? That’s the main issue.
Again, fine if we recognize it, but I don’t think we’re really close at all to just, like, randomly inventing a rogue consciousness. We’re not anywhere close to that, from what I’ve seen. We’re still barely good at image recognition and generation in an actually complicated environment, and even then it’s still pretty hard to get what it is that you specifically want, partially because the hype is driving so much development at this point, and the implementation is bunk and, again, kind of uncontrollable. Venture capital jumping down this thing’s throat has partially blocked its airway, as I see it. Still a useful technology, potentially, but a million stupid tech demos and image generators for nonsensical memes that we can flood everyone with is the dumbest shit imaginable, and even dumber than that is the number of venture capitalists I see who want to somehow monetize that.
And so I have to ask, right: if I want a robot to sort through different colors of little plastic beads, do I get a large language model on that, or do I just run a pretty basic and more efficient algorithm that narrows the beads down to a certain color, as recorded by the camera, and that’s it? Do I want to translate a sentence with AI, or do I want to manually run a straight word-to-word conversion that maybe changes based on a couple of passes I run over it to check whether it contextually makes sense, with something like a Markov chain? Trick question: they are both the same approach. AI has just done it in a way where I can apply a kind of broader paintbrush to the thing and get my results a little faster and with a little less thought, even if I have less control over it.
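To make the bead example concrete, the “pretty basic and more efficient algorithm” could be as simple as nearest-color matching. The palette values and the sampled pixel below are purely illustrative, not from any real camera:

```python
# Nearest-color classification: the "basic algorithm" alternative to an ML model.
# Reference palette values are made up for illustration.
PALETTE = {
    "red":   (200, 40, 40),
    "green": (40, 180, 60),
    "blue":  (40, 60, 200),
}

def classify_bead(pixel):
    """Return the palette color with the smallest squared RGB distance to the pixel."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PALETTE, key=lambda name: dist2(PALETTE[name], pixel))

print(classify_bead((190, 50, 55)))  # a reddish pixel classifies as "red"
```

No training, no GPU, and every step is inspectable and deterministic, which is exactly the trade-off being described.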
The entire discussion is to distract ourselves from the raw truth:
Fax machines are the technology of the future.
Fax machines will outlive us all. AI and VR will reach their heyday, then wane with years and be replaced. But whatever replaces them will sit quietly in the shadow of the everlasting Fax machine.
Don’t forget that Mozilla even had a Metaverse instance, chasing the VR fad, only to turn around and chase the latest trendy subject.
$6.9 million?
Nice.
Ni.ce
The answers to both of those questions depend very heavily on the details. I think focusing on their main products is a good thing, but adding AI sounds like one of those likely-terrible decisions. We definitely need privacy-friendly, open-source AI, though, in all areas, so I hope this is Mozilla pushing for something sensible here.
You’re right. Mozilla is the devil. Everyone go to the better option in Silicon Valley for web browsing…
…
…
…
…
Can you tell me what they were doing at either of those companies, or what they’ve been doing at Mozilla since they were hired there? Have you done any actual research into this, at all, are you just assuming that because you saw two shitty companies on the resume, they must be a champion of those shitty companies?