• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: October 8th, 2023

  • What you have heard about is a feature called “Recall”, which has not actually rolled out yet and will only be coming to PCs with specific neural processing units. Other Windows users will not be affected (although of course that will change over time as old devices are replaced with new ones).

    Is it possible? Yes, of course it’s possible. You could say that about pretty much any operating system - including Linux distros - if the functionality turns out to be popular.

    However, to be 100% clear, this is functionality that the user can disable (either entirely, or on an app-by-app basis). And data is never transmitted to the cloud or to Microsoft. What’s on the device does not leave the device. It’s also really not in Microsoft’s own interest to take on that responsibility… How would they know whether you paid for an app/game/song, even if they wanted to?

    But back to your question: yes, of course it is possible. This type of technology has already been prototyped in different ways (e.g. Apple did work on identifying CSAM on the iPhone, although it was never implemented).

    Yes, Linux gives you a lot more control. If you were to make the switch, I could list a hundred other reasons that are far more compelling than this storm in a teacup.

    That said, there’s absolutely no reason a Linux distro couldn’t also bring the same functionality, if there is consumer appetite for it.

    If you are looking to truly make it “impossible”, you need to air-gap your machine and not connect to the internet anymore.


  • In defence of the author, there is absolutely nothing about the term “AI” that just means “LLM” in an informed context (which is what Wired purports to be). And the words “machine learning” are literally front and centre in the subtitle.

    I don’t see how anyone could misunderstand this unless it was a deliberate misreading… Or else just not attempting to read it at all…

    (That said, yes, I do hate the fact that product managers now love to talk about how every single feature is “AI” regardless of what it actually is/does)


  • It stems from an old proverb: “there’s nowt so queer as folk”, essentially meaning “people are strange”. The meaning of “queer” has shifted and narrowed over time to refer to sexuality, but kept its ties to this idiom, resulting in the TV show “Queer as Folk” and the generic phrase “queer folk”.

    There is nothing especially pretentious or mythical about the word. It may just be your own assumptions/interpretations of it. Far more people have an issue with the word “queer” than they do “folk”. If you don’t like it, don’t use it, but you should also aim to shake the stigma from it, as it’s not what 99.9% of people mean when they use it.


  • If you are taking an existing publication and just tweaking details (e.g.: character names, locations, dialogue), that’s not fanfic at all; at best that’s an adaptation. If you’re creating a parody (and provide proper citations/attributions to the originating work) it may be fair use. More likely, it’s still considered plagiarism if you can still recognisably see the concepts, structure and inspiration but do not have the author’s permission.

    There is no exact percentage for plagiarism, and that is by design in most countries’ legal systems. It is about concepts and ideas, and whether a “reasonable person” could make the connection.

    Proper fanfic is where you take existing characters and locations, but put them into an entirely new story / scene / context that never happened in the original work, so is considered “original” in that sense.


  • Funding/resourcing is obviously challenging, but I think there are things that can support it:

    1. State it publicly as a proud position. Other platforms are too eager to promote “free speech” at all costs, when in fact they are private companies that can impose whatever rules they want. Stating a firm position doesn’t cost anything at all, whilst also playing a role in attracting a certain kind of user and giving them confidence to report things that are dodgy.

    2. Leverage AI. LLMs and other AI tools can be used to detect bots and deepfakes and to run sentiment analysis on written posts. Obviously it’s not perfect and will require human oversight, but it can be an enormous help, letting staff spot things faster that they might otherwise miss.

    3. Punish offenders. Acknowledging the complexities of enforcing it consistently, there are still things you can do to remove the most egregious bad actors from the platform and signal to others that the rules are real.

    4. Price it in. If you know that you need humans to enforce the rules, then build it into your advertising fees (or other revenue streams) and sell it as a feature (e.g.: companies pay extra so they don’t have to worry about reputational damage when their product appears next to racists etc). The workforce you need isn’t that large compared to the revenue these platforms can potentially generate.

    I don’t mean to suggest it’s easy or failsafe. But it’s what I would do.
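    To make point 2 concrete, here is a toy triage sketch. The flagged terms, scoring, and threshold are invented purely for illustration; a real platform would use trained classifiers or LLM-based sentiment analysis rather than keyword matching, with humans reviewing whatever gets surfaced:

    ```python
    # Toy moderation-triage sketch: surface posts for human review.
    # FLAG_TERMS and the threshold are illustrative assumptions, not a
    # real model -- production systems would use trained classifiers.

    FLAG_TERMS = {"scam", "fake giveaway"}  # hypothetical examples

    def triage(posts, threshold=1):
        """Return posts containing at least `threshold` flagged terms,
        sorted so the highest-scoring posts reach moderators first."""
        scored = []
        for post in posts:
            text = post.lower()
            score = sum(term in text for term in FLAG_TERMS)
            if score >= threshold:
                scored.append((score, post))
        return [post for score, post in sorted(scored, reverse=True)]

    queue = triage([
        "Check out my cat photos",
        "This fake giveaway is a scam, report it",
    ])
    # Only the second post is queued for human review.
    ```

    The point of the sketch is the shape of the pipeline, not the scoring: machines rank and filter, humans make the final call.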


  • For anyone who’s willing to spend ~15 mins on this, I’d encourage you to play Techdirt’s simulator game Trust & Safety Tycoon.

    While it’s hardly comprehensive, it’s a fun way of thinking about the balance between needing to remain profitable/solvent whilst also choosing what social values to promote.

    It’s really easy to say “they should do [x]”, but sometimes that’s not what your investors want, or it has a toll in other ways.

    Personally, I want to see more action on disinformation. In my mind, that is the single biggest vulnerability that can be exploited with almost no repercussions, and the world is facing some important public decisions (e.g. elections). I don’t pretend to know the specific solution, but it’s an area that needs way more investment and recognition than it currently gets.