Rule of thumb: if it sits between you and the ground for an extended period, don’t cheap out on it or settle if you have the choice.
Shoes, desk chair, mattress, pillow, car seat.
Life is too short to be uncomfortable.
There’s actually no digital audio involved anywhere in this process. It’s all analog.
A magnetic tape cassette holds raw wave data of the sounds it records. Just like a vinyl record, except the “groove” is encoded in the tape’s magnetic coating instead of physically etched into a surface, and the “needle” is an electromagnet instead of, well, a needle.
An audio cable using a standard 3.5mm jack also transmits raw wave data. It has to, because the electrical signal in the cable is what directly drives the electromagnets in whatever speakers it’s hooked up to. If it’s coming out of a digital player, the player has to convert the signal on its own using an onboard digital-to-analog converter (a DAC).
The neat part is that since a tape deck read head is looking for an analog wave signal, and an analog wave signal is what an aux cable carries, the two are directly compatible with one another. If you actually crack one of these tape deck hacks open, you’ll find the whole thing is completely empty, save for the audio cable wires going directly to the write head that mimics the tape. Beyond that, there’s no conversion equipment, no circuit board, nothing. It’s a direct pass-through.
The body of the thing is nothing more than an elaborate way to trip all the mechanisms in the tape deck to trick it into thinking it’s holding a valid cassette, while simply holding the write head fixed in the proper spot.
I’m sure you already know all of this. I just think it’s really cool and I enjoy talking about it. Analog tech is amazing.
Votes on Lemmy are public, fyi.
You have to host your own Lemmy instance to see them for yourself, but you can check if you were so inclined.
I love cats. Other people’s cats.
I will never own my own cat because I don’t want to accept the burden of responsibility that responsible pet ownership demands.
I guess, in a very liberal definition of the term, “cloud gaming”. Specifically the old LodgeNet systems in hotels where you could rent Nintendo games by the hour to be streamed to your room from a physical console somewhere behind the front desk. Every room had a special controller hardwired to the television, covered in oodles of extra buttons, that also functioned as the television remote.
The service was objectively awful, of course, when factoring in how much the hotel charged compared to what little you got for it. But I’ve always found it fascinating.
My true hell would be instances only federating explicitly through whitelists. If what the other reply I received about Mastodon said is correct, and if Lemmy behaves similarly, then they operate on implicit auto-federation with every other instance. The actual transfer of data needs to be triggered by some user on one instance reaching out to the other instance, but there’s no need for the instances involved to whitelist one another first. They just do it. To stop the transfer, they have to explicitly defed, which effectively makes it an opt-out system.
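To put the difference in crude pseudo-code terms (a made-up sketch, not anything resembling actual Lemmy or Mastodon internals):

// Opt-out (blocklist): any instance is allowed unless it has been explicitly defederated.
function canFederateOptOut(instance, blocklist) {
  return !blocklist.includes(instance);
}

// Opt-in (allowlist): nothing is allowed until it has been explicitly whitelisted.
function canFederateOptIn(instance, allowlist) {
  return allowlist.includes(instance);
}

// A brand new instance gets through the opt-out check by default, but not the opt-in one.
console.log(canFederateOptOut("new.instance.example", ["spam.instance.example"])); // true
console.log(canFederateOptIn("new.instance.example", ["friendly.instance.example"])); // false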
The root comment I initially replied to made it sound, to me, like Mastodon instances choose not to federate with one another. Obviously they aren’t preemptively banning one another, so, I interpreted that to mean Mastodon instances must whitelist one another to connect. But apparently what they actually meant was, “users of Mastodon instances rarely explore outward”? The instances would auto-federate, but in practice, the “crawlers” (the users) aren’t leaving their bubbles often enough to create a critical mass of interconnectedness across the Fediverse?
The fact we have to have this discussion at all is more proof to my original point regardless. Federation is pure faffery to people who just want a platform that has everything in one place.
That sounds worse than I thought it was. I just assumed Mastodon was like Lemmy, where every instance federates with every other instance basically by default and there’s only some high-profile defed exceptions.
A Fediverse where federations are opt-in instead of opt-out sounds like actual hell. Yeah, more control to instances, hooray, but far less seamless usability for people. The only people you will attract with that model are the ones who think having upwards of seven alts for being in seven different communities isn’t remotely strange or cumbersome. That, and/or self-hosting your own individual instances. Neither of these describe the behavior of the vast majority of Internet users who want to sign up on a platform that just works with one account that can see and interact with everything.
Season’s the reason!
This is basically asking why anyone would live in or near a city like Los Angeles or New York City when Minot exists and has everything you could possibly need.
If you had to look up where Minot even is, you’ve proven my point.
Say what you will about whether living near the proverbial big city is worth it or not. But it cannot be denied, there is a world of experiences on offer at larger platforms that a smaller platform simply cannot provide. Network effect can be a cruel mistress.
I’m pretty sure they’re referring to the concept of defederation and how that can splinter the platform.
Bluesky is ““federated”” in largely the same ways as Mastodon, but there’s basically one and only one instance anyone cares about. The federation capability is just lip service to the minority of dorks like us who care.
To the vast majority of Twitter refugees, federation as a concept is not a feature, it’s an irritation.
It’s “I am working” not “I be working”.
From how it’s used and understood, it’s a lot closer to, “I am in a situation where I find myself working from time to time”. “I am working” suggests you’re doing it right now, “I be working” does not. This example is a unique, condensed way to convey a very specific idea that your idea of “proper English” cannot convey without a boatload of extra words.
If that’s still bothersome to you, well, I guess have fun kicking that proverbial land-crawling fish back into the sea if that’s where you get your jollies. IMO some prescriptivism is okay to get people on the same page, but the moment you use it as a cudgel to beat people who are very clearly already being understood, you’re being a prude.
I’ve heard this one phrased: “Newbs deserve a helping hand. Noobs deserve a kicking.”
Happy Debian daily driver here. I would never ever recommend raw Debian to a garden variety would-be Linux convert.
If you think something like Debian is something a Linux illiterate can just pick up and start using proficiently, you’re severely out of touch with how most computer users actually think about their machines. If you even so much as know the name of your file explorer program, you’re in a completely different league.
Debian prides itself on being a lean, no bloat, and stable environment made only of truly free software (with the ability to opt-in to nonfree software). To people like us, that’s a clean, blank canvas on a rock-solid, reliable foundation that won’t enshittify. But to most people, it’s an austere, outdated, and unfashionable wasteland full of flaky, ugly tooling.
Debian can be polished to any standard one likes, but you’re expected to do it yourself. Most people just aren’t in the game to play it like that. Debian saddles you with questions of choice almost no one is asking, or frankly, even knew were questions that could be asked at all. Mandatory customizability is a flaw, not a feature.
I am absolutely team “just steer them to Mint”. All the goodness of Debian snuck into their OS like medicine in a kid’s dessert, wrapped up in something they might actually find palatable. Debian itself can be saved for when, or shall I say if, the user eventually goes poking under the hood to discover how the machine actually ticks.
Everything works the same, times of website incompatibility are long gone.
Not completely true. It’s mostly true. I’ve daily driven Firefox for years, and the number of websites I’ve come across that wouldn’t function correctly in it but would work just fine in Chrome has been very slim… but not zero. Definitely not comparable to the complete shitshow of the ’90s and ’00s. That’s true. But it’s not a completely solved problem.
And with Mozilla’s leadership practically looking for footguns to play with, combined with the threat of Google’s sugar daddy checks drying up soon due to the antitrust suit (how utterly ironic that busting up the monopoly would actually harm the only competition…), that gap could get much worse in very little time if the resources to keep full-time devs paid disappear.
I recently had a rather baffling experience trying to preemptively avoid this by downloading the stupid app right away, only to discover I needed the website version anyway.
I was attempting to add my Known Traveler Number to an already booked trip with Southwest Airlines, booked by someone else. I was able to link the trip to my account right away in the app, no issue. And I could see the KTN field for my ticket sitting there, empty, greyed-out, and not interactable. I opened up the mobile version of their website, completely unsurprised to find it was identical to the app, except for the detail that the KTN field there was functional. Put in the information, changes reflected in the app instantly, and I was in the TSA PreCheck line that afternoon.
Why did the two versions obviously built from the same codebase have two different sets of capabilities? Why was the website the more capable of the two this time? I have no clue. All I know is I never want to be a developer at a corporation where I’d have to be responsible for this flavor of trash.
The more egalitarian principle would be to not assume. I won’t deny that. People from more minority locales have every right to be upset at being marginalized.
But at the same time, whenever I read passive aggressive comments on socials from residents of crown countries or from EAASL people around the world bitching about US defaultism as if people are doing it just to be ignorant dicks, I can only think to myself, “Uhh, hello? What do you think the demographics of this space were? What did you expect?”
Americans are hardly the majority of the world’s English speakers, but for all the reasons you listed, they tend to remain a massive plurality, if not an outright overwhelming majority, of any mainstream online English language platform. No, that’s not a license to perpetuate US defaultism. But like… read the room, people. Your good fight is far more uphill than you seem to think it is.
I’ve seen at least one company press kit, in its rules on how to display their logo, refer to it as “respect distance”.
It’s also part of what makes FOSS niche.
I recognize three kinds of comments that have different purposes.
The first kind are doc block comments. These are the ones that appear above functions, classes, class properties, and methods. They usually have a distinct syntax with tags, like:
/**
* A one-line description of this function's job.
*
* Extra details that get more specific about how to use this function correctly, if needed.
*
* @param {Type} param1
* @param {Type} param2
* @returns {Type}
*/
function aFunctionThatDoesAThing(param1, param2) {
// ...
}
The primary thing this is used for is automatic documentation generators. You run a program that scans your codebase, looks for these special comments, and automatically builds a set of documentation that you could, say, publish directly to a website. IDEs can also use them for tooltip popups. Generally, you want to write these like the reader won’t have the actual code to read. Because they might not!
The second kind is standalone comments. They take up one or more lines all to themselves. I look at these like warning signs, for when there’s something about the upcoming chunk of code that doesn’t tell the whole story obviously by itself. Perhaps something like:
/* The following code is written in a weird way on purpose.
I tried doing <obvious way>, but it causes a weird bug.
Please do not refactor it, it will break. */
Sometimes it’s tempting to use a standalone comment to explain what dense, hard-to-read code is doing. But ideally, you’d want to shunt it off to a function named what it does instead, with a descriptive doc comment if you can’t cram it all into a short name. Alternatively, rewrite the code to be less confusing. If you literally need the chunk of code to be in its confusing form, because a less confusing way doesn’t exist or doesn’t work, then this kind of comment explaining why is warranted.
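To make that concrete with a completely made-up example, the kind of shunting I mean looks like this:

// Instead of a dense condition with a paragraph of explanation above it...
// if (user.age >= 18 && user.country === "US" && !user.optedOut) { ... }

// ...shunt it into a function named what it does, with a doc comment on top.
/**
 * Whether we're allowed to send marketing email to this user.
 *
 * @param {Object} user
 * @returns {boolean}
 */
function canReceiveMarketingEmail(user) {
  return user.age >= 18 && user.country === "US" && !user.optedOut;
}

console.log(canReceiveMarketingEmail({ age: 30, country: "US", optedOut: false })); // true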
The last kind are inline comments. More or less the same use case as above, the only difference being they appear on the same line as code, usually at the very end of the line:
dozen = 12 + 1; // one extra for the baker!
In my opinion, these comments have the least reason to exist. Needing one tends to be a signal of a code smell, where the real answer is just rewriting the code to be clearer. They’re also a bit harder to spot, being shoved at the ends of lines. Especially true if you don’t enforce maximum line length rules in your codebase. But that’s mostly personal preference.
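A made-up illustration of what I mean, where a name carries the information instead of the trailing comment:

// Trailing comment doing the explaining:
let rolls = 4 * (12 + 1); // each dozen gets one extra for the baker

// The name doing the explaining instead:
const BAKERS_DOZEN = 13;
let rollsClearer = 4 * BAKERS_DOZEN;

console.log(rolls === rollsClearer); // true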
There’s technically a fourth kind of comment: commented-out code. Where you select a chunk of code and convert it to a comment to “soft-delete” it, just in case you may want it later. I highly recommend against this. This is what version control software like Git is for. If you need it again, just roll back to it. Don’t leave it to rot in your codebase taking up space in your editor and being an eyesore.
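If the worry is finding that code again later, plain old Git already covers it (the paths and hashes below are placeholders):

# See the old versions of a file, diffs included
git log -p -- path/to/the/file.js

# Bring the file back exactly as it was at some old commit
git checkout <old-commit-hash> -- path/to/the/file.js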
Nah. The real cancer is the quiet plurality of users who just scroll through the post feed and only vote, without even reading the comments. The ones responsible for the occasional thread that has entirely negative comments but gets upvoted to the stratosphere anyways.