I’m a lonely smut writer in Portugal! Feel free to say hello! :3

  • 0 Posts
  • 14 Comments
Joined 4 months ago
Cake day: November 4th, 2025

  • There was an interview I saw recently with Dario Amodei where he said that Anthropic aren’t categorically against autonomous weapons, only that they didn’t think they were ready, seemingly implying they would make mistakes similar to how LLMs hallucinate. A lot of the media coverage around them seemed to imply that they had a higher ethical standard than the others, and I mean… maybe? I guess it could be argued that wanting to minimize collateral damage is more ethical, but regardless, I think it’s important to keep perspective when we see how they act in the coming weeks and months.




  • For your first question, what you’re describing is a problem with education and staffing, not a problem with the tool itself. I’m not suggesting you keep around ‘one old man who hates AI’; my pitch is that you bar the use of AI for human-level checks.

    For your second, yes I saw the part about how news and media are representing AI in healthcare, but I don’t really see how news or media are relevant here. Could you explain this a bit for me?

    I don’t intend to gloss over the issues with Generative AI/LLMs, I tried to be specific in my separation of ML from them in my original comment where I said LLMs in their public facing version (ChatGPT, Claude, whatever) aren’t very useful.

    The original comment I replied to asked “is “AI” even useful (etc)” but also mentioned LLMs. I was trying to make the point that LLMs aren’t the only type of AI and that others can be employed to great effect. If that was unclear, that’s my bad but that was my intention.

    The reason I don’t want to engage with a hypothetical is because I could just as easily counter with “what if it diagnoses at a 100% success rate? What if fear of losing skills results in doctors never wanting to use AI, resulting in more deaths?” Neither hypothetical argument is really very helpful for the discussion. I promise you I’ve thought about this a lot (but again, I’m not an expert, nor am I in the field), but more importantly I have friends finishing doctorates in the bioinformatics field whom I get some insight from, and I’m, at least at this point, convinced of the benefits.


  • I read both articles you linked, but I’m not really seeing how they support your point. The first article seemed to support the idea that healthcare staff would welcome more seamless, user-friendly AI tools in the field and the second discussed biases within tools they selected for cancer diagnoses and a tool they used to reduce those biases. Am I misunderstanding what you’re saying somewhere?

    Also, with regard to the reduction in diagnostic accuracy of diagnosticians with AI, I would need to see the specific article to be sure, but if it’s the one that was posted across reddit a few months back, I read through that one as well. It seemed to agree with a similar article about students writing papers with and without the use of ChatGPT (group A writes with it, group B writes without it, and afterwards both groups are asked to write without the LLM. Group B’s essay was shown to be better. This is a hugely reductive description of the experiment, but it gets the idea across).

    Again, it makes sense that if you use a tool to facilitate an action, that tool is replacing that skill and you get “rusty”. It does not mean that the existence of a tool would reduce skill in those who do not use it, though. My suggestion of using it as a screening tool wouldn’t affect the diagnostician’s skill unless they also used it, which sorta defeats the purpose of them being a human check on the process, post-screening flag.

    I can’t speak to your other points as they’re hypothetical. Obviously, I wouldn’t advocate for an inaccurate tool that causes an already overworked field to take on more work. I’m only suggesting that ML is a tool that has use-cases and can be used to supplement current processes to improve outcomes. They can, and are, being improved constantly. If they’re employed thoughtfully, I just think they can be a huge benefit.


  • Regarding the doctor’s signature thing, it seems a bit premature to say a single flawed study invalidates the entire field and tech, especially when the tech was working as intended in that case and the failure was user error in the study.

    And of course, like any tool it should be utilized thoughtfully. Any form of technology directly takes away from the skill previously utilized to get results. Flint and steel took away from the rubbing sticks together skill. The combustion engine took away from many different professional skills.

    Consider that, in this case, we don’t just have to replace diagnosis but could augment it instead. What if every hospital around the world could augment regular medical care with a single machine processing results? Every single check-up could include a quick cancer screening. If the machine flags you as ‘at risk’, a doctor could then see you for human diagnosis and validation. The skill of diagnosis is still needed and utilized, but now everyone can have regular screening instead of overwhelming an already overtaxed healthcare system.
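    The two-stage idea above can be sketched in a few lines. This is a toy illustration, not a real clinical system; every name, the stand-in “model”, and the threshold value are made up for the example:

```python
# Hypothetical sketch of a two-stage screening pipeline: a cheap ML model
# flags 'at risk' results, and only flagged cases go on to a human
# diagnostician. All names and the threshold are illustrative.

def screen(result, model, threshold=0.2):
    """Return True if the case should be escalated to a doctor."""
    risk = model(result)  # model returns a risk score in [0, 1]
    return risk >= threshold

def triage(results, model):
    # Only flagged cases consume a diagnostician's time.
    return [r for r in results if screen(r, model)]

# Toy usage with a stand-in "model" based on a single fake marker:
toy_model = lambda r: r["marker_level"] / 100
cases = [{"id": 1, "marker_level": 5}, {"id": 2, "marker_level": 60}]
print([c["id"] for c in triage(cases, toy_model)])  # prints [2]
```

    The point of the sketch is just the shape of the workflow: the machine never makes the diagnosis, it only decides who gets seen sooner, so the human skill stays in the loop.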

    Again, all I’m saying is that there are practical, useful use-cases for the technology, they’re just not what we are doing with them.

    Edit: as an afterthought, I’m no expert here. As far as I understand, LLMs are a type of ML, but ML encompasses a way broader category of ‘AI’. I’m mostly against LLMs for just general use like they are currently. I am advocating for ML as a whole, with thoughtful application.


  • Generative AI in its current, public-facing form? Probably not. It’s sort of like an invention-of-the-internet situation. It CAN be used to facilitate learning, share information, and improve lives. Will it be used for that? No.

    A friend of mine is training local LLMs to work in tandem for early detection of diseases. I saw a pitch recently about using AI to insulate moderators from the bulk of disturbing imagery (a job that essentially requires people to frequently look at death, CSAM, and violence and SIGNIFICANTLY ruins their mental health). There are plenty of GOOD ways to use it, but it’s a flawed tech that requires people to responsibly build it and responsibly use it, and it’s not being used that way.

    Instead it’s being scaled up and pushed into every possible application both to justify the expenses and enrich terrible people, because we as a society incentivize that.

    Edit: hugely belated, I misspoke here after checking with my friend. He’s using local models, but they aren’t LLMs. This is why I’m no expert. 😅





  • You’re 18. It’s somewhere in the early 2000s and you’ve just graduated. You’re soaking in the warm summer night air in your bed watching The Office. The world is so far away and simultaneously rushing at you at the speed of fuck. Your Blink-182 CD loops back again.

    Suddenly, a bright pinprick of light bathes your dim room in an eerie blue-white glow. The light begins to grow and you realize it’s undulating, like a fluid unbound by gravity as it roils in the air. You’re too stunned to speak and cover your eyes at the harsh light. Your hairs stand on end and chills run along your skin. Somewhere inside, you associated such luminosity with heat, but the sphere—no, the disc, seems to be consuming the energy in the room, like some kind of ethereal whirlpool.

    You gasp as a shadow moves through the shimmer. First a hand, then the upper half of what looks like a torso. The figure cocks their head as they look around the room. You can only make out their silhouette, but… they’re vaguely familiar.

    It’s… you! They’re different, a bit more worn down, perhaps, but they’re unmistakably you. After a moment, your breath catches in your throat. They’re older. Your mind, stunned by the absurdity of what has just occurred, finally catches up.

    “You’re me… from the future,” you say. The statement immediately sounds stupid. Of course they are. The portal, the older you, what else could be happening. You scramble for a pen and an old school journal at your nightstand. You’ve fantasized about this before. You know what to do. Write down what they say and you’ll be rich. No, you’ll stop some horrible cataclysm. Maybe you’ll keep your true love from leaving!

    You turn back to yourself expectantly, anxiety causing your hand to shake on the page. You’re holding your breath. Your lungs burn, but you can hardly bring yourself to care.

    The older you looks down at you from the swirling light.

    “You are eighteen,” they say with a shit-eating grin. In an instant, the light is gone. Darkness floods your room again as if nothing at all had happened. Outside your window, crickets continue to chirp. Your mind races, generations of genetically perfected pattern recognition searching for meaning in the words until you remember shitposting about this exact scenario on 4chan.

    “Oh go fuck yourself,” you say, tossing aside the journal.


  • I’m no expert, but I feel like a data center in space is a super niche use case. Bandwidth seems like it would be a major issue. Heat seems like it would as well. And as you said, jurisdiction would be a problem that many businesses wouldn’t necessarily want to contend with.

    While the devices are difficult to get to physically, should an adversarial state actor send something up, it’s not like we could stop them from accessing the devices in a way we could if they were within the borders of a country. They’re harder to reach for smaller adversaries, and significantly easier for bigger ones. Not to mention significantly harder for us to repair if something goes wrong.

    I’m not saying data centers in space are a bad idea in general, but I am not seeing a huge benefit to them right now.