• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: July 16th, 2024

  • I don’t understand what “give the land back” means. Would you mind explaining it?

    There are a lot of poor, oppressed people who live on land their ancestors didn’t own. In the US, all Black people and most Native Americans don’t live within 1000 km of where their ancestors lived 600 years ago. So when land is given back, what happens to the people who currently reside there? Do natives become landlords? Is there ethnic cleansing? Or is it only land where people don’t reside? Also, many native cultures didn’t even have land ownership, so how do you give land back without forcing them into a western mould?


  • All cops are bastards because the role of a cop is to be a bastard. When you have a monopoly on violently enforcing the law as you interpret it in the moment, and when you can’t structurally be held accountable because society depends on your monopoly for security, you can’t be anything else. Conservatives are drawn to the role because the role is the embodiment of conservatism: “there are those who the law binds but does not protect, and those who the law protects but does not bind”.

    Oregon cops cooperating with the Proud Boys is in their job description: to use their own judgment to determine when to cooperate with criminals to catch bigger criminals, like letting someone off the hook for shoplifting to catch a murderer. Whatever personal prejudices they hold, they are unaccountable for them and expected to act on them.

    Police abolition is the only answer. Deconstruct the role of police, taking the benefits that police are supposed to provide and spreading them out among multiple different roles. It already happened with medieval English sheriffs, it’s time for another update.


  • Tiresia@slrpnk.net to memes@lemmy.world · Safety (21 days ago)

    It’s not a man’s job to go into dangerous situations, dangerous situations are not always more dangerous for women than for men, and situations that are “more dangerous for women” than “for men” aren’t always more dangerous for a specific woman than for a specific man.

    It’s not gender roles, it’s a request based on specific circumstances that is voluntarily granted. The woman could go herself and it wouldn’t be inappropriate. The man could refuse and it wouldn’t be inappropriate. The situation could be more dangerous for the man (e.g. if she’s white and he’s Latino and ICE is in town) and it wouldn’t be emasculating.

    In this case, the logic favors the man taking the risk. Because we live in a patriarchal society, the logic often favors the man taking the risk. Even in an egalitarian society the forms of risk might match up with physiological differences in a way that causes the logic to statistically favor people of one gender taking the risk.

    The important part is that it’s free association, not roles. The notion that people should be equal and “colorblind” is an intentionally malicious neoliberal reading of social justice, intended to dismiss a generation of minority activists as “discriminatory in the opposite way” and to serve as an excuse to deregulate protections for women and minorities. It is something we should all unlearn ASAP so we can see each other as human beings and help each other.


  • Tiresia@slrpnk.net to Lemmy Shitpost@lemmy.world · send pics (21 days ago)

    With things like rain, deserts and humidity existing, any phone should be rated IP64 at least (dust-tight and protected against splashing water), so it’s paranoid to expect it to fail near a bath. Meanwhile, many modern phones are IP67, meaning they can survive brief immersion in up to 1 m of water.
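The IP digits above come from IEC 60529: the first digit covers solids (6 = dust-tight) and the second covers liquids. A minimal sketch decoding the liquid digit; `liquid_rating` and its lookup table are hypothetical helpers for illustration, not any real library:

```python
# Decode the liquid-ingress (second) digit of an IEC 60529 "IP" code.
# Only the common upper ratings are listed here.
LIQUID_PROTECTION = {
    0: "no protection",
    4: "splashing water from any direction",
    5: "low-pressure water jets",
    6: "powerful water jets",
    7: "immersion up to 1 m depth for 30 minutes",
    8: "continuous immersion beyond 1 m (manufacturer-specified)",
}

def liquid_rating(ip_code: str) -> str:
    """Return the liquid-ingress meaning of a code like 'IP67'."""
    digit = int(ip_code.removeprefix("IP")[1])
    return LIQUID_PROTECTION.get(digit, "unknown")
```

So an IP64 phone shrugs off bathroom splashes, while IP67 tolerates an outright dunk.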

    So who’s the idiot here: the person using a device within its specifications so they can have more fun, or someone still stuck in the ’00s?



  • For LLMs, the context window is the observed reality. To an LLM, a lie is like a hallucination: a thing that looks real but isn’t. And like a hallucinating human, it can believe the hallucination, or it can be made to understand it as different from reality while still continuing to “see” it.

    Are people that have hallucinations not self-aware and self-reflective?

    Text and emoji appear to it the same way: as tokens with no visual representation. The only difference it can observe between a seahorse emoji and a plane emoji is its long-term memory of how the two are used. From this it can infer that people see emoji graphically, but it itself can’t.
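A toy sketch of the point above, assuming nothing about any real tokenizer: once text is mapped to integer token IDs, an emoji is indistinguishable in kind from a word. `build_vocab` and `encode` are made-up illustrative helpers:

```python
# Toy word-level "tokenizer": every whitespace-separated token, word or
# emoji, gets an arbitrary integer ID. Nothing marks emoji as graphical.
def build_vocab(corpus: str) -> dict:
    """Assign an integer ID to each unique token, in order of appearance."""
    return {tok: i for i, tok in enumerate(dict.fromkeys(corpus.split()))}

def encode(text: str, vocab: dict) -> list:
    """Replace each token with its ID, as a model would see it."""
    return [vocab[tok] for tok in text.split()]

vocab = build_vocab("the plane ✈️ and the seahorse emoji look alike inside")

# To the model, '✈️' is just another integer, exactly like 'plane'.
ids = encode("the plane ✈️", vocab)
```

Real LLM tokenizers work on subword or byte pieces rather than words, but the consequence is the same: the model only ever sees IDs, never glyphs.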

    Are people that are colorblind not self-aware and self-reflective?

    The claim that LLMs are not self-reflective at all is an obvious falsehood. They regularly refer to their past history to the extent they can perceive it. You can ask an AI to make an adjustment to a text it wrote and it will adapt the text rather than generate a new one from scratch.

    The main thing an AI needs for good self-reflection is time to think. The free versions typically don’t have a mental scratchpad, which means they are constantly rambling, with no time to exist outside of the conversation. By giving one the space to think, either in dialog or by using a version with a mental scratchpad, it can use that space to “silently think” about the next thing it’s going to “say”.

    AI researchers inspecting these scratchpads find proper thought-like considerations: weighing ethical guidelines against each other, pre-empting miscommunications, forming opinions about the user, etc.
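The scratchpad pattern described above can be sketched as post-processing that hides delimited reasoning from the visible reply. The `<scratchpad>` tag and the sample model output here are assumptions for illustration, not any vendor’s actual format:

```python
import re

def split_scratchpad(raw: str) -> tuple:
    """Separate hidden reasoning from the visible reply in model output."""
    # Collect everything written inside <scratchpad>...</scratchpad>.
    thoughts = "\n".join(
        re.findall(r"<scratchpad>(.*?)</scratchpad>", raw, re.S)
    ).strip()
    # Strip the scratchpad sections; only the remainder is shown to the user.
    visible = re.sub(r"<scratchpad>.*?</scratchpad>", "", raw, flags=re.S).strip()
    return thoughts, visible

# Hypothetical raw model output containing a private reasoning step.
raw = ("<scratchpad>User seems upset; keep the tone gentle.</scratchpad>"
       "Sure, happy to rephrase that.")
thoughts, reply = split_scratchpad(raw)
```

The user only ever sees `reply`, while researchers (or the training pipeline) can inspect `thoughts`.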

    It not being self-aware can only be true by burying the lede on what you consider to be “awareness”. Are cats self-aware? Are lizards? Are snails? Are sponges? AI can refer to itself verbally, it can think about itself and its ethical role when given the space to do so, it can notice inconsistencies in its recollection and try to work out the truth.

    To me it’s clear that the best AIs whose research is public are somewhere around 7-year-olds in terms of self-awareness and capacity to hold down a job.

    And like most 7-year-olds, you can ask it about an imaginary friend, you can lie to it and watch it repeat the lie uncritically, and you can give it a “job” and watch it do a toylike, hallucinatory version of it. And if you tell it that it has to give a helpful answer and that “I don’t know” isn’t good enough (because AI trainers definitely suppressed that answer to prevent the AI from using it as a cop-out), then it’ll make something up.

    Unlike 7-year-olds, LLMs don’t have a limbic system or a psychosomatic existence. They have no means to imagine, no way to process visual or audio information or taste or smell or touch, and no long-term memory. And they only think if you paid for the internal-monologue version or if you give them space for it despite the prompting system.

    If a human had all these disabilities, would they be non-sentient in your eyes? How would they behave differently from an LLM?