

Still weird to me that we don’t have evidence in the other direction! It’s been 20 years


Note these methods are enough to support/detect effects of other safety practices.
Science isn’t certain. We make mistakes. The rule ‘doubt all who hedge’ is how we get the Republican Party.


For those who don’t click: it’s a myth.


Do check the vlogbrothers summary of the AI water issue. TL;DR: it’s negligible compared to the real water hog (corn), and it’s being managed.


Np, search is getting terrible. Thanks for looking!


A source would be appreciated. Was this in the research paper?


Just noticing: there’s zero evidence in the article that anyone is actually doing this. I just don’t buy that it’s happening enough to matter. Interesting as interpretability research, at best.


That’s also precedent, and a template other institutions can use to break copyright. Still seems like good news to me.


Precedent is, in effect, new law, and it absolutely changes who gets taken to court and the costs of defending your case. So, depending on which arguments the court accepts, I won’t need a fancy lawyer. And it won’t require nearly the risk, creativity, or time that it requires of Meta’s legal reps today. Look at civil rights or environmental protection case law: the high-profile early cases were horrifically costly, and now compliance by companies is largely the default.
Horrible people and companies can set good precedent, often without intending to. For example, plenty of criminals set and clarified due process law. So we absolutely could all benefit from Meta’s bad intentions.
We benefit from institutions that will be training their own AI, hosting data publicly, and have the resources to mirror a precedent. Care to cite sources showing that the arguments being accepted will carve out Mark Zuckerberg by name as the one person who can ignore copyright? I haven’t read the filings, but this should be easy.


I read this as setting precedent that others couldn’t. Court cases like this are one way to make it possible for everyone to break an absurd law.


Worth remembering that any group can form a company. Companies are work, but not particularly class-locked.


Just noting that the mirror test is a bad way of studying theory of mind.
https://en.wikipedia.org/wiki/Mirror_test#Criticism
It’s interesting as a silly and absurd way humans used to demean other species. But I think it says a lot more about those who use it than about the animals.


Do tell: do you check if they have before you boo?
I know I haven’t. And the few times I’ve checked, it has always been more of an edge case than expected.


If you’ve ever been booed, I’m not sure this take would feel great.
I support making Vance feel unwelcome (and he was indeed the target, not Team USA). I’m less enthused about the athletes being targeted. The optics of ‘we shame everyone associated with things we dislike’ is exactly the aspect of progressives that conservative voters claim to fear/dislike.


It does seem like the headline + mechanics are entirely uninteresting and unsurprising. I guess the ‘newsworthy’ thing here is that Substack platforms the neo-Nazis?
It also platforms a bunch of ex-Guardian journalists, who will say plenty about the harm being done by corporate buyouts and influence in traditional media. So I have a hard time taking this article, from this venue, very seriously.
For example: Fox News, every podcast service, the opinion pages (and some news sections) of most major newspapers, and (I assume) more have all been profiting off of amplifying fringe right-wing figures. Is Substack substantially worse? Are they doing anything policy-wise that we should advocate for? Are there regulators who aren’t doing something they should?


… Don’t they take a cut of most subs?


Claiming some old public figure was a newly discovered pedo, and including a quote of them saying terrible things.
Except the quote was 5 years old, not from the Epstein files, the figure had apologized and been publicly forgiven by the victim, and the files revealed nothing new.


The ratio is a vibe, and I kinda regret posting a precise one. The case I checked carefully is this one: everything this guy posts
which I noted in November, and blocked very shortly thereafter. I vaguely recall finding a handful more examples of ‘too good to be true’ headlines, which were in fact not true, but I did not save links.
What made me sad is that even setting aside the algorithm and the system’s bad incentives, a headline being ‘directionally correct’ still seems to be what matters most. I’d be very interested in a social media site where correct is ranked above feels-good.
(And then there are the regular examples like this, which are not slop but are heavily disputed/recontextualized by the top comment. The correction is highly upvoted, yet the OP itself is still doing well.)


I gave Lemmy a dedicated year. A few notes:
Very few people click through.
Lots of rage bait.
Communities split over instances make it pretty hard to know where to post things, what with defederation and such.
I didn’t miss much “news”; Lemmy was functional for reporting what people were talking about.
No notification of moderation actions taken against you is a choice.
Those who post small websites that do cool things: thank you! I did discover several other cool places and tools.
I found that about 1 in 10 of the top Lemmy posts (after filtering out jokes and sports) are links to AI slop that nobody bothered to check; commenters just take the headline as real if it affirms their views. Pointing this out in the comments did not reduce engagement or sink the posts.
Cutting it out of my routine, at least for a while.
One thing I really hoped for from the social Internet was access to people and data that could correct me and fill in gaps. But Lemmy doesn’t do this, as people see what is upvoted, and upvotes are used to affirm the reader.


We have five review studies. The material they review is mid, but all of them find essentially no benefit from stretching.
I hedge in the title because I’d love someone to pull up with a controlled modern trial. Alas, no such luck.