• 8 Posts
  • 89 Comments
Joined 1 year ago
Cake day: January 30th, 2025


  • What exactly is this for? I understand LLMs have their limits in understanding physical reality, but at least they have a use case of theoretically automating the “symbolic work”, i.e. moving symbols around on a screen or piece of paper, that white collar workers do.

    Yes, it’ll never be able to cook a meal or change a lightbulb, but neither will this without a significant advance in robotics to embody the AI. What’s the use case? Being able to tell you how to throw a ball better than a person can?






  • Here’s the source; it’s from OpenAI, but it is peer reviewed. Here’s another source that uses it as a baseline to compare relative scores, and according to the tables, in 2023 it got a 610, putting it around the 75th percentile. That’s just for math, which the OpenAI study showed it did about 5% worse on than its average, so roughly the 80th percentile for a total score. Again, this is among students, who are usually more prepared for the SAT than the general population, so it’s still probably in the 90th percentile for the general population.

    Again, the car wash example is not declarative knowledge; like the pizza glue, that is knowledge derived from experience and reason, which I’ve said LLMs aren’t the best at. The fact that they had to construct a riddle to trip the AI up, if anything, shows how good it is. If it were as bad as you say, anyone could easily trip it up and get a wrong answer, and a study like that wouldn’t be relevant. Seriously, if you think the LLM is so inaccurate, come up with your own test to stump it; it should be easy, by the way you talk about them.


  • I think you are underestimating how accurate LLMs are because you probably don’t use them much and only see their mistakes posted as memes. No one is going to post the 99 times an LLM gives the correct answer, but the one time it says to put glue on pizza, it’s going to go viral. So if your only view of LLM output is from posts, you’re going to think it’s way worse than it is.

    Even if you mark it down for incorrect answers, it’s still going to beat most people. An LLM can score in the 90th percentile on the SAT and around the 80th percentile on the LSAT. If you take into account that people taking those tests are more prepared for them than the general population, it’s probably in the 99th percentile. It doesn’t matter if you score wrong answers negatively when it gets 95% of the answers correct and the average person gets 50% correct.

    People guess things too, and will also state things confidently that they don’t completely know. If a person has a little bit of knowledge on a subject, they are likely to give confidently wrong answers due to the Dunning-Kruger effect. If you pick a random person, you’re probably just as likely to get one of these people as you are to catch the LLM being wrong. So which is more useful to ask: something that has a 95% chance of being correct and a 5% chance of being confidently wrong, or a person who has a 50% chance of being correct (including those who guessed correctly), a 5% chance of being confidently wrong, and a 45% chance of saying “I don’t know”?
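    Those percentages are my illustrative numbers, not measured data, but the trade-off can be sketched as a quick expected-value calculation:

```python
# Toy expected-value comparison using the illustrative percentages above.
# Scoring: +1 for a correct answer, -wrong_cost for a confidently wrong
# answer, 0 for an honest "I don't know".
llm = {"correct": 0.95, "confidently_wrong": 0.05, "dont_know": 0.00}
person = {"correct": 0.50, "confidently_wrong": 0.05, "dont_know": 0.45}

def expected_score(source, wrong_cost=1.0):
    return source["correct"] - wrong_cost * source["confidently_wrong"]

print(expected_score(llm))     # 0.9
print(expected_score(person))  # 0.45
```

    Even if a confidently wrong answer costs twice as much as a correct one helps (wrong_cost=2.0), the LLM still comes out ahead on these numbers.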

    If you’re doubting my percentages on the accuracy of LLMs, I’d encourage you to test them yourself. See if you can stump one on declarative knowledge; it’s harder than the posts make it seem.


  • They are good at making declarative statements.
    That’s not the same thing.

    What’s the difference between making correct declarative statements and having declarative knowledge? If I am able to accurately state every president of the US, wouldn’t you say I have knowledge of the list of US presidents? The only way you can judge my declarative knowledge of something is by my ability to make accurate declarative statements; that’s what a test is. If making accurate declarative statements is not the measure of declarative knowledge, then what is?

    An LLM will give accurate declarative statements on more questions than any human can; would that not mean an LLM has more declarative knowledge than any human? So is it not more trustworthy for declarative statements than any random human? Would you not trust an LLM’s answer on who the 4th president was over a random human’s?


  • I never said I don’t believe in truth; I said there are different definitions of truth and different kinds of truth. The study of this is called epistemology, and I’d encourage you to look into it to better understand truth. I believe in truth derived from experience and from reasoning from first principles: 2+2=4 is true, “I had coffee this morning” is true. For things outside of my direct experience, or that can’t be reasoned out, I accept that truth can be derived from trustworthy external sources. Therefore “Washington was the first president” is true because I’ve heard it many times from multiple trustworthy sources.

    The question is whether you believe truth can be derived from external sources, or whether you are a Cartesian skeptic. It doesn’t seem like you are, because that sort of worldview is very limiting. The question remains: how do you know that Washington was the first president? Or even better, how do you know that an LLM said to put glue on pizza? You never experienced it giving that answer; you got the idea from another source. Maybe you saw a picture, which could easily have been edited. The truth of that idea can only be derived from the trustworthiness of the source.

    LLMs can’t know everything. Again, they have good declarative knowledge, but they completely lack experiential knowledge and struggle with reasoning. Knowing not to put glue on pizza is knowledge gained from experience (glue tastes bad and is usually inedible) and reasoning (therefore adding glue to pizza will make it taste bad and be inedible).

    Every day you also probably see a new post of humans being blatantly wrong; does that mean humans can’t know things? No, it just means humans have limited areas of knowledge. Same with an LLM: it can know that Washington was the first president while not knowing not to put glue on pizza, so you have to be careful what you ask it, just as when you ask a human something outside their area of expertise.


  • How do you know that George Washington was the first president? You weren’t around in 1789; you have no experiential knowledge of it, only declarative knowledge. You read it in a book or heard it from a person enough times to repeat the fact when asked. You are guessing what your history teacher would have said in elementary school. Declarative knowledge is just memory and repetition, and an LLM can do memory and repetition.

    Whether an LLM can determine truth depends on your definition of truth. If truth can only be obtained from experience and reasoning from first principles, then an LLM can’t determine truth. But then a statement like “George Washington was the first president” can’t be true either, because you can’t derive it from experience or first principles; you weren’t there, and no one alive was. “George Washington was the first president” derives its validity and truth from the consensus of trustworthy people who say it’s true. An LLM can derive this sort of truth by determining the consensus of its training data, assuming its training data comes from trustworthy sources, or that the more trustworthy sources are more heavily reinforced.



  • An LLM has a great deal of declarative knowledge. E.g. it knows that the first president of the US was George Washington. Like humans, it has built up this knowledge through reinforcement: the more a fact is reinforced by external sources, the better you / it knows it. Like humans, when it reaches the edge of its knowledge base it will guess. If I ask someone who the 4th president of the US was, they may guess Monroe; that person isn’t lying, it’s just an area that hasn’t been reinforced (studied) as much, so they are making their best guess. LLMs do the same. That doesn’t mean the person cannot and will never know the 4th president; it just means they need more reinforcement / training / studying.

    Humans as well as LLMs have a declarative knowledge area with a lot of grey area between knowing and not knowing. It’s like a spectrum: on one end is material that has been reinforced many times by people with high authority (“What is your name?” would probably be the furthest out on that side), and on the other end is material you’ve never heard, or have heard only from untrustworthy sources. LLMs may not have the extra dimension of trustworthiness that people do, but the humans training them usually compensate with more repetition from trustworthy sources, e.g. they’ll put 10 copies of the New York Times and only one of younewsnow.com or whatever in the training data.


  • And a human’s task, like any other lifeform’s, is to survive and reproduce. In pursuit of that goal we have learned many different complex strategies and methods, same with an LLM.

    People’s tasks are also not to provide accurate information, write code, give legal advice, etc. If a person can earn a living, attract a mate, and raise children by lying, writing bad code, or giving shitty legal advice, they will. It takes external discipline to make sure agents don’t adopt those behaviors. For humans that discipline is provided by education, socialization, legal systems, etc. For LLMs that discipline is provided by fine tuning, i.e. the lying models get downrated while the more truthful models get boosted.




  • SS is not universal, though; it affects a specific group. People on SS tend to be unemployed or underemployed. Yes, if you compare home prices of a community with a high percentage of SS recipients to the whole country, home prices will be lower. But that’s an unfair comparison; you have to compare them to a community with a similar pre-SS income level. Say the SS community’s average income without SS is $20,000 from pensions, 401(k)s, etc. If you compare that community to one with little to no SS and an average income of $20,000, the SS community will have higher home values.

    Also, people on SS tend to be the only ones in the US not trying to upgrade their housing; if anything, they may be trying to downsize. So they aren’t using their money to outbid someone else to move up the housing hierarchy. They also tend not to be renters, so they don’t have to outbid anyone to keep their current housing.



  • The same can be said of your positive claims for UBI. There’s no evidence for anything at the macro scale for UBI, since it’s never been done on a societal scale. The best we can do is theorize based on economic principles, which is what I was doing.

    If you think my theory or reasoning is wrong show it.


  • Not_mikey@lemmy.dbzer0.com to Memes of Production@quokk.au · ubisoft
    edited 8 days ago

    landlords primarily trade on their tenants’ inability to borrow money to buy a house.

    Yes, and UBI would further increase home prices. If everyone can now afford an extra $1,000 a month for a mortgage, then they’ll be able to (and, due to competition, forced to) take out a bigger mortgage and bid up home prices.

    Same thing for rent: if everyone has an extra $1,000 a month, they’ll just bid up rents until you’re back to square one. Say there are three people in a rental market: me, paying $1,500 in rent, another guy paying $1,000, and an unhoused person. After UBI the other guy may try to rent my apartment, so I now have to offer higher rent to outbid him, and the other guy, unable to get a better apartment, has to outbid the unhoused person to keep his place. Eventually this reaches an equilibrium where I’m paying $2,500 to outbid the other guy, and he’s paying $2,000 to outbid the unhoused person. The housing hierarchy remains the same, and the landlord gets all the extra money.
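    The bidding cascade can be sketched as a toy calculation (using the numbers above; it assumes housing supply is fixed and every budget rises by the same UBI amount):

```python
# Toy model: with fixed housing supply, each tenant must outbid the next
# person down, whose budget also rose by the UBI, so equilibrium rents
# shift up by the full transfer and the hierarchy is unchanged.
UBI = 1000
rents_before = {"me": 1500, "other_guy": 1000, "unhoused": 0}

rents_after = {who: rent + UBI for who, rent in rents_before.items()}
print(rents_after)  # {'me': 2500, 'other_guy': 2000, 'unhoused': 1000}

# The ranking of who gets which apartment is the same, and the landlords
# capture the whole transfer:
landlord_gain = sum(rents_after.values()) - sum(rents_before.values())
print(landlord_gain)  # 3000, i.e. 3 people x $1,000
```

    Note the unhoused person can now bid $1,000 but is still at the bottom of the ranking, so they remain priced out.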

    The problem isn’t lack of money; the problem is a lack of supply and a hierarchical wage system that determines who gets that supply. UBI addresses neither of those problems.

    Increasing aggregate demand without increasing aggregate supply just leads to inflation. UBI has no mechanism to increase aggregate supply, and it discourages the government from providing one, because the government is spending all its money on UBI instead of building social housing, providing food, etc., and it can point to UBI and say, “That’s all you need now; we aren’t going to supply any services.”

    This is why we need universal basic services backed by a jobs guarantee instead. That still gives a mechanism to raise the floor on wages and benefits, since private enterprise now has to compete with the government for labor, but without causing inflation, because the government is actually using the labor for productive purposes, e.g. building social housing, thus increasing aggregate supply.