As far as I understand it, they seem to think that AI models trained on data from affluent Westerners with unknown biases can simply be told to “act like [demographic] and answer these questions.”

It sounds completely bonkers not only from a moral perspective, but scientifically and statistically this is basically just making up data and hoping everyone is too impressed by how complicated the data faking is to care.

  • Gaywallet (they/it)@beehaw.org · 1 year ago

    I think it’s rather telling that this person has an idea and yet has not found a single scientific journal willing to publish the study. No one is taking this seriously, except for the professor and some people online who don’t understand how AI works or why this isn’t a great idea.

    With that being said, this could be useful for refining a hypothesis or getting a rough sense of how people online might respond to questions you’re interested in studying.