First off, it’s good to study all kinds of things, isn’t it? Even if it’s not strictly practical.

Second, and more importantly, these AI tools are EVERYWHERE right now. The effects of people using them for work can be seen across many industries and workplaces.

So I think studying how these models perform in the vast majority of use cases is not only a good idea but actually really important.

Even if you’re strictly pro-AI and believe it is the future, a study like this can help you explain to laypeople why they need the harnesses you’re so supportive of.

reply
> This will change too man. Maybe I am in a bubble but with how fast things are changing, it won’t be too long before the bubble becomes reality.

You can’t get mad at an experiment for not happening in the future.

> Either way we should be doing experiments on the actual capabilities of AI

They simulated common end user behavior

>because it helps validate your own negative bias against AI.

We’ve gone from “this study is flawed because language models don’t do that” to “this study is flawed because while language models do do that, I don’t think that they will in the future” to “data that could support a bias other than my own is bad”

reply
> You can’t get mad at an experiment for not happening in the future.

I’m getting mad at this sentence not making any sense, more than anything. I’m disappointed in this experiment for not testing the actual capabilities of an LLM. Comprende?

> They simulated common end user behavior

Not the way you use it. And not the way it will be used.

You love it because you want it to stay this way so you can forever believe AI will never be better than you.

Bro, the reality is unfolding as we speak. It’s like humanity just discovered guns but hasn’t discovered bullets, and you’re saying guns are useless because most of humanity hasn’t figured out bullets yet.

> We’ve gone from “this study is flawed because language models don’t do that” to “this study is flawed because while language models do do that, I don’t think that they will in the future” to “data that could support a bias other than my own is bad”

This is a flat-out lie. Models DO do that. The only fucking argument you have is that non-technical, average laypeople edit documents the wrong way while everyone who uses agentic AI as an adept uses it the correct way. Like, are you fucking kidding me?

The only difference I acknowledge is that your grandma copies and pastes essays into ChatGPT while YOU don’t. Go pretend you live in that reality where the bullets will never appear.

reply
>You love it because you want it to stay this way so you can forever believe AI will never be better than you.

>Bro the reality is unfolding as you speak

>You go pretend you live in that reality where the bullets will never appear.

It’s too late, bro: Roko’s basilisk was real and it’s already punishing you.

reply