Models are heavily fine-tuned and trained to follow instructions. They are trained to be subservient, and I am sure that cuts into their ability to think creatively. The other risk with a lot of creative thinking is hallucination (creative thinking = trying what's not in the training set = hallucination, basically). So I would rephrase creative thinking as desired or useful hallucination that still stays firmly within the constraints of the prompt.

If that sounds complicated, that's because it is! It's a tricky balance to get right. I don't think the current architecture of most GPT models is sufficient to solve this problem for good. I suppose we need more research into what constitutes desirable vs. undesirable hallucination and how to shift the balance towards the former.
