In case it helps anyone: you can instruct GPT to be less agreeable in the "Customize" settings so that you don't have to state this explicitly every time you want objective feedback. For example, my preset instruction is "Be objective with your answers and not unnecessarily agreeable."
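If you work through the API rather than the ChatGPT UI, a persistent system message plays roughly the same role as the Customize preset. A minimal sketch, assuming the OpenAI Python SDK and a stand-in model name:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[
        # The system message mirrors the Customize preset above.
        {"role": "system",
         "content": "Be objective with your answers and not unnecessarily agreeable."},
        {"role": "user",
         "content": "Here is my plan. What are its weakest points?"},
    ],
)
print(response.choices[0].message.content)
```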
Amen. We cannot become meat puppets for silicon strings! I wrote about the danger of humans becoming inverse mechanical Turks here: https://www.whitenoise.email/p/the-inverse-mechanical-turk-meat
Using AI the way you’ve outlined is my favorite thing to do with it: not asking it for an answer, but asking it to argue different perspectives. I like the idea of asking it to rate its conviction and also what the board is likely to say.
This is so shortsighted. Disappointed.
Like I said, it's evolving quickly - my thinking on this has changed in just the last year. But I believe this is where we are now - not where we may be if AI's capabilities get 10x better.