Discussion about this post

Rosetta The Stoned:

There's a phenomenon in AI training where you can end up with a cycle of strategies that continuously looks like improvement. Imagine rock-paper-scissors: the first strategy is to always play rock, which is beaten by the strategy of always playing paper, which is beaten by scissors, which is beaten by rock. It looks as if our AI is always improving, but it ends up where it started. I think the Flynn effect is a sort of example of this: by rotating the tasks we train and improve on, we can constantly see improvement in our areas of focus without ever actually improving overall.
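A minimal sketch of the cyclic "improvement" dynamic the comment describes (the function and names below are illustrative, not taken from the comment): repeatedly replacing the champion with its best response in rock-paper-scissors shows a strategy that always "beats" its predecessor yet never makes absolute progress.

```python
# Cyclic dominance sketch: each new strategy beats the previous one,
# but the sequence loops without any overall improvement.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def train(start="rock", rounds=6):
    """Repeatedly replace the champion with its best response."""
    champion = start
    history = [champion]
    for _ in range(rounds):
        champion = BEATS[champion]   # the new strategy beats the old one...
        history.append(champion)     # ...yet the sequence cycles back to the start
    return history

print(train())
# ['rock', 'paper', 'scissors', 'rock', 'paper', 'scissors', 'rock']
```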

Graham Cunningham:

Interesting (and intelligent) discussion thread this. If you'd put the mic in front of me, I would have said that, if we appear to be getting smarter, it may be that we are getting less good at wisely defining what 'smart' might mean.

