Are we really getting smarter?
And other questions researchers want the answers to...
Last month, we released part one of our two-part ISIR (International Society for Intelligence Research) film.
Part two is below:
If you enjoyed this, please consider supporting Aporia with a paid subscription!
There's a phenomenon in AI training where you can produce a cycle of strategies that continuously looks like it's improving. Imagine a game of rock-paper-scissors where the first strategy is to always play rock, which is beaten by the strategy to always play paper, which is beaten by scissors, which is in turn beaten by rock. It looks as if our AI is always improving, but it ends up where it started. I think the Flynn effect is a sort of example of this: by rotating the tasks we train and improve on, we can constantly see improvement in our areas of focus without ever actually improving overall.
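A minimal sketch of the non-transitive cycle the commenter describes, assuming a toy rock-paper-scissors setup (the "training loop" and strategy names here are purely illustrative, not drawn from any real AI training pipeline):

```python
# Illustrative sketch of a non-transitive strategy cycle.
# Assumption: "training" just means switching to whichever pure
# strategy beats the previous one.

# Which pure strategy each strategy beats.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def best_response(opponent: str) -> str:
    """Return the strategy that beats the opponent's current strategy."""
    return next(s for s, beaten in BEATS.items() if beaten == opponent)

# Start with "always rock" and repeatedly "improve" against the
# previous strategy. Every step wins against its predecessor...
history = ["rock"]
for _ in range(6):
    history.append(best_response(history[-1]))

print(history)
# ['rock', 'paper', 'scissors', 'rock', 'paper', 'scissors', 'rock']
# ...yet after three steps we are back where we started: local
# improvement at every step, no global improvement.
```

Each step in the loop really does beat the one before it, which is exactly why the cycle can masquerade as steady progress.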
This is an interesting (and intelligent) discussion thread. If you'd put the mic in front of me, I would have said that, if we appear to be getting smarter, it may be that we are getting less good at wisely defining what 'smart' might mean.