I’ll drift a bit from the central thread with a comment on this article, in which a robot beats a human 100% of the time at rock-paper-scissors simply by cheating: it watches the human hand and reacts quickly enough to what it now knows the human will play. An interesting technical achievement (the rapid processing of the visual signal and prediction of what the human hand will do), but a rather boring game.
Nonetheless, for a long time I’ve felt that having computers “cheat” is a good way to make them more interesting. Poker, especially a really simple game like Texas Hold’em, is very human. The odds and combinations of hands are simple enough that the game comes down almost entirely to effective bluffing (or detection of bluffing). I’m sure there is some computer program that mechanically plays Texas Hold’em and is boring as toejam. Most gambling that depends on human psychology to be interesting is done badly by computers.
So I always thought cheating could make it more interesting. Obviously the computer “knows” both hands and therefore who will win or lose based solely on the cards. The software could use this foreknowledge to make bets that explore the risk preferences and bluffing behavior of its human opponent, and once it thought it understood the person, beat them not just with better hands but with better bluffs. If nothing else, a program like this would be better for training people to play against human opponents.
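To make the probing idea concrete, here is a minimal sketch (not a real poker engine; the function name, bet sizes, and exploration rate are all hypothetical) of how a bot that secretly knows the showdown outcome could deliberately misprice some bets just to see how the human reacts:

```python
import random

def cheating_bet(bot_wins, pot, explore_rate=0.3, rng=random):
    """Decide a bet size given cheating knowledge of the outcome.

    bot_wins: True if the bot's hand beats the human's (the foreknowledge).
    pot: current pot size.
    explore_rate: hypothetical fraction of hands spent probing the human.
    """
    if rng.random() < explore_rate:
        # Probe: overbet a losing hand (a pure bluff) or underbet a
        # winning hand (feigned weakness), then record the response.
        return pot * (2.0 if not bot_wins else 0.25)
    # Otherwise play it straight, exploiting the known outcome.
    return pot * (1.0 if bot_wins else 0.0)
```

The interesting data is not the bet itself but how the human responds to the probes, which is what the learning component would have to consume.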
Since I know little about poker, and especially about how to bluff, I tried to enlist a friend of mine who is good at it: I’d do all the programming, he’d do all the design. We only did a bit of development before getting bored, because neither of us had much idea how to predict future moves by “learning.” That is, we could easily set up the program to bluff (make too big a bet given the eventual outcome, or deliberately back off from betting even though the odds favored it). Poker players look for patterns, and that’s what you can exploit. Take an absurd case: one player is superstitious and always folds when they have a pair of sevens. The bluffing program would learn this and bet a little higher whenever the vulnerable player holds at least one seven (there’s no point betting higher on two sevens, since that player will always fold). But how do you even encode this unrealistic case in a learning program? We didn’t solve it and gave up.
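One simple way the sevens case could be encoded (this is my sketch, not what we built; the class and feature names are invented) is a frequency tracker: since the cheating bot sees the opponent’s cards, it can condition the opponent’s fold rate on features of that hand and scale its bets accordingly:

```python
from collections import defaultdict

class FoldPatternTracker:
    """Toy opponent model: how often does the player fold, given a
    simple feature of their (secretly known) hand?"""

    def __init__(self):
        self.folds = defaultdict(int)
        self.seen = defaultdict(int)

    def observe(self, feature, folded):
        # feature: e.g. "pair_of_sevens", "one_seven", "no_seven"
        self.seen[feature] += 1
        if folded:
            self.folds[feature] += 1

    def fold_rate(self, feature):
        if self.seen[feature] == 0:
            return 0.5  # no data yet: assume a coin flip
        return self.folds[feature] / self.seen[feature]

    def bet_multiplier(self, feature):
        # Bet more aggressively against hand features the player
        # reliably folds to.
        return 1.0 + self.fold_rate(feature)

tracker = FoldPatternTracker()
for _ in range(10):
    tracker.observe("pair_of_sevens", folded=True)   # always folds
for _ in range(10):
    tracker.observe("no_seven", folded=False)        # never folds

print(tracker.fold_rate("pair_of_sevens"))  # 1.0
print(tracker.bet_multiplier("no_seven"))   # 1.0
```

The hard part we never solved is hidden in the choice of features: a real learner would have to discover which hand features matter to a given player, not be handed “sevens” in advance.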
But I think there is still an advantage to cheating: not so the robot can win, but to equalize the robot’s play to something more human. Relative to the main thread, in my 3a post I suggested that it might not matter if teens who do excessive social networking never learn to recognize facial expressions, since what difference do expressions make when you interact with others digitally anyway? But what about the bots watching us? I also commented that Samsung’s new smartphone watches you, for the purpose of dimming the screen when you’re not looking. That’s just a first step. How about enhancing the software to actually recognize facial expressions, especially the very subtle and unconscious ones? What an advantage for a robot playing poker.
So one way or another we’re going to make robots appear more human: not mere number crunchers following conservative patterns of the odds, but actual gamblers, programs that can guess and mislead. It will make games with them more fun, but what are the side effects of a bot that maybe knows our emotional state better than we do?