Friday, April 29, 2011

Solution of the Poincaré Conjecture

I'm not sure if you heard about this, but the hundred-year-old Poincaré conjecture was solved in 2002 by Grigori Perelman. In 2006 he was awarded the Fields Medal, and in 2010 the Clay Mathematics Institute awarded him the $1 million Millennium Prize for solving the problem. Remarkably, he turned down both awards, and the money.[1]

I guess he just doesn't like "the organized mathematical community." He considers their decisions unfair, and he thought he should have shared the prize with Richard Hamilton, who suggested the research program that ultimately led to the proof of the conjecture. [2][3]

Sounds like a nice guy.

Wikipedia
[1] Solution of the Poincaré conjecture
[2] Grigori Perelman
[3] Richard Hamilton

Video
Reclusive Eccentric Russian Genius

Wednesday, July 22, 2009

Blackjack

I'm working on a program to find the optimal strategy in blackjack. Supposedly this is simply the basic strategy, so I can measure my success by how close my result comes to the basic strategy. If I reproduce the basic strategy exactly, then I can safely assume my methods will work when I calculate the strategies for a true count of +1, -1, etc.

The true count is the count divided by the number of decks left. In single deck, with half a deck left, that means multiply the count by two. In six deck, two decks in (usually after about six hands or so at a full table), that means divide by four.
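The true-count arithmetic can be sketched in a few lines (a hypothetical helper of my own, not from the post):

```python
def true_count(running_count, cards_seen, num_decks):
    """True count: running count divided by the number of decks left."""
    decks_left = num_decks - cards_seen / 52
    return running_count / decks_left
```

With half of a single deck left, `true_count(3, 26, 1)` doubles the count to 6.0; six decks with two decks dealt, `true_count(4, 104, 6)`, divides by four to give 1.0.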

Anyways, the math I need to do is calculate the expected value for every possible action of every possible situation (this is why I'm using a computer). I already have an algorithm for calculating the dealer's odds of ending up with a 17-21 or bust for a given count. Now I need some kind of dynamic solution to solve for the player's expectation based on these odds.
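I don't know the exact algorithm the post refers to, but here is a minimal sketch of the dealer-odds calculation, assuming a fixed (infinite-deck) card distribution and a dealer who stands on all 17s; a real counting version would condition the probabilities on the cards already seen:

```python
from functools import lru_cache

# Fixed card distribution: ace through 9 each 1/13, ten-valued cards 4/13.
CARD_PROBS = {1: 1/13, 2: 1/13, 3: 1/13, 4: 1/13, 5: 1/13,
              6: 1/13, 7: 1/13, 8: 1/13, 9: 1/13, 10: 4/13}

@lru_cache(maxsize=None)
def dealer_outcomes(hard, has_ace):
    """Distribution over dealer final results (17-21 or 'bust') from a hand
    whose total counting every ace as 1 is `hard`; `has_ace` marks whether
    an ace is present, so 10 may be added when it doesn't bust."""
    best = hard + 10 if has_ace and hard + 10 <= 21 else hard
    if best > 21:
        return {'bust': 1.0}
    if best >= 17:          # dealer stands on all 17s (an assumption here)
        return {best: 1.0}
    result = {}
    for card, p in CARD_PROBS.items():
        sub = dealer_outcomes(hard + card, has_ace or card == 1)
        for outcome, q in sub.items():
            result[outcome] = result.get(outcome, 0.0) + p * q
    return result
```

For an upcard of ten, `dealer_outcomes(10, False)` gives the probabilities of 17 through 21 and bust; an ace up is `dealer_outcomes(1, True)`.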

If a dealer's odds of ending with 17, 18, 19, 20, 21, or a bust are .11, .12, .13, .14, .15, .35, respectively (based on some face-up card), and the count is 0 (each card has an equal probability), then how do we play a 13? (These numbers are totally fictitious, so this does not necessarily apply to real blackjack.) The answer actually depends on the strategies already defined for playing a 21, 20, 19, 18, 17, 16, 15, and 14, since in order to calculate our EV of hitting we need to know the optimal strategy for each of those hands, for when we draw an ace, 2, 3, 4, 5, 6, 7, or 8.
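With a fictitious dealer distribution like the one above, the expected value of standing is the easy part; hitting is what needs the already-solved strategies for the hands you can reach. A sketch, for a unit bet:

```python
# Fictitious dealer distribution from the text (not real blackjack numbers).
dealer = {17: .11, 18: .12, 19: .13, 20: .14, 21: .15, 'bust': .35}

def ev_stand(player_total, dealer_dist):
    """EV of standing: win 1 against a dealer bust or lower total,
    lose 1 against a higher total, push on a tie."""
    ev = dealer_dist['bust']            # dealer busts, we win
    for total, p in dealer_dist.items():
        if total == 'bust':
            continue
        if player_total > total:
            ev += p
        elif player_total < total:
            ev -= p
    return ev
```

Standing on 13 loses to every made dealer hand, so here `ev_stand(13, dealer)` is 0.35 - 0.65 = -0.30.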

So, in order to better understand how to write the dynamic programming version, I guess I should just solve a simple case, as illustrated above. Solve for 21. Put that strategy into the array. Solve for 20. Into the array. But now, do I solve for soft 20, or hard 19? Obviously these strategies are trivial, but let's say soft 17 vs. hard 16. Okay, hard 16 never becomes a soft 17, so solve for hard 20 down to 10, then soft 20, then hard 9, then soft 19. Since a hard 9 never becomes a soft 19, but a hard 8 sometimes does, this is a possible order.

Another order is to solve hard 21 down to 12, then soft 20. Since a hard 11 never becomes a soft 20, and a soft 20 never becomes an 11, this is fine. A soft 20 does sometimes become a hard 12, so we must solve hard 12 before soft 20. But only hard 9 becomes a soft 20, so we have the option to solve for soft 20 anywhere between solving hard 12 and hard 9. So, just based on points, hard or soft, we can put the hands into an ordered array along these lines.
This is of course not including pairs. Since the only way to have a 4 is with two 2's, I need to add them to this array somewhere, but where? Last, I guess. Just tack them onto the end. A soft 15 will never become a pair of 4s, so the pair strategies can be solved last, using the previous solutions in the array.
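One ordering consistent with the dependencies described above (my own reconstruction, not the post's actual array) is: hard totals from 21 down to 12, then soft totals from 21 down to 12, then hard 11 down to 4, so that hitting any hand only produces hands that appear earlier in the list:

```python
def solve_order():
    """One valid solve order for the dynamic program (an assumption,
    not the author's exact array): any card drawn from a hand in this
    list can only lead to a hand earlier in the list."""
    order = [('hard', t) for t in range(21, 11, -1)]   # hard 21..12
    order += [('soft', t) for t in range(21, 11, -1)]  # soft 21..12 (soft S + card
                                                       # gives a higher soft or a hard >= 12)
    order += [('hard', t) for t in range(11, 3, -1)]   # hard 11..4 (hard H + ace
                                                       # gives soft H+11, solved above)
    return order
```

In particular, soft 20 lands after hard 12 and before hard 9, matching the constraint worked out in the text.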

Yay! I've created my dynamic blackjack algorithm. Algorithmics is a very mathematical part of computer science. Since I was working on this, I just thought you might enjoy it.

Friday, July 17, 2009

Optimal Strategy in Blackjack

I'm writing an algorithm to calculate optimal blackjack strategies based on the running count. I'm also learning to count cards. I plan on determining whether it is possible to have an advantage purely by playing the optimal strategy based on the count. And if not, I want to know how much I need to spread my bet in order to gain an edge.

That's all. Details in the other post. (Link above.) Enjoy.

Monday, June 15, 2009

Aiming Problem

So, yesterday my brother and I worked on an aiming problem for an AI. The problem was this:

A shooter wants to know where to aim based on his target's position, velocity, and projectile speed. I came up with the formula once I drew it as shown. I was thinking maybe I should try it in polar form, but I didn't have to. The real solution comes from thinking in terms of the distance from the origin (in 2d). Set the distance of the projectile from the origin equal to the distance to the target from the origin and you get a nice quadratic equation that can be solved. As shown there may be two points to aim at (if the projectile and target have close speeds).


As it says, the cone is the possible location of a shot fired at t=0.

To simplify we call the shooter's position the origin. Also assume the projectile is faster than the target, so we are guaranteed a solution.

Let the enemy position be x = (x1, x2). Let the enemy's velocity be v=(v1,v2). Let s be the bullet speed.

(We use |a| to mean the length of the vector a. |a| = sqrt(a1^2 + a2^2) ).

Target distance from origin is

|x + vt|

Projectile distance from origin is

ts

So when these are equal we have

ts = |x + vt|

which leads to

(ts)^2 = (x1 + v1t)^2 + (x2 + v2t)^2

0 = (v1^2 + v2^2 - s^2)t^2 + 2(x1v1 + x2v2)t + x1^2 + x2^2

a = v1^2 + v2^2 - s^2

b = 2(x1v1 + x2v2)

c = x1^2 + x2^2


t1 = -(b - sqrt(b^2 - 4ac))/(2a)

t2 = -(b + sqrt(b^2 - 4ac))/(2a)

Use the smaller of t1 and t2 that is greater than 0, and call it t*.

Now we just use this time with our target's velocity to calculate the position to aim at. The location to aim at is y = x + vt*.
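The whole derivation fits in a short function (names are mine; the degenerate case where projectile and target speeds are equal, which collapses the quadratic to a linear equation, is handled separately):

```python
import math

def intercept_time(x, v, s):
    """Smallest positive time at which a projectile of speed s fired from
    the origin can meet a target at position x moving with velocity v.
    Returns None when no positive-time solution exists."""
    a = v[0]**2 + v[1]**2 - s**2
    b = 2 * (x[0]*v[0] + x[1]*v[1])
    c = x[0]**2 + x[1]**2
    if abs(a) < 1e-12:                       # equal speeds: b*t + c = 0
        return -c / b if b < 0 else None
    disc = b*b - 4*a*c
    if disc < 0:
        return None
    roots = [(-b - math.sqrt(disc)) / (2*a), (-b + math.sqrt(disc)) / (2*a)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else None

def aim_point(x, v, s):
    """Point to aim at: the target's position at the intercept time."""
    t = intercept_time(x, v, s)
    if t is None:
        return None
    return (x[0] + v[0]*t, x[1] + v[1]*t)
```

For a target at (10, 0) moving at (-1, 0) with bullet speed 4, the intercept time is 2 and the aim point is (8, 0).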

This was fun. Let's do it again!

Monday, May 18, 2009

So much for that stuff. I meant to write here a little more frequently but oh well.

So I'm writing this poker AI for an AI class, but I'm not writing "the good one", since that would be too hard. "The good one" would be an AI that used Bayesian Inference about its opponent(s) and some formulas about strategy to determine its own strategy. One rule I came up with was:

if b*P(opponent is value betting) > (2*a + b)*P(opponent is bluffing) then fold,
else call,

where b is the bet size, a is the ante size, and P is the standard notation for "probability of".

In essence, b is the bet you would lose if you call and lose. And you would lose that bet if your opponent is value betting (this is using the assumption that you can beat a bluff and only a bluff), so you should fold if that amount (that you expect to lose weighted by the probability that you will lose it) is greater than this other amount (that you expect to win when your opponent is bluffing weighted by the probability that he is bluffing).
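The rule is a one-liner in code (a sketch, with hypothetical names):

```python
def should_fold(bet, ante, p_value_bet, p_bluff):
    """Fold a bluff-catcher when the expected loss to a value bet
    exceeds the expected win against a bluff (the rule above)."""
    return bet * p_value_bet > (2 * ante + bet) * p_bluff
```

For example, against an opponent who value bets 80% of the time here, `should_fold(10, 5, 0.8, 0.2)` is True, since the 8.0 you expect to lose outweighs the 4.0 you expect to win.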

Note the cool bolding to reveal a summary of the confusing babble.

Anyways, I thought it was cool that for a whole range of hands (that can beat a bluff but only a bluff) you should completely fold, or completely call based solely on whether your opponent bluffs too much or not.

This leads me to believe that the notion of bluffing a fixed portion of the time, with

P(value bet) = ((2*a + b)/b) * P(bluff)

so that your opponent is indifferent between calling and folding, is a brittle strategy, since if you bluff too much your opponent should always call, and if you bluff too little your opponent should always fold.
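Solving that indifference condition, (2a + b)*P(bluff) = b*P(value bet), for the share of your bets that should be bluffs gives b/(2a + 2b). A quick check (my own helper, not from the post):

```python
def indifference_bluff_fraction(bet, ante):
    """Fraction of bets that should be bluffs so a bluff-catcher's call
    is exactly break-even: bluffs : value bets = b : (2a + b)."""
    return bet / (2 * ante + 2 * bet)
```

A pot-sized bet (b = 2a) gives exactly 1/3, the classic result.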

Saturday, June 30, 2007

Some useful mathematics

Grant and I had an argument about some probability when we were playing a drinking game a while back. I said I was right, and I said I would prove it. So here I am, proving it. ;)

In the game of "up the river, down the river" each player is dealt four cards face up, one at a time. For the first card dealt, each player tries to guess whether it will be red or black (color). Guessing right means you win. For the second card, each player is asked to pick higher or lower (in rank compared to their first card). For the third card, you have to guess inside (between the first two cards) or outside (higher or lower than both). For the second and third questions, a tie means you lose. And of course it's a drinking game, so every time you lose you drink. Then for the fourth card, you guess which suit.

Then eight cards are dealt face down, and you turn them up one at a time. In one column, you have to drink for each card you have that matches; in the other column, you get to give drinks to the person(s) of your choice. What's more, the number of times you have/get to drink/give is 1, 2, 3, or 4 depending on the row. (Sometimes this is 2, 4, 6, 8, depending on the crowd.)

Note: with this game the number of people, call it p, must satisfy 4p + 8 <= 52. In other words, you can play with at most 11 people. (Incidentally, this is the same number of cards per person and community cards, counting the three burn cards, as in the game Omaha.)

When I'm dealing, if there are enough cards, because few enough people are playing, I like to deal a fifth card, asking even or odd. On an ace, either everyone drinks or no one drinks, because it is both 1 and 14. This can only be done when 5p + 8 <= 52, or when there are eight or fewer people. (Unfortunately, I don't know a poker game where you get five hole cards and five community cards.)

Grant doesn't like the fifth card though. He thinks (or at least he used to) that it makes it less likely that one of your four cards matches the eight, because some of your "outs" (a term from poker meaning a card that would make your hand; in this case, any card of the same rank as one of your four) will be dealt when the fifth card is dealt around.

It is perfectly understandable that one might feel less likely to pair one of their cards to the board when everyone gets dealt a fifth card. It is more likely that you will see all four cards of one rank, and then you know you have no chance of pairing that rank. And if two of those four are in your hand, you might really be mad, because that fifth card that someone got, had it instead landed among the eight, could have been a give card, and you would have had two of that rank, so you would have gotten to dish out 16 drinks!

Now I'd like to show that your odds are greater to give and take more drinks with five cards.

Proof
Consider the game above, dealt with four cards to each player. Now suppose that after you play the game (with eight or fewer people), when all of the river cards are face up and all the drinks have been dealt, you deal everyone an extra card, and if it matches one of the river cards, you get to give/take the corresponding number of drinks. In some cases drinks will be given or taken; for example, if you get dealt the seven of hearts and the seven of diamonds was the "give 2" card, you would get to give 2. Now more drinks have been given, and the only thing we did was deal a fifth card. This is not even counting the people who will guess the "even or odd" part correctly. So clearly, in the game "up the river, down the river," more drinks are given when you play with "even or odd" (the fifth card). ■

Stay tuned for another drinking game related discussion on the topic of "racing aces."


Sunday, November 13, 2005

A Starting Place

Let's begin with definitions, then a theorem, and proceed with a proof and an example.

Definition: A prime is a positive integer greater than 1 that is divisible by no positive integer other than 1 and itself.

Definition: If a and b are integers with a ≠ 0, we say that a divides b if there is an integer c such that b = ac. If a divides b, we also say that a is a divisor or factor of b and that b is a multiple of a.

Theorem 0.1: Let S = { 6n ± 1, where n is a natural number}, and let p be a natural number with p > 3. If p is a prime number, then p is in S.

[Note: For my purposes I am considering the natural numbers to be integers strictly greater than zero.]

Proof 0.1 (proof by contradiction): Let S = { 6n ± 1, where 0 < n and n is a natural number}. Let p be a natural number and let p > 3. Assume p is a prime number, and assume p is not in S. Because p is not in S, p is a natural number, and p > 3, by inspection
p = 6k - 2, p = 6k, p = 6k + 2, or p = 6k + 3, where k is a natural number.

Case 1: Consider when p = 6k - 2 or p = 6k + 2. Now we may write
6k - 2 = 2(3k - 1) and 6k + 2 = 2(3k + 1) by factoring out a 2. Because k is a natural number, 3k - 1 and 3k + 1 are also natural numbers. Then there exists a natural number m such that
p = 2m, so p is not prime, because p is divisible by 2 and by m (by the definitions of prime and divisor).

Case 2: Consider when p = 6k or p = 6k + 3. By factoring out a 3 we get 6k = 3(2k) and 6k + 3 = 3(2k + 1). We know 2k and 2k+1 are natural numbers because k is natural, so there exists a natural number l such that p = 3l. So p is not prime because it is divisible by 3 and l (by the definitions of prime and divisible).

Now we have a contradiction, because we assumed p was prime, and we showed that when p is not in S, p is never prime. Therefore if p is a prime number, then p is in S. □

Example 0.1: Show that the prime number 101 is in set S.
101 = 6*17 - 1 and 17 is a natural number so 101 is in S.
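Theorem 0.1 is easy to spot-check: p is in S exactly when p leaves remainder 1 or 5 on division by 6. A quick check over the numbers below 1000 (helper names are mine):

```python
def is_prime(p):
    """Trial division; fine for small p."""
    if p < 2:
        return False
    return all(p % d for d in range(2, int(p**0.5) + 1))

def in_S(p):
    """p = 6n + 1 or p = 6n - 1 for some natural n, i.e. p mod 6 is 1 or 5."""
    return p > 3 and p % 6 in (1, 5)

# Every prime greater than 3 should be in S:
violations = [p for p in range(4, 1000) if is_prime(p) and not in_S(p)]
```

The `violations` list comes up empty, and 101 checks out: 101 mod 6 = 5, i.e., 101 = 6*17 - 1.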

[Definitions of prime and division taken from Elementary Number Theory and Its Applications Fifth Edition by Kenneth H. Rosen and AT&T Laboratories]

On the subject of the set S := { x : x = 6n ± 1, with n a natural number }

For every x = 6n ± 1 (n being natural), let i be called the primary index number for x, such that when x = 6n - 1, i = 2n - 1, and when x = 6n + 1, i = 2n. Call n the secondary index number.

Claim 1: Every element of S that is not prime may be written as the product of two other elements of S.

Example 1: 25 = 6*4+1 so 25 is in S. 25 = 5*5 so it is not prime. 5 = 6*1 - 1, so 5 is in S. So 25 which is in S and not prime can be written as the product of two numbers also in S.

Claim 2: If m is an element of S that is not prime, then i, the index number for m, may be written as i = 2kx + j OR i = 2kx - (j+1), where x is an element of set S, j is the primary index number for x, and k is a natural number.

Claim 2.1: In this form x is a factor of m and k is the secondary index number for another factor of m, call this factor y (y in S), where y = 6k - 1 when i = 2kx - (j+1), and y = 6k + 1 when i = 2kx + j.

Example 2: For m = 25, i = 8. We can write 8 = 2*1*5 - (1+1), where 5 is in S (shown in Example 1) and k = 1, so the other factor of 25 is 6*1 - 1 = 5 (because we wrote 8 = 2*1*5 - (1+1)). So the factors of 25 are 5 and 5.
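Claims like these are easy to spot-check numerically (a check, not a proof; helper names are mine). For Claim 1:

```python
def elements_of_S(limit):
    """Elements of S = {6n ± 1 : n a natural number} up to limit."""
    out = set()
    n = 1
    while 6*n - 1 <= limit:
        out.add(6*n - 1)
        if 6*n + 1 <= limit:
            out.add(6*n + 1)
        n += 1
    return out

def claim_1_holds(limit=1000):
    """Check that every composite element of S up to `limit` is the
    product of two elements of S."""
    S = elements_of_S(limit)
    for m in S:
        if any(m % d == 0 for d in range(2, m)):          # m is composite
            if not any(m % a == 0 and (m // a) in S for a in S if a < m):
                return False
    return True
```

The check passes up to 1000; this is consistent with the fact that every factor of an element of S is coprime to 6, hence of the form 6n ± 1 itself.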