Dynamic Programming – basic examples

The key idea in Dynamic Programming (DP) is to break a larger problem into a set of subproblems, and then use recursion or a bottom-up approach to combine the subproblem results into a solution to the original problem.

Recursion lends itself nicely to DP, as the program stack itself becomes the mechanism to generate, traverse and combine the subproblems, with memoization used to avoid re-solving the same subproblem where the subproblems overlap.

Let’s think about the classic recursive Fibonacci algorithm.

def fib_rec(n):
    if n<2: return n
    return fib_rec(n-1) + fib_rec(n-2)

Each subproblem is calculating a number as the sum of the prior two. We use the program stack to generate these nodes depth first, with the base cases being the numbers in positions 0 and 1. Combining the sub-results is straightforward – we add them all the way back up the stack.

Now we just have to optimise with memoization to prevent re-calculating nodes.

def fib_dyn(n,memo={}):
    # note: the default memo dict is created once and persists across calls
    if n<2: return n
    if n in memo:
        return memo[n]
    memo[n] = fib_dyn(n-1,memo) + fib_dyn(n-2,memo)
    return memo[n]

DP can also be done without recursion, via a bottom-up approach that adheres to the same principles.

def fib_bot_up(n):
    f=[]
    for k in range(n+1):
        if k<2: f.append(k)
        else: f.append(f[k-1]+f[k-2])
    return f[n]

You can see that in the bottom-up approach we start with the base cases, and move up the sub-results until we get to the desired position in the sequence. We could optimize further by keeping track of only the prior two numbers in our loop, as in the sketch below.
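Here’s what that further optimization might look like – a sketch (the name fib_iter is mine) that keeps just the prior two values instead of the whole list:

def fib_iter(n):
    # only the prior two values are ever needed
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a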

Let’s think about another simple problem: say you can climb a staircase of n steps by taking one, two or three steps at a time. How many ways are there to get to the top?

First, let’s think about the subproblems. We’ll call step zero the top step – the base case. Once I’m there, there’s only one way to climb the staircase. From any other step, I can get the number of ways to the top by adding the ways to get there from 1 step up, 2 steps up and 3 steps up from where I am. Like the Fibonacci sequence, these subproblems can be nicely generated by the program stack using recursion.

def countways_rec(n):
    if n<0: return 0
    if n==0: return 1
    return countways_rec(n-1)+countways_rec(n-2)+countways_rec(n-3)

Now we add memoization:

def countways_dyn(n,memo={}):
    if n<0: return 0
    if n==0: return 1
    if n in memo:
        return memo[n]
    # store the result so each subproblem is only solved once
    memo[n] = countways_dyn(n-1,memo)+countways_dyn(n-2,memo)+countways_dyn(n-3,memo)
    return memo[n]

Now a bottom-up approach. Again, we start with the base cases and build up from there. We keep a lookup table of the previously calculated values that we’re building our final result on.

def countways_bottomsup(n):
    lookup = {}
    lookup[0]=1
    lookup[1]=1
    lookup[2]=2
    for step in range(3,n+1):
        lookup[step]=lookup[step-1]+lookup[step-2]+lookup[step-3]
    return lookup[n]

These are the basic principles of all DP problems. Figuring out the subproblems, and how to generate and combine them can get trickier in some problems. I’ll explore some of these in subsequent posts.

For completeness, here’s a solution to the steps problem for an arbitrary set of allowed jump sizes.

def countways_bottomsup_varjumps(n,jumps):
    lookup = {}
    lookup[0]=1
    for step in range(1,n+1):
        lookup[step]=0
        for jump in jumps:
            if step-jump >= 0: lookup[step]=lookup[step]+lookup[step-jump]
    return lookup[n]
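As a quick sanity check, jumps of [1, 2, 3] should reproduce the earlier staircase results:

>>> countways_bottomsup_varjumps(5, [1, 2, 3])
13
>>> countways_bottomsup(5)
13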

A simple Monte Carlo simulation in Python

Monte Carlo is a simulation method that can be useful in solving problems that are difficult to solve analytically.

Here’s an interesting application of the technique to estimate the value of pi.

Consider a circular dartboard placed against a square backing, with the sides of the square perfectly touching the circle tangentially at the top, bottom, left and right.

If you throw a dart blindly so it lands inside of the square, what are the chances the dart has landed inside the circle?

Given a circle of radius r, the chances are:

= area of circle / area of square
= (pi * r ^ 2) / (4 * r ^ 2)
= pi / 4

So the chances of hitting the circle are pi divided by 4.   In other words:

pi = 4 * the probability of hitting the circle.

So we can set up a Monte Carlo simulation that will give us the probability of getting the dart inside of the circle, and then we’ll be able to derive an approximation of the value of pi.

To do this, let’s think of a quarter circle of radius 1 inscribed in a square.

Note that the chances of hitting the quarter circle inside its square are the same as the chances of hitting the larger circle inside the bigger square.

How can we easily simulate hitting the quarter circle? We can obtain two random numbers between 0 and 1, representing the x and y coordinates of a point, with the origin at the circle’s center. If the distance between this point and the origin is less than 1, then we’re inside the quarter circle. Otherwise we’re outside it.

If we simulate this enough times, and multiply the proportion of times we’re inside the circle by 4, we should converge on the value of pi. Here’s Python shell code that simulates this:

>>> from random import random
>>> sum(1 for _ in range(1000000) if random()**2 + random()**2 < 1) / 1000000 * 4
3.14064
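Since we love Numpy on this blog, here’s a vectorized sketch of the same simulation (the name estimate_pi is mine) that pushes the loop down into C:

import numpy

def estimate_pi(samples=10000000):
    # random (x, y) points in the unit square
    x = numpy.random.random(samples)
    y = numpy.random.random(samples)
    # fraction of points inside the quarter circle, times 4
    return 4.0 * numpy.count_nonzero(x*x + y*y < 1) / samples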

Generating prime numbers using Numpy

I’ve been looking at generating primes, using various element-wise operations on Numpy arrays to do so. To cut to the chase, prime6 below is the fastest implementation. The ones before that tell the story of how I got there.

The following function uses a Python-level loop to build the array of remainders for each candidate, and takes about 7 secs on my laptop to generate the primes up to 100k:

import numpy
import math
def prime2(upto=100000):
    return list(filter(lambda num: numpy.array([num % factor for factor in range(2,1+int(math.sqrt(num)))]).all(), range(2,upto+1)))

If we replace that loop with numpy.arange and an element-wise modulo, we cut the time to about 1.3 secs:

import numpy
import math
def prime(upto=100000):
    return list(filter(lambda num: (num % numpy.arange(2,1+int(math.sqrt(num)))).all(), range(2,upto+1)))

By skipping even numbers, we can halve the time again to about .7 secs:

import numpy
import math
def prime3(upto=100000):
    return [2]+list(filter(lambda num: (num % numpy.arange(3,1+int(math.sqrt(num)),2)).all(), range(3,upto+1,2)))

The above is illustrative of efficiencies in using element-wise operations in Numpy arrays rather than looping.

A good way of speeding up the above code even further is to break out as soon as the first zero remainder is found. The code below runs in about .4 seconds:

import math
def prime4(upto=100000):
    primes=[2]
    for num in range(3,upto+1,2):
        isprime=True
        for factor in range(3,1+int(math.sqrt(num)),2):
            if not num % factor: isprime=False; break
        if isprime: primes.append(num)
    return primes

Ok Scruffy Pete has laid down the challenge in the comments section, so I need to get serious about optimization. The code below uses a few Numpy tricks for a zippy implementation and generates the primes up to 1 million in about .03 secs, about 6X faster than Scruffy Pete’s.

import math
import numpy
def prime5(upto=100000):
    primes=numpy.arange(2,upto+1)
    isprime=numpy.ones(upto-1,dtype=bool)   # index i corresponds to the number i+2
    # only need to sieve on factors up to sqrt(upto)
    for factor in primes[:int(math.sqrt(upto))]:
        # if factor is still marked prime, knock out its multiples from 2*factor up
        if isprime[factor-2]: isprime[factor*2-2::factor]=0
    return primes[isprime]

UPDATE 9/3 – did a half-sieve style implementation of the above (see comments), which runs a little faster – under .02 secs for primes up to 1 million. Updated for Python 3, incorporating a suggestion in the comments from Paul Weemaes:

import math
import numpy
def prime6(upto=1000000):
    primes=numpy.arange(3,upto+1,2)          # odd candidates; index i is the number 2*i+3
    isprime=numpy.ones((upto-1)//2,dtype=bool)
    for factor in primes[:int(math.sqrt(upto))//2]:
        # if factor is still marked prime, knock out its odd multiples from 3*factor up
        if isprime[(factor-2)//2]: isprime[(factor*3-2)//2::factor]=0
    return numpy.insert(primes[isprime],0,2) # stitch 2 back onto the front
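If you want to reproduce the timings, a rough harness like the following should do, assuming prime5 and prime6 from above are both in scope (they produce identical arrays, which makes for a handy cross-check):

import timeit

assert (prime5(1000000) == prime6(1000000)).all()
print(min(timeit.repeat(lambda: prime6(1000000), number=1, repeat=5)))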

Birthday simulations using Python and Numpy

I’ve written previously about the probability of finding a shared birthday in a room full of people. I wanted to run some simulations on this using Python. As an aside, you’ll find some of the techniques below bear a similarity to that old interview question of finding if a list has duplicate elements.

The following code generates birthday samples for a given number of people, then works out the proportion of these samples that contain at least one shared birthday:

import numpy
def sharedbday1(people,simulations=50000):
    samples = numpy.random.randint(0,365, size=(simulations, people))  # high end exclusive: 365 days, coded 0-364
    hasnoshared=map(lambda x: len(x)==len(set(x)),samples)
    return 1-float(sum(hasnoshared))/simulations

Timing this in IPython on my laptop for 50k simulations takes about .5 secs to run. For 23 people, it returns the expected result of around .5. As we saw in a previous post on this blog, it takes 23 people in a room to have at least a 50% chance of a shared birthday.

In [579]: time sharedbday1(23)
CPU times: user 0.56 s, sys: 0.00 s, total: 0.56 s
Wall time: 0.56 s
Out[580]: 0.50678000000000001


Since we love Numpy, it’s likely we’ll have a cleverer and faster way of doing this. We can sort the array, then compare it element-wise against a slice of itself offset by one – and if we find any equivalent elements we’ve found a sample with a shared birthday. With Numpy arrays this is achieved very neatly as operations in Numpy are performed element-wise by default:

import numpy
def sharedbday2(people, simulations=50000):
    samples = numpy.random.randint(0,365, size=(simulations, people))
    samples.sort()
    hasshared = ((sample[1:]==sample[:-1]).any() for sample in samples)
    return float(sum(hasshared))/simulations

Well, unfortunately and somewhat surprisingly, this runs twice as slow as the original code:

In [582]: time sharedbday2(23)
CPU times: user 1.01 s, sys: 0.00 s, total: 1.01 s
Wall time: 1.02 s
Out[583]: 0.50988


I exchanged some emails with Travis Oliphant of Numpy fame, who suggested the following code which is really neat and worth spending time to understand – I think it shows off the real power of Numpy:

import numpy
def sharedbday3(people, simulations=50000):
    samples = numpy.random.randint(0,365, size=(simulations, people))
    samples.sort(axis=-1)
    hasshared = (samples[:,1:] == samples[:,:-1]).any(axis=-1)
    return float(sum(hasshared))/simulations

This is fast and shows a speed-up of at least 5X on my original code – with a wall time of .09s:

In [585]: time sharedbday3(23)
CPU times: user 0.10 s, sys: 0.00 s, total: 0.10 s
Wall time: 0.09 s
Out[586]: 0.50119999999999998

As Travis explains it:
You create a 2-d array up-front with the simulations along one axis (i.e. every row of the 2-d array is a separate simulation). Then you can use the sample approach used above, but this time letting NumPy do the loop over the rows (down in C). The sort and any methods on NumPy arrays take an axis keyword which determines the dimension over which the operation works. The operation is then repeated implicitly over the other dimensions. So, in the 3rd version shared birthday there are no for loops. The penalty paid is memory usage: a 2-d array instead of throw-away 1-d arrays. But, of course, less memory management is also a major reason for the speed gain.
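Here’s a tiny illustration of the axis behavior Travis describes – each row gets sorted and checked independently:

>>> import numpy
>>> a = numpy.array([[3, 1, 1], [1, 2, 3]])
>>> a.sort(axis=-1)
>>> a
array([[1, 1, 3],
       [1, 2, 3]])
>>> (a[:,1:] == a[:,:-1]).any(axis=-1)
array([ True, False])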


For completeness, I made some minor modifications to the original code above to simulate the probability that at least one person in a group shares your birthday (rather than finding a shared birthday among them).

import numpy
def sharemybday(people,simulations=50000):
    mybday=0
    samples = numpy.random.randint(0,365, size=(simulations, people))
    hasshared=map(lambda x: mybday in x,samples)
    return float(sum(hasshared))/simulations

In [1509]: sharemybday(253)
Out[1509]: 0.50431999999999999

Dijkstra Shortest Path using Python

This post uses python and Dijkstra’s algorithm to calculate the shortest path given a start node (or vertex), an end node and a graph. The function will return the distance from the start node to the end node, as well as the path taken to get there.

The implementation below sticks pretty closely to the algorithm description in the wikipedia entry, which I turned into something a little more pseudo-code-ish to help me implement it:

Initial state:

1. give nodes two properties - node.visited and node.distance
2. set node.distance = infinity for all nodes except set start node to zero
3. set node.visited = false for all nodes
4. set current node = start node.

Current node loop:

1. if current node = end node, finish and return current.distance & path
2. for all unvisited neighbors, calc their tentative distance (current.distance + edge to neighbor).
3. if tentative distance < neighbor's set distance, overwrite it.
4. set current.visited = true.
5. set current = remaining unvisited node with smallest node.distance

Here’s my implementation – it’s recursive, as suggested by the algorithm description:

import sys
def shortestpath(graph,start,end,visited=None,distances=None,predecessors=None):
    """Find the shortest path between start & end nodes in a graph"""
    # fresh containers on the outermost call - mutable default
    # arguments would persist between calls
    if visited is None: visited=[]
    if distances is None: distances={}
    if predecessors is None: predecessors={}
    # detect if first time through, set current distance to zero
    if not visited: distances[start]=0
    # if we've found our end node, walk the predecessors to build the path, and return
    if start==end:
        path=[]
        while end != None:
            path.append(end)
            end=predecessors.get(end,None)
        return distances[start], path[::-1]
    # process neighbors as per algorithm, keep track of predecessors
    for neighbor in graph[start]:
        if neighbor not in visited:
            neighbordist = distances.get(neighbor,sys.maxsize)
            tentativedist = distances[start] + graph[start][neighbor]
            if tentativedist < neighbordist:
                distances[neighbor] = tentativedist
                predecessors[neighbor]=start
    # neighbors processed, now mark the current node as visited
    visited.append(start)
    # find the closest unvisited node to the start
    unvisiteds = dict((k, distances.get(k,sys.maxsize)) for k in graph if k not in visited)
    closestnode = min(unvisiteds, key=unvisiteds.get)
    # now take the closest node and recurse, making it current
    return shortestpath(graph,closestnode,end,visited,distances,predecessors)

graph = {'a': {'w': 14, 'x': 7, 'y': 9},
        'b': {'w': 9, 'z': 6},
        'w': {'a': 14, 'b': 9, 'y': 2},
        'x': {'a': 7, 'y': 10, 'z': 15},
        'y': {'a': 9, 'w': 2, 'x': 10, 'z': 11},
        'z': {'b': 6, 'x': 15, 'y': 11}}
print(shortestpath(graph,'a','a'))
print(shortestpath(graph,'a','b'))
print("\nExpected Result:\n(0, ['a'])\n(20, ['a', 'y', 'w', 'b'])")

As you can see, I turned the example in the wikipedia entry into the graph used for the test case in the code above.

I also found this useful about how to represent and process graphs in python: http://ftp.ntua.gr/mirror/python/doc/essays/graphs.html. I think this was written by Guido himself.

There’s a bunch of interesting implementations on the web, including two here. Faster and more elegant perhaps, but it took me a little longer to understand them.
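For reference, here’s a sketch of the priority-queue flavor of Dijkstra that those faster implementations tend to use, built on Python’s heapq module (the function name is mine). Instead of scanning for the closest unvisited node, it pops it off a heap:

import heapq

def shortestpath_heap(graph, start, end):
    # heap entries are (distance from start, node, path taken to get there)
    heap = [(0, start, [start])]
    visited = set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == end:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(heap, (dist + edge, neighbor, path + [neighbor]))
    return None  # no path from start to end

print(shortestpath_heap(graph, 'a', 'b'))   # expect (20, ['a', 'y', 'w', 'b'])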


Cyclic Linked List – finding the start of the loop using Python

I’d blogged previously about writing a python function to find out whether a linked list is cyclic or acyclic – it’s worth a read before tackling this one.

The challenge here is to return the node that is at the start of the loop given the head node of a cyclic linked list.

A cyclic linked list would look something like this:

A->B->C->D->E->F->C

In the case above, A is the head node, and C is the node at the start of the loop.

Solution

Let’s call x the distance from the head to the start of the loop, and c the number of nodes in the loop.

We kick off a fast and a slow pointer from the head, with the fast pointer going twice as fast as the slow pointer (skipping every other node). By the time the slow pointer gets to the start of the loop, the fast pointer will have a head start of x (the fast pointer is always the same distance from the slow pointer as the slow pointer is from the head).

Given its head start of x, when will the fast pointer catch up to the slow pointer? Measuring forward around the loop, the fast pointer is c-x nodes behind the slow pointer, and it closes the gap by one node per step, so it travels 2(c-x) nodes to catch up. In that time, the slow pointer will have moved c-x nodes into the loop (half what the fast pointer travelled).

The key thing to note here is that both pointers are now x nodes away from the start of the loop, looking in a forwards direction. So you’ll see that to find the start of the loop, the code below gets the fast and slow pointers to meet in the loop, resets the slow pointer to the head, then walks both forward one node at a time – after x steps they meet at the start.

def startOfCyclicLoop(head):

    """
    >>> e1=Element(1)
    >>> e2=Element(2)
    >>> e3=Element(3)
    >>> e4=Element(4)
    >>> e5=Element(5)
    >>> e1.next=e2
    >>> print startOfCyclicLoop(e1)
    None
    >>> e2.next=e3
    >>> e3.next=e4
    >>> e4.next=e5
    >>> print startOfCyclicLoop(e1)
    None
    >>> e5.next=e3
    >>> print startOfCyclicLoop(e1)
    3
    >>> e5.next=e4
    >>> print startOfCyclicLoop(e1)
    4
    >>> e5.next=e2
    >>> print startOfCyclicLoop(e1)
    2
    """

    slow=head
    fast=head
    while fast!=None and fast.next!=None:
        slow=slow.next
        fast=fast.next.next
        if (fast==slow):
            slow=head
            while (fast!=slow):
                slow=slow.next
                fast=fast.next
            return fast
    return None

class Element:
    def __init__(self,data,next=None):
        self.data=data
        self.next=next
    def __str__(self):
        return str(self.data)

if __name__ == "__main__":
    import doctest
    doctest.testmod()

Anagrams using Python

Python can be an elegant language. This is an example – a Python function that finds the anagrams for a supplied word. For the word dictionary I found one in OS X at /usr/share/dict/words.

def anagrams(wordIn):
    f=open('/usr/share/dict/words')
    ana=dict()
    for word in f.readlines():
        ana.setdefault(''.join(sorted(word.rstrip())),[]).append(word.rstrip())
    return ana.get(''.join(sorted(wordIn)))

>>> anagrams("cat")
['act', 'cat']
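One caveat: the function rebuilds the whole dictionary on every call. If you’re doing repeated lookups, you might build the index once – a sketch (the names are mine):

def build_anagram_index(path='/usr/share/dict/words'):
    index = {}
    with open(path) as f:
        for word in f:
            word = word.rstrip()
            index.setdefault(''.join(sorted(word)), []).append(word)
    return index

>>> index = build_anagram_index()
>>> index.get(''.join(sorted('cat')))
['act', 'cat']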

Birthday pairings

Here are two classic birthday puzzles:

  1. How many people do you need in a room to have a better than 50% chance of finding at least one shared birthday among them?
  2. How many people do you need in a room with you to have a better than 50% chance of finding someone who shares your birthday?

Finding a solution to these is conceptually quite straightforward, but also quite instructive on various useful probabilistic calculation techniques.

Let’s take these one at a time.

  1. How many people do you need in a room to have a better than 50% chance of finding at least one shared birthday among them?

Like a lot of these kinds of problems, it can be easier to first find the chances of the proposition not happening – in this case, the chances of not finding a shared birthday.

I have a tendency to think of card decks in these situations – so in this case I think of having a bunch of 365-card decks. I deal one card from each deck. What are the chances that I haven’t dealt the same card twice after dealing from r decks? You can see that this is an analogy for our problem. Looking back at the previous post on dealing a perfect hand may be a good primer for what follows.

The answer to our card analogy is to take the number of ways I can deal the cards (one from each deck) and get no duplicates divided by the total number of ways I can deal the cards.

From the first deck, I have 365 ways to deal one card. From the second deck, I have 364 ways to deal one card and not have duplicates. From the third deck, there are 363 ways, and so on until I get to 365-r+1 ways. So the number of ways I can deal cards without duplicates is:

= 365 * 364 * ... * (365-r+1)
= 365! / (365-r)!

Now let’s work out how many ways there are to deal the cards without restrictions. The answer is simply:

= 365 ^ r

So now, the probability of not dealing any duplicate cards (and also the chances of not finding anyone with a shared birthday) is:

= (365! / (365-r)!) / (365 ^ r)

I was a little stumped at how to solve this, as 365! will make any calculator blow up. But we can use Stirling’s approximation to find that:

ln(365!) = 1792.33142
ln((365-22)!) = 1663.17935
ln((365-23)!) = 1657.34162

So for r=22, our probability of no shared birthdays:

= e ^ (1792.33142 - 1663.17935) / (365 ^ 22) 
= 0.524310203

For r=23:

= e ^ (1792.33142 - 1657.34162) / (365 ^ 23) 
= 0.492707724

Now remember that this is for finding no shared birthdays. To have at least 1 shared birthday, we need to subtract from 1.

So for r=22, the chances of finding at least 1 shared birthday are:

= 1 - 0.524310203 
= 0.475689797

For r=23, the chances are:

= 1 - 0.492707724 
= 0.507292276

So you can see that it takes at least 23 people in a room to have a greater than 50/50 chance of finding a shared birthday.

By the way, to get the Stirling approximation of ln(365!) I typed the following into google:

365 * ln(365) - 365 + ln(365)/2 + ln(2*pi)/2
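These days you can sidestep the approximation entirely: Python’s math.lgamma gives the log-factorial directly (ln(n!) = lgamma(n+1)), so a quick cross-check might look like:

import math

def p_no_shared(r):
    # 365! / (365-r)! / 365^r, computed in log space to avoid overflow
    log_p = math.lgamma(366) - math.lgamma(366 - r) - r * math.log(365)
    return math.exp(log_p)

print(1 - p_no_shared(22))   # ~0.4757
print(1 - p_no_shared(23))   # ~0.5073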

Ok so let’s move on to our second problem:

2. How many people do you need in a room with you to have a better than 50% chance of finding someone who shares your birthday?

This one’s a little simpler than the first. Again, it’s easier to first find the probability of no one sharing your birthday. Let’s imagine people walking into a room where you are. The first person has a 364/365 chance of not sharing your birthday, and it’s the same for each subsequent person who walks in. So the chances of no one sharing your birthday after r people have walked in are:

= (364/365)^r

So the chance that at least one person shares your birthday is:

= 1 - (364/365) ^ r

Let’s find r when the equation is 1/2, representing the at least 50% chance we were looking for:

1/2 = 1 - (364/365) ^ r
(364/365) ^ r = 1/2
r * log (364/365) = log (1/2)
r = log (1/2) / log (364/365)
r = 252.651989

So you need to have at least 253 people with you in a room to have a better than 50/50 chance of finding someone sharing your birthday. It’s quite amazing how different this answer is to the answer for the first part.
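Or, letting Python do the last step for any target probability:

import math
# smallest r with 1 - (364/365)^r >= 1/2
print(math.ceil(math.log(0.5) / math.log(364/365)))   # 253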


A perfect hand, with some combinatorial tricks

Question: You deal the first 13 cards from a well-shuffled, 52-card deck. What are the chances the 13 cards you deal are all of the same suit?

There are a couple of ways to do this – both incorporate useful techniques for solving these kinds of problems.

First Answer – simple probability

The first way is just to look at the first 13 cards dealt and work out how many ways there are to deal a single-suit hand, and divide this by the number of ways there are to deal any hand.

There are 52 ways you can deal a first card. There are then 12 ways you can deal a second card if we restrict it to the suit of the first card. So we now have 52 * 12 permutations. With the third card, we’re up to 52 * 12 * 11 permutations. So we can see that the number of ways to deal a single-suit 13-card hand is:

= 52 * 12!

Using the same logic, we can see that:

Ways to deal any 13 card hand = 52 * 51 * 50 * … * 40. This equals:

= 52! / 39!

(Hopefully this is clear – the remaining sequence to finish making 52! is 39!).

So the probability of dealing a single-suit 13 card hand:

= (52 * 12!) / (52! / 39!) 
= 12! * 39! / 51!

Is there a way to work this out without blowing up our calculator? We can go old school and get out our factorial log tables.

The log of the answer equals the log of (12! * 39! / 51!). If you remember your log rules, this is the same as log(12!) + log (39!) – log(51!). If you don’t have log of factorial tables you can type that equation into google and it magically gives you the answer. So you get:

= log(12!) + log (39!) - log(51!) 
= -11.200723

So the log of the probability is -11.200723. Now let’s take the antilog to get our answer:

= 10 ^ -11.200723
= 6.299078 * 10 ^ -12

This is the probability of dealing a perfect hand. Take the reciprocal to get the odds against:

1 in 158,753,389,900

Holy cow – that’s what in Australia we call between Buckley’s and none.

Second Answer – let’s use some combinatorial tricks!

The first thing you have to know is: if I have a group of things of size n, how many ways are there to select a subset of size r? The answer is: n! / (r! * (n – r)!).

So for example if I had 5 balls, I could choose 2 balls out of the 5 in: 5! / (2! * 3!) = 10 ways.

As a side note – and I cover this more at the end of this post – I’m treating this as an order doesn’t matter, repetitions not allowed arrangement.

How does this help us in this problem? Let’s think of the number of ways that I can select 13 cards from the deck. The order in which the cards are selected doesn’t matter. The answer, following our example above, would be 52! / (13! * 39!). Out of all of these possibilities, how many give us 13 cards of a particular chosen suit? The answer is only 1. The reason is that there are only 13 cards of the chosen suit, and the arrangement we’re looking for has all of them. And since order doesn’t matter, there’s only one arrangement that has all 13 of those particular cards.

So dividing the number of arrangements with 13 cards being our chosen suit by the total number of arrangements gives us: 1 / (52! / (13! * 39!))

Now we have to multiply this by 4, as we have 4 suits.

= 4 * 1 / (52! / (13! * 39!))
= 4 * 13! * 39! / 52!

We can see that this becomes our original answer above:

12! * 39! / 51!
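If you’d rather let Python do the arithmetic, math.comb (Python 3.8+) confirms both forms:

import math

p = 4 / math.comb(52, 13)
print(p)            # ~6.299e-12
print(round(1/p))   # 158753389900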

Some useful combinatorial definitions

It’s probably a good idea at this point to formalize a little the definitions of permutations and combinations.

Permutations are when the order of the arrangement does matter – the order is a factor in making the arrangement unique.

Combinations are when we want to get the number of arrangements without regard to order. So we can treat the arrangements as buckets we can shake up and it’s still the same arrangement.

Within these, we can also allow repetitions or not. In other words, in a given arrangement, can a particular item be used more than once?

Some examples will help:

Combinations

Repetitions allowed: Let’s say you’re ordering a sandwich at Subway and can choose 3 meats out of a possible 5. If you love salami, you could choose salami, salami and salami for your meats. And we don’t really care what order we get the meats in the sandwich – for example, bologna, salami, bologna is the same as salami, bologna, bologna.

No Repetitions: Let’s say Subway had only one of each of the meats left – so you couldn’t choose the same meat twice. Another good example is a lottery where you need to match 6 numbers drawn from a pool of 40. Once a number is drawn it doesn’t go back into the pool.

Permutations

Repetitions: the classic example is that of a combination lock (which should really be called a permutation lock!). Order does matter, and repetitions are allowed as your combination could be 7777.


No Repetitions: How many ways are there to get the first 3 positions in a 100 person race? Order matters, and you can’t have the same person in more than one position.

The formulas:

Permutation, repetition: n ^ r
Permutation, no repetition: n! / (n-r)!
Combination, repetition: (n + r – 1)! / (r! * (n – 1)!)
Combination, no repetition: n! / (r! * (n-r)!)

It’s interesting to note that in the above, the combination no repetition case is just the permutation no repetition case divided by r!. You can see this by noting that an arrangement of 3 elements can be ordered in 3! ways, so there are 3! times as many arrangements when order matters (the permutation case) as when it doesn’t (the combination case).

The combination no repetition case is probably the most common and comes up in card-dealing problems and the like. It is sometimes called the “n choose r” case – or nCr, and also known as the “Binomial Coefficient”.
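The itertools module generates all four kinds of arrangements, which makes for a nice sanity check of the formulas (here with n=5, r=2):

from itertools import product, permutations, combinations, combinations_with_replacement

items = range(5)
print(len(list(product(items, repeat=2))))                  # 25 = n^r
print(len(list(permutations(items, 2))))                    # 20 = n!/(n-r)!
print(len(list(combinations_with_replacement(items, 2))))   # 15 = (n+r-1)!/(r!*(n-1)!)
print(len(list(combinations(items, 2))))                    # 10 = n!/(r!*(n-r)!)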


Puzzle – A safe place to drop an egg

This is a problem where it’s fairly easy to find the solution via trial and error, but not so easy to generalize.

Question

Let’s say you need to find out the highest floor in a 100 story building from which you could safely drop an egg without it breaking. What’s your strategy to minimize the worst case number of drops you have to try?

If you just had one egg with which to conduct your experiment, you’d have to start at floor one, and go up floor by floor looking for the first floor at which your egg broke until you reached floor 100. So your maximum number of drops would be 100.

What would be the worst case number of drops if you could use two eggs?

Answer

With two eggs you can be cleverer. A first thought is to drop the first egg at floor 50 to halve the floor space – but if it breaks, you’re left walking the second egg up from floor 1, for a worst case of 50 drops.

You could instead drop the first egg at floor 10, then, assuming it doesn’t break, floor 20, 30, and so on. As soon as it broke, you’d try the 9 floors underneath with your second egg. Worst case, the first egg gets all the way to floor 100 before breaking, and the second egg then goes from floor 91 to floor 99 – a total of 19 drops.

Can we do better?

Picking 10 as the number of floors to skip with the first egg is not particularly efficient in dividing the floor space: if the first egg breaks on floor 10, the worst case is 10 drops, but if it breaks on floor 20, the worst case is 11. As we saw, by the time you get to floor 100 your worst case is 19. The most efficient solution will be one where, no matter where the first egg breaks, the worst case stays the same.

To do that we could say that we’ll start with our first egg at floor 10, but the second drop will be on floor 19, preserving our 10 drop worst case. The problem you’ll find is that you end up reducing the spacing to 1 before you get anywhere close to floor 100. By trial and error you can find the best number to start with, and it turns out to be 14, giving us a 14 drop worst case:

14, 27, 39, 50, 60, 69, 77, 84, 90, 95, 99, 100.

Generalization

Is there a formula we can use to work this out, so we could do it for any number of floors?

Let’s look at the number of floors we skip at each turn:

n, n-1, n-2, n-3 … 1

Flipping that around, we have:

1, 2, 3, 4, … n.

We also know that the sum of the number of floors skipped will equal the total number of floors.

We know that the sum of a sequence like the above is (n+1) * n/2. You can see this if you pair the first and last numbers, then the second and second-to-last, and so on.

So if we say h is the total number of floors in the building, we have:

h = (n+1) * n/2

So:

n^2 + n = 2h.

To solve for n, let’s use the completing the square trick:

n^2 + n + .25 = 2h + .25
(n+.5)(n+.5) = 2h + .25
n+.5 = sqrt (2h+.25)
n = sqrt(2h+.25) – .5

If we try this on our 100 floor problem, we get:

n = sqrt(200.25)-.5
n=13.65

So seeing as we need a whole number of drops, we round up to 14, which corresponds with our trial and error result.
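Here’s the generalization as a quick function (the name is mine), using the equivalent form n = (sqrt(8h+1) - 1) / 2 and rounding up:

import math

def min_worst_case_drops(h):
    # smallest n with n*(n+1)/2 >= h
    return math.ceil((math.sqrt(8*h + 1) - 1) / 2)

>>> min_worst_case_drops(100)
14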

I have to give props to Scruffy Pete for helping out on this one.
