
## Master Theorem

Let a >= 1 and b > 1 be constants, let f(n) be asymptotically positive, and let T(n) be defined by:
T(n) = a T(n/b) + f(n)
where n/b can be interpreted as either floor(n/b) or ceil(n/b). Then T(n) can be bounded asymptotically as follows:

1. If f(n) = O( n^(log_b a - ε) ) for some constant ε > 0 (f grows polynomially slower than n^(log_b a)), then
T(n) = Θ( n^(log_b a) )

2. If f(n) = Θ( n^(log_b a) ), then
T(n) = Θ( n^(log_b a) * lg n )

3. If f(n) = Ω( n^(log_b a + ε) ) for some constant ε > 0, and a f(n/b) <= c f(n) for some constant c < 1 and all sufficiently large n (the regularity condition), then
T(n) = Θ( f(n) )

Take-out message: between n^(log_b a) and f(n), always pick the (polynomially) bigger one. If they are the same size: n^(log_b a) * lg n.
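The case analysis above can be mechanized for the common special case f(n) = Θ(n^d). `master_theorem` below is a hypothetical helper written for this sketch, not a library function:

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + Θ(n^d) by comparing d with log_b(a).

    Covers only the polynomial case f(n) = n^d, where the epsilon-separation
    and regularity conditions hold automatically when d != log_b(a).
    """
    c = math.log(a, b)  # log_b(a)
    if d < c:
        return f"Θ(n^{c:.2f})"     # case 1: the recursion tree dominates
    elif d == c:
        return f"Θ(n^{d} * lg n)"  # case 2: balanced
    else:
        return f"Θ(n^{d})"         # case 3: f(n) dominates

# Merge sort: T(n) = 2T(n/2) + Θ(n)  →  case 2
print(master_theorem(2, 2, 1))  # Θ(n^1 * lg n)
```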

1 + 2 + ... + n = n (n+1)/2
a + aq + aq^2 + ... + aq^(n-1) = a(q^n - 1) / (q - 1)    (q ≠ 1)

1^2 + 2^2 + ... + n^2 = n (n+1)(2n+1)/6
1^3 + 2^3 + ... + n^3 = n^2 (n+1)^2/4 = (n(n+1)/2)^2

1 + 1/2 + 1/3 + 1/4 + .... + 1/n = ln n + O(1)
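These closed forms are easy to sanity-check numerically; a minimal sketch for n = 10:

```python
import math

n = 10
assert sum(range(1, n + 1)) == n * (n + 1) // 2
assert sum(k**2 for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(k**3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2

a, q = 3, 2  # geometric series, q != 1
assert sum(a * q**k for k in range(n)) == a * (q**n - 1) // (q - 1)

# Harmonic numbers: H_n - ln(n) tends to the Euler–Mascheroni constant ≈ 0.5772
h = sum(1 / k for k in range(1, n + 1))
print(h - math.log(n))  # ≈ 0.626 already at n = 10
```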

P(n,r) = nPr = n! / (n - r)!    Permutation
C(n,r) = nCr = n! / ((n - r)! r!)    Combination
nCr = nC(n-r)
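Python's standard library exposes both counts directly (Python 3.8+):

```python
from math import comb, perm

n, r = 5, 2
assert perm(n, r) == 20  # P(5,2) = 5!/3! = 20
assert comb(n, r) == 10  # C(5,2) = 5!/(3! 2!) = 10

# Symmetry: choosing r items is the same as leaving out n-r items
assert comb(n, r) == comb(n, n - r)
```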

Bayes' theorem:  Pr{A | B} = Pr{A} Pr{B|A} / (Pr{A} Pr{B|A} + Pr{A'} Pr{B|A'})
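A worked instance of this formula, with hypothetical numbers chosen for illustration (a disease test: prevalence Pr{A} = 0.01, true-positive rate Pr{B|A} = 0.99, false-positive rate Pr{B|A'} = 0.05):

```python
p_A = 0.01             # Pr{A}: has the disease
p_B_given_A = 0.99     # Pr{B|A}: test positive given disease
p_B_given_notA = 0.05  # Pr{B|A'}: test positive given no disease

# Bayes' theorem: posterior probability of disease given a positive test
p_A_given_B = (p_A * p_B_given_A) / (
    p_A * p_B_given_A + (1 - p_A) * p_B_given_notA
)
print(round(p_A_given_B, 3))  # 0.167 — a positive test is still mostly a false alarm
```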

```
A and B are conditionally independent given C just in case

(1)  p(A, B | C) = p(A | C) * p(B | C)

or, equivalently,

(2)  p(A | C) = p(A | B, C)

It's easy to see that (1) and (2) are equivalent by observing that in
general

(3)  p(A, B | C) = p(A, B, C) / p(C) = p(A | B, C) * p(B, C) / p(C)
                 = p(A | B, C) * p(B | C).

Then it's trivial to derive (1) from (2) and vice versa.

Example: Pr{R, B | Y} = 2/12 and Pr{R | Y} Pr{B | Y} = 4/12 * 6/12 = 1/3 * 1/2 = 2/12,
so (1) holds; likewise Pr{R | Y} = 4/12 = 2/6 = Pr{R | Y, B}, so (2) holds.
```
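The R/B/Y numbers from the example above can be verified exactly with `fractions`; the probabilities come from that example, the variable names are mine:

```python
from fractions import Fraction as F

# From the example: counts out of 12 equally likely outcomes given Y
p_RB_given_Y = F(2, 12)  # Pr{R, B | Y}
p_R_given_Y = F(4, 12)   # Pr{R | Y}
p_B_given_Y = F(6, 12)   # Pr{B | Y}
p_R_given_YB = F(2, 6)   # Pr{R | Y, B}

# (1): joint factorizes given Y
assert p_RB_given_Y == p_R_given_Y * p_B_given_Y
# (2): conditioning on B doesn't change Pr{R | Y}
assert p_R_given_Y == p_R_given_YB
```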
Independence:   Pr{A, B} = Pr{A} Pr{B}        the occurrence of one does not affect the probability of the other (the AND rule in combinatorics)
The marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset.
It gives the probabilities of the various values of the variables in the subset without reference to the values of the other variables. This contrasts with a conditional distribution, which gives the probabilities contingent upon the values of the other variables.

These terms are dubbed "marginal" because they used to be found by summing values in a table along rows or columns, and writing the sum in the margins of the table.

The distribution of the marginal variables (the marginal distribution) is obtained by marginalizing over the distribution of the variables being discarded, and the discarded variables are said to have been marginalized out.

|           | x1    | x2   | x3   | x4   | p_Y(y) ↓ |
|-----------|-------|------|------|------|----------|
| y1        | 4/32  | 2/32 | 1/32 | 1/32 | 8/32     |
| y2        | 2/32  | 4/32 | 1/32 | 1/32 | 8/32     |
| y3        | 2/32  | 2/32 | 2/32 | 2/32 | 8/32     |
| y4        | 8/32  | 0    | 0    | 0    | 8/32     |
| p_X(x) →  | 16/32 | 8/32 | 4/32 | 4/32 | 32/32    |
Joint and marginal distributions of a pair of discrete random variables X, Y having nonzero mutual information I(X; Y). The values of the joint distribution are in the 4×4 square, and the values of the marginal distributions are along the right and bottom margins.

It is the probability distribution of X when the value of Y is not known.
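Marginalizing the joint table above is just summing rows and columns; a sketch with exact arithmetic:

```python
from fractions import Fraction as F

# Joint distribution p(X = x_j, Y = y_i) from the table above
joint = [
    [F(4, 32), F(2, 32), F(1, 32), F(1, 32)],  # y1
    [F(2, 32), F(4, 32), F(1, 32), F(1, 32)],  # y2
    [F(2, 32), F(2, 32), F(2, 32), F(2, 32)],  # y3
    [F(8, 32), F(0, 32), F(0, 32), F(0, 32)],  # y4
]

p_Y = [sum(row) for row in joint]        # marginalize X out: sum along rows
p_X = [sum(col) for col in zip(*joint)]  # marginalize Y out: sum along columns

assert p_Y == [F(1, 4)] * 4                      # each 8/32
assert p_X == [F(1, 2), F(1, 4), F(1, 8), F(1, 8)]
assert sum(p_X) == sum(p_Y) == 1                 # both are valid distributions
```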
