This page provides an introduction to Probability.
Overview
Probability is a measure of the likelihood that a specific event will occur. It ranges from 0 to 1. Let's understand some of the terminology of probability below:
Random Experiment
A random experiment is an experiment whose exact outcome cannot be predicted in advance, although the set of all possible outcomes is known.
Sample Space
The sample space is the set of all possible outcomes of a random experiment, and it is denoted by $S$. For example, throwing a die has the sample space:
$$S = \{1, 2, 3, 4, 5, 6\}$$
Event
An event is a subset of the sample space, and it is denoted by $E$.
Equally Likely Events
Equally likely events are events that have equal chances of occurrence, i.e. equal probability:
$$P(E_1) = P(E_2)$$
Mutually Exclusive Events
Mutually exclusive events are events whose simultaneous occurrence is impossible. They are also known as disjoint or incompatible events:
$$E_1 \cap E_2 = \phi$$
Exhaustive Events
Exhaustive events are events whose union is the entire sample space:
$$E_1 \cup E_2 = S$$
Classical Probability
If a random experiment has a total of $(m + n)$ mutually exclusive and equally likely outcomes, out of which $m$ favour event $E$, then the probability of occurrence of event $E$ is denoted by $P(E)$:
$$P(E) = \frac{\text{favourable outcomes}}{\text{total outcomes}} = \frac{m}{m + n}$$
When outcomes are not equally likely, $P(E)$ is the sum of the probabilities of the favourable outcomes.
Below are some important characteristics of probability (illustrated in the code sketch after this list):
The sum of the probabilities of all sample points, i.e. the probability of the sample space, is $1$.
If $P(E) = 0$, then event $E$ is an impossible event.
If $P(E) = 1$, then event $E$ is a sure event.
The probability of non-occurrence of an event is:
$$P(\overline{E}) = 1 - P(E)$$
The formula for the odds in favour of an event is:
$$\text{Odds in favour} = \frac{\text{number of favourable outcomes}}{\text{number of unfavourable outcomes}}$$
The formula for the odds against an event is:
$$\text{Odds against} = \frac{\text{number of unfavourable outcomes}}{\text{number of favourable outcomes}}$$
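To make these formulas concrete, here is a quick Python sketch (Python and the variable names are my own illustration, not part of the original page) that computes the classical probability, its complement, and both odds for the event "rolling a prime number" on a fair die:

```python
from fractions import Fraction

# Sample space for one roll of a fair die, and the favourable event.
sample_space = {1, 2, 3, 4, 5, 6}
event = {2, 3, 5}  # prime numbers

m = len(event)                    # favourable outcomes
n = len(sample_space) - m         # unfavourable outcomes

p = Fraction(m, m + n)            # classical probability P(E) = m / (m + n)
print("P(E)           =", p)      # 1/2
print("P(not E)       =", 1 - p)  # complement: 1 - P(E) = 1/2
print("odds in favour =", Fraction(m, n))  # m : n
print("odds against   =", Fraction(n, m))  # n : m
```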
Example 1
A fair die is rolled; calculate the probability of getting a prime number.
Solution
There are three prime numbers, $\{2, 3, 5\}$, in the sample space $\{1, 2, 3, 4, 5, 6\}$ for a throw of a fair die. So the probability of getting a prime number is:
$$P(E) = \frac{3}{6} = \frac{1}{2}$$
Example 2
Two coins are tossed simultaneously; what is the probability of getting at least one head?
Solution
The cases in which we get at least one head when two coins are tossed are:
Case 1: H H
Case 2: H T
Case 3: T H
So the probability of getting at least one head when two coins are tossed is:
$$P(E) = \frac{3}{4}$$
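As a quick check, a short Python sketch (illustrative, not from the original page) can enumerate the four equally likely outcomes and count those containing at least one head:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=2))        # HH, HT, TH, TT
favourable = [o for o in outcomes if "H" in o]  # at least one head

print(Fraction(len(favourable), len(outcomes))) # 3/4
```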
Example 3
A die is rolled; if it shows a prime number, then a coin is tossed. What is the probability of getting a head?
Solution
Let's write the sample space first:
$$S = \{(1), (2, H), (2, T), (3, H), (3, T), (4), (5, H), (5, T), (6)\}$$
Now, let's write the probability of each outcome:
$$P(1) = \frac{1}{6}$$
$$P(2, H) = \frac{1}{6} \cdot \frac{1}{2}$$
$$P(2, T) = \frac{1}{6} \cdot \frac{1}{2}$$
$$P(3, H) = \frac{1}{6} \cdot \frac{1}{2}$$
$$P(3, T) = \frac{1}{6} \cdot \frac{1}{2}$$
$$P(4) = \frac{1}{6}$$
$$P(5, H) = \frac{1}{6} \cdot \frac{1}{2}$$
$$P(5, T) = \frac{1}{6} \cdot \frac{1}{2}$$
$$P(6) = \frac{1}{6}$$
As the outcomes in this sample space are not all equally likely, we apply the formula:
$$P(E) = \text{sum of the probabilities of the favourable outcomes}$$
The favourable outcomes are $(2, H)$, $(3, H)$ and $(5, H)$, each with probability $\frac{1}{12}$. So the probability of getting a head on the coin toss after rolling a prime number on the die is:
$$P(E) = \frac{1}{12} + \frac{1}{12} + \frac{1}{12} = \frac{1}{4}$$
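A quick Python sketch (illustrative names, not from the original page) can rebuild this weighted sample space with exact fractions and sum the favourable outcomes:

```python
from fractions import Fraction

# Build the weighted sample space: a prime roll leads to a coin toss.
probs = {}
for face in range(1, 7):
    if face in (2, 3, 5):                      # prime: a coin is tossed
        probs[(face, "H")] = Fraction(1, 6) * Fraction(1, 2)
        probs[(face, "T")] = Fraction(1, 6) * Fraction(1, 2)
    else:                                      # non-prime: no coin toss
        probs[(face,)] = Fraction(1, 6)

# Sum the probabilities of the favourable (head) outcomes.
p_head = sum(p for outcome, p in probs.items() if "H" in outcome)
print(p_head)  # 1/4
```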
Example 4
An urn contains nine balls, of which three are red, four are blue and two are green. Three balls are drawn at random without replacement from the urn. Calculate the probability that the three balls have different colours.
Solution
The number of ways we can draw $3$ balls from $9$ balls without replacement is:
$$^9C_3$$
The number of ways we can draw $3$ balls from $9$ balls such that all of them are of different colours is:
$$^3C_1 \cdot {}^4C_1 \cdot {}^2C_1$$
Let's apply the classical probability formula, since every draw from the urn is equally likely. So the probability that the three balls have different colours is:
$$\frac{3 \cdot 4 \cdot 2}{^9C_3} = \frac{24}{84} = \frac{2}{7}$$
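Since the outcomes are equally likely, we can also verify the answer by brute-force enumeration; the Python sketch below (labels are my own assumption) checks all $^9C_3 = 84$ draws:

```python
from fractions import Fraction
from itertools import combinations

# 3 red, 4 blue, 2 green balls, labelled so they are distinguishable.
balls = ["R"] * 3 + ["B"] * 4 + ["G"] * 2

draws = list(combinations(range(9), 3))            # all 9C3 = 84 draws
favourable = [d for d in draws
              if len({balls[i] for i in d}) == 3]  # one ball of each colour

print(Fraction(len(favourable), len(draws)))       # 2/7
```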
Example 5
Two numbers are selected randomly from the set $S = \{1, 2, 3, 4, 5, 6\}$ without replacement, one by one. Calculate the probability that the minimum of the two numbers is less than $4$.
Solution
Let's generate the cases for favourable outcomes.
Case 1: Both of them are less than $4$:
$$^3C_2$$
Case 2: Exactly one of them is less than $4$:
$$^3C_1 \cdot {}^3C_1$$
So the probability that the minimum of the two numbers is less than $4$ is:
$$\frac{^3C_2 + {}^3C_1 \cdot {}^3C_1}{^6C_2} = \frac{3 + 9}{15} = \frac{4}{5}$$
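Because the selection is unordered, the $^6C_2 = 15$ pairs are equally likely, and a short Python sketch (illustrative) can confirm the result:

```python
from fractions import Fraction
from itertools import combinations

pairs = list(combinations(range(1, 7), 2))    # all 6C2 = 15 pairs
favourable = [p for p in pairs if min(p) < 4]

print(Fraction(len(favourable), len(pairs)))  # 4/5
```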
Example 6
Three boys and two girls stand in a queue. Calculate the probability that the number of boys ahead of every girl is at least one more than the number of girls ahead of her.
Solution
This problem can't be solved by straightforward permutations and combinations, as it is not simply asking us to arrange $2$ girls among $3$ boys; there is a condition attached: the number of boys ahead of every girl must be at least one more than the number of girls ahead of her.
Let's make cases for favourable outcomes.
Case 1: $B_1 B_2 B_3 G_1 G_2$
Case 2: $B_1 B_2 G_1 G_2 B_3$
Case 3: $B_1 B_2 G_1 B_3 G_2$
Case 4: $B_1 G_1 B_2 G_2 B_3$
Case 5: $B_1 G_1 B_2 B_3 G_2$
Now, the $3$ boys can arrange among themselves in $3!$ ways and the $2$ girls can arrange among themselves in $2!$ ways, which gives the total number of favourable outcomes as $5 \cdot 2! \cdot 3!$.
So the probability that the number of boys ahead of every girl is at least one more than the number of girls ahead of her is:
$$\frac{5 \cdot 2! \cdot 3!}{5!} = \frac{60}{120} = \frac{1}{2}$$
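The case analysis can be double-checked by enumerating all $5! = 120$ queue orders in Python (labels and helper name are my own illustration):

```python
from fractions import Fraction
from itertools import permutations

people = ["B1", "B2", "B3", "G1", "G2"]

def valid(queue):
    # Every girl needs at least one more boy than girl ahead of her.
    for i, person in enumerate(queue):
        if person.startswith("G"):
            boys_ahead = sum(p.startswith("B") for p in queue[:i])
            girls_ahead = sum(p.startswith("G") for p in queue[:i])
            if boys_ahead < girls_ahead + 1:
                return False
    return True

queues = list(permutations(people))
print(Fraction(sum(valid(q) for q in queues), len(queues)))  # 1/2
```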
Infinite Trials
In probability theory, infinite trials refer to a situation where an experiment or process is repeated an infinite number of times.
Below are some examples involving infinite trials:
Example 1
$A$ and $B$ take turns throwing a die, with $A$ going first; whoever gets a $6$ first wins the game. Find the probability that $A$ wins.
Solution
$$P(\text{A wins}) = P(A) + [P(\overline{A}) \cdot P(\overline{B}) \cdot P(A)] + [P(\overline{A}) \cdot P(\overline{B}) \cdot P(\overline{A}) \cdot P(\overline{B}) \cdot P(A)] + \dots$$
Where,
$P(\text{A wins})$ is the probability that $A$ wins the game.
$P(A)$ is the probability that $A$ gets a $6$.
$P(\overline{A})$ is the probability that $A$ does not get a $6$.
$P(B)$ is the probability that $B$ gets a $6$.
$P(\overline{B})$ is the probability that $B$ does not get a $6$.
Substituting the values in the formula above we get:
$$P(\text{A wins}) = \frac{1}{6} + \left[\frac{5}{6} \cdot \frac{5}{6} \cdot \frac{1}{6}\right] + \left[\frac{5}{6} \cdot \frac{5}{6} \cdot \frac{5}{6} \cdot \frac{5}{6} \cdot \frac{1}{6}\right] + \dots$$
Simplifying the equation above, we get:
$$P(\text{A wins}) = \frac{1}{6} \cdot \left[1 + \left(\frac{5}{6}\right)^2 + \left(\frac{5}{6}\right)^4 + \dots\right]$$
Applying the formula for the sum of an infinite geometric progression, we get:
$$P(\text{A wins}) = \frac{1}{6} \cdot \frac{1}{1 - \left(\frac{5}{6}\right)^2} = \frac{1}{6} \cdot \frac{36}{11} = \frac{6}{11}$$
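A small Python sketch (exact fractions; names are illustrative) evaluates the closed form and cross-checks it against a truncated version of the series:

```python
from fractions import Fraction

p = Fraction(1, 6)   # probability of rolling a 6
q = 1 - p            # probability of not rolling a 6

# Closed form from the infinite geometric series: p / (1 - q^2).
print(p / (1 - q**2))                                   # 6/11

# Cross-check: sum the first 1000 terms p * q^(2k) directly.
print(float(sum(p * q**(2 * k) for k in range(1000))))  # ~0.5454... = 6/11
```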
Example 2
The probability of a man hitting a target is $\frac{1}{10}$. Calculate the least number of shots required such that the probability of the man hitting the target at least once is greater than $\frac{1}{4}$.
Solution
Since we want to calculate the probability of hitting the target at least once, we apply the formula:
$$P(\text{hitting at least once}) = \text{total probability} - P(\text{never hitting})$$
$$P(\text{hitting at least once}) = 1 - \left(\frac{9}{10}\right)^n$$
Now, we want $P(\text{hitting the target at least once in } n \text{ attempts})$ to be $> \frac{1}{4}$, so we get:
$$1 - \left(\frac{9}{10}\right)^n > \frac{1}{4}$$
Simplifying the inequality further, we get:
$$10^n \cdot 3 > 9^n \cdot 4$$
Now we can use trial and error to find the smallest value of $n$ that satisfies the inequality, which gives $n = 3$.
So the least number of shots required such that the probability of the man hitting the target at least once is greater than $\frac{1}{4}$ is $3$.
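The trial-and-error step translates directly into a loop; this Python sketch (illustrative) increments $n$ until the inequality holds:

```python
from fractions import Fraction

p_miss = Fraction(9, 10)  # probability of missing a single shot

n = 1
while 1 - p_miss**n <= Fraction(1, 4):  # stop once P(at least one hit) > 1/4
    n += 1
print(n)  # 3
```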
Example 3
Calculate the minimum number of times a fair coin needs to be tossed such that the probability of getting at least two heads is at least $0.96$.
Solution
Let's say we have to toss the coin $n$ times so that the probability of getting at least two heads is at least $0.96$.
Since we want to calculate the probability of getting at least two heads, we apply the formula:
$$P(\text{at least 2 heads}) = \text{total probability} - P(\text{exactly 0 heads}) - P(\text{exactly 1 head})$$
Now,
$$P(\text{exactly 0 heads}) = \left(\frac{1}{2}\right)^n, \quad \text{and} \quad P(\text{exactly 1 head}) = {}^nC_1 \cdot \left(\frac{1}{2}\right)^n$$
Here we have $^nC_1$ because we want exactly $1$ head, but we still have to choose which of the $n$ tosses produces it.
Substituting the values in the formula, we get:
$$P(\text{at least 2 heads}) = 1 - \left(\frac{1}{2}\right)^n - {}^nC_1 \cdot \left(\frac{1}{2}\right)^n$$
Now, we want $P(\text{at least 2 heads})$ to be at least $0.96$:
$$1 - \left(\frac{1}{2}\right)^n - {}^nC_1 \cdot \left(\frac{1}{2}\right)^n \geq 0.96$$
By trial and error we get $n = 8$. So the minimum number of times a fair coin needs to be tossed is $8$.
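Again the trial-and-error step can be automated; a minimal Python sketch (helper name is my own) searches for the smallest $n$:

```python
from fractions import Fraction

def p_at_least_two_heads(n):
    # 1 - P(0 heads) - P(1 head) for n tosses of a fair coin.
    half = Fraction(1, 2)
    return 1 - half**n - n * half**n

n = 2
while p_at_least_two_heads(n) < Fraction(96, 100):
    n += 1
print(n, float(p_at_least_two_heads(n)))  # 8 0.96484375
```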
Set Theory
Below are some of the important points for set theory related to probability:
Probability of $A$ and $B$: $P(A \cap B) = P(AB)$
Probability of $A$ or $B$: $P(A \cup B) = P(A + B) = P(A) + P(B) - P(A \cap B)$
If $A$ and $B$ are mutually exclusive: $P(A \cap B) = 0$
Probability of non-occurrence of $A$: $P(\overline{A}) = 1 - P(A)$
If $P(A \cap B) = P(A) \cdot P(B)$, then $A$ and $B$ are independent events.
If $A$ and $B$ are independent events, then all combinations such as $(\overline{A}, B)$ or $(A, \overline{B})$ are also pairs of independent events.
Independent events and mutually exclusive events are two different things (the sketch after this list illustrates the difference):
If $A$ and $B$ are independent events, then $P(A \cap B) = P(A) \cdot P(B)$
If $A$ and $B$ are mutually exclusive events, then $P(A \cap B) = 0$
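Here is a minimal Python sketch (the two events are my own example) showing independent events on two fair dice that are clearly not mutually exclusive:

```python
from fractions import Fraction
from itertools import product

# Two fair dice: A = "first die is even", B = "second die shows 6".
space = list(product(range(1, 7), repeat=2))
A = {s for s in space if s[0] % 2 == 0}
B = {s for s in space if s[1] == 6}

def P(event):
    return Fraction(len(event), len(space))

print(P(A & B) == P(A) * P(B))  # True -> A and B are independent
print(P(A & B))                 # 1/12 -> not mutually exclusive
```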
Conditional Probability
Conditional probability is the probability of an event occurring, given that another event has already occurred.
$$P\left(\frac{A}{B}\right) = \frac{P(A \cap B)}{P(B)}$$
It means: calculate the probability of $A$ given that $B$ has occurred, i.e. the sample space for calculating the probability of $A$ shrinks to $B$, and the favourable outcomes are those in $A \cap B$.
Example 1
An unbiased die is thrown; event $E_1$ is the event of getting an odd number and event $E_2$ is the event of getting a prime number. Find the probability of $E_1$ given $E_2$.
Solution
The set of possible outcomes for event $E_1$ is $\{1, 3, 5\}$, so $P(E_1) = \frac{3}{6}$.
The set of possible outcomes for event $E_2$ is $\{2, 3, 5\}$, so $P(E_2) = \frac{3}{6}$.
The set of possible outcomes for $E_1 \cap E_2$ is $\{3, 5\}$, so $P(E_1 \cap E_2) = \frac{2}{6}$.
Applying the formula for conditional probability, we get:
$$P\left(\frac{E_1}{E_2}\right) = \frac{P(E_1 \cap E_2)}{P(E_2)}, \quad \text{where } P(E_1 \cap E_2) = \frac{2}{6} \text{ and } P(E_2) = \frac{3}{6}$$
So the probability of $E_1$ given $E_2$ is:
$$P\left(\frac{E_1}{E_2}\right) = \frac{2}{3}$$
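The same calculation in a short Python sketch (illustrative), restricting the sample space to $E_2$:

```python
from fractions import Fraction

space = {1, 2, 3, 4, 5, 6}
E1 = {1, 3, 5}  # odd numbers
E2 = {2, 3, 5}  # prime numbers

def P(event):
    return Fraction(len(event), len(space))

print(P(E1 & E2) / P(E2))  # 2/3
```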
Total Probability Theorem
Suppose there is an event $A$ that overlaps with $n$ mutually exclusive and exhaustive events $E_1, E_2, E_3, \dots, E_n$.
Then the total probability of event $A$ is:
$$P(A) = P(A \cap E_1) + P(A \cap E_2) + P(A \cap E_3) + \dots + P(A \cap E_n)$$
Example 1
A bag contains $4$ red balls and $6$ black balls. A ball is drawn at random from the bag, its colour is observed, and this ball along with two additional balls of the same colour is returned to the bag. If a ball is now drawn at random from the bag, calculate the probability that it is red.
Solution
Let's lay out the possibilities mentioned in the question.
Given that we drew a red ball in the first step (and returned it with $2$ extra red balls, leaving $6$ red and $6$ black), the probability of this path ending with a red ball is:
$$\frac{4}{10} \cdot \frac{6}{12}$$
Given that we drew a black ball in the first step (and returned it with $2$ extra black balls, leaving $4$ red and $8$ black), the probability of this path ending with a red ball is:
$$\frac{6}{10} \cdot \frac{4}{12}$$
Now we need the probability of drawing a red ball at random from the bag after the $2$ extra balls have been added. According to the total probability theorem, we add the probabilities of getting a red ball along each path:
$$\left(\frac{6}{10} \cdot \frac{4}{12}\right) + \left(\frac{4}{10} \cdot \frac{6}{12}\right)$$
So the probability of drawing a red ball at random is:
$$\frac{2}{5}$$
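The two paths of the tree translate directly into exact fractions; a quick Python sketch (variable names are my own) reproduces the result:

```python
from fractions import Fraction

p_red_first = Fraction(4, 10)
p_black_first = Fraction(6, 10)

# After returning the ball plus two of the same colour, the bag holds 12 balls.
p_red_given_red = Fraction(6, 12)    # bag: 6 red, 6 black
p_red_given_black = Fraction(4, 12)  # bag: 4 red, 8 black

total = p_red_first * p_red_given_red + p_black_first * p_red_given_black
print(total)  # 2/5
```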
Bayes' Theorem
From the conditional probability formula we know:
$$P\left(\frac{A}{B}\right) = \frac{P(A \cap B)}{P(B)}$$
It can also be rewritten as:
$$P(A \cap B) = P\left(\frac{A}{B}\right) \cdot P(B)$$
Substituting this value into the equation for the total probability of event $A$, we get:
$$P(A) = P\left(\frac{A}{E_1}\right) \cdot P(E_1) + P\left(\frac{A}{E_2}\right) \cdot P(E_2) + \dots + P\left(\frac{A}{E_n}\right) \cdot P(E_n)$$
Bayes' theorem enables the computation of posterior probabilities, that is, the probability that event $A$ was caused by event $E_1$, given that $A$ has occurred, using the formula below:
$$P\left(\frac{E_1}{A}\right) = \frac{P\left(\frac{A}{E_1}\right) \cdot P(E_1)}{P(A)}$$
Where,
$P(\frac{E_1}{A})$ is the probability that event $A$ was caused by event $E_1$.
$P(\frac{A}{E_1})$ is the probability that event $A$ will occur given that event $E_1$ has occurred.
$P(E_1)$ is the probability of event $E_1$.
$P(A)$ is the total probability of event $A$.
Example 1
A letter has come from either LONDON or CLIFTON. The postmark on the letter legibly shows the consecutive letters ON. Calculate the probability that the letter has come from LONDON.
Solution
Let's lay out the possibilities mentioned in the question.
Looking at the word $L\overline{ON}D\overline{ON}$, $2$ of its $5$ pairs of consecutive letters read ON, so $P(\frac{ON}{LO}) = \frac{2}{5}$.
Similarly, for $CLIFT\overline{ON}$, $1$ of its $6$ consecutive pairs reads ON, so $P(\frac{ON}{CO}) = \frac{1}{6}$.
Now, as per the total probability theorem, the probability that the letter shows ON is:
$$P(ON) = \left(\frac{1}{2} \cdot \frac{2}{5}\right) + \left(\frac{1}{2} \cdot \frac{1}{6}\right)$$
The $\frac{1}{2}$ in the equation above represents the probability that the letter came from London or from Clifton respectively.
But the question asks for the probability that a particular path was taken, so let's apply the Bayes' theorem formula below:
$$P\left(\frac{LO}{ON}\right) = \frac{P\left(\frac{ON}{LO}\right) \cdot P(LO)}{P(ON)}$$
Where,
$P(\frac{LO}{ON})$ is the probability that event ON (i.e. the letter showing ON) was caused by event LO (i.e. the letter being from London).
$P(\frac{ON}{LO})$ is the probability that event ON will occur given that event LO has occurred.
$P(LO)$ is the probability of event LO (i.e. the letter is from London).
$P(ON)$ is the total probability of event ON (i.e. the letter showing ON).
Substituting the values into the Bayes' theorem formula, we get:
$$\frac{\frac{1}{2} \cdot \frac{2}{5}}{\left(\frac{1}{2} \cdot \frac{2}{5}\right) + \left(\frac{1}{2} \cdot \frac{1}{6}\right)}$$
So the probability that the letter has come from LONDON given that it shows ON is:
$$\frac{12}{17}$$
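A minimal Python sketch (names are illustrative) applies Bayes' theorem to this example with exact fractions:

```python
from fractions import Fraction

p_london = Fraction(1, 2)            # prior: letter from LONDON
p_clifton = Fraction(1, 2)           # prior: letter from CLIFTON
p_on_given_london = Fraction(2, 5)   # 2 of LONDON's 5 consecutive pairs read ON
p_on_given_clifton = Fraction(1, 6)  # 1 of CLIFTON's 6 consecutive pairs reads ON

p_on = p_london * p_on_given_london + p_clifton * p_on_given_clifton
print(p_london * p_on_given_london / p_on)  # 12/17
```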
Probability Distribution
A probability distribution is a mathematical function that describes the probability of each possible outcome.
Example 1
If $3$ coins are tossed simultaneously, find the probability distribution of the number of heads.
Solution
When $3$ coins are tossed simultaneously, there are $2^3 = 8$ possible outcomes:
HHH, HHT, HTH, THH, HTT, THT, TTH, TTT
The probability distribution of the number of heads $X$ is as below:
$P(X = 0) = \frac{1}{8}$ (TTT)
$P(X = 1) = \frac{3}{8}$ (HTT, THT, TTH)
$P(X = 2) = \frac{3}{8}$ (HHT, HTH, THH)
$P(X = 3) = \frac{1}{8}$ (HHH)
Now, if the question further asks us to calculate the mean, variance or standard deviation of the probability distribution, we can calculate them as below.
The mean of the probability distribution is:
$$\text{mean}(\mu) = \sum x_i \cdot P(x_i) = \frac{3}{2}$$
The variance of the probability distribution is:
$$\text{variance}(\sigma_x^2) = \sum x_i^2 \cdot P(x_i) - \mu^2 = \frac{3}{4}$$
The standard deviation of the probability distribution is:
$$\text{standard deviation}(\sigma) = \sqrt{\sigma_x^2} = \sqrt{\frac{3}{4}} = \frac{\sqrt{3}}{2}$$
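The distribution and its moments can be reproduced by enumerating all $8$ outcomes; the Python sketch below (illustrative) computes the same values:

```python
from fractions import Fraction
from itertools import product
from math import sqrt

# Distribution of the number of heads in 3 tosses of a fair coin.
outcomes = list(product("HT", repeat=3))
dist = {}
for o in outcomes:
    x = o.count("H")
    dist[x] = dist.get(x, Fraction(0)) + Fraction(1, len(outcomes))

for x in sorted(dist):
    print(x, dist[x])                  # 0 1/8, 1 3/8, 2 3/8, 3 1/8

mean = sum(x * p for x, p in dist.items())
variance = sum(x**2 * p for x, p in dist.items()) - mean**2
print(mean, variance, sqrt(variance))  # 3/2 3/4 0.866...
```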
Example 2
A person throws two fair dice. He wins $15$ points for throwing a doublet (the same number on both dice), wins $12$ points when the throw results in a sum of $9$, and loses $6$ points for any other outcome. Calculate the expected gain or loss of the person.
Solution
Possible outcomes:
Doublet (same numbers): $6$ outcomes ($1$-$1$, $2$-$2$, ..., $6$-$6$)
Sum of $9$: $4$ outcomes ($3$-$6$, $4$-$5$, $5$-$4$, $6$-$3$)
Other outcomes: $36 - 6 - 4 = 26$ outcomes
Probabilities:
Doublet: $6/36 = 1/6$
Sum of $9$: $4/36 = 1/9$
Other outcomes: $26/36 = 13/18$
Points:
Doublet: $15$ points
Sum of $9$: $12$ points
Other outcomes: $-6$ points
$$\text{Expected gain or loss} = \left(15 \cdot \frac{1}{6}\right) + \left(12 \cdot \frac{1}{9}\right) + \left(-6 \cdot \frac{13}{18}\right)$$
So the expected gain or loss for the person is $-0.5$, i.e. an expected loss of half a point per throw.
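As a final check, this Python sketch (helper name is my own) enumerates all $36$ equally likely throws and computes the expectation directly:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))  # all 36 throws of two dice

def points(a, b):
    if a == b:
        return 15   # doublet
    if a + b == 9:
        return 12   # sum of 9
    return -6       # any other outcome

expected = sum(Fraction(points(a, b), len(rolls)) for a, b in rolls)
print(expected, float(expected))  # -1/2 -0.5
```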