This practice problem set reinforces the three-part discussion on insurance payment models (Part 1, Part 2 and Part 3). The problems in this post are basic exercises on calculating the average insurance payment (per loss or per payment).
|Practice Problem 7A|
Losses follow a uniform distribution on the interval .
|Practice Problem 7B|
Losses for the current year follow a uniform distribution on the interval . Further suppose that inflation of 25% impacts all losses uniformly from the current year to the next year.
|Practice Problem 7C|
Losses follow an exponential distribution with mean 5,000. An insurance policy covers losses subject to a franchise deductible of 2,000. Determine the expected insurance payment per loss.
|Practice Problem 7D|
Liability claim sizes follow a Pareto distribution with shape parameter and scale parameter . Suppose that the insurance coverage pays claims subject to an ordinary deductible of 20,000 per loss. Given that a loss exceeds the deductible, determine the expected insurance payment.
|Practice Problem 7E|
Losses follow a lognormal distribution with and . For losses below 1,000, no payment is made. For losses exceeding 1,000, the amount in excess of the deductible is paid by the insurer. Determine the expected insurance payment per loss.
|Practice Problem 7F|
Losses in the current exposure period follow a lognormal distribution with and . Losses in the next exposure period are expected to experience 12% inflation over the current year. Determine the expected insurance payment per loss if the insurance contract has an ordinary deductible of 1,000.
|Practice Problem 7G|
Losses follow an exponential distribution with mean 2,500. An insurance contract will pay the amount of each claim in excess of a deductible of 750. Determine the standard deviation of the insurance payment for one claim, where a claim includes the possibility that the amount paid is zero.
|Practice Problem 7H|
Liability losses for auto insurance policies follow a Pareto distribution with and . These insurance policies have an ordinary deductible of 1,250. Determine the expected payment made by these insurance policies per loss.
|Practice Problem 7I|
Liability losses for auto insurance policies follow a Pareto distribution with and . These insurance policies make no payment for any loss below 1,250. For any loss greater than 1,250, the insurance policies pay the loss amount in excess of 1,250 up to a limit of 5,000. Determine the expected payment made by these insurance policies per loss.
|Practice Problem 7J|
Losses follow a lognormal distribution with and . For losses below 1,000, no payment is made. For losses exceeding 1,000, the amount in excess of the deductible is paid by the insurer. Determine the average insurance payment for all the losses that exceed 1,000.
All normal probabilities are obtained by using the normal distribution table found here.
Daniel Ma actuarial
Dan Ma actuarial
This post is a continuation of the discussion on models of insurance payments initiated in two previous posts. Part 1 focuses on models of insurance payments in which the insurance policy imposes a policy limit. Part 2 continues the discussion by introducing models in which the insurance policy imposes an ordinary deductible. In each of these two previous posts, the insurance coverage has only one coverage modification. A more interesting and more realistic scenario is insurance coverage that contains a combination of several coverage modifications. This post examines the effects on insurance payments of having some or all of these coverage modifications – policy limit, ordinary deductible, franchise deductible and inflation. Additional topics: expected payment per loss versus expected payment per payment, and the loss elimination ratio.
Ordinary Deductible and Policy Limit
The previous two posts discuss the expectations E[X ∧ u] and E[(X − d)_+]. The first is the expected insurance payment when the coverage has a policy limit u. The second is the expected insurance payment when the coverage has an ordinary deductible d. They are the expected values of the following two variables.
It is easy to verify that (X ∧ d) + (X − d)_+ = X. Buying a coverage with an ordinary deductible d and another coverage with policy limit d together equals full coverage. Thus we have the following relation: E[X] = E[X ∧ d] + E[(X − d)_+].
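The decomposition of the full expected loss into a limited piece and an excess piece can be checked numerically. The sketch below assumes a hypothetical exponential loss with mean 2,500 and deductible 750 (illustration-only values), using the standard closed forms for the exponential distribution:

```python
import math

theta = 2500.0   # hypothetical exponential mean
d = 750.0        # hypothetical ordinary deductible

limited = theta * (1 - math.exp(-d / theta))   # E[X ∧ d] for an exponential loss
excess = theta * math.exp(-d / theta)          # E[(X − d)_+] for an exponential loss

# The two pieces reassemble the full expected loss: E[X] = theta
assert abs(limited + excess - theta) < 1e-9
print(round(limited, 2), round(excess, 2))
```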
The limited expectation E[X ∧ d] has a closed-form expression in some cases, or can be expressed in terms of familiar functions (e.g. the gamma function) in other cases. Thus the expectation E[(X − d)_+] can be computed from the original expected value E[X] and the limited expectation E[X ∧ d].
Suppose that losses (or claims) in the next period are expected to increase uniformly by 100r%. For example, r = 0.1 means 10% inflation. What would be the effect of inflation on the expectations E[X ∧ u] and E[(X − d)_+]?
First, consider the effect on the limited expectation. As usual, X is the random loss and u is the policy limit. With inflation rate r, the loss variable for the next period is (1 + r)X. One approach is to derive the distribution of the inflated loss variable and use the new distribution to calculate the limited expectation. Another approach is to express it in terms of the limited expectation of the pre-inflated loss X. The following is the expectation E[(1 + r)X ∧ u], assuming that there is no change in the policy limit.

E[(1 + r)X ∧ u] = (1 + r) E[X ∧ u/(1 + r)] …… (2)
Relation (2) relates the limited expectation of the inflated variable (1 + r)X to the limited expectation of the pre-inflated loss X. It says that the limited expectation of the inflated variable is obtained by inflating the limited expectation of the pre-inflated loss, but evaluated at the smaller policy limit u/(1 + r).
The following is the expectation E[((1 + r)X − d)_+].

E[((1 + r)X − d)_+] = (1 + r) E[(X − d/(1 + r))_+] …… (3)
Similarly, (3) expresses the expected payment on the inflated loss in terms of the expected payment on the pre-inflated loss. It says that when the loss is inflated, the expected payment per loss is obtained by inflating the expected payment on the pre-inflated loss, but evaluated at the smaller deductible d/(1 + r).
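The inflation relation for the limited expectation can be checked by simulation. The sketch below assumes a hypothetical exponential loss and illustration-only values for the mean, policy limit and inflation rate; in fact the identity holds sample by sample, since min((1 + r)x, u) = (1 + r) · min(x, u/(1 + r)):

```python
import random
import statistics

random.seed(0)
theta, u, r = 2000.0, 3000.0, 0.10   # hypothetical mean, policy limit, inflation rate
xs = [random.expovariate(1 / theta) for _ in range(100_000)]

# Left side: limited expectation of the inflated loss (1+r)X at limit u
lhs = statistics.fmean(min((1 + r) * x, u) for x in xs)
# Right side: inflate the limited expectation of X, taken at the smaller limit u/(1+r)
rhs = (1 + r) * statistics.fmean(min(x, u / (1 + r)) for x in xs)

assert abs(lhs - rhs) < 1e-6
```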
Insurance Payment Per Loss versus Per Payment
The previous post (Part 2) shows how to evaluate the average amount paid to the insured when the coverage has an ordinary deductible. The average payment discussed in Part 2 is the average per loss (over all losses). As a simple illustration, let’s say the amounts of losses in a given period for an insured are 7, 4, 33 and 17, subject to an ordinary deductible of 5. Then the insurance payments are: 2, 0, 28 and 12. The average payment per loss is (2+0+28+12)/4 = 10.5. If we only count the losses that require a payment, the average is (2+28+12)/3 = 14. Thus the average payment per payment is greater than the average payment per loss, since only the losses exceeding the deductible are counted in the average payment per payment. In the calculation discussed here, the average payment per payment is obtained by dividing the average payment per loss by the proportion of losses that exceed the deductible. Note that 10.5/0.75 = 14.
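The arithmetic in this illustration can be reproduced in a few lines (same hypothetical loss amounts as in the paragraph above):

```python
losses = [7, 4, 33, 17]
d = 5
payments = [max(x - d, 0) for x in losses]     # [2, 0, 28, 12]

per_loss = sum(payments) / len(payments)       # average over all losses
positive = [p for p in payments if p > 0]
per_payment = sum(positive) / len(positive)    # average over paid losses only
prob_payment = len(positive) / len(payments)   # empirical P(loss > deductible)

assert per_loss == 10.5 and per_payment == 14.0
assert per_loss / prob_payment == per_payment  # per payment = per loss / P(payment)
```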
Suppose that the random variable X is the size of the loss (if a loss occurs). Under an insurance policy with an ordinary deductible d, the first d dollars of a loss is the responsibility of the insured, and the amount of the loss in excess of d is paid by the insurer. Under such an arrangement, a certain number of losses are not paid by the insurer, namely those losses that are less than or equal to d. If we only count the losses that are paid by the insurer, the payment amount is the conditional random variable X − d | X > d. The expected value of this conditional random variable is denoted by e_X(d), or e(d) if the loss X is understood.
Given that X > d, the variable X − d is called the excess loss variable. Its expected value is called the mean excess loss function. Other names are mean residual life and complete expectation of life (when the context is that of a mortality study).
For the discussion in this post and other posts in the same series, we use Y^P to denote X − d | X > d. The P stands for payment, so its expected value is the average insurance payment per payment, i.e. the expected amount paid given that the loss exceeds the deductible. When the random variable X is the age at death, e(d) is the expected remaining time until death given that the life has survived to age d.
The expected value e(d) = E[Y^P] is thus the expected payment per payment (or expected cost per payment) under an ordinary deductible. In contrast, the expected value E[(X − d)_+] is the expected payment per loss (or expected cost per loss), which is discussed in the previous post. The two expected values are related; the calculation of one gives the other. The following compares the two calculations. Let f(x) and F(x) be the PDF and CDF of X, respectively.
Note that e(d) is calculated using a conditional density function. As a result, e(d) can be obtained by dividing the expected payment per loss by the probability that there is a payment. Thus the expected payment per payment is the expected payment per loss divided by the probability that there is a payment. This is described in the following relation.
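As a quick numeric check of this division, consider an exponential loss (hypothetical mean and deductible below). Because the exponential distribution is memoryless, dividing the per-loss payment by the survival probability recovers the original mean:

```python
import math

theta, d = 5000.0, 2000.0                 # hypothetical exponential mean and deductible
per_loss = theta * math.exp(-d / theta)   # E[(X − d)_+], expected payment per loss
surv = math.exp(-d / theta)               # P(X > d), probability of a payment
per_payment = per_loss / surv             # expected payment per payment

# Memorylessness of the exponential distribution: e(d) = theta for every d
assert abs(per_payment - theta) < 1e-6
```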
Distributions of Insurance Payment Variables
Relation (1) and Relation (4) are used for the calculation of expected insurance payment when the coverage has an ordinary deductible. For deriving other information about insurance payment in the presence of an ordinary deductible, it is helpful to know the distributions of the insurance payment (per loss and per payment).
Let Y^L = (X − d)_+ (payment per loss) and let Y^P = X − d | X > d (payment per payment). We now discuss the distribution of each. First, the following gives the PDF, CDF and the survival function of Y^L.
|Payment Per Loss|
Note that the above functions have a point mass at 0 to account for the losses that are not paid. For Y^P, there is no point mass at 0, since we only consider losses exceeding the deductible. Normalizing the density of the loss above the deductible gives the PDF of Y^P. Thus the following gives the PDF, the survival function and the CDF of Y^P.
|Payment Per Payment|
The following calculates the two averages using the respective PDFs, thereby deriving the same relationship between the two expected values.
Relation (1) and Relation (7) are identical. The former is calculated using the distribution of the original loss and the latter is calculated using the distribution of the payment per loss variable. Similarly, compare Relation (5) and Relation (8). The former is computed from the distribution of the unmodified loss and the latter is computed using the distribution of the payment per payment variable.
Note that Relation (7) and Relation (8) also lead to Relation (6), which relates the expected payment per loss and the expected payment per payment.
With the PDFs and CDFs for the payment per loss and the payment per payment developed, other distributional quantities can be derived, e.g. hazard rate, variance, skewness and kurtosis.
Loss Elimination Ratio
When the insurance coverage has an ordinary deductible, a natural question (from the insurance company’s perspective) is: what is the impact of the deductible on the payment that is made to the insured? More specifically, on average, what is the reduction of the payment? The loss elimination ratio is the proportion of the expected loss that is not paid to the insured by the insurer. For example, if the expected loss before application of the deductible is 45, of which 36 is expected to be paid by the insurer, then the amount eliminated is 45 − 36 = 9 and the loss elimination ratio is 9/45 = 0.20 (20%). In this example, the insurer has reduced its obligation by 20%. More formally, the loss elimination ratio (LER) is defined as:
LER is the ratio of the expected reduction in payment resulting from the ordinary deductible to the expected payment without the deductible. Though it is possible to define LER analogously for coverage modifications other than an ordinary deductible, we do not pursue that generalization here.
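A minimal sketch of the LER calculation, assuming a hypothetical exponential loss (for which E[X ∧ d]/E[X] simplifies to 1 − e^(−d/θ)):

```python
import math

theta, d = 2500.0, 750.0               # hypothetical exponential mean and deductible
full = theta                           # E[X], expected payment with no deductible
paid = theta * math.exp(-d / theta)    # E[(X − d)_+], expected payment per loss
ler = (full - paid) / full             # proportion of the expected loss eliminated

# For an exponential loss, LER = E[X ∧ d] / E[X] = 1 − e^{−d/θ}
assert abs(ler - (1 - math.exp(-d / theta))) < 1e-12
print(round(ler, 4))
```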
Policy with a Limit and a Deductible
In Part 1, the expected insurance payment under a policy limit is developed. In Part 2, the expected insurance payment under a policy with an ordinary deductible is developed. We now combine both provisions in the same insurance policy. First, let’s define two terms. A policy limit is the maximum amount that will be paid by a policy. For example, if the policy limit is 10,000, the policy will pay at most 10,000 per loss. If the actual loss is 15,000, then the policy will pay 10,000 and the insured is responsible for the remaining 5,000. On the other hand, the maximum covered loss is the level above which no portion of the loss is paid. For example, suppose that a policy covers up to 10,000 per loss subject to a deductible of 1,000. If the actual loss is 20,000, then the maximum covered loss is 10,000 and the policy limit is 9,000, because the policy only covers the first 10,000 of the loss, with the first 1,000 paid by the insured.
Suppose that an insurance policy has an ordinary deductible d, a maximum covered loss u with d < u, and no other coverage modifications. Any loss below d is not paid by the insurer. For any loss exceeding d, the insurer pays the loss amount in excess of d, up to the maximum covered loss u, with the policy limit being u − d. The following describes the payment rule more explicitly.
Under such a policy, the maximum amount paid (the policy limit) is u − d, which is the maximum covered loss minus the deductible. The policy limit is reached when X ≥ u. When X ≤ d, the payment is 0; when d < X < u, the payment is X − d. Then the expected payment per loss under such a policy is:

E[X ∧ u] − E[X ∧ d] …… (9)
There is no special notation for this expected payment. In words, it is the limited expected value at u (the maximum covered loss) minus the limited expected value at d (the ordinary deductible). The higher moments can be derived by evaluating the appropriate integrals using the PDF of the loss X.
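The payment rule described above can be checked against the limited-expectation formula sample by sample; the sketch below assumes a hypothetical exponential loss and illustration-only values of d and u:

```python
import random

random.seed(1)
theta, d, u = 5000.0, 1000.0, 6000.0   # hypothetical mean, deductible, max covered loss
xs = [random.expovariate(1 / theta) for _ in range(100_000)]

# Payment per loss: excess over d, capped at the policy limit u − d
pay = [min(max(x - d, 0.0), u - d) for x in xs]
# Identity behind (9): the payment equals (X ∧ u) − (X ∧ d) for every loss
alt = [min(x, u) - min(x, d) for x in xs]

assert all(abs(a - b) < 1e-9 for a, b in zip(pay, alt))
```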
The payment in (9) is on a per-loss basis. It is also possible to consider the payment on a per-payment basis by removing the point mass at zero. The expected payment per payment is obtained by dividing (9) by the probability of a positive payment.
An alternative to the ordinary deductible is the franchise deductible. It works like an ordinary deductible, except that when the loss exceeds the deductible, the policy pays the loss in full. The following gives the payment rule.
Note that when the loss exceeds the deductible (X > d), a policy with a franchise deductible pays more than a policy with the same ordinary deductible. By how much? By the amount d. Thus the expected payment per loss under a franchise deductible is the ordinary-deductible expected payment per loss plus d · P(X > d), reflecting the additional benefit of d that applies only when X > d. Instead of deriving calculations specific to the franchise deductible, we can obtain the payments under a franchise deductible by adding this additional benefit appropriately.
The previous two posts and this post discuss coverage with two types of deductible (ordinary and franchise) as well as coverage that may include a limit. In order to keep the expected payments straight, the following table organizes the different combinations of coverage options discussed up to this point.
| Coverage | Expected Cost Per Loss | Expected Cost Per Payment |
|---|---|---|
| A. Ordinary deductible d | E[X] − E[X ∧ d] | (E[X] − E[X ∧ d]) / S(d) |
| B. Franchise deductible d | E[X] − E[X ∧ d] + d · S(d) | (E[X] − E[X ∧ d]) / S(d) + d |
| C. Ordinary deductible d, maximum covered loss u | E[X ∧ u] − E[X ∧ d] | (E[X ∧ u] − E[X ∧ d]) / S(d) |
| D. Franchise deductible d, maximum covered loss u | E[X ∧ u] − E[X ∧ d] + d · S(d) | (E[X ∧ u] − E[X ∧ d]) / S(d) + d |

where S(d) = P(X > d) is the survival function of the loss.
Row A shows the expected payments in a coverage with an ordinary deductible, both per loss and per payment. The expected per-loss payment in Row A is the subject of the previous post, and the expected payment per payment is discussed earlier in this post. Row B is for expected payments in a coverage with a franchise deductible. Note that a coverage with a franchise deductible pays more than a coverage with an identical ordinary deductible. Thus Row B is Row A plus the added benefit, which is the deductible d.
Row C is for a coverage with an ordinary deductible and a policy limit. The per-loss expected payment is E[X ∧ u] − E[X ∧ d]. The per-payment expected value is a conditional one, conditional on the loss exceeding the deductible. Thus the per-payment expected value is the per-loss expected value divided by S(d) = P(X > d).
Note that Row D is Row C plus the additional benefit that results from having a franchise deductible instead of an ordinary deductible.
Rather than memorizing these formulas, focus on the general structure of the table. For example, understand the payments involving an ordinary deductible (Row A and Row C). The payments for a franchise deductible are then obtained by adding the appropriate additional benefit.
The following table shows the expected payments under the influence of claim cost inflation. The maximum covered loss u and the deductible d are identical to those in the above benefit table. The losses are subject to 100r% inflation in the next exposure period. Note that u/(1 + r) and d/(1 + r) are the modified maximum covered loss and deductible that make the formulas work.
| Coverage | Expected Cost Per Loss (with Inflation) | Expected Cost Per Payment (with Inflation) |
|---|---|---|
| E. Ordinary deductible d | (1 + r)(E[X] − E[X ∧ d/(1+r)]) | (1 + r)(E[X] − E[X ∧ d/(1+r)]) / S(d/(1+r)) |
| F. Franchise deductible d | (1 + r)(E[X] − E[X ∧ d/(1+r)]) + d · S(d/(1+r)) | (1 + r)(E[X] − E[X ∧ d/(1+r)]) / S(d/(1+r)) + d |
| G. Ordinary deductible d, maximum covered loss u | (1 + r)(E[X ∧ u/(1+r)] − E[X ∧ d/(1+r)]) | (1 + r)(E[X ∧ u/(1+r)] − E[X ∧ d/(1+r)]) / S(d/(1+r)) |
| H. Franchise deductible d, maximum covered loss u | (1 + r)(E[X ∧ u/(1+r)] − E[X ∧ d/(1+r)]) + d · S(d/(1+r)) | (1 + r)(E[X ∧ u/(1+r)] − E[X ∧ d/(1+r)]) / S(d/(1+r)) + d |
There is really no need to derive the formulas in the second insurance benefit table (Rows E through H) from scratch. Recall the effect of inflation on the expected payments E[X ∧ u] and E[(X − d)_+] (see Relation (2) and Relation (3)), and apply it to the first insurance benefit table. The second table is obtained by inflating the first table by the factor 1 + r, but with the smaller upper limit u/(1 + r) and deductible d/(1 + r). Also note the relation between Row E and Row F (the same as between Row A and Row B) and the relation between Row G and Row H (the same as between Row C and Row D).
One approach to modeling insurance payments is to assume that the unmodified insurance losses come from a catalog of parametric distributions. For certain parametric distributions, the limited expectations have convenient forms; examples are the exponential, Pareto and lognormal distributions. Other distributions do not have a closed form for E[X ∧ d], but the limited expectation can be expressed in terms of functions that can be numerically evaluated. One example is the gamma function.
The table in this link is an inventory of distributions that may be useful in modeling insurance losses. Included in the table are the limited expectations for various distributions. The following table shows the limited expectations for exponential, Pareto and lognormal.
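As a sketch, the limited expectations for these three distributions can be implemented directly. The function names below are hypothetical helpers; the formulas are the standard closed forms for each distribution:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lev_exponential(theta, u):
    """E[X ∧ u] for an exponential loss with mean theta."""
    return theta * (1.0 - math.exp(-u / theta))

def lev_pareto(alpha, theta, u):
    """E[X ∧ u] for a two-parameter Pareto loss (alpha != 1)."""
    return theta / (alpha - 1.0) * (1.0 - (theta / (u + theta)) ** (alpha - 1.0))

def lev_lognormal(mu, sigma, u):
    """E[X ∧ u] for a lognormal loss with parameters mu and sigma."""
    k = (math.log(u) - mu) / sigma
    return math.exp(mu + sigma**2 / 2.0) * Phi(k - sigma) + u * (1.0 - Phi(k))

# Sanity check: as u grows, each limited expectation approaches the unlimited mean E[X]
assert abs(lev_exponential(100.0, 1e8) - 100.0) < 1e-6
assert abs(lev_pareto(3.0, 1000.0, 1e10) - 500.0) < 1e-3        # E[X] = θ/(α−1)
assert abs(lev_lognormal(5.0, 1.0, 1e12) - math.exp(5.5)) < 1e-3  # E[X] = e^{μ+σ²/2}
```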
See the table in this link for the limited expectations of the distributions not shown in the above table. Once E[X ∧ d] is computed or estimated, the expected payment for various types of coverage provisions can be derived.
The first example demonstrates the four categories of expected payments in the table in the preceding section.
Suppose that the loss distribution is described by the PDF with support . Walk through all the expected payments discussed in the above table (Rows A through D) assuming and .
First, the basic calculation.
Four categories of expected payments are derived and are shown in the following table.
|Expected Cost Per Loss||Expected Cost Per Payment|
Suppose that a coverage has an ordinary deductible and suppose that the CDF of the insurance payment per loss is given by:
Determine the expected value and the variance of the insurance cost per loss.
Note that there is a jump in the CDF at 0. Thus there is a point mass at 0. The following is the PDF of the insurance payment.
It is clear that the continuous part of this density is proportional to f(x), the PDF of the exponential distribution with mean 100. Thus this is the PDF of the insurance payment per loss when the coverage has an ordinary deductible of 20 and the loss has an exponential distribution with mean 100. We can use this PDF to calculate the mean and variance of the payment per loss. The mean and variance are:
Note that the calculation of the mean and the second moment is done by multiplying e^(−20/100) by the mean and second moment of the exponential distribution with mean 100.
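The moment calculation in this example can be double-checked by simulation (a sketch; the Monte Carlo assertion uses a deliberately loose tolerance):

```python
import math
import random
import statistics

random.seed(2)
theta, d = 100.0, 20.0            # exponential mean and ordinary deductible from the example
factor = math.exp(-d / theta)     # S(d) = e^{−d/θ} = e^{−0.2}

mean_pl = factor * theta          # E[Y] = e^{−d/θ} · θ ≈ 81.87
second_pl = factor * 2 * theta**2 # E[Y²] = e^{−d/θ} · 2θ²
var_pl = second_pl - mean_pl**2

# Monte Carlo check of the per-loss payment mean
ys = [max(random.expovariate(1 / theta) - d, 0.0) for _ in range(500_000)]
assert abs(statistics.fmean(ys) - mean_pl) < 2.0
```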
You are given the following:
- Losses follow a Pareto distribution with parameters and .
- The coverage has an ordinary deductible of 100.
Determine the PDF and CDF of the payment to the insured per payment. Determine the expected cost per payment.
Note that the setting of this example is identical to Example 3 in this previous post. Since this example only concerns the insurance payment when the loss exceeds the deductible, the insurance payment is the conditional distribution X − 100 | X > 100, where X is the Pareto loss.
The following gives the PDF and CDF and other information of the loss .
The following shows the PDF and CDF for the payment per payment variable .
Note that this PDF and CDF are those of a Pareto distribution with the same shape parameter α and with scale parameter θ + 100. Thus the expected payment per payment and the variance are:
Suppose that losses follow a Pareto distribution with shape parameter and scale parameter . Suppose that an insurance coverage pays each loss subject to an ordinary deductible . Derive a formula for the expected payment per payment .
Let Y^P be the payment per payment. In Example 3, we saw that the CDF and PDF of Y^P are also Pareto, with the same shape parameter α as for X and with scale parameter θ + d (the original scale parameter plus the deductible). Thus the following gives the expected payment per payment: e(d) = (θ + d)/(α − 1).
Note that e(d) is in this case an increasing linear function of the deductible d. The higher the deductible, the larger the expected payment per payment. This is a clear sign that the Pareto distribution is a heavy-tailed distribution.
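A short sketch of this linearity, using hypothetical Pareto parameters:

```python
alpha, theta = 2.5, 10000.0   # hypothetical Pareto shape and scale

def mean_excess(d):
    """Expected payment per payment for a Pareto loss: (theta + d)/(alpha − 1)."""
    return (theta + d) / (alpha - 1)

# The mean excess loss grows linearly with the deductible — a heavy-tail signature
vals = [mean_excess(d) for d in (0, 5000, 10000, 20000)]
assert vals == sorted(vals)
```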
Suppose that losses follow a lognormal distribution with parameters and . An insurance coverage has an ordinary deductible of 100. Compute the expected payment per loss. Compute the expected payment per loss if the deductible is a franchise deductible.
The following calculates E[X] and E[X ∧ 100].
As a result, the expected payment per loss is E[X] − E[X ∧ 100]. If the deductible of 100 is a franchise deductible, the expected payment per loss is obtained by adding an additional insurance payment, which is 100 · P(X > 100). The following is the added payment.
Thus the expected payment per loss with a franchise deductible of 100 is 84.70 + 74.54 = 159.24.
The following practice problem sets are to reinforce the concepts discussed here.
|Practice Problem Set 7|
|Practice Problem Set 8|
|Practice Problem Set 9|
This post has exercises on negative binomial distributions, reinforcing concepts discussed in this previous post. There are several versions of the negative binomial distribution. The exercises reinforce the thought process on how to use these versions, as well as other distributional quantities.
|Practice Problem 6A|
The annual claim frequency for an insured from a large population of insured individuals is modeled by the following probability function.
Determine the following:
|Practice Problem 6B|
The number of claims in a year for an insured from a large group of insureds is modeled by the following model.
The parameter varies from insured to insured. However, it is known that is modeled by the following density function.
Given that a randomly selected insured has at least one claim, determine the probability that the insured has more than one claim.
|Practice Problem 6C|
Suppose that the number of accidents per year per driver in a large group of insured drivers follows a Poisson distribution with mean . The parameter follows a gamma distribution with mean 0.6 and variance 0.24.
Determine the probability that a randomly selected driver from this group will have no more than 2 accidents next year.
|Practice Problem 6D|
Suppose that the random variable follows a negative binomial distribution such that
Determine the mean and variance of .
|Practice Problem 6E|
Suppose that the random variable follows a negative binomial distribution with mean 0.36 and variance 1.44.
|Practice Problem 6F|
A large group of insured drivers is divided into two classes – “good” drivers and “bad” drivers. Seventy-five percent of the drivers are considered “good” drivers and the remaining 25% are considered “bad” drivers. The number of claims in a year for a “good” driver is modeled by a negative binomial distribution with mean 0.5 and variance 0.625. On the other hand, the number of claims in a year for a “bad” driver is modeled by a negative binomial distribution with mean 2 and variance 4.
For a randomly selected driver from this large group, determine the probability that the driver will have 3 claims in the next year.
|Practice Problem 6G|
The number of losses in a year for one insurance policy is the random variable where . The random variable is modeled by a geometric distribution with mean 0.4 and variance 0.56.
What is the probability that the total number of losses in a year for three randomly selected insurance policies is 2 or 3?
|Practice Problem 6H|
The random variable follows a negative binomial distribution. The following gives further information.
Determine and .
|Practice Problem 6I|
Coin 1 is an unbiased coin, i.e. when tossing the coin, the probability of getting a head is 0.5. Coin 2 is a biased coin such that when tossing the coin, the probability of getting a head is 0.6. One of the coins is chosen at random. Then the chosen coin is tossed repeatedly until a head is obtained.
Suppose that the first head is observed in the fifth toss. Determine the probability that the chosen coin is Coin 2.
|Practice Problem 6J|
In a production process, the probability of manufacturing a defective rear view mirror for a car is 0.075. Assume that the quality status of any rear view mirror produced in this process is independent of the status of any other rear view mirror. A quality control inspector is to examine rear view mirrors one at a time to obtain three defective mirrors.
Determine the probability that the third defective mirror is the 10th mirror examined.
|6D||mean = 0.65, variance = 0.975|
This post shows how to work with negative binomial distribution from an actuarial modeling perspective. The negative binomial distribution is introduced as a Poisson-gamma mixture. Then other versions of the negative binomial distribution follow. Specific attention is paid to the thought processes that facilitate calculation involving negative binomial distribution.
Negative Binomial Distribution as a Poisson-Gamma Mixture
Here’s the setting for the Poisson-gamma mixture. Suppose that X | Λ has a Poisson distribution with mean Λ, and that Λ is a random variable that varies according to a gamma distribution with parameters α (shape parameter) and β (rate parameter). Then the following is the unconditional probability function of X.
The distribution described in (1) is one parametrization of the negative binomial distribution (derived here). It has two parameters α and β (coming from the gamma mixing weight). The following is another parametrization.
The distribution described in (2) is obtained when the gamma mixing weight has a shape parameter α and a scale parameter θ. Since the gamma scale parameter and rate parameter are related by θ = 1/β, (2) can be derived from (1) by setting β = 1/θ.
Both (1) and (2) contain the ratio Γ(α + k) / (Γ(α) · k!), which is expressed using the gamma function. The next task is to simplify this ratio using a general notion of the binomial coefficient.
The Poisson-gamma mixture is discussed in this blog post in a companion blog called Topics in Actuarial Modeling.
General Binomial Coefficient
The familiar binomial coefficient is the following:
where the top number n is a positive integer and the bottom number k is a non-negative integer such that k ≤ n. Other notations for the binomial coefficient include C(n, k) and nCk. The right-hand side of the above expression can be simplified by canceling out (n − k)!.
The expression in (4) is obtained by canceling out (n − k)! in (3). Note that n does not have to be an integer for the calculation in (4) to work. The bottom number k has to be a non-negative integer since k! is involved. However, n can be any positive real number as long as n ≥ k.
Thus the expression in (4) gives a new meaning to the binomial coefficient C(n, k), where n is a positive real number and k is a non-negative integer such that n ≥ k.
For example, C(2.5, 2) = (2.5 · 1.5)/2! = 1.875. The thought process is that the last factor in the numerator is obtained by subtracting k − 1 from n. If k = 0, this thought process would not work; for convenience, we define C(n, 0) = 1 when n is a positive real number.
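The falling-factorial calculation in (4) can be sketched directly in code (`gen_binom` is a hypothetical helper name):

```python
import math

def gen_binom(n, k):
    """Binomial coefficient C(n, k), where n may be any positive real number
    and k is a non-negative integer (C(n, 0) = 1 by convention)."""
    num = 1.0
    for i in range(k):
        num *= (n - i)          # n (n − 1) ⋯ (n − k + 1)
    return num / math.factorial(k)

assert gen_binom(5, 2) == math.comb(5, 2)        # agrees with the usual coefficient
assert abs(gen_binom(2.5, 2) - 1.875) < 1e-12    # 2.5 · 1.5 / 2!
assert gen_binom(3.7, 0) == 1.0                  # the k = 0 convention
```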
We now use the binomial coefficient defined in (5) to simplify the ratio Γ(r + k) / (Γ(r) · k!), where r is a positive real number and k is a non-negative integer. We use a key fact about the gamma function: Γ(x + 1) = x · Γ(x). Then for any integer k ≥ 1, we have the following derivation.
The right-hand side of the above expression is precisely the binomial coefficient C(r + k − 1, k) defined in (5). Thus we have the following relation:

Γ(r + k) / (Γ(r) · k!) = C(r + k − 1, k) …… (6)

where k is an integer with k ≥ 1.
Negative Binomial Distribution
With relation (6), the two versions of the Poisson-gamma mixture stated in (1) and (2) can be restated as follows:
The above two parametrizations of the negative binomial distribution are used when information about the Poisson-gamma mixture is known. In (7), the gamma distribution in the Poisson-gamma mixture has shape parameter α and rate parameter β. In (8), the gamma distribution has shape parameter α and scale parameter θ. The following is a standalone version of the negative binomial distribution.
In (9), the negative binomial distribution has two parameters r and p, where 0 < p < 1. In this parametrization, the parameter p is simply a real number between 0 and 1. It can be viewed as a probability; in fact, this is the case when the parameter r is an integer. Version (9) can be restated as follows when r is an integer.
In version (10), the parameters are r (a positive integer) and a real number p with 0 < p < 1. Since r is an integer, the usual binomial coefficient appears in the probability function.
Version (10) has a natural interpretation. A Bernoulli trial is a random experiment that results in one of two distinct outcomes – success or failure. Suppose that the probability of success is p in each trial. Perform a series of independent Bernoulli trials until exactly r successes occur, where r is a fixed positive integer. Let the random variable X be the number of failures before the occurrence of the rth success. Then (10) is the probability function of the random variable X.
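This interpretation lends itself to a direct simulation check of version (10); the sketch below uses hypothetical values of r and p:

```python
import random

random.seed(3)
r, p = 3, 0.4    # hypothetical: count failures before the 3rd success

def nb_sample():
    """Run Bernoulli trials until r successes; return the number of failures."""
    failures = successes = 0
    while successes < r:
        if random.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

xs = [nb_sample() for _ in range(200_000)]
mean = sum(xs) / len(xs)
# The negative binomial mean in this parametrization is r(1 − p)/p = 3 · 0.6/0.4 = 4.5
assert abs(mean - 4.5) < 0.1
```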
A special case of (10) arises when the parameter r is 1; the negative binomial distribution then has a special name.
The distribution in (11) is said to be a geometric distribution with parameter p. The random variable defined by (11) can be interpreted as the number of failures before the occurrence of the first success when performing a series of independent Bernoulli trials. Another important property of the geometric distribution is that it is the only discrete distribution with the memoryless property. As a result, the survival function of the geometric distribution is P(X ≥ k) = (1 − p)^k, where k is a non-negative integer.
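The memoryless property can be verified directly from this survival function (a sketch with a hypothetical success probability):

```python
p = 0.3                      # hypothetical success probability

def sf(k):
    """P(X ≥ k): the first k Bernoulli trials are all failures."""
    return (1 - p) ** k

# Memoryless property: P(X ≥ j + k | X ≥ j) = P(X ≥ k)
j, k = 4, 6
assert abs(sf(j + k) / sf(j) - sf(k)) < 1e-12
```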
More about Negative Binomial Distribution
The probability functions of various versions of the negative binomial distribution have been developed in (1), (2), (7), (8), (9), (10) and (11). Other distributional quantities can be derived from the Poisson-gamma mixture. We derive the mean and variance of the negative binomial distribution.
Suppose that the negative binomial distribution is that of version (8). The conditional random variable X | Λ has a Poisson distribution with mean Λ, and the random variable Λ has a gamma distribution with shape parameter α and scale parameter θ. Note that E[Λ] = αθ and Var[Λ] = αθ². Furthermore, we have the conditional mean E[X | Λ] = Λ and the conditional variance Var[X | Λ] = Λ.
The following derives the mean and variance of X, using the law of total expectation and the law of total variance.

E[X] = E[E[X | Λ]] = E[Λ] = αθ

Var[X] = E[Var[X | Λ]] + Var[E[X | Λ]] = E[Λ] + Var[Λ] = αθ + αθ² = αθ(1 + θ)
The above mean and variance are for the parametrization in (8). To obtain the mean and variance for the other parametrizations, make the necessary translation. For example, to get (7), plug θ = 1/β into the above mean and variance. For (9), let p = 1/(1 + θ); then solve for θ and plug that into the above mean and variance. Version (10) has the same formulas as (9). To get (11), set r = 1. The following table lists the negative binomial mean and variance.
| Version | Mean | Variance |
|---|---|---|
| (7) | α/β | α(β + 1)/β² |
| (8) | αθ | αθ(1 + θ) |
| (9) and (10) | r(1 − p)/p | r(1 − p)/p² |
| (11) | (1 − p)/p | (1 − p)/p² |
The table shows that the variance of the negative binomial distribution is greater than its mean (regardless of the version). This stands in contrast with the Poisson distribution whose mean and the variance are equal. Thus the negative binomial distribution would be a suitable model in situations where the variability of the empirical data is greater than the sample mean.
Modeling Claim Count
The negative binomial distribution is a discrete probability distribution on the non-negative integers 0, 1, 2, …. Thus it can be used as a counting distribution, i.e. a model for the number of events of interest that occur at random. For example, the distribution described above can be a good model for the frequency of loss, i.e. the random variable counting the number of losses, either arising from a portfolio of insureds or from a particular insured in a given period of time.
The Poisson-gamma model has a great deal of flexibility. Consider a large population of individual insureds. The number of losses (or claims) in a year for each insured has a Poisson distribution with mean . From insured to insured, there is uncertainty in the mean annual claim frequency . However, the random variable varies according to a gamma distribution. As a result, the annual number of claims for an “average” insured or a randomly selected insured from the population will follow a negative binomial distribution.
Thus in a Poisson-gamma model, the claim frequency for an individual in the population follows a Poisson distribution with unknown gamma mean. The weighted average of these conditional Poisson claim frequencies is a negative binomial distribution. Thus the average claim frequency over all individuals has a negative binomial distribution.
The table in the preceding section shows that the variance of the negative binomial distribution is greater than the mean. This is in contrast to the fact that the variance and the mean of a Poisson distribution are equal. Thus the unconditional claim frequency is more dispersed than its conditional distributions. The increased variance of the negative binomial distribution reflects the uncertainty in the parameter of the Poisson mean across the population of insureds. The uncertainty in the parameter variable has the effect of increasing the unconditional variance of the mixture distribution of . Recall that the variance of a mixture distribution has two components, the weighted average of the conditional variances and the variance of the conditional means. The second component represents the additional variance introduced by the uncertainty in the parameter .
We present two examples. More examples to come at the end of the post.
For a given insured driver in a large portfolio of insured drivers, the number of collision claims in a year has a Poisson distribution with mean . The Poisson mean follows a gamma distribution with mean 4 and variance 80. For a randomly selected insured driver from this portfolio,
- what is the probability of having exactly 2 collision claims in the next year?
- what is the probability of having at most one collision claim in the next year?
The number of collision claims in a year is a Poisson-gamma mixture and thus is a negative binomial distribution. From the given gamma mean and variance, we can determine the parameters of the gamma distribution. In this example, we use the parametrization of (8). Expressing the gamma mean and variance in terms of the shape and scale parameters, we have and . These two equations give and . The probabilities are calculated based on (8).
The answer for the first question is . The answer for the second question is . Thus there is close to a 65% chance that an insured driver has at most one claim in a year.
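The calculation can be sketched in code. Assuming parametrization (8) uses gamma shape parameter $r$ and scale parameter $\theta$ (these symbols are assumptions, since the notation is not displayed above), the given gamma mean 4 and variance 80 yield $r = 0.2$ and $\theta = 20$:

```python
from math import gamma

# Negative binomial from the Poisson-gamma mixture (parametrization (8)):
# P(N = k) = Gamma(r + k) / (Gamma(r) * k!) * (1/(1+theta))^r * (theta/(1+theta))^k
def nb_pmf(k, r, theta):
    coeff = gamma(r + k) / (gamma(r) * gamma(k + 1))
    return coeff * (1 / (1 + theta)) ** r * (theta / (1 + theta)) ** k

# Gamma mean 4 and variance 80: solve r*theta = 4 and r*theta^2 = 80.
r, theta = 0.2, 20

p2 = nb_pmf(2, r, theta)                          # exactly 2 collision claims
p_at_most_one = nb_pmf(0, r, theta) + nb_pmf(1, r, theta)

print(round(p2, 4))             # 0.0592
print(round(p_at_most_one, 4))  # 0.6476, close to 65%
```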
For an automobile insurance company, the distribution of the annual number of claims for a policyholder chosen at random is modeled by a negative binomial distribution that is a Poisson-gamma mixture. The gamma distribution in the mixture has a shape parameter of and a scale parameter of . What is the probability that a randomly selected policyholder has more than two claims in a year?
Since the gamma shape parameter is 1, the unconditional number of claims in a year is a geometric distribution with parameter . The following is the desired probability.
A Recursive Formula
The probability functions described in (1), (2), (7), (8), (9), (10) and (11) describe clearly how the negative binomial probabilities are calculated based on the two given parameters. The probabilities can also be calculated recursively. Let where . We introduce a recursive formula that allows us to compute the value if is known. The following is the form of the recursive formula.
In (12), the numbers and are constants. Note that the formula (12) calculates probabilities for all . It turns out that the initial probability is determined by the constants and . Thus the constants and completely determine the probability distribution represented by . Any discrete probability distribution that satisfies this recursive relation is said to be a member of the (a,b,0) class of distributions.
We show that the negative binomial distribution is a member of the (a,b,0) class of distributions. First, assume that the negative binomial distribution conforms to the parametrization in (8) with parameters and . Then let and be defined as follows:
Let the initial probability be . We claim that the probabilities generated by the formula (12) are identical to the ones calculated from (8). To see this, let’s calculate a few probabilities using the formula.
The above derivation demonstrates that formula (12) generates the same probabilities as (8). By adjusting the constants and , the recursive formula can also generate the probabilities in the other versions of the negative binomial distribution. For the negative binomial version (9) with parameters and , the and should be defined as follows:
With the initial probability , the recursive formula (12) will generate the same probabilities as those from version (9).
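As a sketch of the recursion, assuming parametrization (8) uses parameters $r$ and $\theta$ (with $a = \theta/(1+\theta)$, $b = (r-1)\theta/(1+\theta)$, and $p_0 = (1+\theta)^{-r}$; the illustrative values below are assumptions), the recursive probabilities match the direct probability function:

```python
from math import gamma

# (a,b,0) recursion: p_k = (a + b/k) * p_{k-1}, with p_0 given.
r, theta = 2.0, 3.0                      # illustrative parameters
a = theta / (1 + theta)
b = (r - 1) * theta / (1 + theta)

probs = [(1 + theta) ** (-r)]            # p_0 = (1 + theta)^(-r)
for k in range(1, 8):
    probs.append((a + b / k) * probs[-1])

# Direct probability function from (8), for comparison
def nb_pmf(k, r, theta):
    return (gamma(r + k) / (gamma(r) * gamma(k + 1))
            * (1 + theta) ** (-r) * (theta / (1 + theta)) ** k)

# The recursion reproduces the direct probabilities exactly.
assert all(abs(probs[k] - nb_pmf(k, r, theta)) < 1e-12 for k in range(8))
```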
Suppose that the probability that an insured will produce claims during the next exposure period is
where . Furthermore, the parameter varies according to a distribution with the following density function:
What is the probability that a randomly selected insured will produce more than 2 claims during the next exposure period?
Note that the claim frequency for an individual insured has a Poisson distribution with mean . The given density function for the parameter is that of a gamma distribution with shape parameter and rate parameter . Thus the number of claims in an exposure period for a randomly selected (or “average”) insured will have a negative binomial distribution. In this case the parametrization (7) is the most useful one to use.
The following calculation gives the relevant probabilities to answer the question.
Summing the three probabilities gives . Then . There is a 19.42% chance that a randomly selected insured will have more than 2 claims in an exposure period.
The number of claims in a year for each insured in a large portfolio has a Poisson distribution with mean . The parameter follows a gamma distribution with mean 0.75 and variance 0.5625.
Determine the proportion of insureds that are expected to have less than 1 claim in a year.
Setting and gives and . Thus the parameter follows a gamma distribution with shape parameter and scale parameter . This is an exponential distribution with mean 0.75. The problem asks for the proportion of insureds with . Thus the answer is . Thus about 74% of the insured population are expected to have less than 1 claim in a year.
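The stated answer of about 74% is consistent with evaluating the exponential probability that the parameter is below 1, i.e. $P(\Lambda < 1) = 1 - e^{-1/0.75}$ (this reading of the elided step is an assumption); a minimal check:

```python
from math import exp

# Gamma mean 0.75 and variance 0.5625 give shape 1 and scale 0.75
# (solve alpha*theta = 0.75 and alpha*theta^2 = 0.5625), i.e. an
# exponential distribution with mean 0.75.
theta = 0.75
p = 1 - exp(-1 / theta)   # P(Lambda < 1) under the exponential mixing distribution
print(round(p, 4))        # 0.7364, about 74%
```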
Suppose that the number of claims in a year for an insured has a Poisson distribution with mean . The random variable follows a gamma distribution with shape parameter and scale parameter .
One thousand insureds are randomly selected and are to be observed for a year. Determine the number of selected insureds expected to have exactly 3 claims by the end of the one-year observed period.
With this being a Poisson-gamma mixture, the number of claims in a year for a randomly selected insured has a negative binomial distribution. Using (8) and based on the gamma parameters given, the following is the probability function of negative binomial distribution.
The following gives the calculation for .
With , about 149 of the randomly selected insureds are expected to have exactly 3 claims in the observation period.
Suppose that the annual claims frequency for an insured in a large portfolio of insureds has a distribution that is in the (a,b,0) class. Let be the probability that an insured has claims in a year.
Given that , and , determine the probability that an insured has no claims in a one-year period.
Given , and , find . Based on the recursive relation (12), we have the following two equations of and .
Solving these two equations gives and . Plugging and into the recursive relation gives the answer.
Revised Nov 2, 2018.
This problem set has exercises to reinforce the various parametric continuous probability models discussed in the companion blog on actuarial modeling. Links are given below for the models involved.
This blog post in Topics in Actuarial Modeling has a catalog for continuous models.
|Practice Problem 5A|
Claim amounts for collision damages to insured cars are mutually independent random variables with common probability density function
Three claims are expected to be made. Calculate the expected value of the largest of the three claims.
|Practice Problem 5B|
The lifetime of an electronic device is modeled using the random variable where is an exponential random variable with mean 0.5.
Determine the variance of .
|Practice Problem 5C|
The lifetime (in years) of an electronic device is where and are independent exponentially distributed random variables with mean 3.5.
Determine the probability density function of the lifetime of the electronic device.
|Practice Problem 5D|
The time (in years) until the failure of a machine that is brand new is modeled by a Weibull distribution with shape parameter 1.5 and scale parameter 4.
Calculate the 95th percentile of times to failure of machines that are 2 years old.
|Practice Problem 5E|
The size of a bodily injury claim for an auto insurance policy follows a Pareto Type II Lomax distribution with shape parameter 2.28 and scale parameter 200.
Calculate the proportion of claims that are within one-fourth of a standard deviation of the mean claim size.
|Practice Problem 5F|
Suppose that the size of a claim has the following density function.
where . A coverage pays claims subject to an ordinary deductible of 20.
Determine the expected amount paid by the coverage per claim.
|Practice Problem 5G|
An actuary determines that sizes of claims from a large portfolio of insureds are exponentially distributed. For about 60% of the claims, the claim sizes are modeled by an exponential distribution with mean 1.2. For about 30% of the claims, the claim sizes are modeled by an exponential distribution with mean 2.8. For the remaining 10% of the claims, the claim sizes are considered high claim sizes and are modeled by an exponential distribution with mean 7.5.
Determine the variance of the size of a claim that is randomly selected from this portfolio.
|Practice Problem 5H|
Losses are modeled by a loglogistic distribution with shape parameter and scale parameter . When a loss occurs, an insurance policy reimburses the loss in excess of a deductible of 5.
Determine the 75th percentile of the insurance company reimbursements over all losses.
|Practice Problem 5I|
Losses are modeled by a loglogistic distribution with shape parameter and scale parameter . When a loss occurs, an insurance policy reimburses the loss in excess of a deductible of 5.
Determine the 75th percentile of the insurance company reimbursements over all payments.
|Practice Problem 5J|
Claim sizes for a certain class of auto accidents are modeled by a uniform distribution on the interval . Five accidents are randomly selected.
Determine the expected value of the median of the five accident claims.
|Problem||Links for the relevant distributions|
|5F||Lognormal distribution and limited expectation|
The previous post is a discussion of the Pareto distribution as well as a side-by-side comparison of the two types of Pareto distribution. This post has several practice problems to reinforce the concepts in the previous post.
|Practice Problem 4A|
The random variable is an insurer’s annual hurricane-related loss. Suppose that the density function of is:
Calculate the inter-quartile range of annual hurricane-related loss.
Note that the inter-quartile range of a random variable is the difference between the first quartile (25th percentile) and the third quartile (75th percentile).
|Practice Problem 4B|
|Claim size for an auto insurance coverage follows a Pareto Type II Lomax distribution with mean 7.5 and variance 243.75. Determine the probability that a randomly selected claim will be greater than 10.|
|Practice Problem 4C|
|Losses follow a Pareto Type II distribution with shape parameter and scale parameter . The value of the mean excess loss function at is 32. The value of the mean excess loss function at is 48. Determine the value of the mean excess loss function at .|
|Practice Problem 4D|
For a large portfolio of insurance policies, the underlying distribution for losses in the current year has a Pareto Type II distribution with shape parameter and scale parameter . All losses in the next year are expected to increase by 5%. For the losses in the next year, determine the value-at-risk at the security level 95%.
|Practice Problem 4E (Continuation of 4D)|
For a large portfolio of insurance policies, the underlying distribution for losses in the current year has a Pareto Type II distribution with shape parameter and scale parameter . All losses in the next year are expected to increase by 5%. For the losses in the next year, determine the tail-value-at-risk at the security level 95%.
|Practice Problem 4F|
For a large portfolio of insurance policies, losses follow a Pareto Type II distribution with shape parameter and scale parameter . An insurance policy covers losses subject to an ordinary deductible of 500. Given that a loss has occurred, determine the average amount paid by the insurer.
|Practice Problem 4G|
The claim severity for an auto liability insurance coverage is modeled by a Pareto Type I distribution with shape parameter and scale parameter . The insurance coverage pays up to a limit of 1200 per claim. Determine the expected insurance payment under this coverage for one claim.
|Practice Problem 4H|
For an auto insurance company, liability losses follow a Pareto Type I distribution. Let be the random variable for these losses. Suppose that and . Determine .
|Practice Problem 4I|
For a property and casualty insurance company, losses follow a mixture of two Pareto Type II distributions with equal weights, with the first Pareto distribution having parameters and and the second Pareto distribution having parameters and . Determine the value-at-risk at the security level of 95%.
|Practice Problem 4J|
The claim severity for a line of property liability insurance is modeled as a mixture of two Pareto Type II distributions with the first distribution having and and the second distribution having and . These two distributions have equal weights. Determine the limited expected value of claim severity at claim size 1000.
This post complements an earlier discussion of the Pareto distribution in a companion blog (found here). This post gives a side-by-side comparison of the Pareto type I distribution and Pareto type II Lomax distribution. We discuss the calculations of the mathematical properties shown in the comparison. Several of the properties in the comparison indicate that Pareto distributions (both Type I and Type II) are heavy tailed distributions. The properties presented in the comparison (and the thought processes behind them) are a good resource for studying actuarial exams.
The following table gives a side-by-side comparison for Pareto Type I and Pareto Type II.
One item that is not indicated in the table is for Pareto Type II, which is given below.
where is the incomplete beta function, which is defined as follows:
for any , and .
The above table describes two distributions that are called Pareto (Type I and Type II Lomax). Each of them has two parameters – (shape parameter) and (scale parameter). The support of Pareto Type I is the interval . In other words, Pareto type I distribution can only take on real numbers greater than the scale parameter . On the other hand, the support of Pareto Type II is the interval . So a Pareto Type II distribution can take on any positive real numbers.
The two distributions are mathematically related. Judging from the PDF, it is clear that the PDF of Pareto Type II is the result of shifting the Type I PDF to the left by the magnitude of (the same can be said about the CDF and survival function). More specifically, let be a random variable that follows a Pareto Type I distribution with parameters and . Let . It is straightforward to verify that has a Pareto Type II distribution, i.e. its CDF and other distributional quantities are the same as the ones shown in the above table under Pareto Type II. When the parameters are the same, the two distributions are essentially the same, in that each one is the result of shifting the other by the amount .
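A minimal numerical check of the shifting relationship, with illustrative parameters $\alpha = 3$ and $\theta = 100$ (assumed values for the sketch):

```python
# If X is Pareto Type I with shape alpha and scale theta, then Y = X - theta
# is Pareto Type II (Lomax) with the same parameters.
alpha, theta = 3.0, 100.0

def sf_pareto1(x):            # Pareto Type I survival function, x >= theta
    return (theta / x) ** alpha

def sf_pareto2(y):            # Pareto Type II (Lomax) survival function, y >= 0
    return (theta / (y + theta)) ** alpha

# Shifting Type I to the left by theta reproduces the Type II survival function.
for y in (0.0, 50.0, 250.0, 1000.0):
    assert abs(sf_pareto1(y + theta) - sf_pareto2(y)) < 1e-12
```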
A further indication that the two types are of the same distributional shape is that the variances are identical. Note that shifting a distribution to the left (or right) by a constant does not change the variance.
Since the two Pareto Types are the same distribution (except for the shifting), they share similar mathematical properties. For example, both distributions are heavy tailed distributions. In other words, they significantly put more probabilities on larger values. This point is discussed in more details below.
First, the calculations. The moments are determined by the integral where is the PDF of the distribution in question. Because the PDF for Pareto Type I is easy to work with, almost all the items under Pareto Type I are quite accessible. For example, item 8c for Pareto Type I is calculated by the following integral.
In the remaining discussion, the focus is on Pareto Type II calculations.
By definition, the Pareto th moment is the integral where is the Pareto Type II PDF. However, this integral is difficult to evaluate directly. The best way to evaluate the moments in row 5 of the above table is to use the fact that the Pareto Type II distribution is a mixture of exponential distributions with gamma mixing weight (see Example 2 here). Thus the moments of Pareto Type II can be obtained by integrating the conditional th moment of the exponential distribution with gamma weight. The following shows the calculation.
In the above derivation, the conditional is assumed to have an exponential distribution with mean . The random variable in turn has a gamma distribution with shape parameter and rate parameter . The integrand in the integral in the second-to-last step is a gamma density, making the value of that integral 1.0. When is an integer, can be simplified as indicated in row 5.
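The resulting moment formula can be sanity-checked against direct numerical integration. Assuming the Lomax parameters are denoted $\alpha$ (shape) and $\theta$ (scale), the integer moments are $E[X^k] = \theta^k \, k! \, \Gamma(\alpha-k)/\Gamma(\alpha)$ for $k < \alpha$; illustrative values $\alpha = 4$, $\theta = 100$ are assumptions:

```python
from math import gamma

alpha, theta = 4.0, 100.0

def pdf(x):
    # Lomax density: f(x) = alpha * theta^alpha / (x + theta)^(alpha + 1)
    return alpha * theta ** alpha / (x + theta) ** (alpha + 1)

def moment_formula(k):
    # E[X^k] = theta^k * Gamma(k+1) * Gamma(alpha-k) / Gamma(alpha), for k < alpha
    return theta ** k * gamma(k + 1) * gamma(alpha - k) / gamma(alpha)

def moment_numeric(k, upper=1_000_000, n=200_000):
    # crude midpoint rule -- adequate as a sanity check
    h = upper / n
    return h * sum((i * h + h / 2) ** k * pdf(i * h + h / 2) for i in range(n))

for k in (1, 2):
    exact = moment_formula(k)
    assert abs(moment_numeric(k) - exact) / exact < 1e-2
```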
The next calculation is the mean excess loss. It is the conditional expected value . If is an insurance loss and is some kind of threshold (e.g. the deductible in an insurance policy that covers this loss), then is the expected loss in excess of the threshold given that the loss exceeds . If is the lifetime of an individual, then is the expected remaining lifetime given that the individual has survived to age .
The expected value can be calculated by the integral . This integral is not easy to evaluate when is a Pareto Type II PDF. Fortunately, there is another way to handle this calculation. The key idea is that if has a Pareto Type II distribution with parameters and (as described in the table), the conditional random variable also has a Pareto Type II distribution, this time with parameters and . The mean of a Pareto Type II distribution is always the ratio of the scale parameter to the shape parameter less one. Thus the mean of is as indicated in row 7 of the table.
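The mean excess loss for Pareto Type II, $e(d) = (\theta + d)/(\alpha - 1)$, can be checked numerically via $e(d) = \int_d^\infty S(x)\,dx \,/\, S(d)$; the values $\alpha = 3$ and $\theta = 1000$ below are illustrative assumptions:

```python
alpha, theta = 3.0, 1000.0

def sf(x):
    # Lomax survival function: S(x) = (theta / (x + theta))^alpha
    return (theta / (x + theta)) ** alpha

def mean_excess_numeric(d, upper=2_000_000, n=200_000):
    # e(d) = (integral of S from d to infinity) / S(d), midpoint rule
    h = (upper - d) / n
    integral = h * sum(sf(d + i * h + h / 2) for i in range(n))
    return integral / sf(d)

# The numeric values agree with (theta + d)/(alpha - 1), an increasing function of d.
for d in (0.0, 500.0, 2000.0):
    exact = (theta + d) / (alpha - 1)
    assert abs(mean_excess_numeric(d) - exact) / exact < 1e-2
```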
The limited loss is defined as follows.
One interpretation is that it is the insurance payment when the insurance policy has an upper cap on benefit. If the loss is below the cap , the insurance policy pays the loss in full. If the loss exceeds the cap , the policy only pays for the loss up to the limit . The expected insurance payment is said to be the limited expectation. For Pareto Type II, the first moment can be evaluated by the following integral.
Integrating using a change of variable will yield the results in row 8a and row 8b in the table, i.e. the cases for and . A more interesting result is 8c, which is the th moment of the variable . The integral for this expectation can be expressed using the incomplete beta function. The following evaluates the .
Further transform the integral in the above calculation by the change of variable using .
The integrand in the last integral is the probability density function of the beta distribution with parameters and . Thus is as indicated in 8c.
Now we consider two risk measures – value-at-risk (VaR) and tail-value-at-risk (TVaR). The value-at-risk of a random variable at security level , denoted by , is the th percentile of . Thus VaR is a fancy name for percentiles. Setting the Pareto Type II CDF equal to gives the VaR indicated in row 9 of the table. In other words, solving the following equation for gives the th percentile for Pareto Type II.
The tail-value-at-risk of a random variable at the security level , denoted by , is the expected value of given that it exceeds . Thus . Letting , the following integral gives the tail-value-at-risk for Pareto Type II. The integral is evaluated by the change of variable .
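The two risk measures can be sketched in code. For Lomax parameters $\alpha$ and $\theta$ (the illustrative values below are assumptions), $VaR_p = \theta\big((1-p)^{-1/\alpha} - 1\big)$, and the TVaR is the VaR plus the mean excess loss at the VaR:

```python
alpha, theta, p = 2.0, 1000.0, 0.95      # illustrative parameters

# VaR_p: solve S(x) = 1 - p for the Lomax survival function
var_p = theta * ((1 - p) ** (-1 / alpha) - 1)

# TVaR_p = VaR_p + mean excess loss at VaR_p = VaR_p + (theta + VaR_p)/(alpha - 1)
tvar_p = var_p + (theta + var_p) / (alpha - 1)

# Sanity check: the survival function at VaR_p equals 1 - p.
sf = (theta / (var_p + theta)) ** alpha
assert abs(sf - (1 - p)) < 1e-12

print(round(var_p, 2))   # 3472.14
print(round(tvar_p, 2))  # 7944.27
```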
Several properties in the above table show that the Pareto distribution (both types) is a heavy-tailed distribution. When a distribution significantly puts more probabilities on larger values, the distribution is said to be a heavy tailed distribution (or said to have a larger tail weight). There are four ways to look for indication that a distribution is heavy tailed.
- Existence of moments.
- Hazard rate function.
- Mean excess loss function.
- Speed of decay of the survival function to zero.
Tail weight is a relative concept – distribution A has a heavier tail than distribution B. The first three points are ways to tell heavy tails without a reference distribution. Point number 4 is comparative.
Existence of moments
For a given random variable , the existence of all moments , for all positive integers , indicates a light (right) tail for the distribution of . If positive moments exist only up to a certain positive integer, it is an indication that the distribution has a heavy right tail.
Note that the existence of the Pareto higher moments is capped by the shape parameter (both Type I and Type II). Thus if , only exists for . In particular, the Pareto Type II mean does not exist for . If the Pareto distribution is to model a random loss, and if the mean is infinite (when ), the risk is uninsurable! On the other hand, when , the Pareto variance does not exist. This shows that for a heavy tailed distribution, the variance may not be a good measure of risk.
Hazard rate function
The hazard rate function of a random variable is defined as the ratio of the density function and the survival function.
The hazard rate is called the force of mortality in a life contingency context and can be interpreted as the rate that a person aged will die in the next instant. The hazard rate is called the failure rate in reliability theory and can be interpreted as the rate that a machine will fail at the next instant given that it has been functioning for units of time. It follows that the hazard rate of Pareto Type I is and that of Type II is . They are both decreasing functions of .
Another indication of heavy tail weight is that the distribution has a decreasing hazard rate function. Thus the Pareto distribution (both types) is considered to be a heavy tailed distribution based on its decreasing hazard rate function.
One key characteristic of hazard rate function is that it can generate the survival function.
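The generating relationship is the standard one for a nonnegative random variable with hazard rate $h$:

```latex
S(x) = \exp\left( -\int_0^x h(t)\,dt \right) = e^{-H(x)}, \qquad H(x) = \int_0^x h(t)\,dt
```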
Thus if the hazard rate function is decreasing in , then the survival function will decay more slowly to zero. To see this, let , which is called the cumulative hazard rate function. As indicated above, the survival function can be generated by . If is decreasing in , is smaller than where is constant in or increasing in . Consequently is decaying to zero much more slowly than . Thus a decreasing hazard rate leads to a slower speed of decay to zero for the survival function (a point discussed below).
In contrast, the exponential distribution has a constant hazard rate function, making it a medium tailed distribution. As explained above, any distribution having an increasing hazard rate function is a light tailed distribution.
The mean excess loss function
Suppose that a property owner is exposed to a random loss . The property owner buys an insurance policy with a deductible such that the insurer will pay a claim in the amount of if a loss occurs with . The insurer will pay nothing if the loss is below the deductible. Whenever a loss is above , what is the average claim the insurer will have to pay? This is one way to look at the mean excess loss function, which represents the expected excess loss over a threshold conditional on the event that the threshold has been exceeded. Thus the mean excess loss function is , a function of the deductible .
According to row 7 in the above table, the mean excess loss for Pareto Type I is and for Type II is . They are both increasing functions of the deductible ! This means that the larger the deductible, the larger the expected claim if such a large loss occurs! If a random loss is modeled by such a distribution, it is a catastrophic risk situation.
In general, an increasing mean excess loss function is an indication of a heavy tailed distribution. On the other hand, a decreasing mean excess loss function indicates a light tailed distribution. The exponential distribution has a constant mean excess loss function and is considered a medium tailed distribution.
Speed of decay of the survival function to zero
The survival function captures the probability of the tail of a distribution. If the survival function of a distribution decays slowly to zero (equivalently, the CDF goes slowly to one), it is another indication that the distribution is heavy tailed. This point was touched on in the discussion of the hazard rate function.
The following is a comparison of a Pareto Type II survival function and an exponential survival function. The Pareto survival function has parameters ( and ). The two survival functions are set to have the same 75th percentile, which is . The following table is a comparison of the two survival functions.
Note that at large values, the Pareto right tail retains much more probability. This is also confirmed by the ratio of the two survival functions, with the ratio approaching infinity. If a random loss is a heavy tailed phenomenon described by the above Pareto survival function ( and ), then the above exponential survival function is woefully inadequate as a model for this phenomenon even though it may be a good model for describing the loss up to the 75th percentile. It is the large right tail that is problematic (and catastrophic)!
Since the Pareto survival function and the exponential survival function have closed forms, we can also look at their ratio.
In the above ratio, the numerator has an exponential function with a positive quantity in the exponent, while the denominator has a polynomial in . This ratio goes to infinity as .
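Since the parameters in the comparison above are not displayed, here is a sketch with assumed values $\alpha = 2$ and $\theta = 100$, calibrating the exponential distribution to share the Pareto 75th percentile:

```python
from math import exp, log

alpha, theta = 2.0, 100.0                         # assumed Pareto Type II parameters
x75 = theta * ((1 - 0.75) ** (-1 / alpha) - 1)    # Pareto 75th percentile (= 100 here)
mu = x75 / log(4)                                 # exponential mean with the same 75th percentile

def sf_pareto(x):
    return (theta / (x + theta)) ** alpha

def sf_exp(x):
    return exp(-x / mu)

# Both survival functions equal 0.25 at the common 75th percentile ...
assert abs(sf_pareto(x75) - 0.25) < 1e-12
assert abs(sf_exp(x75) - 0.25) < 1e-12

# ... but the ratio Pareto/exponential blows up in the right tail.
ratios = [sf_pareto(x) / sf_exp(x) for x in (100, 500, 1000, 2000)]
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))
```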
In general, whenever the ratio of two survival functions diverges to infinity, it is an indication that the distribution in the numerator of the ratio has a heavier tail. When the ratio goes to infinity, the survival function in the numerator is said to decay slowly to zero as compared to the denominator.
The Pareto distribution has many economic applications. Since it is a heavy tailed distribution, it is a good candidate for modeling income above a theoretical value and the distribution of insurance claims above a threshold value.