There are three practice problem sets for the 3-part discussion on the mathematical models of insurance payments – problem set 7, problem set 8 and problem set 9. The problems in those sets focus on calculating expected payments. In this post we present several examples on the variance of the insurance payment. A practice problem set will soon follow.
This post focuses on the insurance payment per loss; the next post is a discussion on the insurance payment per payment.
Coverage with an Ordinary Deductible
To simplify the calculation, the only limit on benefits is the imposition of a deductible. Suppose that the loss amount is the random variable $X$ and the deductible is $d$. Given that a loss has occurred, the insurance policy pays nothing if the loss is below $d$ and pays $X - d$ if the loss exceeds $d$. The payment random variable is denoted by $Y_L$ or $(X - d)_+$ and is explicitly described as follows:

$$Y_L = (X - d)_+ = \begin{cases} 0 & \ \ X < d \\ X - d & \ \ X \ge d \end{cases} \ \ \ \ \ (1)$$
The subscript L in $Y_L$ denotes that this variable is the payment per loss. This means that its mean, $E[Y_L]$, is the average payment over all losses. A related payment variable is $Y_P$, which is defined as follows:

$$Y_P = X - d \mid X > d \ \ \ \ \ (2)$$
The variable $Y_P$ is a truncated variable (any loss that is less than the deductible is not considered) and is also shifted (the payment is the loss less the deductible). As a result, $Y_P$ has a conditional distribution. It is conditional on the loss exceeding the deductible. The subscript P in $Y_P$ indicates that the payment variable is the payment per payment. This means that its mean, $E[Y_P]$, is the average payment over all payments that are made, i.e. the average payment over all losses that are eligible for a claim payment.
The focus of this post is on the calculation of $E[Y_L]$ (the average payment over all losses) and $Var(Y_L)$ (the variance of the payment per loss). These two quantities are important in the actuarial pricing of insurance. If the policy were to pay each loss in full, the average amount paid would be $E[X]$, the mean of the loss distribution. Imposing a deductible, the average amount paid is $E[Y_L]$, which is less than $E[X]$. Likewise, $Var(Y_L)$, the variance of the payment per loss, is smaller than $Var(X)$, the variance of the loss distribution. Thus imposing a deductible not only reduces the amount paid by the insurer, it also reduces the variability of the amount paid.
The calculation of $E[Y_L]$ and $Var(Y_L)$ can be done by using the pdf of the original loss random variable $X$.

$$E[Y_L] = \int_d^\infty (x - d) \, f(x) \, dx \ \ \ \ \ (3)$$

$$E[Y_L^2] = \int_d^\infty (x - d)^2 \, f(x) \, dx \ \ \ \ \ (4)$$

The variance is then $Var(Y_L) = E[Y_L^2] - E[Y_L]^2$.
The above calculation assumes that the loss $X$ is a continuous random variable. If the loss is discrete, simply replace the integrals with summations. The calculation in (3) and (4) can also be done by integrating the pdf of the payment variable $Y_L$. The variable $Y_L$ is a mixed distribution: it places a point mass of $F(d)$ at 0 and has density $f_{Y_L}(y) = f(y + d)$ for $y > 0$, where $F$ and $f$ are the cdf and pdf of $X$. Thus:

$$E[Y_L] = \int_0^\infty y \, f(y + d) \, dy \ \ \ \ \ (7)$$

$$E[Y_L^2] = \int_0^\infty y^2 \, f(y + d) \, dy \ \ \ \ \ (8)$$
It will be helpful to also consider the pdf of the payment per payment variable $Y_P$, which is obtained by dividing by the probability of a payment:

$$f_{Y_P}(y) = \frac{f(y + d)}{S(d)}, \ \ \ y > 0$$

where $S(d) = 1 - F(d)$ is the survival function of $X$.
We show that there are three different ways to calculate $E[Y_L]$ and $Var(Y_L)$.
- Using basic principle.
- Considering $Y_L$ as a mixture.
- Considering $Y_L$ as a compound distribution.
Using basic principle refers to using (3) and (4) or (7) and (8). The second approach is to treat $Y_L$ as a mixture of a point mass at 0 with weight $F(d)$ and the payment per payment $Y_P$ with weight $1 - F(d)$, so that $E[Y_L] = [1 - F(d)] \, E[Y_P]$ and $E[Y_L^2] = [1 - F(d)] \, E[Y_P^2]$. The third approach is to treat $Y_L$ as a compound distribution where the number of claims $N$ is a Bernoulli distribution with $p = 1 - F(d)$ and the severity is the payment $Y_P$, so that the compound variance formula $Var(Y_L) = E[N] \, Var(Y_P) + Var(N) \, E[Y_P]^2$ applies. We demonstrate these approaches with a series of examples.
Example 1
The random loss $X$ has an exponential distribution with mean 50. A coverage with a deductible of 25 is purchased to cover this loss. Calculate the mean and variance of the insurance payment per loss.
We demonstrate the calculation using the three approaches discussed above. The following gives the calculation based on basic principles.

$$E[Y_L] = \int_{25}^\infty (x - 25) \, \frac{1}{50} e^{-x/50} \, dx = 50 e^{-0.5} = 30.33$$

$$E[Y_L^2] = \int_{25}^\infty (x - 25)^2 \, \frac{1}{50} e^{-x/50} \, dx = 5000 e^{-0.5} = 3032.65$$

$$Var(Y_L) = 5000 e^{-0.5} - \left(50 e^{-0.5}\right)^2 = 2112.95$$
In the above calculation, we perform a change of variable via $u = x - 25$. We now do the second approach. Note that the variable $Y_P$ also has an exponential distribution with mean 50 (this is due to the memoryless property of the exponential distribution). The point mass at 0 has weight $1 - e^{-0.5}$ and the variable $Y_P$ has weight $e^{-0.5}$, since $P(X > 25) = e^{-25/50} = e^{-0.5}$.

$$E[Y_L] = (1 - e^{-0.5}) \cdot 0 + e^{-0.5} \cdot 50 = 30.33$$

$$E[Y_L^2] = (1 - e^{-0.5}) \cdot 0 + e^{-0.5} \cdot 5000 = 3032.65$$

$$Var(Y_L) = 3032.65 - 30.33^2 = 2112.95$$
In the third approach, the frequency variable $N$ is Bernoulli with $p = e^{-0.5}$ and $1 - p = 1 - e^{-0.5}$. The severity variable is $Y_P$, which is exponential with mean 50. The following calculates the compound variance.

$$Var(Y_L) = E[N] \, Var(Y_P) + Var(N) \, E[Y_P]^2 = e^{-0.5} \cdot 2500 + e^{-0.5} (1 - e^{-0.5}) \cdot 2500 = 2112.95$$
Note that the average payment per loss is $E[Y_L] = 30.33$, a substantial reduction from the mean of 50 if the policy pays each loss in full. The standard deviation of $Y_L$ is $\sqrt{2112.95} = 45.97$, which is a reduction from 50, the standard deviation of the original loss distribution. Clearly, imposing a deductible (or other limits on benefits) has the effect of reducing risk for the insurer.
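As a quick numerical check of this example, the three approaches can be carried out in a short script. This is a sketch using only the Python standard library; the function and variable names are ours, not from the post.

```python
import math

# Example 1: X ~ exponential with mean 50, ordinary deductible d = 25
mean, d = 50.0, 25.0
pdf = lambda x: math.exp(-x / mean) / mean
S = math.exp(-d / mean)            # P(X > d) = e^(-0.5), probability of a payment

# Approach 1: basic principle -- E[(X - d)^k] over [d, inf) by the midpoint rule
def moment(k, upper=2000.0, n=200000):
    h = (upper - d) / n
    return sum(((i + 0.5) * h) ** k * pdf(d + (i + 0.5) * h) * h for i in range(n))

e1 = moment(1)
var1 = moment(2) - e1 ** 2

# Approach 2: mixture of a point mass at 0 (weight 1 - S) and Y_P (weight S);
# by memorylessness, Y_P is again exponential with mean 50
e2 = S * mean
var2 = S * (2 * mean ** 2) - e2 ** 2

# Approach 3: compound distribution, Bernoulli(S) frequency and severity Y_P:
# Var(Y_L) = E[N] Var(Y_P) + Var(N) E[Y_P]^2
var3 = S * mean ** 2 + S * (1 - S) * mean ** 2

print(round(e2, 2), round(var2, 2))   # approx 30.33 and 2112.95
```

All three variance computations agree to within the numerical integration error.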
When the loss distribution is exponential, approach 2 and approach 3 are quite easy to implement. This is because the payment per payment variable $Y_P$ has the same distribution as the original loss distribution. This happens only for the exponential distribution. If the loss distribution is any other distribution, we must first determine the distribution of $Y_P$ before carrying out the second or the third approach.
We now work two more examples in which the loss distributions are not exponential.
Example 2
The loss distribution is a uniform distribution on the interval $(0, 100)$. The insurance coverage has a deductible of 20. Calculate the mean and variance of the payment per loss.
The following gives the calculation based on basic principles.

$$E[Y_L] = \int_{20}^{100} (x - 20) \, \frac{1}{100} \, dx = \frac{80^2}{200} = 32$$

$$E[Y_L^2] = \int_{20}^{100} (x - 20)^2 \, \frac{1}{100} \, dx = \frac{80^3}{300} = 1706.67$$

$$Var(Y_L) = 1706.67 - 32^2 = 682.67$$
The mean and variance of the loss distribution are 50 and 833.33 (if the coverage pays each loss in full). By imposing a deductible of 20, the mean payment per loss is 32 and the variance of the payment per loss is 682.67. The effect is a reduction of risk since part of the risk is shifted to the policyholder.
We now perform the calculation using the other two approaches. Note that the payment per payment $Y_P$ has a uniform distribution on the interval $(0, 80)$. The following calculates $E[Y_L]$ and $Var(Y_L)$ according to the second approach, with weights $F(20) = 0.2$ and $1 - F(20) = 0.8$.

$$E[Y_L] = 0.2 \cdot 0 + 0.8 \cdot 40 = 32$$

$$E[Y_L^2] = 0.2 \cdot 0 + 0.8 \cdot \frac{80^2}{3} = 1706.67$$

$$Var(Y_L) = 1706.67 - 32^2 = 682.67$$
For the third approach, the frequency $N$ is a Bernoulli variable with $p = 0.8$ and the severity variable is $Y_P$, which is uniform on $(0, 80)$.

$$Var(Y_L) = E[N] \, Var(Y_P) + Var(N) \, E[Y_P]^2 = 0.8 \cdot \frac{80^2}{12} + (0.8)(0.2) \cdot 40^2 = 426.67 + 256 = 682.67$$
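The three approaches for this uniform example can likewise be checked in a short script (a sketch; the variable names are ours):

```python
# Example 2: X ~ uniform on (0, 100), deductible d = 20
a, b, d = 0.0, 100.0, 20.0
w = (b - d) / (b - a)                  # P(X > d) = 0.8

# Approach 1: basic principle, closed-form integrals of (x - d)^k / 100 on [20, 100]
e1 = (b - d) ** 2 / (2 * (b - a))      # 80^2 / 200 = 32
m2 = (b - d) ** 3 / (3 * (b - a))      # 80^3 / 300 = 1706.67
var1 = m2 - e1 ** 2

# Approach 2: mixture; Y_P is uniform on (0, 80)
ep, ep2 = (b - d) / 2, (b - d) ** 2 / 3
e2 = w * ep
var2 = w * ep2 - e2 ** 2

# Approach 3: compound with Bernoulli(w) frequency and severity Y_P
var_p = (b - d) ** 2 / 12
var3 = w * var_p + w * (1 - w) * ep ** 2

print(e1, round(var1, 2))   # 32.0 and 682.67
```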
Example 3
In this example, the loss distribution is a Pareto distribution with shape parameter $\alpha$ and scale parameter $\theta$. The deductible of the coverage is 500. Calculate the mean and variance of the payment per loss.
Note that the payment per payment $Y_P$ also has a Pareto distribution, with parameters $\alpha$ and $\theta + 500$. This information is useful for implementing the second and the third approaches. First, the calculation based on basic principles.
Now, the mixture approach (the second approach). Note that $P(X > 500) = \left( \frac{\theta}{\theta + 500} \right)^\alpha$, the weight assigned to $Y_P$.
Now the third approach, which is to calculate the compound variance.
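Since specific parameter values are not shown above, the sketch below uses illustrative values $\alpha = 3$ and $\theta = 1000$ (our assumption) with the deductible of 500, and checks that the mixture and compound approaches agree with the basic principle computed via the limited expected value identity $E[Y_L] = E[X] - E[X \wedge 500]$:

```python
# Illustrative Pareto Type II (Lomax) loss: alpha = 3, theta = 1000 (assumed values),
# ordinary deductible d = 500
alpha, theta, d = 3.0, 1000.0, 500.0
S = (theta / (theta + d)) ** alpha               # P(X > d) = (2/3)^3

# Y_P is Pareto with parameters alpha and theta + d
tp = theta + d
ep = tp / (alpha - 1)                            # E[Y_P]
ep2 = 2 * tp ** 2 / ((alpha - 1) * (alpha - 2))  # E[Y_P^2]
var_p = ep2 - ep ** 2

# Approach 1: basic principle via E[Y_L] = E[X] - E[X ^ d] (limited expected value)
ex = theta / (alpha - 1)
lim = (theta / (alpha - 1)) * (1 - (theta / (theta + d)) ** (alpha - 1))
e1 = ex - lim

# Approach 2: mixture of a point mass at 0 (weight 1 - S) and Y_P (weight S)
e2 = S * ep
var2 = S * ep2 - e2 ** 2

# Approach 3: compound with Bernoulli(S) frequency and severity Y_P
var3 = S * var_p + S * (1 - S) * ep ** 2

print(round(e2, 2), round(var2, 2))
```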
For some loss distributions, the calculation of the variance of $Y_L$, the payment per loss, can be mathematically difficult. The required integrals for the first approach may not have closed forms. For the second and third approaches to work, we need to have a handle on the payment per payment $Y_P$. In many cases, the pdf of $Y_P$ is not easy to obtain, or its mean and variance are hard to come by (or do not even exist). In such cases, we may have to find the variance numerically. The examples presented here use distributions that are mathematically tractable for all three approaches. In these three examples, the second and third approaches are shortcuts for finding the variance of $Y_L$ because $Y_P$ has a known form and requires minimal extra calculation. For other distributions, the second or third approach may be doable but may not be a shortcut. In that case, any one of the approaches can be used.
The previous post is a discussion of the Pareto distribution as well as a side-by-side comparison of the two types of Pareto distribution. This post has several practice problems to reinforce the concepts in the previous post.
Practice Problem 4A
The random variable $X$ is an insurer’s annual hurricane-related loss. Suppose that the density function of $X$ is:
Calculate the inter-quartile range of annual hurricane-related loss.
Note that the inter-quartile range of a random variable is the difference between the first quartile (25th percentile) and the third quartile (75th percentile).
Practice Problem 4B
Claim size for an auto insurance coverage follows a Pareto Type II (Lomax) distribution with mean 7.5 and variance 243.75. Determine the probability that a randomly selected claim will be greater than 10.
Practice Problem 4C
Losses follow a Pareto Type II distribution with shape parameter and scale parameter . The value of the mean excess loss function at is 32. The value of the mean excess loss function at is 48. Determine the value of the mean excess loss function at .
Practice Problem 4D
For a large portfolio of insurance policies, the underlying distribution for losses in the current year has a Pareto Type II distribution with shape parameter and scale parameter . All losses in the next year are expected to increase by 5%. For the losses in the next year, determine the value-at-risk at the security level 95%.
Practice Problem 4E (Continuation of 4D)
For a large portfolio of insurance policies, the underlying distribution for losses in the current year has a Pareto Type II distribution with shape parameter and scale parameter . All losses in the next year are expected to increase by 5%. For the losses in the next year, determine the tail-value-at-risk at the security level 95%.
Practice Problem 4F
For a large portfolio of insurance policies, losses follow a Pareto Type II distribution with shape parameter and scale parameter . An insurance policy covers losses subject to an ordinary deductible of 500. Given that a loss has occurred, determine the average amount paid by the insurer.
Practice Problem 4G
The claim severity for an auto liability insurance coverage is modeled by a Pareto Type I distribution with shape parameter and scale parameter . The insurance coverage pays up to a limit of 1200 per claim. Determine the expected insurance payment under this coverage for one claim.
Practice Problem 4H
For an auto insurance company, liability losses follow a Pareto Type I distribution. Let $X$ be the random variable for these losses. Suppose that and . Determine .
Practice Problem 4I
For a property and casualty insurance company, losses follow a mixture of two Pareto Type II distributions with equal weights, with the first Pareto distribution having parameters and and the second Pareto distribution having parameters and . Determine the value-at-risk at the security level of 95%.
Practice Problem 4J
The claim severity for a line of property liability insurance is modeled as a mixture of two Pareto Type II distributions with the first distribution having and and the second distribution having and . These two distributions have equal weights. Determine the limited expected value of claim severity at claim size 1000.
This post complements an earlier discussion of the Pareto distribution in a companion blog (found here). This post gives a side-by-side comparison of the Pareto type I distribution and Pareto type II Lomax distribution. We discuss the calculations of the mathematical properties shown in the comparison. Several of the properties in the comparison indicate that Pareto distributions (both Type I and Type II) are heavy tailed distributions. The properties presented in the comparison (and the thought processes behind them) are a good resource for studying actuarial exams.
The following table gives a side-by-side comparison for Pareto Type I and Pareto Type II.
One item that is not indicated in the table is $E[(X \wedge x)^k]$ for Pareto Type II, which is given below.

$$E[(X \wedge x)^k] = \frac{\theta^k \, \Gamma(k+1) \, \Gamma(\alpha - k)}{\Gamma(\alpha)} \, \beta\!\left(k+1, \, \alpha - k; \, \frac{x}{x + \theta}\right) + x^k \left(\frac{\theta}{x + \theta}\right)^\alpha$$
where $\beta(a, b; x)$ is the incomplete beta function, which is defined as follows:

$$\beta(a, b; x) = \frac{\Gamma(a + b)}{\Gamma(a) \, \Gamma(b)} \int_0^x t^{a-1} (1 - t)^{b-1} \, dt$$
for any $a > 0$, $b > 0$ and $0 \le x \le 1$.
The above table describes two distributions that are called Pareto (Type I and Type II Lomax). Each of them has two parameters – $\alpha$ (shape parameter) and $\theta$ (scale parameter). The support of Pareto Type I is the interval $(\theta, \infty)$. In other words, the Pareto Type I distribution can only take on real numbers greater than the scale parameter $\theta$. On the other hand, the support of Pareto Type II is the interval $(0, \infty)$. So a Pareto Type II distribution can take on any positive real number.
The two distributions are mathematically related. Judging from the PDF, it is clear that the PDF of Pareto Type II is the result of shifting the Type I PDF to the left by the magnitude $\theta$ (the same can be said about the CDF and survival function). More specifically, let $X$ be a random variable that follows a Pareto Type I distribution with parameters $\alpha$ and $\theta$. Let $Y = X - \theta$. It is straightforward to verify that $Y$ has a Pareto Type II distribution, i.e. its CDF and other distributional quantities are the same as the ones shown in the above table under Pareto Type II. Given the same parameters, the two distributions are essentially the same, in that each one is the result of shifting the other by the amount $\theta$.
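This shifting relationship is easy to verify directly from the two CDFs (a sketch; the parameter values are illustrative choices of ours):

```python
# CDFs of Pareto Type I and Type II with the same parameters (illustrative values)
alpha, theta = 2.5, 100.0

def cdf_type1(x):          # Pareto Type I, support x >= theta
    return 1 - (theta / x) ** alpha

def cdf_type2(y):          # Pareto Type II (Lomax), support y >= 0
    return 1 - (theta / (y + theta)) ** alpha

# If X is Type I, then Y = X - theta is Type II: F_Y(y) = F_X(y + theta)
checks = [abs(cdf_type2(y) - cdf_type1(y + theta)) for y in (0.0, 1.0, 50.0, 1e4)]
print(max(checks))
```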
A further indication that the two types are of the same distributional shape is that the variances are identical. Note that shifting a distribution to the left (or right) by a constant does not change the variance.
Since the two Pareto types are the same distribution (except for the shifting), they share similar mathematical properties. For example, both distributions are heavy tailed distributions. In other words, they put significantly more probability on larger values. This point is discussed in more detail below.
First, the calculations. The moments are determined by the integral $E[X^k] = \int x^k \, f(x) \, dx$, where $f(x)$ is the PDF of the distribution in question. Because the PDF for Pareto Type I is easy to work with, almost all the items under Pareto Type I are quite accessible. For example, item 8c for Pareto Type I is calculated by the following integral.

$$E[(X \wedge x)^k] = \int_\theta^x t^k \, \frac{\alpha \theta^\alpha}{t^{\alpha+1}} \, dt + x^k \left(\frac{\theta}{x}\right)^\alpha$$
In the remaining discussion, the focus is on Pareto Type II calculations.
The Pareto Type II $k$th moment is by definition the integral $\int_0^\infty x^k \, f(x) \, dx$, where $f(x)$ is the Pareto Type II PDF. However, it is difficult to perform this integral directly. The best way to evaluate the moments in row 5 of the above table is to use the fact that the Pareto Type II distribution is a mixture of exponential distributions with gamma mixing weight (see Example 2 here). Thus the moments of Pareto Type II can be obtained by integrating the conditional $k$th moment of the exponential distribution with the gamma weight. The following shows the calculation.

$$E[X^k] = \int_0^\infty \frac{k!}{\lambda^k} \cdot \frac{\theta^\alpha}{\Gamma(\alpha)} \, \lambda^{\alpha-1} e^{-\theta\lambda} \, d\lambda = \frac{k! \, \theta^k \, \Gamma(\alpha - k)}{\Gamma(\alpha)} \int_0^\infty \frac{\theta^{\alpha-k}}{\Gamma(\alpha - k)} \, \lambda^{\alpha-k-1} e^{-\theta\lambda} \, d\lambda = \frac{k! \, \theta^k \, \Gamma(\alpha - k)}{\Gamma(\alpha)}$$
In the above derivation, the conditional variable $X \mid \Lambda = \lambda$ is assumed to have an exponential distribution with mean $1/\lambda$. The random variable $\Lambda$ in turn has a gamma distribution with shape parameter $\alpha$ and rate parameter $\theta$. The integrand in the second-to-last integral is a gamma density, making the value of that integral 1.0. When $k$ is a positive integer, the gamma functions simplify, giving $E[X^k] = \frac{k! \, \theta^k}{(\alpha - 1)(\alpha - 2) \cdots (\alpha - k)}$ as indicated in row 5.
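The closed-form moments can be sanity-checked against numerical integration of the Type II PDF, using the substitution $t = x/(x+\theta)$ to map the integral to $(0, 1)$. This is a sketch with illustrative parameters of our choosing; it requires $k < \alpha$.

```python
import math

alpha, theta = 4.5, 10.0   # illustrative: alpha > 3 so E[X^3] exists

def moment_formula(k):
    # E[X^k] = k! * theta^k / ((alpha-1)(alpha-2)...(alpha-k)) for integer k < alpha
    denom = 1.0
    for i in range(1, k + 1):
        denom *= alpha - i
    return math.factorial(k) * theta ** k / denom

def moment_numeric(k, n=200000):
    # E[X^k] = integral of x^k * alpha*theta^alpha/(x+theta)^(alpha+1) dx on (0, inf);
    # substituting t = x/(x+theta) turns this into
    # alpha * theta^k * integral of t^k * (1-t)^(alpha-k-1) dt on (0, 1)
    h = 1.0 / n
    return sum(alpha * theta ** k * ((i + 0.5) * h) ** k
               * (1 - (i + 0.5) * h) ** (alpha - k - 1) * h
               for i in range(n))

print(moment_formula(1), moment_numeric(1))
```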
The next calculation is the mean excess loss. It is the conditional expected value $e(d) = E[X - d \mid X > d]$. If $X$ is an insurance loss and $d$ is some kind of threshold (e.g. the deductible in an insurance policy that covers this loss), then $e(d)$ is the expected loss in excess of the threshold given that the loss exceeds the threshold. If $X$ is the lifetime of an individual, then $e(d)$ is the expected remaining lifetime given that the individual has survived to age $d$.
The expected value can be calculated by the integral $e(d) = \frac{1}{S(d)} \int_d^\infty (x - d) \, f(x) \, dx$, where $S$ is the survival function. This integral is not easy to evaluate when $f$ is a Pareto Type II PDF. Fortunately, there is another way to handle this calculation. The key idea is that if $X$ has a Pareto Type II distribution with parameters $\alpha$ and $\theta$ (as described in the table), the conditional random variable $X - d \mid X > d$ also has a Pareto Type II distribution, this time with parameters $\alpha$ and $\theta + d$. The mean of a Pareto Type II distribution is always the ratio of the scale parameter to the shape parameter less one. Thus the mean excess loss is $e(d) = \frac{\theta + d}{\alpha - 1}$, as indicated in row 7 of the table.
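A quick numerical check of the mean excess loss formula (a sketch; the parameters are illustrative values of ours):

```python
# Mean excess loss for Pareto Type II (illustrative alpha = 3, theta = 1000, d = 500);
# the formula says e(d) = (theta + d)/(alpha - 1)
alpha, theta, d = 3.0, 1000.0, 500.0
pdf = lambda x: alpha * theta ** alpha / (x + theta) ** (alpha + 1)
S = lambda x: (theta / (x + theta)) ** alpha

def mean_excess_numeric(upper=1e7, n=400000):
    # e(d) = (1/S(d)) * integral of (x - d) f(x) dx over [d, upper], midpoint rule;
    # the tail beyond `upper` is negligible for these parameters
    h = (upper - d) / n
    total = sum((x - d) * pdf(x) * h for x in (d + (i + 0.5) * h for i in range(n)))
    return total / S(d)

e_numeric = mean_excess_numeric()
e_formula = (theta + d) / (alpha - 1)
print(e_formula, round(e_numeric, 1))
```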
The limited loss is defined as follows.

$$X \wedge u = \min(X, u) = \begin{cases} X & \ \ X < u \\ u & \ \ X \ge u \end{cases}$$
One interpretation is that it is the insurance payment when the insurance policy has an upper cap $u$ on the benefit. If the loss is below the cap $u$, the insurance policy pays the loss in full. If the loss exceeds the cap $u$, the policy only pays for the loss up to the limit $u$. The expected insurance payment $E[X \wedge u]$ is said to be the limited expectation. For Pareto Type II, the first moment can be evaluated by the following integral.

$$E[X \wedge u] = \int_0^u x \, \frac{\alpha \theta^\alpha}{(x + \theta)^{\alpha+1}} \, dx + u \left(\frac{\theta}{u + \theta}\right)^\alpha$$
Integrating using a change of variable will yield the results in row 8a and row 8b of the table, i.e. the cases $\alpha = 1$ and $\alpha \neq 1$. A more interesting result is 8c, which is the $k$th moment of the variable $X \wedge u$. The integral for this expectation can be expressed using the incomplete beta function. The following evaluates $E[(X \wedge u)^k]$.

$$E[(X \wedge u)^k] = \int_0^u x^k \, \frac{\alpha \theta^\alpha}{(x + \theta)^{\alpha+1}} \, dx + u^k \left(\frac{\theta}{u + \theta}\right)^\alpha$$
Further transform the integral in the above calculation by the change of variable $t = \frac{x}{x + \theta}$.
The integrand in the last integral is the probability density function of the beta distribution with parameters $k + 1$ and $\alpha - k$. Thus $E[(X \wedge u)^k]$ is as indicated in 8c.
Now we consider two risk measures – value-at-risk (VaR) and tail-value-at-risk (TVaR). The value-at-risk of a random variable $X$ at security level $p$, denoted by $\text{VaR}_p(X)$, is the $100p$th percentile of $X$. Thus VaR is a fancy name for percentiles. Setting the Pareto Type II CDF equal to $p$ gives the VaR indicated in row 9 of the table. In other words, solving the following equation for $x$ gives the $100p$th percentile for Pareto Type II.

$$1 - \left(\frac{\theta}{x + \theta}\right)^\alpha = p$$
The tail-value-at-risk of a random variable $X$ at the security level $p$, denoted by $\text{TVaR}_p(X)$, is the expected value of $X$ given that it exceeds $\text{VaR}_p(X)$. Thus $\text{TVaR}_p(X) = E[X \mid X > \text{VaR}_p(X)]$. Letting $\pi_p = \text{VaR}_p(X)$, the following integral gives the tail-value-at-risk for Pareto Type II.

$$\text{TVaR}_p(X) = \frac{1}{1 - p} \int_{\pi_p}^\infty x \, \frac{\alpha \theta^\alpha}{(x + \theta)^{\alpha+1}} \, dx$$

The integral is evaluated by the change of variable $u = x + \theta$.
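Both risk measures for Pareto Type II have closed forms, which can be verified numerically (a sketch; the parameters and security level are illustrative values of ours):

```python
# VaR and TVaR for Pareto Type II (illustrative alpha = 3, theta = 1000, p = 0.95)
alpha, theta, p = 3.0, 1000.0, 0.95
cdf = lambda x: 1 - (theta / (x + theta)) ** alpha
pdf = lambda x: alpha * theta ** alpha / (x + theta) ** (alpha + 1)

# VaR: solve 1 - (theta/(x + theta))^alpha = p for x
var95 = theta * ((1 - p) ** (-1 / alpha) - 1)

# TVaR = VaR + mean excess loss at the VaR, since X - VaR | X > VaR is
# again Pareto Type II with parameters alpha and theta + VaR
tvar95 = var95 + (var95 + theta) / (alpha - 1)

def tvar_numeric(upper=1e7, n=400000):
    # E[X | X > VaR]: integrate x f(x) over [VaR, upper] and divide by 1 - p
    h = (upper - var95) / n
    return sum((var95 + (i + 0.5) * h) * pdf(var95 + (i + 0.5) * h) * h
               for i in range(n)) / (1 - p)

print(round(var95, 2), round(tvar95, 2))
```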
Several properties in the above table show that the Pareto distribution (both types) is a heavy tailed distribution. When a distribution puts significantly more probability on larger values, the distribution is said to be a heavy tailed distribution (or said to have a larger tail weight). There are four ways to look for indications that a distribution is heavy tailed.
- Existence of moments.
- Hazard rate function.
- Mean excess loss function.
- Speed of decay of the survival function to zero.
Tail weight is a relative concept – distribution A has a heavier tail than distribution B. The first three points are ways to tell heavy tails without a reference distribution. Point number 4 is comparative.
Existence of moments
For a given random variable $X$, the existence of all moments $E[X^k]$, for all positive integers $k$, indicates a light (right) tail for the distribution of $X$. If the positive moments exist only up to a certain positive integer $k$, it is an indication that the distribution has a heavy right tail.
Note that the existence of the Pareto higher moments is capped by the shape parameter $\alpha$ (for both Type I and Type II): $E[X^k]$ exists only for $k < \alpha$. In particular, the Pareto Type II mean does not exist for $\alpha \le 1$. If the Pareto distribution is to model a random loss, and if the mean is infinite (when $\alpha \le 1$), the risk is uninsurable! On the other hand, when $\alpha \le 2$, the Pareto variance does not exist. This shows that for a heavy tailed distribution, the variance may not be a good measure of risk.
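The moment-existence cutoff at $\alpha$ can be seen numerically: the partial integrals $\int_0^U x^k f(x) \, dx$ settle down when $k < \alpha$ but grow without bound when $k \ge \alpha$. The sketch below uses illustrative values $\alpha = 2.5$, $\theta = 10$ (our choice) and computes the partial integral in closed form via the substitution $u = x + \theta$:

```python
import math

alpha, theta = 2.5, 10.0   # illustrative: E[X] and E[X^2] exist, E[X^3] does not

def partial_moment(k, U):
    # exact value of the integral of x^k * alpha*theta^alpha/(x+theta)^(alpha+1)
    # over [0, U], via u = x + theta and the binomial expansion of (u - theta)^k
    total = 0.0
    for j in range(k + 1):
        p = k - j - alpha   # exponent of u after integrating u^(k-j-alpha-1)
        total += math.comb(k, j) * (-theta) ** j * ((U + theta) ** p - theta ** p) / p
    return alpha * theta ** alpha * total

first = [partial_moment(1, U) for U in (1e4, 1e6, 1e8)]
third = [partial_moment(3, U) for U in (1e4, 1e6, 1e8)]
print(first)   # converges to theta/(alpha - 1) = 6.666...
print(third)   # keeps growing (roughly like sqrt(U)), so E[X^3] is infinite
```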
Hazard rate function
The hazard rate function $h(x)$ of a random variable is defined as the ratio of the density function and the survival function.

$$h(x) = \frac{f(x)}{S(x)}$$
The hazard rate is called the force of mortality in a life contingency context and can be interpreted as the rate at which a person aged $x$ will die in the next instant. The hazard rate is called the failure rate in reliability theory and can be interpreted as the rate at which a machine will fail at the next instant given that it has been functioning for $x$ units of time. It follows that the hazard rate of Pareto Type I is $h(x) = \frac{\alpha}{x}$ and that of Type II is $h(x) = \frac{\alpha}{x + \theta}$. They are both decreasing functions of $x$.
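These hazard rate formulas are easy to confirm numerically (a sketch; the parameter values are illustrative choices of ours):

```python
# Hazard rates h(x) = f(x)/S(x) for both Pareto types (illustrative alpha = 3, theta = 1000)
alpha, theta = 3.0, 1000.0

# Type I: support x >= theta
f1 = lambda x: alpha * theta ** alpha / x ** (alpha + 1)
S1 = lambda x: (theta / x) ** alpha
# Type II (Lomax): support x >= 0
f2 = lambda x: alpha * theta ** alpha / (x + theta) ** (alpha + 1)
S2 = lambda x: (theta / (x + theta)) ** alpha

xs1 = [1000.0, 2000.0, 10000.0]
xs2 = [1.0, 500.0, 5000.0]
h1 = [f1(x) / S1(x) for x in xs1]   # should match alpha / x
h2 = [f2(x) / S2(x) for x in xs2]   # should match alpha / (x + theta)
print(h1, h2)
```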
Another indication of heavy tail weight is that the distribution has a decreasing hazard rate function. Thus the Pareto distribution (both types) is considered to be a heavy tailed distribution based on its decreasing hazard rate function.
One key characteristic of the hazard rate function is that it can generate the survival function.

$$S(x) = e^{-\int_0^x h(t) \, dt}$$
Thus if the hazard rate function is decreasing in $x$, then the survival function will decay more slowly to zero. To see this, let $H(x) = \int_0^x h(t) \, dt$, which is called the cumulative hazard rate function. As indicated above, the survival function can be generated by $S(x) = e^{-H(x)}$. If $h(t)$ is decreasing in $t$, then $H(x)$ grows more slowly than it would if $h(t)$ were constant or increasing in $t$. Consequently $e^{-H(x)}$ decays to zero much more slowly. Thus a decreasing hazard rate leads to a slower speed of decay to zero for the survival function (a point discussed below).
In contrast, the exponential distribution has a constant hazard rate function, making it a medium tailed distribution. By the same reasoning, any distribution having an increasing hazard rate function is a light tailed distribution.
The mean excess loss function
Suppose that a property owner is exposed to a random loss $X$. The property owner buys an insurance policy with a deductible $d$ such that the insurer will pay a claim in the amount of $X - d$ if a loss occurs with $X > d$. The insurer will pay nothing if the loss is below the deductible. Whenever a loss is above $d$, what is the average claim the insurer will have to pay? This is one way to look at the mean excess loss function, which represents the expected excess loss over a threshold conditional on the event that the threshold has been exceeded. Thus the mean excess loss function is $e(d) = E[X - d \mid X > d]$, a function of the deductible $d$.
According to row 7 in the above table, the mean excess loss for Pareto Type I is $e(d) = \frac{d}{\alpha - 1}$ and for Type II it is $e(d) = \frac{d + \theta}{\alpha - 1}$. They are both increasing functions of the deductible $d$! This means that the larger the deductible, the larger the expected claim if such a large loss occurs! If a random loss is modeled by such a distribution, it is a catastrophic risk situation.
In general, an increasing mean excess loss function is an indication of a heavy tailed distribution. On the other hand, a decreasing mean excess loss function indicates a light tailed distribution. The exponential distribution has a constant mean excess loss function and is considered a medium tailed distribution.
Speed of decay of the survival function to zero
The survival function captures the probability of the tail of a distribution. If the survival function of a distribution decays slowly to zero (equivalently, the cdf goes slowly to one), it is another indication that the distribution is heavy tailed. This point was touched on in the discussion of the hazard rate function.
The following is a comparison of a Pareto Type II survival function and an exponential survival function, where the two survival functions are set to have the same 75th percentile. The following table is a comparison of the two survival functions.
Note that at large values, the Pareto right tail retains much more probability. This is also confirmed by the ratio of the two survival functions, with the ratio approaching infinity. If a random loss is a heavy tailed phenomenon described by the above Pareto survival function, then the above exponential survival function is woefully inadequate as a model for this phenomenon, even though it may be a good model for describing the loss up to the 75th percentile. It is the large right tail that is problematic (and catastrophic)!
Since the Pareto survival function and the exponential survival function have closed forms, we can also look at their ratio.

$$\frac{S_{\text{Pareto}}(x)}{S_{\text{exp}}(x)} = \frac{(\theta / (x + \theta))^\alpha}{e^{-\lambda x}} = \frac{\theta^\alpha \, e^{\lambda x}}{(x + \theta)^\alpha}$$
In the above ratio, the numerator has an exponential function with a positive quantity in the exponent, while the denominator has a polynomial in $x$. This ratio goes to infinity as $x \to \infty$.
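The divergence of the ratio can be illustrated numerically. In the sketch below, the parameter values are illustrative choices of ours: a Pareto Type II survival function is matched with an exponential survival function at their common 75th percentile, and the ratio is evaluated at increasing values of $x$:

```python
import math

alpha, theta = 2.0, 1000.0                      # illustrative Pareto Type II parameters
S_pareto = lambda x: (theta / (x + theta)) ** alpha

# choose the exponential rate so both have the same 75th percentile
x75 = theta * ((1 - 0.75) ** (-1 / alpha) - 1)  # Pareto 75th percentile (here 1000)
lam = math.log(4) / x75                          # then exp(-lam * x75) = 0.25 as well
S_exp = lambda x: math.exp(-lam * x)

ratios = [S_pareto(t) / S_exp(t) for t in (x75, 5 * x75, 20 * x75, 100 * x75)]
print(ratios)   # strictly increasing, blowing up as x grows
```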
In general, whenever the ratio of two survival functions diverges to infinity, it is an indication that the distribution in the numerator of the ratio has a heavier tail. When the ratio goes to infinity, the survival function in the numerator is said to decay slowly to zero as compared to the denominator.
The Pareto distribution has many economic applications. Since it is a heavy tailed distribution, it is a good candidate for modeling income above a theoretical value and the distribution of insurance claims above a threshold value.