47 Cards in this Set

According to Bailey and Simon, how is relative claim frequency calculated?
Relative claim frequency is calculated on a premium base rather than car years. This avoids the maldistribution that results when higher-claim-frequency territories produce more X, Y, and B risks and thus higher territorial premiums.
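A minimal numerical sketch of this maldistribution, with invented figures (two classes, two territories, class rates proportional to class frequency; none of these numbers come from the paper):

```python
# Hypothetical illustration: two territories whose classes have identical
# claim frequencies, but territory B writes more of the higher-frequency,
# higher-rated youthful class.  All figures are invented.

def frequencies(cells):
    """cells: list of (car_years, claims, rate per car-year)."""
    car_years = sum(cy for cy, _, _ in cells)
    claims = sum(c for _, c, _ in cells)
    premium = sum(cy * r for cy, _, r in cells)
    return claims / car_years, claims / premium

terr_a = [(1000, 50, 100)]                 # adults only
terr_b = [(500, 25, 100), (500, 50, 200)]  # half adults, half youthful

freq_cy_a, freq_prem_a = frequencies(terr_a)
freq_cy_b, freq_prem_b = frequencies(terr_b)

# On a car-year base, B looks 50% worse even though its classes are no worse:
print(round(freq_cy_b / freq_cy_a, 4))    # 1.5
# On a premium base, the class mix is absorbed and the territories agree:
print(round(freq_prem_b / freq_prem_a, 4))  # 1.0
```

Because the youthful rate is already twice the adult rate, dividing by premium removes the class effect that a car-year base would misattribute to territory.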
According to AAA, List the 3 elements associated with the economic uncertainty of losses
1 Occurrence
2 Timing
3 Financial Impact
According to Bailey and Simon, the credibility for experience rating depends on these 2 things:
1 The Volume of data in the experience period

AND

2 The amount of variation of individual hazards within the class
According to Bailey and Simon, what two assumptions would have to hold true for us to expect the credibilities for experience periods of 1, 2, 3 years to vary in proportion to the number of years?
1 If an insured's chance for an accident remained constant from one year to the next.

2 If there were no risks entering or leaving the class
According to Bailey and Simon, What factors cause credibilities for experience periods of 1,2,3 years to NOT vary in proportion to the number of years.
1 Risks entering and leaving the class
2 An insured's chance for an accident changes from time to time or from year to year
3 If the risk distribution of individual insureds has a marked skewness reflecting varying degrees of accident proneness.
According to Bailey and Simon, list the 3 summarized points of the paper:
1 The experience for 1 car for 1 year has significant and measurable credibility for experience rating
2 In a highly refined private passenger rating classification system which reflects the inherent hazard, there would not be much accuracy in an individual risk merit rating plan, but, where a wide range of hazard is encompassed within a classification, credibility is much larger.
3 If we are given 1 year's experience and add a second year, we increase the credibility by roughly 2/5. Given 2 years, adding a third increases credibility by roughly 1/6 of the 2-year value.
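As arithmetic, starting from a hypothetical 1-year credibility (the 0.05 is invented; the 2/5 and 1/6 growth factors are from the text):

```python
z1 = 0.05                # assumed credibility of 1 car-year (hypothetical)
z2 = z1 * (1 + 2 / 5)    # adding a 2nd year: up roughly 2/5 of the 1-year value
z3 = z2 * (1 + 1 / 6)    # adding a 3rd year: up roughly 1/6 of the 2-year value
print(round(z2, 4), round(z3, 4))  # 0.07 0.0817
```

Note the contrast with the naive proportional expectation of 3 * z1 = 0.15 for three years.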
According to Lee's ILFs, list 2 alternative methods for calculating the expected loss size in a particular layer
1 The layer method - preferred when the distribution is given numerically, for example when actual experience is used.

2 The size method - preferred for fitted functions, since G(x) is a function that is generally more difficult to integrate.
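A numerical sketch of the equivalence of the two methods, assuming an exponential severity as an illustrative stand-in (Lee's argument holds for any distribution with survival function G(x); the mean and layer bounds here are invented):

```python
import math

# Expected loss in the layer (a, b], exponential severity with mean theta.
theta, a, b = 10_000.0, 5_000.0, 25_000.0
G = lambda x: math.exp(-x / theta)          # survival function G(x)
f = lambda x: math.exp(-x / theta) / theta  # density f(x)

def integrate(fn, lo, hi, n=200_000):
    """Simple midpoint-rule numerical integration."""
    h = (hi - lo) / n
    return h * sum(fn(lo + (i + 0.5) * h) for i in range(n))

# Layer method: integrate the survival function over the layer.
layer = integrate(G, a, b)

# Size method: each loss of size x in (a, b] contributes x - a; each loss
# above b contributes the full layer width b - a.
size = integrate(lambda x: (x - a) * f(x), a, b) + (b - a) * G(b)

print(round(layer), round(size))  # 5244 5244 -- the two methods agree
```

The layer method integrates G(x) over the layer; the size method sums contributions by loss size, which requires the density f(x) plus a term for losses exceeding the layer top.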
According to Miccolis, list 3 general mathematical properties of ILFs
1 - l(k) must be strictly increasing
2 - l'(k) must be monotonically decreasing
3 - l''(k) = -f(k)/ABLS. Thus, l''(k) can never be positive
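These three properties can be checked numerically under an assumed Pareto severity (the alpha, lambda, and basic limit are invented for illustration, not from Miccolis):

```python
# Hypothetical Pareto severity with survival (lam/(lam+x))**alpha and a
# 25,000 basic limit; illustrative parameters only.
lam, alpha, basic = 10_000.0, 2.0, 25_000.0

def lev(k):
    """E[min(X, k)] for the Pareto severity above."""
    return lam / (alpha - 1) * (1 - (lam / (lam + k)) ** (alpha - 1))

def ilf(k):
    return lev(k) / lev(basic)

ks = [basic * 2 ** i for i in range(6)]
pairs = [(k, ilf(k)) for k in ks]
slopes = [(v2 - v1) / (k2 - k1)
          for (k1, v1), (k2, v2) in zip(pairs, pairs[1:])]

vals = [v for _, v in pairs]
print(all(b > a for a, b in zip(vals, vals[1:])))      # True: l(k) increasing
print(all(b < a for a, b in zip(slopes, slopes[1:])))  # True: l'(k) decreasing
```

Decreasing slopes are the discrete version of property 3: l''(k) is never positive, so the ILF curve is concave in the limit k.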
According to Miccolis, list 3 problems with the development of a severity distribution from experience data.
1 Immature losses from recent policy years
2 Distributional biases which result from data on losses generated from policies with different limits
3 The credibility of the distribution at its high end is a concern.
According to Miccolis, When the loss severity distribution is available, the formulas in the paper have the following 5 applications
1 Computation of the expected value of ILFs
2 The adjustment of the severity distribution and ILF's for Trend
3 The computation of risk charges by limit of liability
4 The calculation of the reduction in risk charge afforded by "layering" coverage
5 The computation of the expected value pure premium and the risk charge for excess of loss coverage
According to Miccolis, Define Anti-Selection
Anti-selection is adverse loss experience associated with purchasing higher limits
According to Miccolis, List the 2 forms of Anti-selection
1 Insureds who have higher loss potential purchase higher limits
2 Liability suits or settlements may be influenced by the policy limit
According to Miccolis, The mirror image of adverse selection is "favorable selection", in which the best loss experience is associated with those purchasing higher policy limits. This can occur for 2 reasons: list them
1 Financially secure insureds, with more assets to protect, may be better risks.
2 Insurance companies, knowing they are better risks, would be more willing to insure them at higher limits
According to Finger’s Pure Premium by layer, The “long-tailed” nature of liability insurance arises from these 2 reasons:
1 Delayed Reporting of Claims
2 The lengthy settlement of claims
According to Finger’s Pure Premium by layer, The problem of lengthy claim settlements, when focusing on high layers, stems from these 2 things.
1 The increased # of claims that enter the layer
2 The leveraged effect on their amounts (the increase in amount applies only to the highest layer)
According to Finger’s Pure Premium by layer, What are 2 problems with using basic limits claim data?
The changing value of basic limits over time impacts:
a) the dollar amount of losses
b) the % of total losses
According to Finger’s Pure Premium by layer, The 2 main virtues of a lognormal distribution from a modeling point of view are:
1 It can be a highly skewed distribution (the higher the CV, the more skewed the distribution)
2 It can be justified on an intuitive basis
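On point 1, the lognormal's skewness is a function of the CV alone, so a higher CV directly means a more skewed distribution. A quick check of the standard identity (with sigma^2 = ln(1 + CV^2), skewness = (e^{sigma^2} + 2) * sqrt(e^{sigma^2} - 1), which simplifies to 3*CV + CV^3):

```python
import math

def lognormal_skew(cv):
    """Skewness of a lognormal with the given coefficient of variation."""
    s2 = math.log(1 + cv * cv)
    return (math.exp(s2) + 2) * math.sqrt(math.exp(s2) - 1)

for cv in (0.5, 1.0, 2.0, 4.0):
    print(cv, round(lognormal_skew(cv), 3))
# skewness grows rapidly with CV: 1.625, 4.0, 14.0, 76.0
```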
According to Finger’s Pure Premium by layer, list 4 practical problems that may arise when estimating the Coefficient of Variation:
1 Individual Claim values are not always known
2 Claims tend to cluster at target values (e.g. 2,500, 5,000, 10,000)
3 A large # of nuisance claims are often settled for small amounts
4 Many claims are closed without payment
According to Finger’s Pure Premium by layer, List 3 results of the analysis for estimating CV:
1 If the estimated CV is the same for each attachment point tested, then the distribution can safely be assumed to be lognormal with observed mean and given CV
2 If the estimated CVs are randomly distributed about a given value, then that value is an appropriate estimate of the CV.
3 If the estimated CVs form a progression (e.g. 6, 5, 4), then the observed data is not lognormally distributed. Here the data can be truncated and the remaining data fit to a lognormal.
According to Finger’s Pure Premium by layer, Do allocated expense payments seem to be lognormal too?
YES
According to Finger’s Pure Premium by layer, Property lines may not provide an appropriate fit due to 2 reasons:
1 A tangible fixed upper limit on most property claims
2 Widely varying values at risk
According to Finger’s Pure Premium by layer, list 2 uses for Finger’s method of estimating Pure Premium by layer for a primary carrier:
1 Evaluating the basic limits experience of long-tailed lines.
2 Widely varying values at risk
According to Steeneck’s Review of Finger’s Pure Premium by layer, A model for claim size distribution should have the following 3 characteristics:
1 The estimate of the mean should be efficient
2 All confidence intervals about the mean should be calculable
3 All moments of the distribution should exist
According to Steeneck’s Review of Finger’s Pure Premium by layer, List 2 annoying qualities of the lognormal distribution
1 Fitting problems arise when there are many small values of the variable under consideration.
2 The integral in the characteristic function cannot be solved, and the convolution cannot be expressed explicitly
According to Steeneck’s Review of Finger’s Pure Premium by layer, List 2 reasons why, from a reinsurance perspective, a real world severity distribution is essential.
1 Inflation places XOL reinsurer in a leveraged position, where the cost of error in evaluating can produce disastrous results as claims develop to ultimate
2 Although other distributions are available, estimation of the parameters has been difficult. Few losses exist in these upper layers with which to make accurate estimates.
According to Steeneck’s Review of Finger’s Pure Premium by layer, there were 3 problems with the underlying data that Finger used:
1 Accident data from the early 1960’s (were not trended)
2 Smaller claims are associated with the most recent accident years and are higher in volume relative to older, less frequent severe cases
3 The poor fit over the entire range of loss values can be attributed to the frequency with which losses close by incurral year.
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, computer simulation of events has replaced traditional methods. List 2 uses of these new methods:
1 Measure expected losses
2 Develop risk loadings to compensate for the variance in outcomes
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, list 2 benefits of the recent increases in computer capacity
1 Making catastrophe simulation feasible
2 Enabling scientists to expand their research and produce better simulations through a better understanding of catastrophic events
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, the severity component of catastrophe modeling requires 3 modules, each drawing on a different skill. List them:
1 Event Simulation (science)
2 damageability of insured properties (engineering)
3 loss effect on exposures (insurance)
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, list an advantage and a disadvantage of using Monte Carlo sampling for determining frequency parameters.
Advantage – assigns equal probabilities to all sampled items from the entire population, which makes it easy to use and explain to a non-statistical audience
Disadvantage – a lack of precision in estimating unlikely events
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, list 3 attributes of stratified random sampling to estimate frequency parameters.
1 A more accurate estimation of each stratum's distribution, considering homogeneity
2 The estimates can be combined into a precise estimation of the overall population with a smaller sample size than with Monte Carlo sampling
3 The ability to sample a larger # of events in each stratum than their relative probability in the overall population would suggest (which makes estimation of extreme events possible)
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, describe how full credibility assignment relates to process and parameter risk.
Assignment of full credibility only means that random statistical variation can be resolved to minimize process risk.
Parameter risk in the selection of the key variables remains, because event frequencies of the past may not be representative of the future.
In the case of hurricane modeling, the pure premium method calculates long-term frequencies separately from the more recent average severities, so parameter risk exists, especially in the frequency calculation.
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, list 3 reasons why catastrophe serial numbers should be retained in coded losses:
1 To subtract cat losses from regular losses in ratemaking
2 to supply cat loss data to modelers to calibrate future models
3 to report to the reinsurers for recovery
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, list 2 advantages of having separate catastrophe rates.
1 The simplification of normal coverage rating and ratemaking, as well as better class and territory rating of the cat coverages
2 The elimination of a complicated set of statewide indications including hurricane.
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, splitting homeowners premium into cat and non-cat premium allows for a separate calculation of risk margin; what is the benefit of this?
The non-cat component is easier to price, with less variability and a lower margin needed for profit. Once the target non-cat margin is selected, the cat margin can be calculated as a multiple of the non-cat component, using some assumptions.
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, what are the assumptions and arguments for the cat margin to be a multiple of the non-cat margin?
Assumption 1 – profit should be proportional to the standard deviation of the losses
Assumption 2 – a required portfolio risk load related to the standard deviation is not inconsistent with a variance based risk margin for individual risks
Argument for standard deviation as a basis for risk load 1 – the high correlation of the losses exposed to the risk of catastrophe
Argument for standard deviation as a basis for risk load 2 – the large contribution of parameter risk to the total risk load requirement
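Under Assumption 1, the split works out as simple arithmetic. A sketch with entirely hypothetical dollar figures (the 6x relationship below is an artifact of the invented inputs, not a value from Walters):

```python
noncat_sd = 2.0e6      # std dev of non-cat losses ($), hypothetical
cat_sd = 12.0e6        # std dev of cat losses ($), far more volatile
noncat_profit = 1.0e6  # target profit selected for the easier non-cat piece

# Assumption 1: profit proportional to the standard deviation of losses.
cat_profit = noncat_profit * (cat_sd / noncat_sd)
print(cat_profit)  # 6000000.0 -- the cat margin is 6x the non-cat margin
```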
According to Walters, Use of Computer Models to Estimate Catastrophe Loss Costs, how should fixed expenses be loaded when you have split the rates into cat and non-cat rates?
Fixed expenses that are part of the non-hurricane policy should not be double counted. An easy way to do this is: a) include only the variable expenses in the hurricane rates and b) incorporate all fixed expenses in the non-hurricane rates
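A per-policy arithmetic sketch of rules (a) and (b), with invented loss costs and expense figures:

```python
noncat_loss, cat_loss = 400.0, 150.0  # hypothetical expected loss costs
fixed = 50.0                          # fixed expense per policy
var = 0.20                            # variable expense ratio (of premium)

# (b) all fixed expenses go in the non-hurricane rate...
noncat_rate = (noncat_loss + fixed) / (1 - var)
# (a) ...and the hurricane rate carries only variable expenses.
cat_rate = cat_loss / (1 - var)

print(round(noncat_rate, 2), round(cat_rate, 2))  # 562.5 187.5
```

Each rate divides by (1 - var) so variable expenses are recovered on both pieces, but the 50 of fixed expense is loaded exactly once.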
According to McClenahan, Insurer Profitability, The distinction between historical profitability and prospective profitability in P&C Insurance is not clear-cut for these 2 reasons:
1 at year end, less than 50% of losses for that year will have been paid
2 the interplay between current reserving decisions and the amortization of past reserving decisions complicates the measurement of insurance profitability.
According to McClenahan, Insurer Profitability, List 2 reasons why the existence of an opportunity cost does not give the policyholder a claim on some part of the actual earnings of the insurer.
1 Assessments cannot be made by the company against insureds for any shortfalls associated with speculative investments that result in loss of policyholder-supplied funds.
2 Investment income earned over and above risk-free yields is the reward earned by the company and therefore should not be credited to the policyholders in the ratemaking process.
According to McClenahan, Insurer Profitability, List 2 reasons why investment income on surplus should be excluded from the ratemaking process.
1 Including investment income on surplus creates a situation in which an insurer with a large surplus relative to premium must charge lower rates than an otherwise equivalent insurer with less surplus. This suggests “lower cost for more protection”
2 Policyholder’s surplus represents owner’s equity which is placed at risk in order to provide the opportunity for reward
According to McClenahan, Insurer Profitability, What are 3 options for the denominator in the rate of return calculation?
1 Assets (might be appropriate from the standpoint of economic efficiency)
2 Equity (clearly the favorite for those measuring relative values of investments; an appropriate basis against which to measure company-wide financial performance of a P&C insurer)
3 Sales (favored by those who view profit provisions in the context of insurance rates)
According to McClenahan, Insurer Profitability, list 2 problems with using return-on-equity as a basis for rate-of-return in insurance regulation
1 Rate equity has been subordinated to rate-of-return equity (different carriers have different premium to surplus ratios)
2 It requires that equity be allocated to line of business and jurisdiction
According to McClenahan, Insurer Profitability, what are 2 results of using benchmark premium-to-surplus ratios?
1 The use of a benchmark eliminates the surplus allocation problem
2 the result is not return-on-equity regulation but return-on-sales regulation
According to McClenahan, Insurer Profitability, Give a problem with return-on-sales regulation:
The method can be extremely complex from a regulatory standpoint
According to McClenahan, Insurer Profitability, List 3 results of using Return-on-sales based rate regulation.
1 Calls for benchmarks for what constitutes excessive or inadequate profit provisions as percentages of premium
2 Is premium based, and independent of the relationship between premium and equity
3 Results in true rate regulation, not rate-of-return regulation
According to McClenahan, Insurer Profitability, list 2 market responses to regulation of rates
1 P&C insurers can react to inadequate rates by tightening U/W and or reducing volume
2 The size of the residual market, the # of insurers in the voluntary market, and the degree of product diversity and innovation are related to the insurance industry's perception of the opportunity to earn a reasonable return from the risk transfer.
According to McClenahan, Insurer Profitability, what is his position on the proper benchmark for excessiveness:
Given the relationship between rate adequacy and market conditions, the proper benchmark for excessiveness for a regulator is that which will produce the desired market conditions.