42 Cards in this Set

Identifying loss exposures

Organizations typically use the following methods to identify loss exposures: document analysis, compliance reviews, inspections, and expertise inside and outside the organization.



The methods used are dictated by the loss exposures you're looking for.

Document analysis


Reviewing multiple documents is necessary. You need to look at many documents, not just checklists and questionnaires, in order to get a "big picture" view of what's going on. These docs can include existing insurance policies and financial stmts.

Document analysis (checklists and questionnaires)

Checklists cannot be used to show how loss exposures affect specific organizational goals (standardized b/c they are prepared by insurer so they will only cover exposure for which you can purchase insurance).




Questionnaires usually involve considerable time, expenses and effort to complete and still may not identify all loss exposures.




Checklists and questionnaires can help the organization identify loss exposures (often are prepared by the insurance comp to help organization identify loss exposures). Relate mainly to loss exposures for which commercial insurance is generally available.


Document analysis pt.2


Financial stmts can identify loss exposures. Balance sheet (shows assets and liabilities) can help identify property and liability loss exposures. Income stmt can help identify net income loss exposures. Stmt of cash flows summarizes cash inflows and outflows, can help determine if comp has adequate liquidity to satisfy losses when they come due.




Financial stmts are important but are limited b/c they don't quantify or identify individual loss exposures.

Compliance reviews


Compliance review can identify loss exposures. Determines an organization's compliance with federal and state laws and regulations. Can be conducted by organization if it has adequate in-house resources. Can help an organization minimize or eliminate liability (not property) loss exposures.




Document analysis can help to identify almost any loss exposure but compliance reviews are designed to help identify liability loss exposures only, not property.




They are typically time consuming and expensive and require ongoing monitoring.


Inspections
Some loss exposures are best identified by personal inspections. Inspections often reveal loss exposures that would not appear in detailed written descriptions of the organization's operations and procedures (e.g. walking through a plant or warehouse to look for loss exposures). Should be conducted by a person able to identify unexpected loss exposures. The inspector should discuss operations with front-line personnel.



Good for identifying property loss exposures or even liability loss exposures.

Expertise


Identification of loss exposures should include solicitation of expertise from inside and outside the organization. Includes interviews with employees from every level of the organization. External professionals should also be consulted.




Hazard analysis is a method of analysis that identifies conditions that increase the frequency or severity of loss.

Data requirements

In order to analyze loss exposures you have to gather appropriate data. The most common basis of an analysis of loss exposures is information relating to past losses arising from similar loss exposures. Using this data you can project future exposures.




Data should meet certain criteria, it should be: relevant, organized, complete and consistent (ROCC)

Relevant data


Past loss data must be relevant to the current or future loss exposure. Many elements of past data may not be relevant due to advances in technology.




For property losses, relevant data includes the repair or replacement cost of the property. Historical cost is not relevant.




For liability losses, data should relate to past claims that are similar to potential future claims.


Organized data

Data must be organized so the risk management professional can identify trends and patterns (a spreadsheet may be used). Organizing losses by size, not cause of loss, is the foundation for developing loss severity distributions over time. Losses can still be organized by COL; however, size is usually the most important criterion.
Complete data

What constitutes complete loss data depends largely on the nature of the loss exposure being considered (should you use data from 5 yrs ago? that depends on the loss exposure, e.g. if you go back 30 yrs for auto loss exposure info the data may not be complete as cars were so different back then). Complete data allows the professional to make reasonably reliable estimates of the dollar amount of future losses.




Complete data on a property loss would include costs to repair or replace, loss of revenue, extra expenses and possibly overtime wages (do workers have to work overtime to make up for destroyed property?). Maximum possible loss is typically the value of the building plus the contents of the building (not wages that would have been paid to workers in the time they couldn't work there).


Consistent data


Data must be consistent (collected on consistent basis for all recorded losses).




Must be expressed in consistent dollars. Historical losses must be adjusted for inflation.




E.g. a $100,000 loss from 10 yrs ago (its nominal value) might equal $130,000 when restated in today's dollars.




Constant dollars represent the value in a specific base year. The current dollar value represents the dollar value today and is calculated by increasing historical values for inflation.
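A minimal sketch of the inflation adjustment in Python; the loss amounts and the 30% cumulative inflation factor are hypothetical, not real CPI data:

```python
# Hypothetical example: restating historical losses in today's dollars.
# The 1.30 inflation factor is assumed, not taken from real CPI data.
historical_losses = [100_000, 250_000, 80_000]  # nominal dollars, 10 yrs ago
inflation_factor = 1.30

adjusted = [round(loss * inflation_factor, 2) for loss in historical_losses]
print(adjusted)  # [130000.0, 325000.0, 104000.0]
```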

Probability

Probability is the relative frequency with which an event can be expected to occur in a stable environment. Probabilities are an important part of loss exposure analysis. Can be expressed as a percentage (50%), decimal (.5), or fraction (1/2).




Probability is affected by theoretical probability, empirical probability and the law of large numbers.


Theoretical probability


Theoretical probability is developed from theoretical principles rather than actual experience (not based on past experience), e.g. the toss of a coin or a throw of dice.




Theoretical probabilities are unchanging. However, they are generally not applicable to insurance. Typically insurance is empirical.


Empirical probability


Empirical probability is probability based on actual experience, e.g. life expectancy, which would include the probability of a 60-yr-old dying. Empirical probabilities only represent estimates, and their accuracy depends on the nature of the samples studied.




Empirical probabilities change over time while theoretical probabilities do not change. Risk management professionals use empirical probabilities (insurance).

Law of large numbers


The law of large numbers states that the inaccuracy between empirical frequency and theoretical probability decreases as sample size increases. As the number of independent events increases, there is an increased chance that the actual results will be close to the expected results.




E.g. the theoretical probability of heads is 50%, yet a coin flipped twice may land heads both times. In that case the empirical frequency (what actually happened) differed from the theoretical probability (what was expected). The law of large numbers says the more times the coin is flipped, the more likely you are to end up near the 50/50 split: flipped twice you may get heads both times, but flipped 1,000 times you will see a much more even distribution, close to 500 heads and 500 tails.




Probability analysis is best for organizations with substantial volume of data on past losses and stable operations. Forecasts of future losses are more reliable if data sample is larger.
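The coin-flip example above can be sketched as a quick simulation in Python (the 0.5 threshold models a fair coin; the seed just makes the illustration reproducible):

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

def heads_fraction(flips):
    """Empirical frequency of heads over a given number of flips."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return heads / flips

# Small samples can stray far from 50%; large samples hug it.
for n in (2, 100, 100_000):
    print(n, heads_fraction(n))
```

With only 2 flips you may well see 0% or 100% heads; at 100,000 flips the empirical frequency lands very close to the theoretical 50%.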

Law of large numbers pt.2

The law of large numbers is only effective if the events being forecast are:



Independent of one another and numerous,




Events have occurred in the past under substantially identical conditions




Events expected to occur in the future under the same, unchanging conditions




Relies upon the following characteristics: mass (data sample must be large enough to represent the true probability), independence (occurrence of loss to one exposure should not affect the probability of a loss to another exposure), homogeneity (exposure units must have similar characteristics)


Probability distribution

A probability distribution is a table, chart, or graph depicting probability estimates for a particular set of circumstances. Both theoretical and empirical probabilities can be used to construct distributions. Probability distributions can be either discrete or continuous.




Outcomes must be mutually exclusive (only one outcome can occur at a time) and collectively exhaustive (all possible outcomes are represented). E.g. for a coin toss, mutually exclusive means a flip cannot be both heads and tails, while collectively exhaustive means heads and tails together cover 100% of the possibilities; there is no third option.


Theoretical probability distributions

Theoretical probability distribution tables are developed based on theoretical probabilities. Not used much for insurance.




E.x. if two dice are rolled, one blue and one yellow: there are 36 equally likely outcomes, there is a 1/36 probability of each outcome, there is a 6/36 probability of the total equaling 7 (B/Y - 1/6 or 6/1, 2/5 or 5/2, 3/4 or 4/3)
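The dice distribution above can be checked by brute-force enumeration in Python:

```python
from collections import Counter
from fractions import Fraction

# All 36 equally likely (blue, yellow) outcomes of rolling two dice.
totals = Counter(b + y for b in range(1, 7) for y in range(1, 7))

p_seven = Fraction(totals[7], 36)
print(totals[7], p_seven)  # 6 ways to roll a 7, probability 1/6
```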


Empirical probability distributions

Empirical probability distributions are based on historical data (e.g. life expectancy, auto losses). Loss categories (bins) must be designed to include all possible losses. Bins are arbitrarily defined values that may or may not be divided into equal sizes.




Sum of empirical probabilities must equal 1 (100%).




Probability for each outcome must be defined.




No evident upper limit in the largest size category of losses (the top bin has no stated maximum loss).
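A sketch of building an empirical distribution from past losses in Python; the loss amounts and bin boundaries are hypothetical, with an open-ended top bin as the card describes:

```python
from collections import Counter

# Hypothetical past losses (dollar amounts).
losses = [500, 1_200, 3_400, 700, 9_800, 15_000, 2_100, 400, 6_500, 50_000]

def bin_label(amount):
    """Arbitrary bins; the top bin has no upper limit."""
    if amount < 1_000:
        return "under $1,000"
    if amount < 10_000:
        return "$1,000-$9,999"
    return "$10,000 and over"

counts = Counter(bin_label(x) for x in losses)
probs = {label: n / len(losses) for label, n in counts.items()}
print(probs)  # the empirical probabilities sum to 1
```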

Discrete and continuous

Distributions can also be classified as discrete or continuous. Discrete probability distributions have a limited, countable number of possible outcomes (theoretically, if you toss a coin you will get heads or tails, no other options). Typically used as frequency distributions showing how often something will occur.




Continuous probability distributions have an infinite number of possible outcomes (empirical can have a bin for an amount+ which would include that amount and anything over that amount, continuous). Can be presented by dividing the distribution into a countable number of bins. Most effective in depicting the value of the loss, rather than the number of outcomes.

Central tendency

The measures of central tendency represent the best guess as to the actual outcome. Central tendency is the most representative of all possible outcomes. Many probability distributions cluster around a particular value.




Measures of central tendency include: mean (expected value), median and mode.




The expected value is derived from theory and represents the weighted average of all possible outcomes in a probability distribution. The mean is derived from experience.

Expected value or mean

The expected value and mean represent the average of the data set. Sum of values divided by number of values.




Best guess for a risk management professional of what a future outcome will be.


Median

The median is the value at the midpoint of a sequential data set with an odd number of values (with an even number of values, it is the average of the two middle values).




A probability distribution's median has a cumulative probability of 50%, meaning one-half of the values are above the median and one-half are below.




Example - in the distribution 1,7,15,18,21, the median is 15.

Mode
The mode (sounds like most) is the value that occurs most in a probability distribution. Mode helps insurance professionals focus on the outcomes that are most common.



Mode is the value of the outcome directly beneath the peak of the probability density function.
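Python's statistics module computes all three measures directly. The data set below is hypothetical: the median card's values plus a repeated 15 so a mode exists:

```python
import statistics

outcomes = [1, 7, 15, 18, 21, 15]

print(statistics.mean(outcomes))    # sum of values / number of values
print(statistics.median(outcomes))  # midpoint of the sorted values: 15.0
print(statistics.mode(outcomes))    # most frequent value: 15
```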


Distribution

The relationships among the mean, median, and mode are illustrated by a distribution's shape.




Symmetrical - one side of the curve is a mirror image of the other side. Includes bell-shaped curve. Mean median and mode are equal when there's symmetrical distribution.

Distribution pt.2
Asymmetrical (not bell curved) - skewed distribution. Common when most losses are small but there is a small probability of large losses. Mean and median values will differ. Median value typically a better predictor than the mean as to what is most likely to occur.

Dispersion

Dispersion describes the extent to which the distribution is spread out (how wide or narrow will normal distribution curve be?). Insurance professionals may use dispersion to determine whether to offer insurance coverage to a possible insured.




Dispersion is measured using standard deviation.


Standard deviation

Standard deviation represents the magnitude of the dispersion of values from the mean (tells us how wide or narrow the curve will be, the wider the curve, the more dispersion, therefore the riskier something will be). The probability of each outcome doesn't need to be known to calculate standard deviation. One must know just how often each outcome occurred (frequency).

Standard deviation pt.2

The higher the standard deviation, the more risk you're taking on. The lower the standard deviation, the more likely results will fall within a given range of the expected value, and the less risk involved in a loss exposure.




If two distributions have the same expected value (mean), the distribution with the lower standard deviation has less risk (e.g. the mean is 10 for both {9, 11} and {5, 15}; the 5-to-15 distribution is riskier because its standard deviation is higher).
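The same-mean comparison can be verified with population standard deviations in Python:

```python
import statistics

# Two hypothetical distributions, both with mean 10.
narrow = [9, 11]
wide = [5, 15]

assert statistics.mean(narrow) == statistics.mean(wide) == 10
print(statistics.pstdev(narrow))  # 1.0 -> less dispersion, less risk
print(statistics.pstdev(wide))    # 5.0 -> more dispersion, more risk
```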




If the two distributions have different expected values (means) and standard deviations, you will choose the investment with the lowest coefficient of variation.

Coefficient of variation

The coefficient of variation compares two distributions to determine which has greater variability relative to its expected value (risk per unit of return). Standard deviation divided by mean. Insurers can use the coefficient of variation to determine whether a loss control measure has made losses more predictable.




The higher the coefficient of variation, the greater the relative variability and the greater the risk.




So, if two outcomes have the same expected return (mean), choose the one with the lower standard deviation. If they have different returns, choose the one with the lower coefficient of variation.
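A minimal coefficient-of-variation helper in Python; the two sample distributions are hypothetical:

```python
import statistics

def coefficient_of_variation(values):
    """Standard deviation divided by the mean: risk per unit of return."""
    return statistics.pstdev(values) / statistics.mean(values)

a = [8, 10, 12]   # mean 10, modest spread
b = [10, 20, 30]  # mean 20, wider spread relative to its mean

# b has the higher COV, so it carries more risk per unit of return.
print(coefficient_of_variation(a), coefficient_of_variation(b))
```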


Normal distribution

A normal distribution is a bell-shaped probability distribution. Assigns some positive probability to each outcome regardless of its distance from the mean. The curve never touches the horizontal axis (meaning probability is never 0).




Normal distribution cannot predict when losses such as physical damages will occur but it can instead be used to provide a way of scheduling maintenance for property that could become dangerous if it were to fail. It can be used to help insurance professionals marshal resources to control losses.




It indicates there is a less than 5% chance of having an outcome outside TWO standard deviations from the mean.


Normal distribution pt.2

Under a normal distribution:




68.26% probability of all outcomes are within one standard deviation of the mean.




95.44% probability of all outcomes are within two standard deviations of the mean.




99.74% probability of all outcomes are within three standard deviations of the mean.




E.g. if you have an investment with a mean return of 10%, and standard deviation of 2%, that tells you that 1 standard deviation above the mean (10%) is 12% and 1 standard deviation below the mean is 8%. That means there's a 34.13% probability that your return on investment will be between 10%-12% and a 68.26% probability that your return will be between 8%-12%.
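Python's statistics.NormalDist can verify the one-standard-deviation figure for this hypothetical investment:

```python
from statistics import NormalDist

returns = NormalDist(mu=10, sigma=2)  # mean 10% return, 2% std deviation

# Probability the return falls within one standard deviation (8% to 12%).
within_one_sd = returns.cdf(12) - returns.cdf(8)
print(round(within_one_sd, 4))  # 0.6827, matching the card's 68.26%
```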




Statisticians have shown that 1.65 standard deviations below the mean cuts off the lowest 5% of any normal distribution.

Normal distribution insurance example

Worried about loss exposures: there are several machines at a warehouse that produce inventory, and if a machine fails it will be a big problem. On average a machine lasts 5,000 hrs and the standard deviation is 300 hrs. So there's a 68.26% probability the machine will fail between 4,700 hrs and 5,300 hrs. There's a 2.28% probability the machine will break down before it reaches 4,400 hrs.
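The machine-life figures on this card check out with NormalDist in Python:

```python
from statistics import NormalDist

life = NormalDist(mu=5000, sigma=300)  # mean life 5,000 hrs, SD 300 hrs

p_within_one_sd = life.cdf(5300) - life.cdf(4700)  # fails at 4,700-5,300 hrs
p_before_4400 = life.cdf(4400)  # 4,400 hrs is two SDs below the mean

print(round(p_within_one_sd, 4))  # ~0.6827 (the card's 68.26%)
print(round(p_before_4400, 4))    # ~0.0228 (the card's 2.28%)
```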

Four dimensions of a loss exposure:

Loss frequency, loss severity, total dollar losses, timing.




If any of these dimensions involve empirical distributions developed from past losses, data credibility must be determined. However, data credibility itself is not one of the dimensions.


Loss frequency

Loss frequency is the number of losses occurring within a specific period.




Relative loss frequency is the number of losses relative to exposure units. E.g. if an organization has 10 buildings and 10 theft losses per year, the loss frequency is 10 and the relative frequency is 1 (one loss per building). Two of the most common applications of relative frequency measures are auto accidents per mile driven and injuries per person-hour worked.




Most organizations don't have sufficient exposure units to predict low-frequency, high-severity events (insurance is available for these type of events).

Loss severity


Loss severity represents how serious a loss might be (dollar amount).




Maximum possible loss is the total value exposed to loss from any one event or at any one location. Typically limited to the value of the building plus contents. Liability losses are limited only by the amount of the defendant's total wealth.

Frequency and severity

Loss frequency and severity must be considered together to fully analyze a loss exposure. Exposures generating minor but definite losses are typically retained. Frequency and severity of loss exposures tend to be inversely related.




Prouty approach (just know general idea) - considers four categories of frequency and three categories of severity. Considers both low and high severity losses. Helps risk management professionals prioritize loss exposures.


Total dollar losses
The total dollar loss is calculated by multiplying expected loss frequency and loss severity. Estimates can be used to determine which risk management techniques are appropriate. Worst-case scenario can be approximated by multiplying maximum possible loss by high estimate of loss frequency (what's our total maximum exposure?).
Timing

Timing refers to when losses occur and when loss payments are made. Timing dimension is critical (for liquidity concerns and understanding how much can be earned on an investment while waiting to pay out a loss). Amounts held in loss reserves can earn interest until the actual payment is made (sometimes losses don't need to be paid out right away, in which case the money can be invested and grow until loss comes due).




Liability losses often involve long delays between the occurrence of the event and the payout for the loss.


Data credibility

Credibility represents the level of confidence that available data can accurately predict future losses. Age of data can influence credibility. Changes in environment can influence credibility. Data has to be relevant, organized, complete and consistent. May be affected by internal and external factors. External factors include natural disasters and terrorist attacks.




Insurance professionals must determine whether to use more recent data that may be more relevant, but less accurate, than old data.

Total dollar loss calculation example

In an array of 20 losses, the expected value of loss frequency is 4.9 and the expected value of loss severity is $385.50. Therefore, the expected total dollar loss is 4.9 x $385.50 = $1,888.95.
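The arithmetic on this card as a one-line check in Python:

```python
# Expected total dollar loss = expected frequency x expected severity.
expected_frequency = 4.9
expected_severity = 385.50

total = expected_frequency * expected_severity
print(f"${total:,.2f}")  # $1,888.95
```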