282 Cards in this Set

Define validity.

Validity is the extent to which a study measures what it intends to measure (internal) and the extent to which the findings can be generalised beyond the research setting in which they were found (external).

Define face validity.

Whether a test, scale or measure appears, on 'the face of it', to measure what it intends to measure.

How can you determine face validity?

'Eyeballing' the measuring instrument or passing it to an expert to check.

Define concurrent validity.

Demonstrated when the results obtained are very close to those obtained on another recognisable scale/test.

Give an example of concurrent validity.

If a new IQ test is administered, the scores participants achieve can be compared with their performance on an established IQ test such as the Stanford-Binet. Close agreement between the two sets of data (a correlation of +0.8 or above) indicates that the new test has concurrent validity.

In an experimental piece of research, what validity issues may arise?

Demand characteristics


Investigator effects

In an experimental piece of research, how could you resolve any validity issues?

Use a control group


Standardised procedures


Single/double blind procedures

In a questionnaire, what validity issues may arise?

Social desirability bias


Leading questions

In a questionnaire, how may you resolve any validity issues that may arise?

Respondents remain anonymous


Using a lie scale

In an observation, what validity issues may arise?

Demand characteristics


Investigator effects (overt)


Broad/overlapping behavioural categories

In an observation, how may you resolve any validity issues that may arise?

Operationalised behavioural categories


Control group


Use a covert observation instead

When using qualitative methods, what validity issues may arise?

Population validity (case studies)


Researcher bias


Open to interpretation (subjective)

When using qualitative methods, how can you resolve any validity issues that may arise?

Coherence of the researcher's reporting


Use of direct quotes


Triangulation

Define reliability.

When a psychological measure produces consistent results when used multiple times, it is said to be reliable.

How can you assess reliability?

Test-retest method


Inter-rater reliability

Describe the test-retest method.

- Commonly applied when assessing the reliability of psychological tests, e.g. personality tests, IQ tests or questionnaires.


- The test is administered twice to the same participant and the results are compared


- If scores are obtained then a correlation can be calculated (a correlation co-efficient).


- A reliable result is one of +0.8 or above.
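The test-retest comparison above comes down to computing a correlation coefficient between the two sets of scores. A minimal sketch in Python; the scores and the `pearson_r` helper are invented for illustration:

```python
import statistics

# Hypothetical scores from the same participants on two sittings of the test
first_sitting  = [98, 105, 110, 120, 87, 102]
second_sitting = [96, 107, 112, 118, 90, 101]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(first_sitting, second_sitting)
# A coefficient of +0.8 or above would indicate the test is reliable
print(round(r, 2))
```

The same calculation assesses inter-rater reliability: the two lists are then the ratings produced by two independent observers.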

Name a problem with using the test-retest method.

- Deciding on the time-lapse between tests.


- There must be sufficient time in case the attitude or ability being tested actually changes.

Describe inter-rater reliability.

- Involves checking the consistency of ratings that two or more independent researchers have completed.


- The two observers will apply the behavioural categories to a test run (also known as pilot study) and the consistency of the results will be assessed.

When using a questionnaire, what reliability issues may arise?

- Questions may be too complex/ambiguous


- Questions may be interpreted differently by the same person on a different occasion.

When using a questionnaire, how can the reliability be improved?

- Test retest


- Create a correlation co-efficient


- Pilot study to rewrite/improve


- Poorly written questions can be replaced with closed questions

When using an interview, what reliability issues may arise?

- Questions may be leading or too ambiguous


- The participant could waffle on (qualitative data)


- Interviewer bias

When using an interview, how can the reliability be improved?

- Use the same interviewer each time


- Ensure interviewers are trained


- Use a structured interview method

When using an experiment, what reliability issues may arise?

Participants may be tested under slightly different conditions each time

When using an experiment, how can the reliability be improved?

- Use the same instructions for each participant (standardised)


- Use the same conditions for each participant

When using an observation, what reliability issues may arise?

- Different observers have different opinions/judgements


- Observer bias


- Might not have operationalised behaviour categories

When using an observation, how can the reliability be improved?

- All behavioural categories are correctly operationalised


- All possible behaviours included in the checklist


- Train the observers

Define content analysis.

A research technique that allows qualitative data to be placed into categories or themes.

Define the different methods of content analysis.

1. Coding


2. Thematic analysis

Describe coding as a method of content analysis.

The stage of a content analysis in which the communication to be studied is analysed by identifying each instance of the chosen categories (which may be words, phrases or even sentences).

Describe thematic analysis as a method of content analysis.

An inductive, qualitative approach to analysis that involves identifying implicit or explicit ideas within the data. Theories will often emerge once the data has been coded.

What two ways can you deal with and analyse qualitative data?

- Turn it into quantitative data by coding and categorising it, then use statistical tests to analyse it.


- Analyse each sentence and identify or seek out themes in the content.

How can a researcher review and amend themes in a content analysis?

1. Researcher makes themselves familiar with the data.


2. Categorise the information into meaningful units.


3. Count/label each time this theme occurs in the information.

Evaluate content analysis.

+ Flexibility (in terms of analysing and representing the data according to the aims of the research)


+ Ethical and valid nature (already in the public domain - consent/mundane realism)


- Lack of objectivity (reflexivity - used in descriptive forms of analysis)


- Risk of researcher bias

Explain why case studies are examples of an idiographic approach.

- Often observe a single participant or a group of similar participants.


- Focus on the individual rather than developing a set of laws.

What do case studies allow a researcher to do that a nomothetic approach does not?

They allow a researcher to investigate a topic in much more detail than if the researcher were trying to generalise across a large number of participants.

What kind of qualitative techniques lend themselves to generating material suitable for a case study?

- Semi-structured interviews


- Participant observation


- Personal notes; diaries, letters, notes


- Official documents; clinical notes, appraisal reports

Describe an intrinsic case study.

- Only represent themselves


- Knowledge from them is pursued for the sake of knowing rather than a general problem

Describe an instrumental case study.

- Examples of general phenomenon


- Provides the researcher with an opportunity to study the phenomenon by:


Identifying it


Explaining it

Name 5 examples of what 'case' could refer to.

1. Person - study of a single individual


2. Group - study of a distinctive set of people


3. Location - study of place


4. Organisation - study of a single organisation


5. Event - study of a particular social/cultural event

Outline the case study of Koluchova (1976).

- Czechoslovakian male identical twins


- Mother died during birth


- Father was of low intelligence


- Boys were never allowed out of the house


- Kept in a small unheated closet


- Discovered at age 7


- Could hardly walk, had acute rickets, fearful, poor speech


- After having lived in a foster home, as adults they appear well adjusted and cognitively able

Outline the case study of Curtiss (1977).

- Genie was found aged 13


- History of isolation, neglect and physical restraint


- Kept strapped to a child's potty in the attic


- On discovery, her appearance was that of a 6/7 year old child


- Unsocialised, primitive and 'hardly human'


- Made virtually no sound and couldn't walk


- Hasn't achieved full social adjustment or language despite intervention

Outline the case study of Corkin (1984).

- HM (27 year old male)


- Underwent brain surgery to relieve severe epilepsy, which affected his memory


- Could recall most experiences before the operation


- Couldn't remember new experiences for more than 15 minutes


- Declarative memories (memories of facts/events) vanished


- With practice he could acquire new skills like playing tennis


- Now in his 70s, he cannot recognise a photo of himself

Outline the case study of Freud (1909).

- Analysis of a phobia of horses in a 5 year old boy (Hans)


- Family lived opposite a coaching inn so Hans was afraid to leave the house


- Scared of the noise they made with their feet


- Symptomatic of his Oedipus complex


- Horses represented his dad/allowed him to stay at home with his mother


- Freud considered that this represented a disguised form of castration threat anxiety

Define a case study.

An in-depth study of an event, problem, process, activity or programme involving a single person or several similar people.

Define a bounded system of a case study.

Boundaries of the case; usually time and/or space.

How many forms of data do case studies use?

Multiple sources (almost every kind of qualitative data).

What three types of case study are there?

Single instrumental case study


Collective/multiple case study


Intrinsic case study

Define a single, instrumental case study.

Researcher focuses on an issue/concern and then selects a bounded case to illustrate that issue/concern.

Define a collective case study.

Researcher focuses on an issue/concern and then selects several bounded cases to illustrate that issue.

Describe the method of a case study.

1. Determine if a case study will answer your research question


2. Identify the case and what types of case study will be used


3. After collecting data, the researcher analyses the data to determine common themes between all of the cases (cross-case analysis)


4. Assertions: the researcher makes interpretations of the meaning of the case/themes and makes statements about the lessons learned from the case (what should be learned from the meaning of the data)

Describe the strengths of the case study.

+ Can stimulate new research into extraordinary behaviour that may be unlikely to be investigated otherwise (due to ethics)


+ Can challenge established theories


+ Rich, detailed in-depth data is produced


+ Can allow access to investigations which cannot be manipulated in research labs

Describe the weaknesses of the case study.

- Challenging to determine the case or find an issue/cause and then find a case to illustrate it or study the case itself


- Challenging to determine whether to study a single case or multiple cases


- Challenging to define the boundaries of the case (time consuming)


- Replication and generalisation not possible due to the uniqueness of the situation


- Risk of researcher bias


- Memory distortion when reconstructing case history can mean that memories of events are subject to influence

Define an aim.

A general statement of purpose (what the researcher intends to investigate, the purpose of the study).

Define a hypothesis.

A prediction about a difference or relationship (a clear, precise, testable statement).

Define directional hypothesis (one tailed).

States the direction of the difference or relationship; these tend to be used when the findings of previous research studies suggest a particular outcome.

Define a non-directional hypothesis (two tailed).

Does not state the direction; these tend to be used when there is no previous research, or when findings from previous research are contradictory.

Define a variable.

Anything that can vary or change within the investigation.

Define an independent variable.

An aspect of the experimental situation which is manipulated by the researcher or changes naturally so that the DV can be measured.

Define a dependent variable.

The variable that is measured by the researcher.

Define an extraneous variable.

Any variable other than the independent variable that may have an effect on the dependent variable if it is not controlled.

Give an example of an extraneous variable.

Participant variables - e.g. age, personality


Situational variables - e.g. noise, temperature


Confounding variables - have already impacted the results

Define a target population.

A group of people who are the focus of the researcher's interest from which a smaller sample is drawn.

Define a sample.

A group of people who take part in a research investigation. The sample is drawn from a target population and is presumed to be representative of that population.

Describe random sampling.

All members of the target population have an equal chance of being selected.


A list of all members of the target population is obtained, all are then assigned a number.


The sample is then generated through a lottery method (e.g. a computer-based randomiser).
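The lottery method above can be sketched with Python's `random` module. The population is invented for illustration; a fixed seed is used only so the sketch is repeatable:

```python
import random

# Hypothetical numbered target population of 50 people
population = list(range(1, 51))

random.seed(1)  # fixed seed so the sketch is repeatable

# Lottery method: every member has an equal chance of being selected
sample = random.sample(population, k=10)
print(sorted(sample))
```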

Describe systematic sampling.

A type of probability sampling in which members of a larger population are selected according to a random starting point and a fixed periodic interval.


The researcher randomly picks the first member from the population, then selects every nth subject from the list.
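The random starting point plus fixed interval described above is a single slice in Python. A minimal sketch with an invented population:

```python
import random

population = list(range(1, 101))  # hypothetical numbered population of 100
n = 10                            # fixed periodic interval: every 10th member

start = random.randrange(n)       # random starting point within the first interval
sample = population[start::n]     # then every nth member from the list
print(sample)
```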

Describe stratified sampling.

When the entire population is divided into subgroups (strata) based on members' shared attributes.


Then participants are randomly selected from within each stratum.
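The two steps above (divide into strata, then select randomly within each) can be sketched as follows. The strata, sizes and seed are invented for illustration, with each stratum contributing in proportion to its size:

```python
import random

# Hypothetical strata: the target population divided by a shared attribute
strata = {
    "year_12": list(range(1, 61)),    # 60 members
    "year_13": list(range(61, 101)),  # 40 members
}

random.seed(2)  # fixed seed so the sketch is repeatable
sample_size = 10
total = sum(len(members) for members in strata.values())

# Randomly select within each stratum, in proportion to its size
sample = []
for name, members in strata.items():
    k = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, k))
print(sample)
```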

Describe opportunity sampling.

Consists of taking the sample from people who are available at the time the study is carried out and who fit the criteria you are looking for.


Based on convenience.

Describe volunteer sampling.

Participants select themselves to be part of the sample; hence it is self-selecting.


To select a volunteer sample, a researcher may place an advert in a newspaper or on a common room notice board.

Define bias.

When certain groups may be over- or under-represented within the sample selected. This limits the extent to which generalisations can be made to the target population.

Define generalisation.

The extent to which findings and conclusions from a particular investigation can be broadly applied to the population. This is made possible if the sample of participants is representative of the population.

Define a pilot study.

A small scale version of an investigation that takes place before the real investigation.

Describe the process of a pilot study.

- Carried out before the main study.


- Often changes are made to the design as a result.


- Pilot study data will not be included in the final data.


- Piloting potentially saves time and money because it helps avoid flawed designs.

Describe a lab experiment.

- Conducted in a well controlled environment.


- Allows accurate measurements to be made.


- The researcher decides where, how and when the experiment takes place, and which participants take part.

Describe a field experiment.

- Conducted in the 'real world'.


- More naturalistic setting than a lab.


- Experiments carried out in this method are more representative of everyday life.

Describe a natural experiment.

- External


- The researcher takes advantage of a pre-existing independent variable.


- Called natural because the variable would have changed even if the experimenter had tried to stop it.


- Conditions created naturally.


- Can be tested in a lab.

Describe a quasi experiment.

- Internal


- An IV that is based on an existing difference between people (e.g age or gender).


- No one can manipulate this variable, it simply exists.

Evaluate lab experiments.

+ Easy to replicate


+ Precise control of EV/IV allowing cause-effect to be established


- Artificial setting (low ecological validity)


- Demand characteristics may bias results

Evaluate field experiments.

+ More likely to reflect real life


+ Less demand characteristics


- Less control over EVs


- Difficult to replicate

Evaluate natural experiments.

+ Can be used in situations which would be ethically unacceptable to manipulate IV


- More expensive and time consuming


- No control over EV


- Difficult to replicate

Define ecological validity.

The extent to which findings of a research study are able to be generalised to real life settings.

Define mundane realism.

The extent to which an activity or the entire study itself is similar to an activity or process one would complete in day to day life. Measure of ecological validity.

Define demand characteristics.

Any cue from the researcher or from the research situation that may be interpreted by participants as revealing the purpose of the investigation. This may lead to the participants changing their behaviour within the research situation.

Define social desirability.

A bias that describes the tendency of survey respondents to answer questions in a manner which will be viewed favourably by others.

Define the 'screw-you' effect.

When a participant actively goes out of their way to do everything wrong in an experiment.

Define descriptive statistics.

The term given to the analysis of data that helps to describe the numerical results of a piece of research (e.g. graphs, measures of central tendency and measures of dispersion).

Define measures of central tendency.

Mean


Mode


Median

Define bimodal.

Some data sets have two modes (two numbers that appear the most)

How do you overcome bimodal data?

Use different measures of central tendency.

Define the mean.

Add up all the results, divide by the number of results.

Define the median.

Place results numerically, find the central number.

Define the mode.

Number that appears the most frequently in a set of data.
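The three definitions above map directly onto Python's `statistics` module. A minimal sketch with an invented set of scores:

```python
import statistics

# Hypothetical set of test scores
scores = [3, 7, 7, 2, 9, 7, 4, 2, 8]

mean   = statistics.mean(scores)    # add up all results, divide by how many there are
median = statistics.median(scores)  # central value once the scores are placed in order
mode   = statistics.mode(scores)    # value that appears most frequently

print(mean, median, mode)
```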

Evaluate the mean.

Can't be identified within categorised data


Affected by outliers (anomalous results)

Evaluate the median.

Doesn't take account of all the scores.

Evaluate the mode.

Not representative of the whole


Risk of bimodal

Define measures of dispersion.

Describes how widely the scores vary and are spread from one another:


Range


Standard deviation

Evaluate the mean.

+ The only MCT that includes all values/scores and is the most sensitive.


+ Provides the most representative score.


- Easily distorted by extreme scores.

Evaluate the median.

+ Middle value is set


- Unrepresentative of all the data


+ Not affected by extreme scores

Evaluate the mode.

- Can be bimodal.


- Unrepresentative of all the data


+ When the data is categorical, this is the only MCT that can be used.


+ Not affected by extreme scores

Describe the range.

Calculates the spread of scores.


Can be calculated by subtracting the lowest value from the highest.

Describe standard deviation.

Calculates the spread of scores around the mean value.


The larger the standard deviation, the greater the dispersion and spread of scores.
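Both measures of dispersion can be computed directly. A minimal sketch in Python with invented scores (note that `statistics` offers both a population and a sample standard deviation; which is appropriate depends on the data):

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical set of scores

# Range: highest value minus lowest value
spread = max(scores) - min(scores)

# Standard deviation: spread of scores around the mean
sd = statistics.pstdev(scores)      # population SD; statistics.stdev gives the sample SD

print(spread, sd)
```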

What does a large SD suggest?

Not all the participants were affected by the IV in the same way, because the scores are widely spread. A small standard deviation would suggest that most participants responded in the same way and that the scores are all tightly packed.

Identify three characteristics of presenting data using a scatter graph.

- To show correlation


- To show a direction (positive/negative)


- Neither variable is manipulated

How can raw data be presented in a report?

Table of descriptive statistics


Descriptive summary paragraph


Presentation of data in a graph

How do you create a table of descriptive statistics?

- All tables must have a title


- The table must only display 1 MCT and 1 MD


- Describe the key findings: you must only describe the data reported in the table; you can't make any suggestions about why these results may have been found.

Define a graph.

A visual aid that helps make sense of quantitative data; graphs provide an overall picture that helps summarise the results and show patterns within the data.

What type of graph would you use for discrete/nominal data?

Bar chart

What type of graph would you use for correlational data?

Scattergraphs

What type of graph would you use for frequency/interval data?

Distributions

Define a distribution.

A visual way of examining how frequency data falls within a graph.

Describe a normal distribution curve.

When there is a large amount of frequency data, the distribution presents a symmetrical 'bell curve'. Most people fall in the middle, with very few people falling at the extreme ends. The tails never touch the axis and therefore never reach 0, as more extreme scores are always possible.

Where do mean, median and mode fall on a normal distribution curve?

All occupy the same middle point of the curve because they are all very similar in value.

Describe a positive skew on a distribution curve.

When most of the distribution is concentrated towards the left of the graph and there is a long tail on the right.

Give an example of a positive skew on a distribution curve.

Most students in an exam got a really low score.

Describe a negative skew on a distribution curve.

When most of the distribution is concentrated towards the right of the graph meaning there is a long tail on the left.

Give an example of a negative skew on a distribution curve.

Most of the students in a class score really highly on an exam.

Where do mean, median and mode fall on a skewed distribution curve?

The measures of central tendency are not all positioned in the centre; the mode remains at the highest point of the peak and the median comes second (because the median and mode are not affected by extreme scores in the data). The mean is always positioned furthest from the mid-point because it is affected by extreme scores.
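The ordering described above can be checked numerically. A minimal sketch with an invented, positively skewed set of exam scores:

```python
import statistics

# Hypothetical positively skewed scores: most students score low,
# with a long tail of a few high scores on the right
scores = [2, 3, 3, 3, 4, 4, 5, 6, 9, 15]

mean   = statistics.mean(scores)
median = statistics.median(scores)
mode   = statistics.mode(scores)

# The mean is dragged towards the extreme scores in the tail,
# so for a positive skew: mode < median < mean
print(mode, median, mean)
```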

Define investigator effects.

Any effect of the investigator's behaviour (conscious or unconscious) on the research outcome (DV).

Give an example of investigator effects.

Presence of a researcher


Way in which the researcher asks the questions


Researcher's expectations of what the findings may be


Gender/age of the researcher

Give an example of demand characteristics.

What the participants may have heard from other participants


Verbal communication


Setting of the study


Characteristics of the researcher

Define randomisation.

The use of chance in order to control for the effects of bias when designing a study.

Define standardisation.

Using exactly the same formalised procedures and instructions for all participants in a research study.

Give an example of randomisation.

A memory experiment that involves recalling words from a list. The list must be randomised through a computer so there is no researcher influence.

Give an example of standardisation.

A standardised set of instructions.

Define an independent group as an experimental design.

One group of participants for the control condition. Another (separate) group of participants for the experimental condition.

Define repeated measures as an experimental design.

One group of participants take part in both conditions. They repeat their participation.

Define matched pairs as an experimental design.

One group of participants for the control condition. Another (separate) group of participants for the experimental condition, but each participant is matched to another.

Evaluate an independent group as an experimental design.

+ Avoids order effects (such as practice or boredom)


- More people are needed than with the repeated measures design


- More time consuming


- Individual differences between participants (such as age, gender)

Evaluate repeated measures as an experimental design.

+ More statistical power because they control EVs


+ Fewer subjects required


+ Quicker and cheaper


+ Assess an effect over time


- Increased likelihood of order effects

Evaluate matched pairs as an experimental design.

+ Reduces participant variables


+ Avoids order effects


- Very time consuming trying to match participants


- Impossible to match people entirely

Define a naturalistic observation.

Watching and recording behaviour in the setting within which it would normally occur.

Define a controlled observation.

Watching and recording behaviour within a structured environment, i.e. where some variables are managed.

Define a covert observation.

Participants' behaviour is watched and recorded without their knowledge or consent

Define an overt observation.

Participants' behaviour is watched and recorded with their knowledge and consent.

Define a participant observation.

The researcher becomes a member of the group whose behaviour they are watching and recording

Define a non-participant observation.

The researcher remains outside of the group whose behaviour they are watching and recording.

Define observational techniques.

Types of observations; they can be naturalistic, controlled, covert, overt, participant or non-participant.

Define open questions.

An open question doesn't have a pre-set, fixed range of answers. It allows participants to expand and answer freely in any way they wish to. These questions often answer 'why' people think/behave in certain ways.

Define closed questions.

A closed question has a fixed choice of response, determined by the researcher.

Evaluate observations.

+ Naturalistic observations have a high ecological validity


+ Observations allow researchers to investigate usually unethical topics


+ Can develop new hypotheses for further research


- If participants know they are being observed then their behaviour becomes unnatural


- Ethical issues (consent, confidentiality, right to withdraw)


- Low reliability (inter-rater etc)

Define a questionnaire.

A set of written questions given to participants without the need for a researcher to be present.

Define the Likert scale.

When the respondent indicates their agreement with a statement using a scale of usually 5 points.

Give an example of the Likert scale.

Zombie films have an educational value.


1 - strongly agree


2 - agree


3 - neutral


4 - disagree


5 - strongly disagree

Define a rating scale.

Gets respondents to identify a value that represents their strength or feeling about a particular topic.

Give an example of a rating scale.

How entertaining do you find zombie films?


Very entertaining  5  4  3  2  1  Not at all entertaining

Define a fixed choice option in a questionnaire.

Includes a list of possible options and respondents are required to indicate those that apply to them.

Give an example of a fixed choice option in a questionnaire.

Why do you watch zombie films?


Entertaining


Amusing

Define acquiescence bias.

A tendency for a person to respond to any questionnaire/interview item with agreement regardless of the actual content.

What makes a good questionnaire?

Do not include:


Ambiguity


Double negatives


Double barrelled questions


Emotive language


Leading questions


Use of jargon

Evaluate the likert scale.

+ Easy to answer


+ Easy analysis


- Social desirability bias

Evaluate the rating scale.

+ Easy to answer


+ Easy analysis


- Social desirability bias

Evaluate the fixed choice answer for questionnaires.

+ Easy to answer


+ Easy to analyse


- Restricted choice of answers

Describe Jackson & Messick's (1961) F-scale study used to measure authoritarian personality (a case of acquiescence bias).

- Used the F-scale to measure authoritarian personality.


- Created a reversed F-scale (items were opposite to the original)


- Gave both questionnaires to the same group of respondents and found a strong positive correlation between two sets of results.


- Showed a tendency to agree with items on a questionnaire regardless of whether the content of the question was true.

Evaluate questionnaires.

- Demand characteristics


- Social desirability bias


- Response bias & acquiescent bias


- No clarification


+ Cost effective


+ Straightforward to analyse


+ Easy to gather large amounts of data


+ Easy to replicate

Define the interview schedule as part of designing an interview.

This is the list of questions that the researcher will ask in a structured/semi-structured interview.

Define the environment as part of designing an interview.

Interviews should be conducted in a quiet room, away from other people to increase the likelihood that the interviewee will open up.

Define rapport as part of designing an interview.

In one-to-one interviews it is good practice to start with neutral questions to make interviewees feel relaxed and comfortable, as a way of establishing a rapport with them.

Define ethical consideration as part of designing an interview.

Interviewees must give consent to take part, be told that their answers/identity will remain anonymous and that they have the right to withdraw.

Define recording as part of designing an interview.

Notes can be taken at the time or recorded and analysed later.

Describe the process of designing an interview.

Interview schedule


Standardisation


Environment


Rapport


Ethical considerations


Recordings

What makes a good interview?

Do not include;


Ambiguity


Double negatives


Use of jargon


Double barrelled questions


Emotive language


Leading questions

Define a structured interview.

- Follows a precise list of questions that are asked in a fixed order


- Similar to a questionnaire but conducted by an interviewer


- The interviewer asks a question, waits for and records the response, then moves on to the next question on the list

Define a semi-structured interview.

- Has a list of questions but can be changed according to responses


- The sort of interview one is likely to encounter in everyday life, such as a job interview

Define an unstructured interview.

- No set questions


- Similar to a conversation


- Interviewee is strongly encouraged to elaborate and expand on their answers to develop their conversation

Evaluate interviews.

+ Reduced demand characteristics (more likely to be honest in front of someone)


+ Easy to replicate


+ More control over responses


+ Respondents can elaborate


+ More ethical


- Social desirability


- Difficult to analyse


- More costly and time consuming


- Rapport with the interviewer may bias responses

Define a correlation.

Relationships between two co-variables.

How can correlations be investigated?

Using non-experimental methods that are looking for a relationship/connections such as self-report methods, observations and case studies.

How can you present the data gained from a correlational study?

In terms of STRENGTH and DIRECTION

How can direction be presented in a correlational study?

Direction is expressed as 'positive' or 'negative' (or zero correlation if there is no relationship, or curvilinear if the relationship is more complex than just one direction).

How can strength be presented in a correlational study?

Strength is indicated numerically as a 'correlation co-efficient' that ranges from +1 through 0 to -1.

Define one direction relationships as a correlation.

One direction relationships are either negative or positive. However if they change direction over time they are known as 'curvilinear'.

Describe a negative correlation.

As X increases, Y decreases.

Describe a positive correlation.

As X increases so does Y.

Describe a zero correlation.

There is no pattern; it is neither positive nor negative, which shows no relationship between X and Y.

Why are scatter graphs correlational relationships rather than experimental design?

There is no 'cause and effect' relationship between the two co-variables.

What's the difference between an experimental hypothesis and a correlational hypothesis?

An experimental hypothesis predicts a difference between conditions of IV and effect on DV and can predict a direction (more/less etc) for the effect of the IV on DV whereas a correlational hypothesis predicts a relationship between two co-variables and can predict the direction (positive/negative).

What's the difference between an experiment and a correlation?

An experiment aims to establish a cause and effect relationship between the IV and DV whereas a correlation only shows the relationship between the co-variables; it doesn't show that one variable affects the other.

Give an example of a correlational hypothesis.

There will be a relationship between the amount of coffee people drink and their levels of anxiety.

Evaluate correlations.

+ Useful preliminary tool as a starting point for further research.


+ Economical and less time consuming


- Cause and effect relationship cannot be established


- May be an EV responsible for the correlation

Who published ethical guidelines?

The British Psychological Society - code of ethics

What's the difference between ethical issues and ethical guidelines?

The ethical guidelines are a quasi-legal document produced by the BPS, whereas ethical issues arise when a conflict exists between the rights of participants in research and the goals of the research.

Describe the BPS Code of Ethics.

A quasi-legal document that instructs psychologists about what behaviour is not acceptable when dealing with participants. It is built around 4 major principles:


Respect


Competence


Responsibility


Integrity

Define informed consent.

Making participants aware of the aims of the research, the procedures, their rights and what the data will be used for.

Define deception.

Deliberately misleading or withholding information from participants at any stage of the investigation.

Define protection from harm.

Participants should not be placed at any more risk than they would be in their daily lives and should be protected from physical and psychological harm.

Define confidentiality.

Keeping information private; participants should feel confident that the study's report won't reveal information or data which makes it possible for individual participants to be identified, or for their data to be linked to them.

Define right to withdraw.

Participants should be allowed to leave at any point during the study if they decide they no longer want to take part, including retrospectively after the study has finished (their data would be removed from the research and destroyed).

What types of consent are there?

Prior-general


Presumptive


Retrospective

Define prior-general consent.

Participants give their permission to take part in a number of different studies - including one that will involve deception. By consenting, participants are effectively consenting to be deceived.

Define presumptive consent.

Rather than getting consent from the participants themselves, a similar group of people are asked if the study is acceptable. If this group agree then consent of the original participants is 'presumed'.

Define retrospective consent.

Participants are asked for their consent (during debriefing) having already taken part in the study. They may not have been aware of their participation or they may have been subject to deception.

How do psychologists deal with protection from harm?

Psychologists can ask ethics committees to check their research proposals, to help spot any potential problems. At the start of the study they could ask their pp's about any pre-existing problems and at any point in the study they can stop the research at the first sign of any harm occurring. After the study they can debrief all pp's and offer aftercare.

How do psychologists deal with uninformed consent?

Psychologists will ask pp's to read and sign a consent form and will ask the parents of children under 16 years to give consent on their behalf. The carers/specialists of an adult with communication/understanding difficulties will be consulted if they feel they are unable to make an informed decision on their own. Retrospective consent can be gained during the debrief.

How do psychologists deal with deception?

Debriefing should be used to explain the real aim and rationale for the deception, as well as to reassure the participant and allow them to ask any questions they may have. The right to withdraw should be emphasised throughout and retrospective right to withdraw via destruction of data should be offered.

How do psychologists deal with confidentiality?

Psychologists should allocate numbers, letters or codes to each pp to ensure they're kept anonymous, as well as keeping the location of the study as general as possible. Consent must be gained from pp's for their data to be used in situations where it is impossible to offer confidentiality.

How do psychologists deal with right to withdraw?

Pp's should be informed of their right at the beginning of the research, and should be reminded of this right at suitable points during and after the research. If they choose to exercise their right, pressure should not be put on them to stay and payment should not be used to coerce pp's. Task avoidance should be taken as a wish to withdraw in child pp's.

Define peer review.

Peer review is the process of subjecting a piece of research, before publication, to independent scrutiny by other psychologists working in a similar field, who consider the research in terms of its validity, significance and originality.

What is the purpose of peer review?

1. To allocate research funding


2. To validate the quality and the relevance of the research


3. To suggest amendments and improvements

What is the process of peer review?

1. The study is written up by the researchers


2. It is submitted to a journal for publication


3. Editorial board selects articles for relevance and quality


4. The report is read by anonymous reviewers, who return it to the journal with comments recommending that it be accepted, revised before publication, or rejected, often with suggested amendments and improvements.

Evaluate peer review.

- Anonymity of the reviewer can be used as a way to criticise rival researchers who are in competition for research funding


- Publication bias; favour research that has a positive rather than negative or non-significant results


- Peer reviewers may try to maintain the status quo by being critical of any research that contradicts current established theory; this can slow down research in that area

Why is peer review important in psychological research?

- Checks the validity of the research


- It is difficult for authors and researchers to spot every mistake in a piece of work


- Prevents the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, personal views and deliberate fraud


- Can also judge the quality and significance of the research in a wider context

Describe the mental health statistics for the UK.

- 1 in 4 people will experience some kind of mental health problem in the course of a year


- Depression affects 1 in 5 older people


- British men are 3x more likely than women to commit suicide


- Self harm rates for the UK are the highest in Europe; 400 per 100,000 population

Describe the economic implications for mental health in the UK.

- Mental illness in the UK costs over £105.2 billion a year through costs of medical or social care


- Sickness absence in work due to mental health costs around £8 billion per year (70 million working days are lost)


- 43% of unemployed people will have a primary mental health issue

Explain why statistical testing is used in psychological research.

Researchers use statistical testing to determine the likelihood that the effect/difference/relationship they have found has occurred due to chance.

Why must we carry out statistical testing even if we have calculated the mean?

Although there appears to be a difference in the mean scores of the two conditions, we do not yet know whether this is a significant difference. It could be that the difference in the scores was just a coincidence and therefore occurred through chance.

Give an example of a statistical test.

Sign test

What conditions must be in place before conducting a sign test?

1. The researcher must be looking for a difference not a relationship.


2. The experiment must have used a repeated measures design.


3. The data collected must be nominal, i.e. organised into categories.

Define a level of significance.

This is the point where the researcher is able to accept the experimental hypothesis and show statistically that there is a significant difference between two conditions.

Define a 0.05 level of significance.

This means that there is less than 5% probability that the results occurred by chance and that it is more likely that the difference can be explained through the manipulation of the independent variable.

What conditions must be in place before using a critical values table?

- The desired level of significance.


- The number of participants.


- Whether the hypothesis is one tailed (directional) or two tailed (non-directional)


- The observed value (the value that is calculated once the statistical test has been completed).

How do researchers determine whether or not the results are significant at the level of 0.05?

Critical values table.

How do you conduct a sign test?

1. Convert the data to nominal; to do this, subtract each participant's score in one condition from their score in the other condition. If the answer is negative record a - sign and if the answer is positive record a + sign.


2. Add up the number of - and + signs; any participant whose scores are the same in both conditions is ignored and the total number (N) is adjusted accordingly.


3. Take the total of the less frequent sign; this is the observed value (S).


4. Compare the S value with the critical value; the S value must be equal to or less than the critical value at the 0.05 level of significance for the result to be significant.
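The steps above can be sketched in a few lines of Python (the scores are hypothetical):

```python
def sign_test(condition_a, condition_b):
    """Sign test: returns the observed value S and the adjusted sample size N.

    Scores are paired per participant (repeated measures design).
    """
    signs = []
    for a, b in zip(condition_a, condition_b):
        diff = a - b
        if diff != 0:  # tied scores are ignored and N is reduced
            signs.append('+' if diff > 0 else '-')
    n = len(signs)
    s = min(signs.count('+'), signs.count('-'))  # S = total of the less frequent sign
    return s, n

# Hypothetical scores for 6 participants across two conditions;
# one tied pair is dropped, so N is adjusted from 6 to 5
s, n = sign_test([5, 7, 3, 8, 6, 4], [4, 7, 5, 6, 5, 6])
# S is then compared with the critical value from the table:
# significant if S <= critical value at P <= 0.05
```

The returned S is only the observed value; the researcher still looks up the critical value for the given N, tails and significance level.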

What would you include in the design if you were designing a study?

IV


DV (and how you're going to measure it)


Type of method


Type of design

What would you include in the sample if you were designing a study?

Age


Gender


Size


Technique (e.g. opportunity sample)

What would you include in the controls if you were designing a study?

Counterbalancing/randomisation


Standardisation

What details do you include when designing a study?

Design


Sample


Controls


Materials


Procedure


Ethical issues

Define an abstract to a scientific report.

- First section of a journal article


- Short summary that includes all major elements (A, H, M, R, C)


- Psychologists often read the abstract in order to decide whether the study is worth reading in full

Define an introduction to a scientific report.

- A literature review of the general area of investigation


- Research review should follow a logical progression - beginning broadly and gradually becoming more specific until the aims and hypothesis are presented

Define a method of a scientific report.

- Should include sufficient detail so that other researchers are able to precisely replicate the study


- Design of the study must be clearly stated and reasons for choice


- Sample must include size, target population and sampling method


- Materials must be included


- Controls (e.g. random allocation)


- Procedure


- Ethics (and how they were addressed)

Define the results of a scientific report.

- Summary of key findings


- Can include descriptive statistics and inferential statistics


- Must include final outcome (was the hypothesis accepted or rejected)


- If the results are qualitative then the results should include categories/themes

Define a discussion at the end of a scientific report.

- Summary of the results/findings verbally rather than statistically


- Must be presented in the context of the evidence presented in the introduction


- Researcher should be mindful of the limitations of the investigation and discuss them here


- Wider implications of the research are considered

Define referencing at the end of a scientific study.

- Full details of any source material that the researcher used or cited in the report

Give an example of referencing.

Author (year). Title of chapter. Title of book/journal. Publisher/volume no. & page no.

What materials are used within psychological research?

- Standardised instructions: Details are the same for each pp - purpose: to reduce EV's


- Informed consent: Requirement of the study and must remain anonymous - purpose: to make the study ethically acceptable


- Debriefing form: Reveals the true aims of the research and provides contact details in case the pp's withdraws - purpose: to make the study ethically acceptable

Why is statistical testing used in psychological research?

To determine whether a result is due to chance or due to the manipulation of the IV.

What is the accepted level of probability in psychology?

P ≤ 0.05

What does P ≤ 0.05 mean?

That there is less than a 5% probability that the findings occurred due to chance.

Why do we need significance levels?

Statistical tests work on a basis of probability rather than certainty. All statistical tests employ a level of significance. The usual level of significance is P ≤ 0.05. This means that there is up to a 5% possibility that the observed effect occurred due to chance.

Why can't we use P ≤ 0.5?

50% likelihood that the results have occurred due to chance.


50% confidence that the results have occurred due to the manipulation of the IV.


Too lenient - Type 1 error

Why can't we use P ≤ 0.2?

20% likelihood that the results have occurred due to chance.


80% confidence that the results have occurred due to the manipulation of the IV.


Too lenient - Type 1 error

Why can't we use P ≤ 0.1?

10% likelihood that the results have occurred due to chance.


90% confidence that the results have occurred due to the manipulation of the IV.


Too lenient - Type 1 error

Why can't we use P ≤ 0.01?

1% likelihood that the results have occurred due to chance.


99% confidence that the results have occurred due to the manipulation of the IV.


Too stringent - Type 2 error

Define a type 1 error.

When a researcher says that the results are significant when in fact they're not. The null hypothesis is rejected and the experimental/alternative hypothesis is accepted incorrectly.


AKA false positive/optimistic error


Most likely to happen if the significance level is too lenient.

Define a type 2 error.

When a researcher says that the results are not significant when in fact they are. The null hypothesis is accepted and the experimental/alternative hypothesis is rejected incorrectly.


AKA false negative/pessimistic error


Most likely to happen if the significance level is too stringent.

Why do psychologists use P ≤ 0.05 level of significance?

It best balances the risk of making a type 1 or type 2 error.

Define statistical testing.

- Used to determine whether any difference found between variables is statistically significant.

What three pieces of information are needed to conduct a statistical test?

- Testing for a difference or relationship


- Research design used


- Level of measurement

Define level of measurement.

The type of quantitative data that is collected from the research.

Define nominal data.

- Data is represented in the form of categories.

Give an example of nominal data.

Are you a smoker or a non-smoker?

Define ordinal data.

- Data can be ranked or ordered in some way


- Doesn't have equal intervals between the numbers


- Can often be represented on a scale of 1-10 and may involve some interpretation

Give an example of ordinal data.

On a scale of 1-10 (1=minimal, 10=a lot), how much do you feel you have improved since your last assessment?

Define interval data.

- Data is based on a numerical scale with equal intervals.


- They use accepted units of measure and so are more objective than ordinal data.

Give an example of interval data.

Time


Speed


Height

What measure of central tendency do you need to use when looking at nominal data?

Mode

What measure of central tendency do you need to use when looking at ordinal data?

Median

What measure of central tendency do you need to use when looking at interval data?

Mean

What measure of dispersion do you need to use when looking at nominal data?

N/A

What measure of dispersion do you need to use when looking at ordinal data?

Range

What measure of dispersion do you need to use when looking at interval data?

Standard deviation
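The pairings above can be captured in a small lookup table (a sketch; the dictionary name is illustrative):

```python
# Appropriate descriptive statistics for each level of measurement
DESCRIPTIVES = {
    'nominal':  {'central_tendency': 'mode',   'dispersion': None},
    'ordinal':  {'central_tendency': 'median', 'dispersion': 'range'},
    'interval': {'central_tendency': 'mean',   'dispersion': 'standard deviation'},
}

print(DESCRIPTIVES['ordinal']['central_tendency'])  # → median
```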

What does N stand for?

Total number of participants

What test do you use when it's an experiment, using a repeated measures design and nominal data?

Sign test

What test do you use when it's an experiment, using an independent groups design and ordinal data?

Mann-Whitney

What test do you use when it's an experiment, using a repeated measures design and ordinal data?

Wilcoxon

What test do you use when it's an experiment, using a repeated measures design and interval data?

Related t-test

What test do you use when it's an experiment, using an independent groups design and interval data?

Unrelated t-test

What test do you use when it's a correlation using ordinal data?

Spearman's Rho

What test do you use when it's a correlation using interval data?

Pearson's r

What test do you use when it's an experiment, using an independent groups design and nominal data?

Chi-squared
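The test-selection cards above can be summarised in one lookup keyed on what you are testing for, the design, and the level of measurement (a sketch; the keys are illustrative). Note that Chi-squared pairs with nominal (category/frequency) data and an unrelated design:

```python
# (looking for, design, level of measurement) → statistical test
TEST_TABLE = {
    ('difference', 'repeated measures',  'nominal'):  'Sign test',
    ('difference', 'independent groups', 'nominal'):  'Chi-squared',
    ('difference', 'repeated measures',  'ordinal'):  'Wilcoxon',
    ('difference', 'independent groups', 'ordinal'):  'Mann-Whitney',
    ('difference', 'repeated measures',  'interval'): 'Related t-test',
    ('difference', 'independent groups', 'interval'): 'Unrelated t-test',
    ('relationship', None, 'ordinal'):  "Spearman's Rho",
    ('relationship', None, 'interval'): "Pearson's r",
}

print(TEST_TABLE[('difference', 'repeated measures', 'ordinal')])  # → Wilcoxon
```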

In statistical tests, if there is an r in the title, how do you know if the results are significant?

Results are significant if the calculated/observed value is equal to or more than the critical value at P ≤ 0.05, therefore the experimental/alternative hypothesis would be accepted and the null rejected.

In statistical tests, if there isn't an r in the title, how do you know if the results are significant?

Results are significant if the calculated/observed value is equal to or less than the critical value at P ≤ 0.05, therefore the experimental/alternative hypothesis would be accepted and the null rejected.
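The two rules can be combined into one check (a sketch; the function and parameter names are illustrative):

```python
def is_significant(test_name, observed, critical):
    """Apply the 'r in the title' rule at P <= 0.05.

    Tests with an 'r' in the name (Pearson's r, Spearman's Rho,
    related/unrelated t-tests, Chi-squared): observed must be >= critical.
    Tests without an 'r' (Sign test, Mann-Whitney, Wilcoxon):
    observed must be <= critical.
    """
    if 'r' in test_name.lower():
        return observed >= critical
    return observed <= critical

print(is_significant('Wilcoxon', observed=3, critical=5))          # → True
print(is_significant("Pearson's r", observed=0.4, critical=0.6))   # → False
```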

How do you write a statement of significance?

The calculated value of (whichever test you have conducted) is ...


The critical value of (whichever test you have conducted) for a one/two tailed test at P ≤ 0.05 where N/df is ..., is....


As the calculated value of (whichever test you have conducted) is greater than/less than the critical value, the result is/not significant at P ≤ 0.05 and therefore we must reject experimental/alternative/null hypothesis and accept the experimental/alternative/null hypothesis; WRITE THE HYPOTHESIS YOU ACCEPT

What three pieces of information do you need to find out if a statistical test is significant or not?

1. Is the hypothesis directional (one tailed) or non-directional (two tailed)?


2. What is the sample size/degrees of freedom?


3. Level of probability? (always assume this is P ≤ 0.05 unless told otherwise).

Which statistical tests require you to know the full number of participants (N)?

Sign test, Wilcoxon, Spearman's Rho

Which statistical tests require you to know the full number of participants per condition (NaNb)?

Mann-Whitney

What is important to remember about the full number of participants when using a sign test or a Wilcoxon?

Sample size can be reduced where there is no difference between the scores. Any score that is the same in both conditions is ignored and sample size is reduced accordingly.

Which statistical tests require you to know the degrees of freedom (df)?

Unrelated t-test, Related t-test, Pearson's r, Chi-squared

Define degrees of freedom.

The number of values in the final calculation that are free to vary

How do you calculate degrees of freedom in an unrelated t-test?

(Na + Nb) - 2

How do you calculate degrees of freedom in a related t-test?

N-1

How do you calculate degrees of freedom in a pearson's r test?

N-2

How do you calculate degrees of freedom in a chi-squared test?

(Rows - 1) x (columns -1)
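The four degrees-of-freedom formulas above translate directly into code (a sketch; the function names are illustrative):

```python
def df_unrelated_t(na, nb):
    # (Na + Nb) - 2: participants per condition, minus 2
    return (na + nb) - 2

def df_related_t(n):
    # N - 1: total participants, minus 1
    return n - 1

def df_pearson(n):
    # N - 2: number of pairs of scores, minus 2
    return n - 2

def df_chi_squared(rows, columns):
    # (rows - 1) x (columns - 1) from the contingency table
    return (rows - 1) * (columns - 1)

print(df_unrelated_t(10, 10))  # → 18
print(df_chi_squared(2, 3))    # → 2
```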

What 8 features of science are there?

Objectivity


Empirical method


Replicability


Falsifiability


Theory construction


Hypothesis testing


Paradigms


Paradigm shift

Define objectivity.

- Opposite of subjectivity


- Must not allow personal opinion/biases to distort data


- Researchers must keep a critical distance during research


- Objectivity is the basis of the empirical method


- Lab experiments tend to have the highest level of objectivity

Define the empirical method.

- Emphasise the importance of data collection based on direct, sensory experience


- Experimental method and observational method are good examples of the empirical method


- Theory cannot claim to be scientific unless it has been empirically tested

Define theory construction.

- A theory is a general set of laws or principles that have the ability to explain particular events or behaviours


- Theory construction occurs through gathering evidence via direct observation


- Then a series of experiments reveal a connection to the observation


- Provides understanding by explaining regularities in behaviour

Define hypothesis testing.

- To make clear and precise predictions on the basis of theory


- An essential component of a theory is that it can be scientifically tested


- Theories should suggest a number of possible hypotheses


- A hypothesis can then be tested using systematic and objective methods to determine whether or not it will be rejected/accepted


- The process of deriving new hypotheses from an existing theory is known as deduction

Define replicability.

- Popper (1934) argued that if a scientific theory is to be 'trusted' the findings from it must be shown to be repeatable across a number of different contexts/circumstances


- Replication has an important role in determining the validity of the findings


- Used to see the extent to which the findings can be generalised


- In order for replicability to become possible, it is vital for researchers to report their investigations with as much precision and detail as possible

Define falsifiability.

- Popper (1934) argued that a key criterion of a scientific theory is its falsifiability


- Genuine scientific theory should hold itself up for hypothesis testing and the possibility of being proven wrong


- Even when a scientific principle has been successfully and repeatedly tested, it is not necessarily true; instead it has simply not yet been proven false - the theory of falsification


- There is a clear line between good science, in which theories are constantly challenged, and pseudosciences, which can't be falsified

Define paradigms .

- Kuhn (1962) argued that what distinguishes scientific disciplines from non-scientific disciplines is a shared set of assumptions and methods (a paradigm)


- Psychology lacks a universally accepted paradigm (it is seen as a pre-science); there are too many contradicting approaches for it to qualify as a science


- Natural sciences are characterised by having a number of principles as their core such as the theory of evolution

Define a paradigm shift.

- Kuhn (1962) suggested that this is the process when an established science has a scientific revolution


- A handful of researchers begin to question the accepted paradigm, and as the critique gathers popularity and pace, a paradigm shift occurs when there is too much contradictory evidence to ignore.

Give an example of a paradigm shift.

The change from a Newtonian paradigm in physics towards Einstein's theory of relativity.

Write a debrief that the psychologist could read out to the participants?

Thank you for taking part in this investigation. The aim of the study was...


To ensure you are fully debriefed you must know...


I'd like to remind you that your data and any personal details will remain confidential and anonymous.