81 Cards in this Set

  • Front
  • Back

Independent Design

comparison of 2 score sets from different participants

Related Design

comparison of 2 sets of scores from the same participants

Nominal Data

collected in categories; scores cannot be put in order on a scale


e.g. conformed or did not conform

Ordinal Data

scales devised by psychologists - subjective

e.g. ratings of aggression/attractiveness; self-report



Interval Data

where intervals between points on a scale are identical


e.g. the gap between 5 and 10 seconds is the same size as the gap between 10 and 15 seconds - equal units along the whole scale

Ratio Data

interval data but with an absolute zero (zero means none of the quantity) i.e. no negative scores


e.g. height or number of words recalled

Chi-squared

- tests of difference


- independent design


- nominal data

Mann-Whitney U-test

- tests of difference


- independent design


- ordinal/interval/ratio data

Wilcoxon Matched Pairs Signed Ranks Test

- tests of difference


- related design


- ordinal/interval/ratio data

T-test for Independent Samples

- tests of difference


- independent design


- interval/ratio data

Spearman's Rank Order Correlation Coefficient

- tests of correlation


- ordinal/interval/ratio data

Pearson's Product Moment Correlation Coefficient

- tests of correlation


- interval/ratio data

Significant Result

- difference = significant if it is unlikely to be due to chance


- can reject the null hypothesis & accept the alternative/experimental hypothesis


- if a finding is significant (at p≤0.05) the probability of achieving a difference (or correlation) as strong as that found just by chance is less than 5%

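The test cards above map each inferential test to a design and a level of measurement, and the Significant Result card gives the p ≤ 0.05 decision rule. As a rough illustration only (not part of the original cards; assumes Python with scipy installed, and all the data below are invented), here is how each test is typically called:

```python
# Minimal sketch of the six tests above using scipy.stats (invented data).
from scipy import stats

# Chi-squared: nominal data, independent design (frequency table)
observed = [[30, 20],   # e.g. conformed vs did not conform, condition A
            [15, 35]]   #                                    condition B
chi2, p, dof, expected = stats.chi2_contingency(observed)

# Mann-Whitney U: at least ordinal data, independent design
u_stat, p = stats.mannwhitneyu([3, 5, 4, 6], [7, 8, 6, 9])

# Wilcoxon matched pairs: at least ordinal data, related design
w_stat, p = stats.wilcoxon([3, 5, 4, 6], [4, 6, 6, 7])

# Independent-samples t-test: interval/ratio data, independent design
t_stat, p = stats.ttest_ind([5.1, 6.2, 5.8, 6.0], [4.2, 4.9, 5.0, 4.4])

# Spearman's rho: at least ordinal data, correlation
rho, p = stats.spearmanr([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])

# Pearson's r: interval/ratio data, correlation
r, p = stats.pearsonr([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 8.1])

# Significance decision: reject the null hypothesis if p <= 0.05
alpha = 0.05
significant = p <= alpha
```
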
Hypotheses

null


alternative/experimental:


- 1-tailed


- 2-tailed

Null Hypothesis

"there will be no correlation/difference between..."

1-tailed Hypothesis

- directional


used when:


- a theory predicts the direction of a difference/correlation


- when previous research found a difference in a particular direction


"there will be a positive/negative correlation between..."

2-tailed Hypothesis

- non-directional


used when:


- different theories make different predictions


- previous research is contradictory


- there is no previous research


"there will be a correlation/difference between..."

Type 1 error

- false positive


- claiming a difference is significant when it isn't


- more likely with a lenient p value, e.g. p≤0.10


- w/ p≤0.05, probability of a type 1 error is 5%

Type 2 error

- false negative


- claiming a difference isn't significant when it is


- more likely w/ a stringent p value, e.g. p≤0.01

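To make the 5% figure on the Type 1 error card concrete, here is a small illustrative simulation (not from the original cards; assumes numpy and scipy): when the null hypothesis is true, testing at p ≤ 0.05 still produces a "significant" result - a Type 1 error - in roughly 5% of runs.

```python
# Illustrative simulation of the Type 1 error rate (assumes numpy + scipy).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
false_positives = 0
runs = 10_000

for _ in range(runs):
    # Two samples drawn from the SAME population, so the null hypothesis is true
    a = rng.normal(loc=0, scale=1, size=30)
    b = rng.normal(loc=0, scale=1, size=30)
    _, p = stats.ttest_ind(a, b)
    if p <= alpha:            # "significant" despite no real difference
        false_positives += 1

print(false_positives / runs)  # expected to be close to 0.05
```
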
Using the tests

- calculated/observed value has to "beat" the critical value


- critical value = dependent on the level of significance + whether the hypothesis is 1 or 2 tailed + either N or degrees of freedom

Spearman's

- calculated value = correlation coefficient


- calculated value has to be equal to or more than the critical value

Wilcoxon's & Mann-Whitney

- calculated value has to be equal to or less than the critical value

Chi-squared

- degrees of freedom = given you know the column & row totals in the table of data, how many of the actual cell frequencies you need to know to work out the rest


- degrees of freedom = (rows - 1) × (columns - 1)


- calculated value has to be equal to or higher than the critical value


- always use the value for a 2-tailed test even when a 1-tailed was used

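As a worked illustration of the chi-squared card above (not from the original cards): a 2×3 table has (2 - 1) × (3 - 1) = 2 degrees of freedom. A minimal Python sketch, assuming scipy and an invented frequency table, of comparing calculated and critical values:

```python
# Illustrative chi-squared worked example (assumes scipy; invented frequencies).
from scipy import stats

# Invented 2x3 frequency table (nominal data, independent design)
observed = [[20, 30, 10],
            [25, 15, 20]]

chi2, p, dof, expected = stats.chi2_contingency(observed)
# dof = (rows - 1) * (columns - 1) = (2 - 1) * (3 - 1) = 2

# Critical value at the 5% significance level for 2 degrees of freedom
critical = stats.chi2.ppf(1 - 0.05, df=dof)

# Calculated value must be equal to or higher than the critical value
significant = chi2 >= critical
```
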
Ranking data

- if ordinal data is used, then a subjective scale has been used


- the best we can do is put the data in rank order, as we cannot say that one score is double another (e.g. on the GAF scale, we do not know that a score of 45 is half that of 90)

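Purely as an illustration of the ranking card above (assumes scipy; the scores are invented), tied scores share the average of the ranks they would occupy:

```python
# Illustrative ranking of ordinal scores (assumes scipy; invented ratings).
from scipy.stats import rankdata

scores = [12, 7, 7, 20, 15]    # invented subjective ratings
ranks = rankdata(scores)       # ties get the average rank
print(ranks)                   # e.g. [3.  1.5 1.5 5.  4. ]
```
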
Quantitative data

in the form of numbers

Qualitative data

non-numerical form


e.g. a description of a clinical case, a transcript, a diary


advantage: greater validity than quantitative data; truer to life


disadvantages: qualitative data is subjective and ∴ more open to bias; prior opinions may make researchers biased in their observations - researchers may report / "cherry-pick" data which supports their views; a different researcher could do the same observation & arrive at a completely different conclusion

Methods of qualitative data collection

- interviews, esp. semi and un-structured interviews involving open questions


- observations, esp. unstructured & participant observations

Options in qualitative data analysis

- convert qualitative data into quantitative data, e.g. content analysis


- extraction of info w/out conversion into quantitative data

2 methods of analysing qualitative data

Aim: provide a systematic way of analysing findings so the analysis of qualitative data is not subjective/open to bias


content analysis: actual conversion of the qualitative data into categories; may involve scoring, e.g. the amount of stress in different jobs from diaries; this converts qualitative data into quantitative data


thematic analysis: aims to identify patterns/themes within the qualitative data; used to analyse data from bodies of text, e.g. interviews, newspaper articles; this identifies categories but doesn't count instances within categories

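As a toy illustration of the content-analysis idea above (Python standard library only; the coded transcript categories are invented), counting instances within categories is what turns qualitative material into quantitative data:

```python
# Toy content analysis: count coded categories in a transcript (invented data).
from collections import Counter

# Each utterance has already been coded into a category by the researcher
coded_utterances = ["stress", "workload", "stress", "support",
                    "stress", "workload", "support", "support"]

category_counts = Counter(coded_utterances)
print(category_counts)   # e.g. Counter({'stress': 3, 'support': 3, 'workload': 2})
```
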
Thematic Analysis

1) transcribe and read


2) divide into meaning units


3) search text & highlight themes


4) adjust as sorting continues


5) define & name themes when completed


6) write report, presenting & supporting themes

Theoretical/Inductive Analysis

theoretical: an existing theory guides/outlines the analysis


inductive: theory only emerges from the data after analysis

Reliability

consistency/sameness of a measure, method or researcher

Validity

truth/accuracy of a measure or method

Test re-test reliability

similarity of 2 sets of scores taken on different occasions

Split-half reliability

similarity of 2 sets of scores from different halves of a questionnaire

Inter-rater/observer reliability

similarity of ratings made by different observers

Intra-rater/observer reliability

similarity of ratings made by the same observer

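The reliability cards above are commonly checked by correlating the two sets of scores; as a hedged sketch (assumes scipy; all scores invented), a strong positive correlation suggests good test re-test or inter-rater reliability:

```python
# Illustrative reliability checks via correlation (assumes scipy; invented scores).
from scipy.stats import pearsonr, spearmanr

# Test re-test: the same questionnaire given to the same pps on two occasions
occasion_1 = [12, 18, 9, 22, 15, 17]
occasion_2 = [13, 17, 10, 21, 14, 18]
r, p = pearsonr(occasion_1, occasion_2)    # r close to +1 -> consistent measure

# Inter-rater: two observers rating the same behaviour (ordinal ratings)
rater_a = [3, 5, 2, 4, 4, 1]
rater_b = [3, 4, 2, 5, 4, 1]
rho, p = spearmanr(rater_a, rater_b)       # rho close to +1 -> raters agree
```
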
Improving Reliability

pilot study:


- materials should be tested to ensure they yield reliable results from pps; aim to ensure questions are not affected by irrelevant factors & to eliminate items in questions that do not correlate w/ others


- researchers should be trained to ensure they are reliable over time & w/ each other to ensure inter/intra-rater reliability; should eliminate inconsistencies in ratings & observations

Internal Validity

- operationalisation


- controls


- experimental validity (realism)

External Validity

- temporal validity


- ecological validity


- population validity

Types of Validity of Measurement

- face validity: extent to which a way of measuring something looks valid


- concurrent validity: extent to which a score is similar to a score on another test that is known to be valid


- predictive validity: extent to which a score predicts future behaviour

Threats to Validity

- demand characteristics: pps who know they're in a study may guess what the study is about & change their behaviour


- social desirability effects: pps may behave/respond in socially acceptable ways, either to look good to the researcher or themselves


- order effects: what pps experience earlier in the study may affect their behaviour/responses later in the study


- Hawthorne effect: pps who know they are in a study may try harder than they would in everyday life

Improving Internal Validity

- ensuring extraneous variables are controlled; aim to remove all differences from the 2 conditions apart from the independent variable - enabling the testing of cause & effect


- redesign studies to exclude other possible explanations of the results


- studies should also have experimental validity


- best way to improve validity is in advance; conducting a pilot study can identify & eliminate extraneous variables

Improving External Validity

replicate the original study:


- using a very different set of pps from the original study, thus testing whether the original has population validity; particularly useful if a cross-cultural study is conducted


- using a different experimental method, i.e. a more realistic setting using a field experiment; this would test the ecological validity of the original study


- many years after the original; this would test the temporal validity

Sampling

Population: those who the study is about/the study applies to


Representative: a sample is representative if those who are studied are similar to the population, so we only need to study a sample instead of the whole population


Generalisation: claim that what is true of the sample is also true of the population

3 Sampling Methods

Random: use of sampling population/frame; all members of the population have an equal chance of being in the sample


Opportunity: using whoever is available


Volunteer: publicise research in an appropriate location where the population will be able to see it

Bias

- reduces generalisability

sources of bias:


- deliberate & unwitting bias by the researcher in selecting pps:


- use of unrepresentative sampling populations from which the samples are taken


- random error, when taking samples, more likely w/ small samples


Deliberate or unwitting bias

- when selecting pps

- researchers choose pps who they believe will give results which support their hypothesis



Unrepresentative sampling population

- from which samples are taken

- research on WEIRD (Western, Educated, Industrialised, Rich, Democratic) pps may not generalise to other cultures

Random Error

- possible to get unrepresentative samples just by chance

- more likely to occur if a small sample is being chosen


- can be addressed through the use of a stratified sampling method


Stratified Sampling

- divide the sampling frame into groups to be represented in the sample


- sample randomly from within these groups


✓ eliminates the possibility of v. unrepresentative samples which can be produced by random sampling


✗ expensive to run

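A minimal sketch of the stratified idea above (Python standard library only; the sampling frame, strata and sizes are all invented for illustration):

```python
# Illustrative stratified sampling (invented sampling frame and strata).
import random

# Sampling frame divided into the groups (strata) to be represented
sampling_frame = {
    "year_12": [f"Y12_student_{i}" for i in range(60)],
    "year_13": [f"Y13_student_{i}" for i in range(40)],
}

total_sample_size = 20
frame_size = sum(len(group) for group in sampling_frame.values())

sample = []
for stratum, members in sampling_frame.items():
    # Each stratum contributes in proportion to its share of the frame
    k = round(total_sample_size * len(members) / frame_size)
    sample.extend(random.sample(members, k))  # random sampling within the stratum

print(len(sample), sample[:3])
```
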
Quota Sampling

- used when a sampling frame is unavailable


- draw up quotas to be represented, recruit volunteers until quotas are full


✓ improves likelihood the sample is representative by ensuring the proportions of each stratum are representative


✗ issues associated w/ using volunteers - e.g. in attachment research, few bad parents will volunteer → biased sample; if the pps who do not agree to take part (after being asked for informed consent) are unrepresentative, so too will be the sample

Ethical Issues

- informed consent: pps should be made aware of anything that might affect their willingness to participate; children under 16 cannot give their own consent - must be gained from a parent on their behalf; school setting - children have to give their consent as well as the head teacher

- right to withdraw: should be made clear at the outset of the study, i.e. in the consent form; pps also have the right to withdraw their data after the study


Contents of consent forms

- information sheet about the study


- how ethical issues will be dealt with


- declaration of consent

Debriefing

- should provide pps with any necessary info to complete their understanding of the nature of the research & monitor any unforeseen negative effects or misconceptions


- pps should leave the study in the same state as they entered it


contents:


- preliminaries - thanking pps


- procedure - clarifying aim of study


- ethics - dealing w/ remaining issues





Writing a consent form

- main aim = to gain informed consent


information sheet about the study:


- statement of purpose of study, as much as this needs to be revealed


- what pps will be asked to do; procedures, as much as this needs to be revealed


how ethical issues will be dealt w/:


- how anonymity & confidentiality will be ensured


- assurance about right to withdraw


declaration of consent:


- form for pps to tick & sign

Writing a debriefing form

preliminaries:


- thank pps for taking part


procedure:


- clarify the aim of the study, inc. anything they need to know to complete their understanding of the study, i.e. anything omitted from the consent form


ethics:


- ask if they have any questions


- identify any unforeseen discomfort, distress or other negative effects of the study


- remind pps of their right to withdraw

Structure of a psychological report

- title


- abstract


- introduction


- method/procedure


- results


- discussion

Title

- should give a clear idea of what the research is about


- naming variables


- naming theory being tested

Abstract

- summary of the key points of the research


- list of key terms is oft provided separately


- allow researchers to quickly decide if the research is of interest to them


- also allows computerised databases to be searched quickly & efficiently for relevant research

Introduction

- introduction to background of the study


- covers key theoretical background & relevant studies


- covers issues & debates about the topic


- "funnel" technique - general material → more specific research/theories


- leads logically towards aims & hypotheses of the study at the end of the intro

Method/Procedure

- clear comprehensive account of the method/procedure allows replication of the study; findings can thus be checked; allows others to evaluate procedures - check reliability & validity of procedure


sub-sections:


- method & technique: lab exp, questionnaire...


- design: repeated measures, longitudinal...


- variables: IV & DV/co-variables


- pps & sampling method: number of; characteristics; sampling method


- materials: description of measuring "tools", e.g. interview schedule, questionnaire w/ full original in appendix


- procedure: full description of precise procedures used, inc. standardised instructions, how techniques were implemented


- controls: counterbalancing, matching


- ethical issues: any possible issues & how they are dealt w/

Results

- descriptive stats: inc. measures of central tendency & dispersion, graphs & tables, written account of findings


- inferential stats: details of stat test: inc. choice of test, level of significance, 1 or 2-tailed, critical value, calculated value, relationship of results to hypothesis


- identify the test to be used beforehand to ensure there is an appropriate test for the data to be collected

Discussion

- explanation of results: how they relate to theory in intro


- discussion of strengths & weaknesses: validity; reliability; generalisability


- discussion of possible improvements & future research


- possible practical implications: usefulness in real-life situations

References

- alphabetical list of sources used


- allows others to check your accounts of research/theory are fair

Appendices

includes:


- consent form


- lengthy instruction sheets


- original materials


- raw data (anonymised)


- debriefing form


would break up the flow of the text if used in the body but need to be there for replication, checking of calculations, assessment of ethical issues etc

Major Features of Science

- objectivity


- replicability


- theory construction


- hypothesis testing


- use of empirical methods



Objectivity

- impartial


- findings could ideally be accepted by anyone bc they don't draw on any assumptions, prejudices or values


- encourages investigators to proceed objectively, putting aside personal biases & prejudice

Main threat to objectivity = bias

- people are prone to a variety of cognitive biases, they interpret events differently & in psychological measurement there is oft room for interpretation, e.g. how to score or categorise behaviour


- room for interpretation = room for biases to come into play


- most people interpret events in biased ways - seeing things that aren't there, or basing ideas on prior assumptions


2 common biases:


- seeing correlations between variables that aren't necessarily there


- seeing cause & effect relats that aren't necessarily there

Replicability

replicability of a procedure:


- a study can be repeated in the same way


- others should be able to repeat the research, enabling them to check if results were a fluke or due to sampling bias


replicability of results:


- if the study is repeated, results will be the same


- if they are not replicable the empirical claim of the research is questionable & any support for the theory being tested is undermined




- a field experiment is likely to be lower in replicability than a lab experiment


- an unstructured interview is likely to be fairly low in replicability

Replication/Triangulation

replication = replicability of the results of a specific study


triangulation = replicability of the effect found in a study; evidence from different types of studies for the same effect

Theory Construction

- theory = explanation of why things happen the way they do


- in Psych, trying to explain the human mind & behaviour; construct theories to explain why human beings behave the way they do

Hypothesis Testing

- theories → hypotheses (claims that can be tested through research)


- hypothesis = a testable prediction


- decent theory → a precise, testable hypothesis


- ideally a theory → a hypothesis that, if supported, can only be explained by that theory

Use of empirical methods

- fundamental feature of science


- observe & measure phenomenon using objective, replicable, systematic techniques for collecting data - stat tests, experimental methods


- rejection of non-empirical methods



Validating New Knowledge

replication


triangulation


cross-cultural research


reviews


meta-analyses

Reviews

- summarise results from research on a topic


- usually take the form of a narrative/extended account of the trends in research


- allow researchers to identify overall trends in findings


2 issues:


- often invalidated by cherry-picking, i.e. only including studies that support the author's views


- statistical analysis is/was oft not v. systematic or rigorous; studies were simply classed as positive/negative/neutral - ignoring the size of differences & sample sizes

Meta-analyses

- more sophisticated reviews


✓ deal w/ 2 problems of reviews:


- less likely to cherry-pick: methodological criteria used to decide if a study should be included


- results from studies are combined in a systematic way: they are oft merged to give an overall score aka the effect size - a weighted average wherein larger studies count for more than small ones (see the sketch below)


✗ file drawer problem: if researchers do not find a significant result, they may not bother to submit it for publication, or may be less likely to get it published if they do submit; this could → a misleading pattern of results

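As a rough sketch of the weighted-average idea on the meta-analysis card (assumes numpy; the per-study effect sizes and sample sizes are invented, and weighting simply by sample size is one common simplification - many real meta-analyses use inverse-variance weights instead):

```python
# Illustrative weighted overall effect size (assumes numpy; invented study data).
import numpy as np

effect_sizes = np.array([0.40, 0.10, 0.55])   # effect found in each study
sample_sizes = np.array([200, 30, 120])       # larger studies count for more

overall_effect = np.average(effect_sizes, weights=sample_sizes)
print(round(overall_effect, 3))   # weighted towards the two larger studies
```
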
Peer Review

- designed to ensure that good quality research - research which increases the knowledge base of psych - is published


- to be accepted as credible, research must be published in an academic journal for which there is a peer review process

Stage 1 - Peer Review Process (by experts from the journal)

- research is submitted to a journal in a standard format


- then assessed by staff from the journal - editor, editorial board & external reviewer - experts in their field & competent to judge the merits of the paper


- ideally, a blind reviewing process is used - the reviewer is unaware of the identity of the researcher; the study's methodology is reviewed first


✓ creates set of standards for research; researchers know how to conduct research to get it published in an acceptable place


✓ ensures high methodological quality


✓ filters out poor quality research

Stage 2 - Peer Review Process (by wider academic community)

- those who conduct research in the same field respond to the publication


- criticise the study, cite it in support of their own theories, replicate or adapt the study


- act as a large set of peer reviewers


- best research becomes part of the accepted knowledge base of Psych


✓ enables development of a body of knowledge on psychological topics


✓ ensures progress is made in the development of knowledge


✓ this knowledge base is then accessible to all

Peer Review Limitations

- doesn't solve file drawer problem - studies may not be submitted for peer review if they find no effect


publication bias:


- certain types of finding are less likely to be published: replications of previous research; findings that contradict the theoretical viewpoint of the reviewer or journal


- some argue publication process is slow


- Internet allows possibility of open publishing - no stage 1 but faster & more effective stage 2