50 Cards in this Set
- Front
- Back
myth: we only use 10% of our brain |
we use all of our brain. known because: people suffer deficits when parts of the brain are damaged; evolution would not waste energy building parts of the brain that go unused. it might be possible to be more productive if we used all of our brain at once, but the quantities are not known. believed because: availability cascade (widely repeated claims - we believe what others around us believe); wishful thinking * people CAN lose whole hemispheres and still function relatively normally |
|
myth: ESP (extra sensory perception) is real: |
* believed because: confirmation bias (coincidences happen, things happen by chance and we see it as something that happens regularly); neglect of negative results (only positive results published); wishful thinking
|
|
myth: listening to Mozart makes babies smarter: |
* believed because: wishful thinking (we hope its true)
|
|
myth: IQ tests are biased: |
* panels of scientists conclude they are not biased; they may be a weak test of future success, but it’s the best we’ve got; item analysis is used to identify bad or biased questions
|
|
myth: money makes you happy: |
* two kinds of happiness: day-to-day pleasure, and life satisfaction
|
|
myth: child abuse leads to psychological disorders: |
* believed because: often these things come together |
|
myth: AI is a failure: |
* believed because: moving goal posts (accomplishments are dismissed once implemented, even though they are monumental compared to years ago, and are called not “true” AI or “real” intelligence); mysterians (searching for something that agrees with preconceived ideas)
|
|
myth: the full moon makes people act differently: |
* believed because: confirmation bias (notice aggression when moon is present, normal and aggressive behavior with no moon is unnoticed)
|
|
how science works: |
* quasi-experiments - observe the real world, problems with causation |
|
results and statistics: |
* significant means probably not due to chance; the “alpha level” or typical threshold of p < 0.05 means that about 1 out of 20 experiments with no real effect will find significance by chance |
|
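The 1-in-20 figure can be checked with a minimal simulation (a hypothetical illustration, not from the course): run many experiments where no real effect exists, and count how often a two-sample test crosses the 0.05 threshold anyway.

```python
import random
import statistics

random.seed(0)

def null_experiment(n=50):
    """Two samples drawn from the SAME distribution, so there is no real effect."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # approximate two-sample z statistic
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # "significant" at alpha = 0.05

trials = 2000
false_positives = sum(null_experiment() for _ in range(trials))
# roughly 5% of null experiments come out "significant" by chance alone
print(false_positives / trials)
```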
science as a culture: science as an epistemology (theory of knowledge) |
* science’s self-correcting nature - publishing makes work public; other scientists attempt to disprove your theory; bad findings get weeded out as theories are tested, retested, and modified
|
|
Daniel Gilbert's TED Talk: |
* our longings and worries are to some degree overblown, because we have the capacity to manufacture the very commodity we are constantly chasing when we choose experience
|
|
dreaming can occur in: |
REM (rapid eye movement) sleep: rapid eye movements, muscle atonia (paralysis of the body so you don’t act out your dream), dreaming is frequent; non-REM (NREM) sleep: 75% of our sleep is NREM, dreams are short and dull |
|
interference from the world while dreaming |
(Ex. needing to pee, hearing an alarm - these incorporate themselves into your dreams) |
|
the dreaming brain: |
very active brainstem sending sensory information forward; the DLPFC (dorsolateral prefrontal cortex), which is involved in executive function, is deactivated - perhaps explaining the reduced reasoning during dreams (no shock at, or noticing of, weird things), and possibly why we have difficulty remembering dreams (an evolutionary survival trait, so we don’t mistake dreams for reality) |
|
dream recall: |
* animals and infants cannot report dreams
|
|
recording dreams |
* subject to memory biases
|
|
dream characteristics: |
dreams are NOT like: films, a single visual image, recent social situations, recent episodic memories. we only realize how bizarre dreams are once we are awake (selection bias - bizarre dreams are easier to remember). dream emotion matches content - a bizarre dream with consistent emotions is common. experienced “first person” - always about you, things happening to you. tend to be narrative - have a story, causes, effects. scene shifts - shifting from one scene to another with no connection |
|
Threat Simulation Theory |
* Westerners dream of things we rarely experience, ancestral threats are over represented
|
|
dreaming as play theory |
play as practice for survival skills needed as an adult |
|
dream incubation |
how you can affect your dreams: dream incubation - pre-sleep attention to a specific concern that gets brought into the dream with you to help solve |
|
AIM Model for Conscious States |
* dream recall cessation is almost always caused by forebrain lesions
|
|
people who have no dream imagery (visual anoneria) tend to also have: |
people who have no dream imagery (visual anoneria) tend to also have a waking deficit in imaging memories (visual reminiscence) |
|
lucid dreaming |
* training aids in getting lucid dreams (dream diaries, improving dream recall, reality checks)
|
|
sleep paralysis |
sleep paralysis - carryover of muscle atonia from sleep to waking: feeling awake but unable to move, possible chest pressure, hallucinations (the presence of a malevolent character, feelings of terror) |
|
morals |
morals - evolved to help us take care of others in our group, not people outside our group |
|
the expanding circle: self-interest, friendship, tribalism, caring about all people/creatures able to have positive or negative experiences: |
* when resources are tight, people are less generous
|
|
how do we know morality is evolved? |
* utilitarianism vs. deontology
|
|
Haidt’s moral foundations theory |
* the strength of these moral foundations depends on the person: cultural nurturing, environmental effects and genetic influences
|
|
politics and morals: |
* political values are 60% genetic
|
|
moral dumbfounding |
* evidence that morality is based on evolved instincts and learning
|
|
should you trust your instincts? |
should you trust your instincts? - feelings vs. principles: people look to their feelings to judge whether something is moral or not; bad smells/bitter drinks will make people judge things as more immoral |
|
animal morality: |
* Ex. fairness, reciprocity, friendship in chimps (will kiss/embrace after a fight)
|
|
solipsism |
* reality vs. virtual reality (how do you know reality is reality and not a dream)
|
|
Asimov’s three laws of robotics: |
Ex. I, Robot (2004) - deals with Asimov’s three laws of robotics: 1. a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2. a robot must obey orders given by humans, except where such orders conflict with the first law; 3. a robot must protect its own existence as long as this does not conflict with the first or second laws |
|
AI rights: ethical agents vs. ethical patients: |
* ethical agents - things that can act morally and be held responsible; ethical patients - things that should be treated morally
|
|
AI rights: pain and suffering: |
pain and suffering: identity theory - the mental state IS the brain state, a physical definition of the mental state (pain is a process of the brain reacting); functionalism - the idea that mental states are defined by the functions they perform, so we define pain functionally (Ex. distress and avoidance shown by a functioning system in the presence of a harmful element). if a robot can feel things, is it okay to mistreat it? what if that is the only way for it to function properly? do robots become moral/ethical patients that we need to be concerned about? |
|
what is a model? |
across fields, a representation of something that excludes unimportant details but makes other bits of information easier to get (Ex. a cardboard scale model of a home, a simulation of a hurricane, models of cognition - modeling human beings and how they behave) |
|
cognitive models |
cognitive models - typically a computer program that models some aspect of thought (Ex. a model of how people do categorization, or of how a mouse learns to navigate a maze); the model makes predictions that can then be compared to data; if the predictions match the data, that supports the theory underlying the model |
|
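A minimal sketch of this predict-then-compare cycle, using made-up reaction times and the power law of practice as a hypothetical model (all numbers illustrative):

```python
# Hypothetical observed reaction times (seconds) over five practice trials.
observed_rt = [1.00, 0.72, 0.60, 0.53, 0.48]

def power_law_model(trial, a=1.0, b=0.47):
    """Power law of practice: predicted RT = a * trial ** -b (parameters assumed)."""
    return a * trial ** -b

predictions = [power_law_model(t) for t in range(1, 6)]

# Mean squared error between predictions and data: a small error means the
# predictions match the data, which supports the theory behind the model.
mse = sum((o - p) ** 2 for o, p in zip(observed_rt, predictions)) / len(observed_rt)
print(round(mse, 5))
```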
cognitive architecture |
* typically includes constraints on how cognition works in ALL people (speed of learning, memory retrieval; ignores cultural and other learned aspects) * a theory about the structure of the human mind |
|
world issues with: problems on decline: |
* problems may seem not to be in decline due to the availability heuristic |
|
world issues with: problems getting worse: |
social capital - weakened real-life social networks; the biggest factor in developed-world happiness is the quality of face-to-face social connections. environmental damage - Ex. how do we fix climate change? technological solution: come up with an alternative, safe energy source; fund science and engineering to do it; get more people to care so our representatives will make it happen; get money to market the problem. this one problem involves many sub-problems which require solving, which is why it cannot be fixed in one step. social solution: convince people to use much less oil - still need to market the problem, and need money for this |
|
issues with solving problems - all problems are intellectual: |
all problems are intellectual (why cogsci is the most important field - it can be applied everywhere and help us solve these problems): the reason we can’t solve all the problems in the world is that we ultimately don’t know how to do it; figuring out how requires thinking and problem solving, and cogsci helps us with that |
|
cognitive science and problem solving: |
human modeling - cognitive science studies how people actually solve problems and all of the other cognitive functions it takes to do so; it also informs the design of AI/programs that can solve problems for us, and better than us. AI is better than us at: stock trading, arithmetic, statistics, scheduling, search engines and aggregation, certain games. we are better than AI at: language, physical movement, creativity, science, social interaction, vision, certain games, most everyday tasks. but AI often makes humans more effective when used as a tool |
|
Richard Dawkins's TED Talk: |
how we understand the world, and our human capability to understand it; cogsci helps us understand our limitations and biases in understanding what the world is like, and also helps us get past those biases. questions whether there are things that will be forever ungraspable by our minds, or even by superiorly intelligent minds. “Our brains have evolved to help us survive within the orders of magnitude of size and speed at which our bodies operate.” it is useful for our brains to construct notions of solidity and impenetrability because such notions help us navigate our bodies through the middle-sized world in which we live. middle world - the medium-scaled environment (small being the atomic level, large being the cosmic level) in which we have evolved to act. reality for an animal is whatever its brain needs in order to properly interact with and survive in the world (Ex. a flying animal needs a different world model than a land or water animal). we live in a social world, a social version of middle world; we evolved to second-guess the behavior of others by becoming intuitive psychologists. relevant to us because many majors are fields trying to understand something about the world; our cognitive system affects how we “see” the world, and our fields affect how we see the world |
|
kinds of cognitive architecture: symbolic |
kinds of cognitive architecture: symbolic - operates at the level of discrete symbols. production systems (Ex. assuming words beginning with capital letters are variables and other words are constants). characteristics: declarative/procedural memory distinction (justifications: HM and other brain-damage patients; our inability to consciously retrieve and reflect on procedural memories); goals are a subset of declarative memory; production compilation models automatization |
|
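A toy production system can be sketched in a few lines (hypothetical rule and facts, purely illustrative): working memory holds facts, and a production fires when its condition matches working memory, adding new facts until nothing changes.

```python
# Working memory is a set of (attribute, value) facts.
working_memory = {("goal", "add"), ("num", 2), ("num", 3)}

def rule_add(wm):
    """IF the goal is 'add' and two numbers are present THEN assert their sum."""
    if ("goal", "add") in wm:
        nums = [v for (k, v) in wm if k == "num"]
        if len(nums) == 2 and not any(k == "sum" for (k, _) in wm):
            return {("sum", nums[0] + nums[1])}
    return set()

productions = [rule_add]

# Recognize-act cycle: match productions against memory, fire them,
# and repeat until no production adds anything new (quiescence).
changed = True
while changed:
    changed = False
    for prod in productions:
        new_facts = prod(working_memory) - working_memory
        if new_facts:
            working_memory |= new_facts
            changed = True

print(working_memory)  # now includes the derived fact ("sum", 5)
```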
kinds of cognitive architecture: sub-symbolic |
sub-symbolic - operates using smaller, more detailed numeric representations which in aggregate constitute a symbol (like the pixels of an image). associative network - uses Hebbian learning to learn patterns; when it gets incomplete input it can complete it based on the weights in the model (associations between activations that happen simultaneously strengthen weights). early connectionism - using sub-symbols to form letters, which are then connected with weights to possible words. connectionism - supervised learning using backpropagation; great for classification; needs thousands of examples to work, unlike human beings; input, hidden and output layers, with connection weight strength depending on the repeated activity between connections. the brain recognizes certain elements in what it is experiencing and connects them to further elements; the strength/weight of these connections determines the final assessment of the experience |
|
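The Hebbian pattern-completion idea can be sketched as a toy associative network (illustrative patterns and sizes, not from the course): units that are active together get stronger weights, and a partial cue is completed by each unit summing its weighted inputs.

```python
n = 6
weights = [[0.0] * n for _ in range(n)]

# Two stored patterns; +1 = active unit, -1 = inactive unit.
patterns = [
    [1, 1, 1, -1, -1, -1],
    [-1, -1, -1, 1, 1, 1],
]

# Hebbian learning: units that fire together wire together
# (w_ij += x_i * x_j, no self-connections).
for p in patterns:
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += p[i] * p[j]

def complete(partial):
    """Fill in an incomplete cue (0 = unknown unit) from the learned weights."""
    return [1 if sum(weights[i][j] * partial[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

cue = [1, 1, 0, 0, 0, -1]  # incomplete version of the first stored pattern
print(complete(cue))  # → [1, 1, 1, -1, -1, -1]
```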
kinds of cognitive architecture: neural modeling |
neural modeling - Ex. Nengo and Spaun - neurons simulated at a biological level; Spaun took in visual information and interpreted it so that it could draw a response with a neuron-driven arm |
|
kinds of cognitive architecture: hybrid symbolic/sub-symbolic |
hybrid symbolic/sub-symbolic - not based |
|
kinds of cognitive architecture: brain |
brain - models of cognition at the level of biology, but speak to cognitive issues
|