78 Cards in this Set

  • Front
  • Back
What is a theory?
On one view, it’s an axiomatic system: the axioms are basic laws, while the laws that they explain are theorems, i.e., entailed by (deducible from) the axioms. This involves the application of the DN model but now to the explanation of laws.
How do theoretical terms get their meanings?
asdf
How can we learn about unobservables like mass and forces, or bodies too small to see?
asdf
By deducing observational consequences? But how, if no observational terms appear in the theories?
asdf
What is the diachronic relationship between theories developed earlier and theories developed later in the history of science?
One view is what I’ll call cumulativism: earlier theories are revealed as special cases of later theories—they’re reducible to later theories, so that scientific knowledge accumulates over time.
What are the implications of the possible failure of diachronic reduction?
1. Science is progressive, and later theories reduce earlier theories, but the Nagelian account of reduction is incorrect. 2. Science is progressive, but later theories typically replace earlier theories; and scientific progress consists of something else. 3. Science isn’t progressive, which was something like Kuhn’s view.
If we pick implication 2 of the possible failure of diachronic reduction, then what else could progress be?
Rosenberg speaks of theories that are “increasingly correct”. But if "increasingly correct" means "false, but still closer to the truth than predecessors", then you need an account of closeness to the truth (or verisimilitude), and it's very hard to come up with a good one of those, though people have tried (e.g., Karl Popper). So I prefer to speak of theories that are "better supported by the available evidence". Over time, new scientific theories are developed which are better supported by the evidence than their predecessors; so the consensus theories of science at later times are better evidenced than the consensus theories of science at all earlier times. On this view, you just need to fend off the people who think that there's no such thing as one theory being better supported by the evidence than another.
What is the synchronic relationship between, say, current physics and current molecular biology?
Even if reductionism fails as an answer to issue 1, it may still succeed as an answer to issue 2.
What explains theoretical terms coming to mean what they do?
Answers to this question have been dominated by the assumptions that (1) we can distinguish observation terms from theoretical terms and (2) observation terms are unproblematically meaningful. What justifies these assumptions? I speculate that it’s a background assumption that (i) meaningful words have definitions, and that (ii) the definition of a word says what the word means. Since definitions can’t go on forever, theoretical terms bottom out in observational terms.
How can theoretical terms be defined by observational terms?
By stating logically necessary and sufficient conditions for T to apply in observational terms: e.g., necessarily, x is T iff x is O1 and O2. But this can’t be done with any plausibility, partly because there are indefinitely many ways of verifying observationally that a given theoretical claim is true (think of the many ways of measuring temperature), and partly because theoretical claims are meant to describe a reality underlying and explaining observational facts, which wouldn’t be the case if theoretical facts just were elaborate observational facts. Another way: intuitively, theoretical terms acquire their meanings from the roles they play in the theory they help to formulate. So perhaps we can specify what each term means by somehow specifying the contribution each term makes to the observational implications of the theory. One way to do that, intuitively, is to think of each term as equivalent in meaning to a definite description constructed from the theory: e.g., mass =def. the property of each of any two bodies that, along with the distance between them, determines the force between the two bodies.
On the Kripke/Putnam approach the meaning of a word is
its reference. Words don’t generally have definitions; words refer to things that may end up having empirically discovered definitions (gold’s atomic number is 79), but the word itself has no definition (the word ‘gold’ doesn’t mean anything over and above its reference).
On the Kripke/Putnam approach a new word acquires its reference from
an initial baptism
On the Kripke/Putnam approach the introducer of a new word can perform an initial baptism by using a
reference-fixing definite description: ‘Let “x” refer to the postulate that makes sense of observation y’. This doesn’t give meaning to the word, though: ‘Let “Mary Jane” refer to the baby on my lap’ may fix the reference, but the baby may then go to her mother’s lap and still be Mary Jane.
What is an implication of the Kripke/Putnam approach?
an antecedently meaningful observational vocabulary could be used to introduce meaningful theoretical terms into the language without those terms’ being defined in observational terms.
What is a second implication of the Kripke/Putnam view?
once the term, ‘neutrino’, has been introduced into the scientific language, theories about neutrinos can change very drastically, and yet the meaning of the term (i.e., its reference) will stay the same.
A third implication of the Kripke/Putnam view?
there’s no special problem about theoretical terms; they’re just like any number of other terms in the language.
Scientific Realism
consensus theories in mature sciences are meaningful, purport to describe the world, and should be believed, i.e., held true, in their totality.
Structural Realism
consensus theories in mature sciences are meaningful, and purport to describe the world, but only what they say about the world’s mathematical structure should be believed.
Instrumentalism
consensus theories in mature sciences are strictly speaking meaningless, and do not purport to describe the world; they serve only to assist us in the prediction of observations (and perhaps the manipulation of observable outcomes).
Constructive Empiricism (Van Fraassen)
consensus theories in mature sciences are meaningful, and purport to describe the world, but only their observational consequences should be believed.
The Natural Ontological Attitude (Fine)
take consensus theories in mature sciences at face value.
Gary Gutting: which two views?
scientific realism, constructive empiricism
What does “empirically adequate” mean?
A theory is empirically adequate iff it is true insofar as it describes observable states of affairs.
What is the final view of Gutting’s paper (presumably Gutting’s own)?
inference to the best explanation (IBE) in general allows one to infer to the existence of unobserved entities (e.g., stars, dinosaurs); and so in some cases it can also be used to infer to the existence of unobservable entities (e.g., electrons).
Unobserved and unobservable entities are
epistemologically parallel, so one must approve of both or neither. For being unobserved but observable is no better than being unobserved and unobservable. Actually being observed vs. not being observed is evidentially relevant; but being observable vs. being unobservable is not. A conclusion is not rendered more credible just because it makes a claim about entities that could be observed, even if in fact they have not been observed. Merely potential evidence is not evidence. And we don’t in fact require observational checking of conclusions reached via IBE, since no such checking is done in the dinosaur case. The CE could refuse to believe the conclusions of IBE’s that speak of unobservables, while believing those that speak of observables. But this would be arbitrary: the IBE’s wouldn’t necessarily be weaker in the former cases.
How might Gutting’s CE react to Gutting’s final-view (IBE) SR?
The CE would say that we don’t need explanations of how things are observably, i.e., he would refuse to infer to any explanation at all in such cases.
The value of explanation
Newton-Smith’s black box: perfect predictions about observable phenomena, without any idea of how the box works. We would still want to go on doing science so as to explain how things are. We go on looking for explanations until we can’t go on anymore; there’s probably a fundamental level of explanation, but we don’t know if we’re there yet. So it would be arbitrary for Gutting’s CE to refuse to infer to explanations that speak of unobservables, while happily inferring to explanations that speak of observables.
Inductive reasoning
From: All Fs that have been observed are G. To: All Fs [i.e., even if unsampled] are G.
Hume’s skepticism about inductive reasoning is radical. He holds that:
The step from premise to conclusion in any inductive argument isn’t a step of reasoning at all. That is, the step isn’t rational. That is, the premise doesn’t provide any reason at all to believe the conclusion.
Hume’s Argument For His Inductive Skepticism
P1. There are only two kinds of reasoning: “demonstrative” and “probable”.
∴ C1. If the step from premise to conclusion in a so-called inductive argument is truly a kind of reasoning, then that step is either “demonstrative” or “probable”.
P2. In any so-called inductive argument, it’s always logically possible that the premise should be true while the conclusion is false.
∴ C2. The step from premise to conclusion in a so-called inductive argument is not “demonstrative”.
P3. If the step is “probable”, then there’s a generalization, to the effect that the future will be like the past, that we know to be true, and that combines with the premise to entail the conclusion.
P4. If that generalization is known, then it is known either by “demonstrative” reasoning or by “probable” reasoning.
P5. That generalization is not known by “demonstrative” reasoning.
P6. That generalization is not known by “probable” reasoning either, because such reasoning would have to assume the generalization as a tacit premise, and would therefore be circular.
∴ C3. The generalization is not known.
∴ C4. The step from premise to conclusion in a so-called inductive argument is not “probable”.
∴ C5. The step from premise to conclusion in a so-called inductive argument is not a kind of reasoning at all. QED.
Objecting to Hume’s argument
First premise is false—it implicitly assumes that an induction, to be good, would have to be an enthymematic deduction, i.e., would have to be deductively valid, given that some missing premise has been supplied. But why couldn’t induction be a sui generis kind of reasoning, additional to “demonstrative” and “probable” reasoning? Hume doesn’t consider this possibility, and so he doesn’t rule it out.
Still, Hume might ask, what’s rational or reasonable about induction?
A possible answer: induction is a reliable belief-forming process, in the sense of mostly yielding true conclusions given true premises, and for that reason it’s rational to use it.
One kind of rationality is ____ rationality, i.e., the use of means appropriate to one’s ends. Application to Hume?
instrumental. Suppose one’s end is the acquisition of true beliefs. Then using induction is rational for one—an appropriate means to an end that one has—if induction is reliable.
Does the Bayesian theory of confirmation help with the problem of induction?
Possibly. Sound Dutch Book arguments may be available for making one’s degrees of belief at a time conform to the axioms (hence to the theorems) of the probability calculus; these arguments try to show that, unless you make your degrees of belief conform to the axioms, it will be possible for someone to make a series of bets with you that will guarantee that you lose. Similar arguments have been proposed to show that you must update your degrees of belief over time by conditionalization in order to avoid a Dutch Book. In that case, failure to update according to the rule of conditionalization (ROC) exhibits a kind of incoherence. But Dutch Book arguments and their significance are very controversial.
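A toy synchronic Dutch Book can make the idea concrete. All the numbers below are illustrative assumptions, not from the card: an agent whose degrees of belief in A and not-A sum to more than 1 treats each bet as fair and loses either way.

```python
# Illustrative Dutch Book sketch. Convention: a bet on P that pays 1 unit
# if P is true counts as "fair" to the agent at price cr(P).
cr_A, cr_not_A = 0.6, 0.6  # incoherent credences: cr(A) + cr(not-A) > 1

price = cr_A + cr_not_A    # the agent happily pays 1.2 for both bets

for A_is_true in (True, False):
    payout = (1 if A_is_true else 0)    # the bet on A
    payout += (0 if A_is_true else 1)   # the bet on not-A
    net = payout - price                # exactly one bet wins, paying 1
    assert abs(net + 0.2) < 1e-9        # guaranteed loss of 0.2 either way
```

However the world turns out, the bookie collects 0.2; that sure loss is the "incoherence" the Dutch Book argument points to.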
Even if Hume were persuaded that he hadn’t shown that there’s nothing rational or reasonable about induction, he might still challenge a non-skeptic to explain what is rational or reasonable about induction. What is a possible response to this challenge?
It’s rational to use induction, at least if your goal is to form true beliefs, because (i) it’s (instrumentally) rational to use means appropriate to one’s ends, and (ii) induction is an appropriate means to the end of forming true beliefs (better: discovering true answers to questions you’re interested in), since (iii) induction is a reliable belief-forming process, in the sense of mostly yielding true conclusions given true premises.
For induction to be a reliable belief-forming process, all it takes is
that some suitable principle of the uniformity of nature in fact be true. Hume insists that such a principle would have to be known to be true; but this is unnecessary for reliability.
The difference between forming beliefs by using induction rather than, say, by reading tea leaves or inferring conclusions from arbitrary premises is
that induction is a reliable means to the achievement of a cognitive goal that we actually have.
Two paradoxes of confirmation
1. The Paradox of the Ravens 2. The Grue Paradox
Paradox of the ravens
Let h be the hypothesis that all ravens are black, and h’ be the hypothesis that all non-black things are non-ravens. And let a positive instance of the hypothesis that all Fs are Gs be any object that is both F and G. Then: P1. Any universal hypothesis is confirmed by its positive instances. (Nicod’s Criterion) P2. h is logically equivalent to h’. (That is, h and h’ entail one another: ∀x(~Gx → ~Fx) is the contrapositive of ∀x(Fx → Gx), and vice-versa; and contrapositives entail one another.) P3. Anything that confirms a hypothesis also confirms, and to the same degree, any hypothesis logically equivalent to it. (Equivalence Condition) P4. A white shoe is a positive instance of h’. ∴ A white shoe confirms h’. [From P1 and P4.] ∴ A white shoe confirms h. [From P2, P3, and P4.] But surely a white shoe doesn’t confirm the hypothesis that all ravens are black!
For paradox of ravens, Rosenberg suggests that
P2 is false: h and h’ are not logically equivalent. For h holds with the force of natural necessity while h’ doesn’t. But care must be taken. It remains true that h and h’ are logically equivalent if both are simply prefixed with a necessity operator, to be read as “Necessarily…”. Rosenberg would have to make a stronger claim, e.g., that h should be read as claiming that a necessitation relation holds between two universals (remember Armstrong on laws of nature) or as claiming that being a raven causes being black (it doesn’t follow from this that being a non-black thing causes being a non-raven).
What other solution is there to the paradox of the ravens? A (controversial) Bayesian solution different from the one that Rosenberg briefly mentions:
white shoes do confirm the hypothesis that all ravens are black, but much, much less so than do black ravens. We saw before that a hypothesis is more strongly confirmed by evidence that is unexpected than by evidence that is expected. Thus consider these two instances of Bayes’ Theorem: P(h/e1) = (P(e1/h) * P(h))/P(e1) and P(h/e2) = (P(e2/h) * P(h))/P(e2). When h entails both pieces of evidence, P(h/e1) > P(h/e2) iff, and to the extent that, P(e1) < P(e2), i.e., iff, and to the extent that, e1 is less expected than e2. The point can be applied to the paradox of the ravens. The probability of something picked at random being a black raven is very much lower than the probability of something picked at random being a non-black non-raven. Hence a non-black non-raven confirms h only to a tiny extent, especially in comparison with the extent to which a black raven confirms h.
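The Bayesian comparison can be run numerically. All the probabilities below are made-up illustrative values, chosen only so that h makes each piece of evidence somewhat more likely than it would otherwise be.

```python
def posterior(p_e_given_h, p_h, p_e):
    """Bayes' Theorem: P(h|e) = P(e|h) * P(h) / P(e)."""
    return p_e_given_h * p_h / p_e

p_h = 0.5  # assumed prior for h: "all ravens are black"

# e1: a randomly sampled object is a black raven (very unexpected)
# e2: a randomly sampled object is a non-black non-raven (very expected)
p_e1, p_e2 = 1e-6, 0.999
p_e1_given_h, p_e2_given_h = 2e-6, 0.9995  # assumed likelihoods under h

# How much does each piece of evidence boost h? (posterior / prior)
boost_e1 = posterior(p_e1_given_h, p_h, p_e1) / p_h
boost_e2 = posterior(p_e2_given_h, p_h, p_e2) / p_h

assert boost_e1 > boost_e2  # the black raven confirms h far more strongly
```

With these numbers the black raven doubles h's probability while the white shoe raises it by a factor of about 1.0005, matching the card's point: both confirm h, but to wildly different degrees.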
The Grue Paradox
Invented by Nelson Goodman. The predicate “grue” is defined as follows: x is grue iff EITHER x is green and first observed before January 1st 2100 OR x is blue and first observed on or after January 1st 2100. P1. Any universal hypothesis is confirmed by its positive instances. (Nicod’s Criterion) P2. Every one of the many emeralds so far observed has been grue. ∴ Every one of the many emeralds so far observed is a positive instance of the hypothesis that all emeralds are grue. ∴ Every one of the many emeralds so far observed confirms the hypothesis that all emeralds are grue. Indeed, the hypothesis that all emeralds are grue is as well confirmed by observed emeralds as is the hypothesis that all emeralds are green! However, the two hypotheses predict different things on or after January 1st 2100: respectively, that if an unobserved emerald is dug up on or after January 1st 2100 it will be blue, and that if an unobserved emerald is dug up on or after January 1st 2100 it will be green. Clearly, however, we have no expectation that the former prediction will be borne out!
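The definition of “grue” can be written out as a predicate (the cutoff date is the one used in the card; everything else is a minimal sketch):

```python
from datetime import date

CUTOFF = date(2100, 1, 1)  # the cutoff from the definition above

def is_grue(color, first_observed):
    """x is grue iff EITHER x is green and first observed before the
    cutoff OR x is blue and first observed on or after the cutoff."""
    return (color == "green" and first_observed < CUTOFF) or \
           (color == "blue" and first_observed >= CUTOFF)

# Every emerald observed so far has been green, hence also grue:
assert is_grue("green", date(2024, 1, 15))

# So each observed emerald is a positive instance of BOTH hypotheses;
# "all emeralds are green" and "all emeralds are grue" diverge only
# for emeralds first observed on or after the cutoff:
assert not is_grue("green", date(2100, 1, 1))
assert is_grue("blue", date(2100, 1, 1))
```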
What is the Bayesian response to the grue paradox?
Two hypotheses need not be confirmed to the same degree by the same piece of evidence, because they need not have the same prior probability. So if “All emeralds are grue” has a very low prior probability, it will hardly be confirmed at all by the many emeralds so far observed. Why should it have a very low prior probability? One thought is this: consider an emerald that is not in fact dug up and observed before January 1st 2100 but is dug up and observed on January 1st 2100. If all emeralds are grue, then it will be found to be blue. But if all emeralds are grue, then, if it had been dug up and observed just one day earlier, it would have been found to be green. But this is inconsistent with all our background knowledge about minerals, their colors, the effects of being dug, and the effects of the sheer passage of time. So we assign “All emeralds are grue” a prior probability vanishingly close to zero.
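Numerically, with assumed illustrative priors: both hypotheses entail the evidence, so each likelihood is 1, and the grue hypothesis's posterior stays vanishingly small even after updating on all the observed emeralds.

```python
def posterior(p_e_given_h, p_h, p_e):
    """Bayes' Theorem: P(h|e) = P(e|h) * P(h) / P(e)."""
    return p_e_given_h * p_h / p_e

p_e = 0.5      # assumed probability of the evidence (all observed emeralds
               # have been green, hence grue-so-far)
p_green = 0.5  # assumed prior of "all emeralds are green"
p_grue = 1e-12 # vanishingly small prior of "all emeralds are grue"

# Both hypotheses entail e, so P(e|h) = 1 for each:
post_green = posterior(1.0, p_green, p_e)
post_grue = posterior(1.0, p_grue, p_e)

assert post_grue < 1e-9      # hardly confirmed at all
assert post_green > p_green  # genuinely boosted by the same evidence
```

Same evidence, same likelihoods; the entire difference in confirmation comes from the difference in priors, which is the Bayesian response in a nutshell.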
What might a critic of the Bayesian reply point out, as Goodman did?
that we can define “green” in terms of “grue” and “bleen”, where x is bleen iff EITHER x is blue and first observed before January 1st 2100 OR x is green and first observed on or after January 1st 2100: x is green iff EITHER x is grue and first observed before January 1st 2100 OR x is bleen and first observed on or after January 1st 2100. This definition is commonly taken to show that green is in the same (sinking) boat as grue. But it doesn’t, for the hypothesis that all emeralds are green doesn't clash with our background knowledge in the way that the grue hypothesis does. Consider again an emerald that is not in fact dug up and observed before January 1st 2100 but is dug up and observed on January 1st 2100. If all emeralds are green, then it will be found to be bleen. Now if all emeralds are green, then, if it had been dug up and observed just one day earlier, it would have been found to be grue. But this is unproblematic, because once you examine the definitions of “bleen” and “grue”, you will see that on both occasions—the actual occasion and the hypothetical occasion—the emerald is green, an outcome fully consistent with our background knowledge about minerals, their colors, the effects of being dug, and the effects of the sheer passage of time.
Popper’s position comprises six claims (rhetorical; the claims follow on the next cards)
asdf
Claim 1.
Hume was right about the logical status of induction: there is no such thing as good inductive reasoning—no matter how liberally “inductive” is understood. Induction isn’t just fallible, or incapable of yielding certainty; it provides no reason whatever to believe anything at all.
Claim 2
Hence there is no reason whatever to believe the universal hypotheses of science, since such reason would have to derive from inductive reasoning.
Claim 3.
However, it is perfectly rational to prefer the hypotheses that scientists currently accept over those accepted by scientists in the past, and over those supported by reading tea-leaves or looking into crystal balls: it is not necessary to abandon the rationality of science just because you accept inductive skepticism.
Claim 4.
For the rationality of science—which is perfectly genuine—does not consist in using induction to provide evidence for, or reason to believe, scientific hypotheses. It consists, rather, in using Popper’s method of conjectures and refutations, or falsificationism.
There is a logical asymmetry between verification and falsification: while ____ would require induction, ____ requires only deduction.
verification, falsification
Because verification requires induction and falsification requires deduction
no number of positive instances of a universal hypothesis deductively entail that the hypothesis is true; but just one negative instance deductively entails that the hypothesis is false. And, if a hypothesis entails an observation statement O, and O is found by observation to be true, then nothing follows deductively about the truth or falsity of the hypothesis (affirming the consequent is a fallacy, of course); but if instead O is found to be false, then the falsity of the hypothesis follows deductively (by modus tollens).
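The deductive asymmetry can be checked by brute force over truth assignments (a toy propositional sketch, not from the card: H is a hypothesis, O an observation statement H entails).

```python
def entails(h, o):
    """The material conditional H -> O."""
    return (not h) or o

# Affirming the consequent: the premises (H -> O) and O leave H's truth
# open, since both of these assignments satisfy the premises while
# disagreeing about H.
assert entails(True, True) and entails(False, True)

# Modus tollens: no assignment satisfies (H -> O) and not-O while H is
# true, so from those premises the falsity of H follows deductively.
assert not any(
    entails(h, o) and (not o) and h
    for h in (True, False)
    for o in (True, False)
)
```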
The method of conjectures and refutations—or falsificationism—is to propose
a highly informative hypothesis, to derive observational implications from it, and then to see whether those implications are true. If they are, then the hypothesis has passed that test and should be accepted for the purpose of further testing it. But if observational implications are not true, then the hypothesis has failed the test and should be rejected as false.
It is rational to prefer the hypotheses that scientists currently accept because
they have been subjected to stringent tests and have passed them; they may therefore be said (in Popper’s terminology) to be well corroborated, though note that the corroboration of a hypothesis is for Popper entirely a matter of how well it has passed empirical tests; it is not a kind of confirmation.
Claim 5.
The goal of science is to generate a sequence of hypotheses that exhibit increasing verisimilitude, i.e., that get progressively closer to the truth; and falsificationism leads to the achievement of that goal.
Claim 6.
Science has progressed—in the sense that later hypotheses are closer to the truth than earlier ones—and this has occurred because scientists do use, and have used, Popper’s falsificationist method; contra Hume, they have not used induction.
What is an objection to Popper’s falsificationist method?
it covertly relies on induction. Example to illustrate the point: H: All boiling water is at 90 C. O: Thermometer T reads “90 C”. Can O be derived from H? Not without further premises, obviously; but it could if we added further premises—the following auxiliary hypotheses: A1: T is immersed in boiling water. A2: T reads “n C” if it is immersed in some substance that is at n C. Now O can be derived from the conjunction of H, A1, and A2. Suppose, however, that O is found to be false; perhaps T reads “100 C”. What follows? Only that it's not the case that [H and A1 and A2], which is logically equivalent to [~H or ~A1 or ~A2]. So H has not, after all, been falsified. Admittedly, if we had reason to believe that A1 and A2 are true, we could then deduce that H is false; but A2 is a general claim, so the only reason we could have for believing it would be inductive. And it’s very common for a hypothesis to entail observational consequences only if it is conjoined with one or more auxiliary hypotheses; recall the case of Newton’s three laws plus the law of gravitation. So the problem is widespread.
The underdetermination of theory by observational data—if it exists—claims that
preferences for one theory over another are unjustified.
The basic skeptical argument for underdetermination of theory
P1. For any theory T, there’s a rival theory T’ such that, for any observational statement O that is known to be true, T entails O iff T’ entails O. (P1 says that every theory T has an empirically equivalent rival: T and some rival to T entail exactly the same set of observation statements that are known to be true.) P2. A theory is confirmed to the extent, and only to the extent, that its observational consequences are known to be true. (P2 states a form of epistemic empiricism.) ∴ T and T’ are confirmed to exactly the same degree. ∴ A preference for T is unjustified.
Why is the conclusion of the skeptical argument for the underdetermination of theory by observational data very significant?
-If consensus in science doesn’t arise because scientists recognize that the theories they prefer are better confirmed than the ones they don’t prefer, then something else must explain the consensus. Presumably non-rational factors have to be invoked to explain it, e.g., class interests, gender interests, economic interests, and metaphysical or theological assumptions. And sociologists of science advance explanations of scientific consensus along exactly these lines.
In what way does the first premise of the skeptical argument for underdetermination make a strong claim?
Scientists often have trouble coming up with even one theory that entails every observation statement known to be true! So why think that every theory has an empirically equivalent rival? At least three reasons might be given: 1. The curve-fitting problem. For any finite set of data points plotted on a graph with two axes, there are indefinitely many lines that could be drawn through all the points; and each line corresponds to a distinct hypothesis concerning the relationship between the two magnitudes, where each hypothesis makes a different set of predictions about the unknown values. 2. The ability to add or modify auxiliary hypotheses. 3. The ability to “gruify” any universal hypothesis, i.e., to generate an incompatible rival to it by using the recipe by which “All emeralds are grue” can be generated from “All emeralds are green”.
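The curve-fitting problem in miniature, with made-up data: two hypotheses that agree on every data point collected so far but diverge on the unknown values.

```python
points = [(0, 0), (1, 1), (2, 2)]  # assumed data points

h1 = lambda x: x                          # the straight line y = x
h2 = lambda x: x + x * (x - 1) * (x - 2)  # a cubic through the same points

# The two hypotheses are indistinguishable on the collected data:
for x, y in points:
    assert h1(x) == y and h2(x) == y

# ...but they make different predictions about an unsampled value:
assert h1(3) == 3 and h2(3) == 9
```

Adding further data points never closes the gap: for any finite data set, the same trick yields a higher-degree curve through all of them that still disagrees elsewhere.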
The second premise of the skeptical argument for underdetermination of theory neglects what?
The role of so-called super-empirical criteria for evaluating competing theories. Examples of such criteria that have been suggested: economy/parsimony [= ontological simplicity], fit with well-confirmed background theories, ability to predict—and not just accommodate—true observation statements, explanatory power, ability to unify a diversity of true observation statements.
Would a Bayesian agree to premise two of the skeptical argument for underdetermination?
Let T1 and T2 entail exactly the same set, O, of observation statements. Then must P(T1/O) = P(T2/O)? How could these probabilities be different? They could be different if the prior probabilities of the two theories were different, i.e., if P(T1) ≠ P(T2). So, no, a Bayesian would reject P2. Indeed, a Bayesian could propose to use the super-empirical criteria suggested above to assess the prior probabilities of theories.
Why are super-empirical criteria controversial?
At least two reasons: 1. Spelling out the nature of each criterion with precision is hard and perhaps impossible. 2. Why are we entitled to appeal to them, given that we presumably want our theories to be true? Justifying appeals to super-empirical criteria in theory-choice would require showing that the world is such that appeals to super-empirical criteria could be part of a reliable theory-choosing procedure, i.e., a procedure likely to lead to our choosing true theories. But there are exactly two possible ways to show that the world is like that.
What are the two possible ways for showing that a procedure is reliable?
The first way is a priori. But we could show that the world is like that a priori only if it were a necessary truth that the world was simple or unified or whatever, or that theories with the ability to predict—and not just accommodate—true observation statements were more likely to be true, ceteris paribus. But we can easily conceive that the world is not like that, so it isn’t a necessary truth that the world is like that. Thus the a priori way fails. The second way is a posteriori—to show by appeal to observational evidence that in fact the world is simple or unified etc. But this way would be circular, since we would need to use the super-empirical criteria in order to show that the world is simple or unified etc. and that using them is legitimate.
Kitcher holds the dynamic view of justification
that whether a belief one has is justified is determined by what sort of belief-forming process (or propensity to form beliefs) brought it about—specifically, by whether the belief-forming process (or propensity to form beliefs) was reliable in producing true beliefs.
A perceptual belief-forming process is reliable
if it produces true beliefs in, say, a majority of the cases when it is activated by a stimulus of the right sort (light, sound etc.).
An inferential belief-forming process is reliable
if it produces true beliefs in, say, a majority of cases in which it receives true beliefs as input. (We wouldn’t want to call a belief-forming process unreliable if the reason it rarely produced true beliefs was that it rarely received true beliefs as input.)
The static view (contra Kitcher) holds
that whether a belief one has is justified is determined by the logical or evidential relations that hold between the proposition believed and other propositions that one believes or perceptual states that one is in.
Kitcher’s account of the rationality of induction is
we're instrumentally rational in using induction, given our cognitive goals, because induction is an inferential propensity that reliably achieves those goals. And that’s why induction is more rational than any other deductively invalid pattern of reasoning.
But Kitcher insists that there is no
single standard of rationality (not: no standard of rationality; no single standard of rationality); there are many. We have at least two cognitive goals: relief from agnosticism, i.e., having some answer to questions we’re interested in; and a high ratio of true answers to false answers. But we can only achieve one goal at the expense of the other. Inductive propensities that always give answers to our questions will be less cautious and so will more often give incorrect answers; inductive propensities that never give incorrect answers will be very cautious and will often give no answers at all. However, there is no uniquely reasonable point at which to strike a balance between these competing goals. Wanting an answer to every question would be reckless; but wanting answers that are always true would lead to one’s having very few beliefs at all. Presumably many points in between these extremes could be reasonable.
When we’re calculating what he calls the relief index and the truth ratio of a given inductive propensity we must make
some decision about what range of occasions on which the propensity is used we take into account. All actual occasions, and no others? All actual and some non-actual occasions? But which non-actual occasions? (Compare an assessment of the reliability of a car’s starting.) There seems to be no one right way to make such decisions.
Kitcher claims that induction is eliminative induction, which is
reasoning in which a hypothesis is deduced from premises that (i) report the existence of positive instances of the hypothesis and that (ii) assert that one of a certain set of hypotheses is true. The positive instances reported serve as counterexamples to all of the hypotheses but one, and it is this one hypothesis whose truth is deduced. (1) h1 or h2 or h3, or…or hn. (2) e1 and e2 and e3 and…and em. ∴ h1
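A minimal sketch of the eliminative pattern, with hypothetical hypotheses and evidence: the evidence statements serve as counterexamples to every disjunct but one, and that one's truth is then deduced.

```python
# The disjunctive premise: prior practice narrows the field to these
# hypotheses, each paired with the condition it imposes on ravens.
hypotheses = {
    "h1: all ravens are black": lambda bird: bird["color"] == "black",
    "h2: all ravens are white": lambda bird: bird["color"] == "white",
    "h3: all ravens are green": lambda bird: bird["color"] == "green",
}

# Evidence: one black raven is observed (a positive instance of h1,
# and a counterexample to h2 and h3).
evidence = [{"species": "raven", "color": "black"}]

# Eliminate every hypothesis falsified by some evidence statement:
survivors = [name for name, h in hypotheses.items()
             if all(h(bird) for bird in evidence)]

assert survivors == ["h1: all ravens are black"]  # only h1 remains
```

When exactly one hypothesis survives, the premises jointly entail it, which is what makes the inference deductive rather than enumerative.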
Kitcher’s view of how induction works, i.e., his view that it is eliminative, enables the following solution to the paradox of the ravens:
Rival hypotheses that are sensible given prior practice are ruled out only by evidence statements reporting black ravens. Such hypotheses are those of the form, “All ravens are F”, where F is a color contrary to black, and “All ravens are (black iff they meet condition C)”, where these conditions are those that prior practice regards as factors that might influence bird color. (1) h1 or h2 or h3 or … or hn. (2) e1 and e2 and e3 and … and em. (3) a1 and a2 and a3 and … and am. ∴ h1 The warrant for premise (3), as for premise (1), is the background knowledge that is part of our prior practice. The premises will entail the conclusion iff, for each hypothesis other than “h1”, there is some combination of evidence statements and auxiliary hypotheses that entails that the hypothesis is false.
How does the threat of underdetermination manifest itself in the context of eliminative induction?
It’s the claim that it’s not possible for every rival to h1 to be ruled out by some combination of evidence statements and auxiliary hypotheses. Why not? For one or both of two reasons: (1) Any hypothesis always has infinitely many rivals, and it's not possible to rule out infinitely many rivals with a finite number of evidence statements. (2) It’s always possible to save a rival hypothesis from elimination (i) by modifying or denying one of the auxiliary hypotheses needed for the elimination or (ii) by “pleading hallucination” and denying one of the evidence statements. These two claims constitute the Duhemian challenge.
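The role of auxiliaries in escape route (2) can be made vivid with a toy sketch (again, all names and data are mine): a rival is refuted only by an evidence statement together with an accepted auxiliary, so denying the auxiliary "saves" the rival from elimination.

```python
def surviving_rivals(rivals, evidence, auxiliaries):
    """rivals maps a name to refuted_by(e, aux), which is True when
    evidence statement e together with the accepted auxiliaries aux
    entails the falsity of that rival."""
    return [name for name, refuted_by in rivals.items()
            if not any(refuted_by(e, auxiliaries) for e in evidence)]

rivals = {
    "All ravens are white":
        lambda e, aux: e == "black raven observed" and "observer reliable" in aux,
}
evidence = ["black raven observed"]

# With the auxiliary accepted, the rival is eliminated...
print(surviving_rivals(rivals, evidence, {"observer reliable"}))  # []
# ...but "pleading hallucination" (denying the auxiliary) saves it.
print(surviving_rivals(rivals, evidence, set()))  # ['All ravens are white']
```

Kitcher's reply, on the next card, is that this escape is not free: denying the auxiliary has evidential and explanatory costs.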
How does Kitcher respond to the Duhemian challenge?
He argues that these two claims—both universal in form—are far too strong to be plausible. Re (1): There needn’t always be infinitely many rivals, because prior practice might leave only a finite number. And in any case finitely many evidence statements might suffice to eliminate infinitely many rival hypotheses. Re (2): Modifying or denying one of the auxiliary hypotheses needed for the elimination of a rival hypothesis has one or both of two kinds of cost: (i) it requires ignoring independent evidence that supports the auxiliary hypothesis, e.g., evidence that a particular thermometer is not broken or unreliable or affected by X-rays or whatever; (ii) it entails losing explanations and predictions that required the abandoned or modified auxiliary, and there is no assurance that a substitute auxiliary will be available that would enable the retention of these explanations and predictions without entailing an evidence statement known to be false. Perhaps a set of substitute auxiliaries could do so; but there would still be a loss of explanatory unification.
How would a Duhemian respond to Kitcher?
Even if the attempt to eliminate every rival hypothesis is not bound to fail, it might still fail sometimes, or even often, which would also be bad for the thesis that scientific change over time has been rational (i.e., the thesis that new theories are accepted in place of old ones because scientists judge, correctly, that the new theories are better evidenced than the old ones).
How would Kitcher respond to the Duhemian response to his objection to the Duhemian challenge?
Kitcher allows that the attempt to eliminate every rival hypothesis might indeed fail sometimes, though the failure might only be temporary—until new evidence comes along or a new instrument is discovered, for example (as in the historical case of the very slow acceptance of Copernicanism). But whether it fails often is a question that can only be answered by undertaking detailed historical study of concrete episodes in the history of science.