Logology (science of science)
Logology ("the science of science") is the study of all aspects of science and of its practitioners—aspects philosophical, biological, psychological, societal, historical, political, institutional, financial.
The term "logology" is used here as a synonym[1][2] for the equivalent term "science of science"[3] and the semi-equivalent term "sociology of science".[4]
The term "logology" is back-formed from "-logy" (as in "geology", "anthropology", "sociology", etc.) in the sense of the "study of study" or the "science of science"—or, more plainly, the "study of science".[1][2]
The word "logology" provides grammatical variants not available with the earlier terms "science of science" and "sociology of science"—"logologist", "to logologize", "logological", "logologically".[5]
Origins
The early 20th century brought calls, initially from sociologists, for the creation of a new, empirically based science that would study the scientific enterprise itself.[6] The early proposals were put forward with some hesitancy and deference.[7][8] The new meta-science would be given a variety of names,[9] including "science of knowledge", "science of science", "sociology of science", and "logology".
The Polish sociologist Florian Znaniecki, considered the founder of Polish academic sociology, who also served as the 44th president of the American Sociological Association, opened a 1923 article:[10]
Although theoretical reflection on knowledge — which arose as early as Heraclitus and the Eleatics — stretches in an unbroken line through the history of human thought to the present day, nevertheless the most recent times have introduced into these reflections so many new questions and viewpoints so divergent from the earlier ones that we may safely say that we are now witnessing the creation of a new science of knowledge [author's emphasis] whose relationship to the old inquiries may be compared with the relationship of modern physics and chemistry to the 'natural philosophy' that preceded them, or of contemporary sociology to the 'political philosophy' of antiquity and the Renaissance. To be sure, we are still dealing with an accumulation of miscellaneous observations rather than with a systematically and consciously developed scientific whole, but gradually an order is emerging from this chaos and there is beginning to take shape a concept of a single, general theory of knowledge as a separate branch of human culture, endowed with special empirical properties and permitting of empirical study. This theory is beginning to take its place beside such sciences as economics and linguistics as it assumes the traits of a positive, comparative, generalizing and elucidating science. Thereby, too, it is coming to be distinguished clearly from epistemology, from normative logic and from a strictly descriptive history of knowledge. The distinction ... is not the result of some arbitrary a priori designation of the boundaries between the respective fields of human thought, but has developed spontaneously through the emergence — within each of the earlier types of reflection upon knowledge — of problems that have resisted accommodation within its traditional sphere. These problems, gradually concentrating on a common ground outside the scope of purely epistemological, logical or historical thought, constitute one of the main sources of the new science of knowledge.[11]
A dozen years later, two Polish sociologists of a slightly younger generation, Stanisław Ossowski and Maria Ossowska (the Ossowscy, husband and wife), took up the same subject in a more compact and better-known 1935 article on "The Science of Science".[12] They wrote:
The interest taken in science as [a] field of human culture is something new. It was partly derived from historical research, partly called forth by the development of modern sociology, and partly by practical needs (... the encouragement and organization of science). Research in this field is much younger than the science of religion, than the science of economic production, than the science of art.[13]
The Ossowscy — the 1935 English-language version of whose article first introduced the term "science of science" to the world[14] — postulated that the new discipline would subsume such earlier disciplines as epistemology, the philosophy of science, the "psychology of science", and the "sociology of science".[15]
It would also concern itself with
[questions] of a practical and organizing character ... hitherto chiefly [addressed] by institutions [that have] promot[ed] science ... [questions such as] social and state policy in relation to science, the organization of higher institutions of learning, of research institutes and of scientific expeditions, protection of scientific workers, etc. [Science of science would also concern itself with] historical [questions]: [t]he history of the conception of science ... of the scientist ... of the separate disciplines, and of learning in general...[16]
The Ossowscy acknowledged the existence of an approximate German-language equivalent to the expression "science of science": "Wissenschaftslehre". But they explained that, leaving aside Johann Gottlieb Fichte (1762–1814), who had called his whole philosophical speculation by that name, the term had been used in Germany chiefly to denote logic with general methodology, or logic with general methodology and questions usually included in epistemology. "Wissenschaftslehre" had also been used in almost the same sense by Bernard Bolzano (1781–1848) — as logic, understood in a very wide sense, later made familiar at the turn of the 20th century.[17]
The Ossowscy also referenced the 20th-century German philosopher Werner Schingnitz (1899–1953) who, in fragmentary 1931 remarks, had enumerated some possible types of research in the science of science and had proposed a name for it: "scientiology". The two Polish sociologists commented: "Those who wish to replace the expression 'science of science' by a one-word term [that] sound[s] international, in the belief that only after receiving such a name [will] a given group of [questions be] officially dubbed an autonomous discipline, [might] be reminded of the name mathesiology, proposed long ago for similar purposes [by the French mathematician and physicist André-Marie Ampère (1775–1836)]."[18]
Yet, before long, in Poland, the unwieldy three-word term "nauka o nauce" ("science of science") was replaced by the more versatile one-word term "naukoznawstwo" ("logology") and its natural variants: "naukoznawca" ("logologist"), "naukoznawczy" ("logological"), and "naukoznawczo" ("logologically"). And just after World War II, only 11 years after the Ossowscy's landmark 1935 paper, the year 1946 saw the founding of the Polish Academy of Sciences' quarterly Zagadnienia Naukoznawstwa (Logology) — long before similar journals in many other countries.[19]
The new discipline also took root elsewhere — in English-speaking countries, without the benefit of a one-word name.
Science
The nature of things
The word "science" comes from the Latin "scientia", meaning "knowledge". In the English language, the word "science", when unqualified, generally refers to the "natural", "exact", or "hard sciences".[20] The corresponding term in other languages, for example French and Polish, has broader application. French distinguishes "exact sciences" (or "hard sciences"); "physico-chemical and experimental sciences"; and "humanistic and social sciences". Similarly, Polish distinguishes "exact sciences" (including logic and mathematics); "natural sciences" (physics, chemistry, biology, medicine, Earth sciences, geography, astronomy, etc.); "engineering sciences"; "social sciences" (history, geography, psychology, physical anthropology, sociology, political science, economics, international relations, pedagogy, etc.); and "humanistic sciences" (philosophy, history, cultural anthropology, linguistics, etc.).
The American self-described skeptic Michael Shermer writes that
In the late 20th century the humanities took a turn toward post-modern deconstruction and the belief that there is no objective reality to be discovered. To believe in such quaint notions as scientific progress was to be guilty of "scientism" [...]
I subsequently gave up on the humanities but am now reconsidering my position after an encounter [...] with University of Amsterdam humanities professor Rens Bod [...]. Bod pointed out that my definition of science—a set of methods that describes and interprets observed or inferred phenomena, past or present, aimed at testing hypotheses and building theories—applies to such humanities fields as philology, art history, musicology, linguistics, archaeology, historiography and literary studies.[21]
Bod reminded Shermer that in 1440 the Italian philologist Lorenzo Valla exposed the Latin document Donatio Constantini (The Donation of Constantine)—which was used by the Catholic Church to legitimize its land grab of the Western Roman Empire—as a forgery. "Valla," said Bod, "used historical, linguistic and philological evidence, including counterfactual reasoning, to rebut the document. [...] Valla found words and constructions in the document that could not [...] have been used by anyone [in] the time of Emperor Constantine I, at the beginning of the fourth century A.D. The late Latin word Feudum ["fief"], for example, referred to the feudal system. But this was a medieval invention, which did not exist before the seventh century A.D." Valla's methods were those of science, says Bod: "He was skeptical, he was empirical, he drew a hypothesis, he was rational, he used very abstract reasoning (even counterfactual reasoning), he used textual phenomena as evidence, and he laid the foundations for one of the most successful theories: stemmatic philology, which can derive the original archetype text from extant copies (in fact, the much later DNA analysis was based on stemmatic philology)."[21][22]
Inspired by Valla's philological analysis of the Bible [writes Shermer], Dutch humanist Erasmus [1466–1536] employed these same empirical techniques to demonstrate that, for example, the concept of the Trinity did not appear in bibles before the 11th century. In 1606 Leiden University professor Joseph Justus Scaliger published a philological reconstruction of the ancient Egyptian dynasties, finding that the earliest one, dating to 5285 B.C., predated the Bible's chronology for the creation of the world by nearly 1,300 years. This led later scholars such as Baruch Spinoza [1632–77] to reject the Bible as a reliable historical document. "Thus, abstract reasoning, rationality, empiricism and skepticism are not just virtues of science," Bod concluded. "They had all been invented by the humanities." [...]
The transdisciplinary connection between the sciences and humanities is well captured in the German word Geisteswissenschaften, which means "human sciences." This concept embraces everything humans do, including the scientific theories we generate about the natural world. "Too often humanities scholars believe that they are moving toward science when they use empirical methods," Bod reflected. "They are wrong: humanities scholars using empirical methods are returning to their own historical roots in the studia humanitatis of the 15th century, when the empirical approach was first invented."
Regardless of which university building scholars inhabit [writes Shermer], we are all working toward the same goal of improving our understanding of the true nature of things, and that is the way of both the sciences and the humanities, a scientia humanitatis.[21]
Shermer thus concludes by affirming the underlying unity of all disciplines of study: a view more European than American.
Facts and theories
According to the English-born theoretical physicist and mathematician Freeman Dyson,
Science consists of facts and theories. Facts and theories are born in different ways and are judged by different standards. Facts are supposed to be true or false. They are discovered by observers or experimenters. A scientist who claims to have discovered a fact that turns out to be wrong is judged harshly. One wrong fact is enough to ruin a career.
Theories have an entirely different status. They are free creations of the human mind, intended to describe our understanding of nature. Since our understanding is incomplete, theories are provisional. Theories are tools of understanding, and a tool does not need to be precisely true in order to be useful. Theories are supposed to be more-or-less true, with plenty of room for disagreement. A scientist who invents a theory that turns out to be wrong is judged leniently. Mistakes are tolerated, so long as the culprit is willing to correct them when nature proves them wrong.[23]
"The inventor of a brilliant idea," writes Dyson, "cannot tell whether it is right or wrong." Dyson cites a psychologist, David Kahneman, as describing how theories are born: "We can't live in a state of perpetual doubt, so we make up the best story possible and we live as if the story were true." "Great scientists," writes Dyson, "produce right theories and wrong theories, and believe in them with equal conviction." The passionate pursuit of wrong theories is a normal part of the development of science.[24]
Dyson cites, after Mario Livio, five famous scientists who held erroneous scientific theories: Charles Darwin, William Thomson (Lord Kelvin), Linus Pauling, Fred Hoyle, and Albert Einstein. Each made major contributions to the understanding of nature, and each believed firmly in a theory that proved wrong.[24]
Darwin explained the evolution of life with his theory of natural selection of inherited variations, but he believed in a theory of blending inheritance that made the propagation of new variations impossible.[24] He never read Gregor Mendel's studies that showed that the laws of inheritance would become simple when inheritance was considered as a random process. Though Darwin in 1866 did the same experiment that Mendel had, Darwin did not get comparable results because he failed to appreciate the statistical importance of using very large experimental samples. Eventually, Mendelian inheritance by random variation would, no thanks to Darwin, provide the raw material for Darwinian selection to work on.[25]
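The statistical point about sample size can be made concrete with a small simulation. The Python sketch below is an illustration only; the mendelian_ratio helper and its sample sizes are hypothetical, not a reconstruction of Darwin's or Mendel's actual experiments. It models a monohybrid Aa x Aa cross and shows that the underlying 3:1 ratio of dominant to recessive phenotypes emerges reliably only when many offspring are counted.

```python
import random

def mendelian_ratio(n_offspring, seed=0):
    """Simulate a monohybrid Aa x Aa cross and return the observed
    ratio of dominant to recessive phenotypes among the offspring."""
    rng = random.Random(seed)
    dominant = 0
    for _ in range(n_offspring):
        # Each parent passes on 'A' or 'a' with equal probability.
        alleles = (rng.choice("Aa"), rng.choice("Aa"))
        if "A" in alleles:            # at least one dominant allele
            dominant += 1
    recessive = n_offspring - dominant
    return dominant / recessive if recessive else float("inf")

for n in (20, 200, 10_000):
    print(n, round(mendelian_ratio(n, seed=n), 2))
# With small samples the observed ratio scatters widely around 3.0;
# with 10,000 offspring it settles unmistakably close to Mendel's 3:1.
```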
Lord Kelvin discovered basic laws of energy and heat, then used these laws to calculate an estimate of the age of the earth that was too short by a factor of fifty. He based his calculation on the belief that the earth's mantle was solid and could transfer heat from the interior to the surface only by conduction. It is now known that the mantle is partly fluid and transfers most of the heat by the far more efficient process of convection, which carries heat by a massive circulation of hot rock moving upward and cooler rock moving downward. Kelvin could see the eruptions of volcanoes bringing hot liquid from deep underground to the surface; but his skill in calculation blinded him to processes such as volcanic eruptions that could not be calculated.[24]
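Kelvin's conduction-only reasoning can be sketched with the textbook half-space cooling model. The Python snippet below is a rough modern reconstruction under assumed parameter values (initial rock temperature, geothermal gradient, thermal diffusivity), not Kelvin's own published figures; it shows why conduction alone yields an age on the order of a hundred million years, short of the accepted roughly 4.5 billion by about a factor of fifty.

```python
import math

# Half-space cooling model: a body initially at uniform temperature T0,
# cooling by conduction alone, shows a surface temperature gradient of
# T0 / sqrt(pi * kappa * t) after time t.  Solving for t gives a
# Kelvin-style age estimate.  The values below are assumptions of roughly
# the magnitude Kelvin worked with, not his published numbers.

T0 = 3900.0        # assumed initial temperature of molten rock, deg C
grad = 0.037       # present near-surface geothermal gradient, deg C per metre
kappa = 1.2e-6     # thermal diffusivity of rock, m^2/s

t_seconds = (T0 / grad) ** 2 / (math.pi * kappa)
t_years = t_seconds / 3.156e7     # seconds per year

print(f"conduction-only age estimate: {t_years:.2e} years")    # ~1e8 years
print(f"shortfall vs ~4.5e9 years:    factor of ~{4.5e9 / t_years:.0f}")
```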
Linus Pauling discovered the chemical structure of protein and proposed a completely wrong structure for DNA, which carries hereditary information from parent to offspring. Pauling guessed a wrong structure for DNA because he assumed that a pattern that worked for protein would also work for DNA. He overlooked the gross chemical differences between protein and DNA. Francis Crick and James Watson paid attention to the differences and found the correct structure for DNA that Pauling had missed a year earlier.[24]
Fred Hoyle discovered the process by which the heavier elements essential to life, including carbon, nitrogen, oxygen and iron, are created by nuclear reactions in the cores of massive stars. He then proposed a theory of the history of the universe known as steady-state cosmology, which has the universe existing forever without a Big Bang (as Hoyle derisively dubbed it) at the beginning. He held his belief in the steady state long after observations proved that the Big Bang had happened.[24]
Albert Einstein discovered the theory of space, time and gravitation known as general relativity, and then added to it a cosmological constant, a component later known as dark energy. Einstein subsequently withdrew his proposal of dark energy, believing it unnecessary. Long after his death, observations proved that dark energy really exists, so that Einstein's addition to the theory was right and his withdrawal was wrong.[24]
To Mario Livio's five examples of scientists who blundered, Dyson adds a sixth: himself. Dyson had concluded, on theoretical principles, that what was to become known as the W-particle, a charged weak boson, could not exist. An experiment conducted at CERN, in Geneva, later proved him wrong. "With hindsight I could see several reasons why my stability argument would not apply to W-particles. [They] are too massive and too short-lived to be a constituent of anything that resembles ordinary matter."[26]
Knowability
If scientists seek to find the truth about various aspects of reality, philosophers of science address the question of the knowability of reality. The American philosopher Thomas Nagel writes:
[In t]he pursuit of scientific knowledge through the interaction between theory and observation [...] we test theories against their observational consequences, but we also question or reinterpret our observations in light of theory. (The choice between geocentric and heliocentric theories at the time of the Copernican revolution is a vivid example.) [...]
How things seem [emphasis added] is the starting point for all knowledge, and its development through further correction, extension, and elaboration is inevitably the result of more seemings—considered judgments about the plausibility and consequences of different theoretical hypotheses. The only way to pursue the truth is to consider what seems true, after careful reflection of a kind appropriate to the subject matter, in light of all the relevant data, principles, and circumstances.[27]
The core of science
In 1979 Steven Weinberg shared the Nobel Prize in physics for his 1967 mathematical model that unified two of the four fundamental forces of nature: the electromagnetic force and the weak force, which affects radioactive decay. For this and subsequent profound and innovative contributions to theoretical physics, he is regarded by many fellow physicists as the most distinguished living member of their profession.[28]
In his additional capacity as a historian of science, Weinberg had early on concerned himself with the modern era of physics and astronomy, from the late 19th century to the present—a time when, he says, "the goals and standards of physical science have not materially changed." Weinberg maintains that the core goal of science has always been the same: "to explain the world"; and in reviewing earlier periods of scientific thought, he concludes that only since Isaac Newton has that goal been pursued more or less correctly. He decries the "persistent intellectual snobbery" that Plato and Aristotle showed in their disdain for science's practical applications, and he explains why he thinks Francis Bacon and René Descartes are the "most overrated" among the forerunners of modern science (they tried to prescribe rules for conducting science, which "never works").[29]
Weinberg draws subtle parallels between past and present science, as when a scientific theory is "fine-tuned" (adjusted) to make certain quantities equal, without any understanding of why they should be equal. Such adjusting vitiated the celestial models of Plato's followers, in which different spheres carrying the planets and stars were assumed, with no good reason, to rotate in exact unison. But, Weinberg writes, a similar fine-tuning also besets current efforts to understand the "dark energy" that is speeding up the expansion of the universe.[29]
The early history of science has been described as having gotten off to a good start, then faltering. Specifically, the doctrine of atomism, propounded by the pre-Socratic philosophers Leucippus and Democritus, was entirely naturalistic, accounting for the workings of the world by impersonal processes, not by the volitions of gods. However, Weinberg is not impressed by these pre-Socratics as proto-scientists: they apparently never tried to justify their speculations or to test them against evidence.[29]
Weinberg concurs that the reasons why science faltered early on were Plato's suggestion that scientific truth could be attained by reason alone, disregarding empirical observation; and Aristotle's attempt to explain nature teleologically—in terms of ends and purposes. Plato's ideal of attaining knowledge of the world by unaided intellect, Weinberg writes, was "a false goal inspired by mathematics"—one that for centuries "stood in the way of progress that could be based only on careful analysis of careful observation." And it "never was fruitful" to ask, as Aristotle did, "what is the purpose of this or that physical phenomenon." Still, some charity is in order: "Nothing about the practice of modern science," writes Weinberg, "is obvious to someone who has never seen it done."[29]
A scientific field in which the Greek and Hellenistic world did make progress was astronomy. This was partly for practical reasons: the sky had long served as compass, clock and calendar. Also, the regularity of the movements of heavenly bodies made them simpler to describe than earthly phenomena. But not too simple: though the sun, moon and "fixed stars" seemed regular in their celestial circuits, the "wandering stars"—the planets—were puzzling; they seemed to move at variable speeds, and even to reverse direction. Weinberg writes: "Much of the story of the emergence of modern science deals with the effort, extending over two millennia, to explain the peculiar motions of the planets."[30]
The challenge was to make sense of the apparently irregular wanderings of the planets on the assumption that all heavenly motion is actually circular and uniform in speed. Why circular? Because, Plato held, the circle is the most perfect and symmetrical form; therefore circular motion, at uniform speed, was most fitting for celestial bodies. Aristotle agreed with Plato. In Aristotle's cosmos, everything has a "natural" tendency to motion that fulfills its inner potential. For the cosmos' sublunary part (the region below the moon), the natural tendency is to move in a straight line: downward for earthen things (such as rocks) and water; upward for air and fiery things (such as sparks). But in the celestial realm things are not composed of earth, water, air or fire, but of a "fifth element", or "quintessence," which is perfect and eternal. And its natural motion is uniformly circular. The stars, the sun, the moon and the planets are carried in their orbits by a complicated arrangement of crystalline spheres, all centered around an immobile earth.[31]
The Platonic-Aristotelian conviction that celestial motions must be circular persisted stubbornly. It was fundamental to the astronomer Ptolemy's system, which improved on Aristotle's in conforming to the astronomical data by allowing the planets to move in combinations of circles called "epicycles".[31]
It even survived the Copernican revolution. Copernicus was conservative in his Platonic reverence for the circle as the heavenly pattern. According to Weinberg, Copernicus was motivated to dethrone the earth in favor of the sun as the immobile center of the cosmos largely by esthetic considerations: he objected to the fact that Ptolemy, though faithful to Plato's requirement that heavenly motion be circular, had departed from Plato's other requirement that it be of uniform speed. By putting the sun at the center—actually, somewhat off-center—Copernicus sought to honor circularity while restoring uniformity. But to make his system fit the observations as well as Ptolemy's system, Copernicus had to introduce still more epicycles. That was a mistake. As Weinberg writes, it illustrates a recurrent theme in the history of science: "A simple and beautiful theory that agrees pretty well with observation is often closer to the truth than a complicated ugly theory that agrees better with observation."[31]
The planets, however, do not move in perfect circles but in ellipses. It was Johannes Kepler, about a century after Copernicus, who reluctantly (for he too had Platonic affinities) realized this. Thanks to his examination of the meticulous observations compiled by astronomer Tycho Brahe, Kepler "was the first to understand the nature of the departures from uniform circular motion that had puzzled astronomers since the time of Plato."[31]
The replacement of circles by supposedly ugly ellipses overthrew Plato's notion of perfection as the celestial explanatory principle. It also destroyed Aristotle's model of the planets carried in their orbits by crystalline spheres; as Weinberg observes, "there is no solid body whose rotation can produce an ellipse." Even if a planet were attached to an ellipsoid crystal, that crystal's rotation would still trace a circle. And if the planets were pursuing their elliptical motion through empty space, then what was holding them in their orbits?[31]
Science had reached the threshold of explaining the world not geometrically, according to shape, but dynamically, according to force. It was Isaac Newton who finally crossed that threshold. He was the first to formulate, in his "laws of motion", the concept of force. He demonstrated that Kepler's ellipses were the very orbits the planets would take if they were attracted toward the sun by a force that decreased as the square of the planet's distance from the sun. And by comparing the moon's motion in its orbit around the earth to the motion of, perhaps, an apple as it falls to the ground, Newton deduced that the forces governing them were quantitatively the same. "This," writes Weinberg, "was the climactic step in the unification of the celestial and terrestrial in science."[31]
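The quantitative step Weinberg describes, comparing the moon's orbital motion with a body falling at the Earth's surface, is the classic "moon test". The Python sketch below is a back-of-the-envelope check using modern values, not Newton's own figures: if gravity falls off as the square of the distance, then the moon, about 60 Earth radii away, should accelerate toward the Earth at roughly g divided by 60 squared.

```python
import math

g = 9.81                   # surface gravity, m/s^2
r_moon = 3.84e8            # mean Earth-moon distance, m
T = 27.32 * 24 * 3600      # sidereal month, s

# Centripetal acceleration actually required to keep the moon in its orbit:
a_orbit = (2 * math.pi / T) ** 2 * r_moon

# Acceleration predicted by the inverse-square law at ~60 Earth radii:
a_predicted = g / 60 ** 2

print(f"orbital acceleration:      {a_orbit:.2e} m/s^2")
print(f"inverse-square prediction: {a_predicted:.2e} m/s^2")
# Both come out near 2.7e-3 m/s^2, supporting the claim that one and the
# same force governs the falling apple and the orbiting moon.
```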
By formulating a unified explanation of the behavior of planets, comets, moons, tides and apples, writes Weinberg, Newton "provided an irresistible model for what a physical theory should be"—a model that fit no preexisting metaphysical criterion. In contrast to Aristotle, who claimed to explain the falling of a rock by appeal to its inner striving, Newton was unconcerned with finding a deeper cause for gravity. He declared in his Philosophiæ Naturalis Principia Mathematica: "I do not 'feign' hypotheses." What mattered were his mathematically stated principles describing this force, and their ability to account for a vast range of phenomena.[31]
Eventually, in 1915, a deeper explanation for Newton's law of gravitation was found in Albert Einstein's general theory of relativity: gravity could be explained as a manifestation of the curvature in spacetime resulting from the presence of matter and energy. Successful theories like Newton's, writes Weinberg, may work for reasons that their creators do not understand—reasons that deeper theories will later reveal. Scientific progress is not a matter of building theories on a foundation of reason, but of unifying a greater range of phenomena under simpler and more general principles.[31]
Artificial intelligence
Until recently, it had generally been assumed that science is fundamentally a pursuit for human beings, not for machines, though machines can facilitate scientists’ work. However, since 1950, when the English mathematician Alan Turing proposed what has come to be called the “Turing test,” there has been much speculation as to whether machines such as computers can possess intelligence; and, if so, whether intelligent machines could become a threat to human intellectual and scientific ascendancy—or even an existential threat to humanity itself.[32] In the light of such speculation, one might wonder: When will a computer be awarded a Nobel prize—or be indicted for crimes against humanity?
John Searle, professor of philosophy at the University of California, Berkeley, writes that
there remain enormous philosophical confusions about the correct interpretation of [modern advances in computation and information technology]. For example, one routinely reads that in exactly the same sense in which Garry Kasparov […] beat Anatoly Karpov in chess, the computer called Deep Blue played and beat Kasparov.[33]
[T]his claim is [obviously] suspect. In order for Kasparov to play and win, he has to be conscious that he is playing chess, and conscious of a thousand other things such as that he opened with pawn to K4 and that his queen is threatened by the knight. Deep Blue is conscious of none of these things because it is not conscious of anything at all. Why is consciousness so important? You cannot literally play chess or do much of anything else cognitive if you are totally disassociated from consciousness.[33]
There has, writes Searle, long been a systematic ambiguity in the distinction drawn between objectivity and subjectivity.[33]
There is an ambiguous distinction between an epistemic sense (“epistemic” means having to do with knowledge) and an ontological sense (“ontological” means having to do with existence). In the epistemic sense, the distinction is between types of claims (beliefs, assertions, assumptions, etc.). If I say that Rembrandt lived in Amsterdam, that statement is epistemically objective. You can ascertain its truth as a matter of objective fact. If I say that Rembrandt was the greatest Dutch painter that ever lived, that is evidently a matter of subjective opinion: it is epistemically subjective.[33]
Underlying this epistemological distinction between types of claims is an ontological distinction between modes of existence. Some entities have an existence that does not depend on being experienced (mountains, molecules, and tectonic plates are […] examples). Some entities exist only insofar as they are experienced (pains, tickles, and itches are examples). This distinction is between the ontologically objective and the ontologically subjective. No matter how many machines may register an itch, it is not really an itch until somebody consciously feels it: it is ontologically subjective.[33]
Searle draws a “related distinction […] between those features of reality that exist regardless of what we think and those whose very existence depends on our attitudes."[33]
The first class I call observer-independent or original, intrinsic, or absolute. This class includes mountains, molecules, and tectonic plates. They have an existence that is wholly independent of anybody’s attitude, whereas money, property, government, and marriage exist only insofar as people have certain attitudes toward them. Their existence I call observer-dependent or observer-relative.[33]
These distinctions, writes Searle, are important for several reasons.
Most elements of human civilization—money, property, government, universities, and The New York Review [of Books], [for example]—are observer-relative in their ontology because they are created by consciousness. But the consciousness that creates them is not observer-relative. It is intrinsic, and many statements about these elements of civilization can be epistemically objective. For example, it is an objective fact that the [New York Review of Books] exists.[33]
[T]hese distinctions are crucial because just about all of the central notions—computation, information, cognition, thinking, memory, rationality, learning, intelligence, decision-making, motivation, etc.—have two different senses. They have a sense in which they refer to actual, psychologically real, observer-independent phenomena […]. [...] But they also have a sense in which they refer to observer-relative phenomena, phenomena that only exist relative to certain attitudes […].[33]
Searle states that,
in the literal, real, observer-independent sense in which humans compute, mechanical computers do not compute. They go through a set of transitions in electronic states that we can interpret computationally. The transitions in those electronic states are absolute or observer-independent, but the computation is observer-relative. The transitions in physical states are just electrical sequences unless some conscious agent can give them a computational interpretation. […] There is no psychological reality at all to what is happening in the [computer].[34]
Important consequences flow from this.
[A] digital computer is a syntactical machine. It manipulates symbols and does nothing else. For this reason, the project of creating human intelligence by designing a computer program that will pass the Turing Test […] is doomed from the start. The appropriately programmed computer has a syntax [rules for constructing or transforming the symbols and words of a language] but no semantics [comprehension of meaning].[35]
Minds, on the other hand, have mental or semantic content.[35] […]
Except for […] cases of computations carried out by conscious human beings, computation, as defined by Alan Turing and as implemented in actual pieces of machinery, is observer-relative. The brute physical state transitions in a piece of electronic machinery are only computations relative to some actual or possible consciousness that can interpret the processes computationally.[35]
Some have hypothesized that there will eventually come into being “intelligent supercomputers”, vastly more intelligent than humans; and that they might decide, on the basis of their arbitrarily formed motivations, to destroy all life on earth. However, Searle sees no chance of this.[35]
If we ask, “How much real, observer-independent intelligence do computers have, whether ‘intelligent’ or ‘superintelligent’?” the answer is zero, absolutely nothing. The intelligence is entirely observer-relative. And what goes for intelligence goes for thinking, remembering, deciding, desiring, reasoning, motivation, learning, and information processing […]. In the observer-independent sense, the amount that the computer possesses of each of these is zero. […] [T]here is [no] psychological reality to them.[35]
[I]f we are worried about a maliciously motivated superintelligence destroying us, then it is important that the malicious motivation should be real. Without consciousness, there is no possibility of its being real.[35] […]
[Computers] have, literally […], no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. […] [T]he machinery has no beliefs, desires, [or] motivations.[35]
Searle notes that
we do not know how human brains create consciousness and human cognitive processes. […] Until we do know such facts, we are unlikely to be able to build an artificial brain. To carry out such a project, it is essential to remember that what matters are the inner mental processes, not the external behavior. If you get the processes right, the behavior will be an expression of those processes, and if you don’t get the processes right, the behavior that results is irrelevant.[36]
Such, writes Searle, is the current situation with Artificial Intelligence. “Computer engineering is useful for flying airplanes, diagnosing diseases, [etc.]. But the results are for the most part irrelevant to understanding human thinking, reasoning, [information] processing [...], deciding, perceiving, etc., because the results are all observer-relative and not the real thing.”[36]
Why are these mistakes so persistent? […] First there is a residual behaviorism in the cognitive disciplines. Its practitioners tend to think that if you can build a machine that behaves intelligently, then it really is intelligent. The Turing Test is an explicit statement of this mistake.[36]
Secondly there is a residual dualism. Many investigators are reluctant to treat consciousness, thinking, and psychologically real information processing as ordinary biological phenomena like photosynthesis or digestion. The weird marriage of behaviorism […] and dualism […] has led to the confusions that badly need to be exposed.[36]
If computers do ultimately prove incapable of original thought—of discovery and invention—as Searle infers, then it will likely be for lack of what Bolesław Prus discerned as the motive force behind such creativity: needs.[37] It may be natural organisms' responses to their needs that will provide a clew to the mystery of the epiphenomenon that is consciousness, whose absence from electronic computers, in Searle's view, disqualifies contemporary artificial intelligence as an autonomous creative power.[38] All intelligences are processors of information, but not all processors of information are intelligences.[35]
Discovery
Discoveries and inventions
Half a century before Florian Znaniecki published his 1923 paper proposing the creation of an empirical discipline to study science itself, the Polish writer and philosopher Aleksander Głowacki (better known by his pen name, Bolesław Prus) had made the same proposal. In an 1873 public lecture "On Discoveries and Inventions", Prus said:
Until now there has been no science that describes the means for making discoveries and inventions, and the generality of people, as well as many men of learning, believe that there never will be. This is an error. Someday a science of making discoveries and inventions will exist and will render services. It will arise not all at once; first only its general outline will appear, which subsequent researchers will emend and elaborate, and which still later researchers will apply to individual branches of knowledge.[39]
Prus defines "discovery" as "the finding out of a thing that has existed and exists in nature, but which was previously unknown to people";[40] and "invention" as "the making of a thing that has not previously existed, and which nature itself cannot make."[41]
He illustrates the concept of "discovery":
Until 400 years ago, people thought that the Earth comprised just three parts: Europe, Asia, and Africa; it was only in 1492 that the Genoese, Christopher Columbus, sailed out from Europe into the Atlantic Ocean and, proceeding ever westward, after [10 weeks] reached a part of the world that Europeans had never known. In that new land he found copper-colored people who went about naked, and he found plants and animals different from those in Europe; in short, he had discovered a new part of the world that others would later name "America." We say that Columbus had discovered America, because America had already long existed on Earth.[42]
Prus illustrates the concept of "invention":
[As late as] 50 years ago, locomotives were unknown, and no one knew how to build one; it was only in 1828 that the English engineer [George] Stephenson built the first locomotive and set it in motion. So we say that Stephenson invented the locomotive, because this machine had not previously existed and could not by itself have come into being in nature; it could only have been made by man.[41]
According to Prus, "inventions and discoveries are natural phenomena and, as such, are subject to certain laws." Those are the laws of "gradualness", "dependence", and "combination".[43]
1. The law of gradualness. No discovery or invention arises at once perfected, but it is perfected gradually; likewise, no invention or discovery is the work of a single individual but of many individuals, each adding his little contribution. [...] Potatoes were first discovered; later they were found to make good cattle feed; then it was learned that potatoes could nourish people; and, later, potatoes began to be used for making vodka.[44]
In regard to inventions, gradualness may be illustrated by the evolution of the stool. First people found that it was better to sit on a stump or a rock than on the ground. Then, noticing that a rock or a stump was too heavy to lug around, they built a stool consisting of a board and several legs. Next, to the stool they added a backrest, thus making a chair; to the chair, they added arm rests, making an armchair. Then they began painting and padding the armchairs and chairs, and so on.[45]
2. The law of dependence. An invention or discovery is conditional on the prior existence of certain known discoveries and inventions. [...] If potatoes grew only in America, they could not have been discovered before America had been; if the black swan lives only in Australia, the black swan could not have been seen before Australia had been. If the rings of Saturn can be seen through telescopes, then the telescope had to have been invented before the rings could have been seen. [...][45]
3. The law of combination. Any new discovery or invention is a combination of earlier discoveries and inventions, or rests on them. When I study a new mineral, I inspect it, I smell it, I taste it, that is, I combine the mineral with my senses. Then I weigh it and heat it, which is to say, I combine the mineral with a balance and with fire. Then I place it into water, into sulfuric acid, and so forth, in short, I combine the mineral with everything that I have at hand and in this way I learn ever more of its properties. And as for inventions, who does not know that a clock is a combination of wheels, springs, dials, bells, etc.? Who does not know that gunpowder is a combination of sulfur, saltpeter and charcoal?[46]
Each of Prus' three "laws" entails important corollaries. The law of gradualness implies the following:[47]
a) Since every discovery and invention requires perfecting, let us not pride ourselves only on discovering or inventing something completely new, but let us also work to improve or get to know more exactly things that are already known and already exist. […][47]
b) The same law of gradualness demonstrates the necessity of expert training. Who can perfect a watch, if not a watchmaker with a good comprehensive knowledge of his métier? Who can discover new characteristics of an animal, if not a naturalist?[47]
From the law of dependence flow the following corollaries:[47]
a) No invention or discovery, even one seemingly without value, should be dismissed, because that particular trifle may later prove very useful. There would seem to be no simpler invention than the needle, yet the clothing of millions of people, and the livelihoods of millions of seamstresses, depend on the needle’s existence. Even today’s beautiful sewing machine would not exist, had the needle not long ago been invented.[48]
b) The law of dependence teaches us that what cannot be done today, might be done later. People give much thought to the construction of a flying machine that could carry many persons and parcels. The inventing of such a machine will depend, among other things, on inventing a material that is, say, as light as paper and as sturdy and fire-resistant as steel.[49]
Finally, Prus' corollaries to his law of combination:[49]
a) Anyone who wants to be a successful inventor, needs to know a great many things—in the most diverse fields. For if a new invention is a combination of earlier inventions, then the inventor’s mind is the ground on which, for the first time, various seemingly unrelated things combine. Example: The steam engine combines Rumford’s double boiler, the pump, and the spinning wheel.[49]
[…] What is the connection among zinc, copper, sulfuric acid, a magnet, a clock mechanism, and an urgent message? All these had to come together in the mind of the inventor of the telegraph… […][50]
The greater the number of inventions that come into being, the more things a new inventor must know; the first, earliest and simplest inventions were made by completely uneducated people—but today’s inventions, particularly scientific ones, are products of the most highly educated minds. […][51]
b) A second corollary concerns societies that wish to have inventors. I said that a new invention is created by combining the most diverse objects; let us see where this takes us.[51]
Suppose I want to make an invention, and someone tells me: Take 100 different objects and bring them into contact with one another, first two at a time, then three at a time, finally four at a time, and you will arrive at a new invention. Imagine that I take a burning candle, charcoal, water, paper, zinc, sugar, sulfuric acid, and so on, 100 objects in all, and combine them with one another, that is, bring into contact first two at a time: charcoal with flame, water with flame, sugar with flame, zinc with flame, sugar with water, etc. Each time, I shall see a phenomenon: thus, in fire, sugar will melt, charcoal will burn, zinc will heat up, and so on. Now I will bring into contact three objects at a time, for example, sugar, zinc and flame; charcoal, sugar and flame; sulfuric acid, zinc and water; etc., and again I shall experience phenomena. Finally I bring into contact four objects at a time, for example, sugar, zinc, charcoal, and sulfuric acid. Ostensibly this is a very simple method, because in this fashion I could make not merely one but a dozen inventions. But will such an effort not exceed my capability? It certainly will. A hundred objects, combined in twos, threes and fours, will make over 4 million combinations; so if I made 100 combinations a day, it would take me over 110 years to exhaust them all![52]
But if by myself I am not up to the task, a sizable group of people will be. If 1,000 of us came together to produce the combinations that I have described, then any one person would only have to carry out slightly more than 4,000 combinations. If each of us performed just 10 combinations a day, together we would finish them all in less than a year and a half: 1,000 people would make an invention which a single man would have to spend more than 110 years to make…[53][54]
The conclusion is quite clear: a society that wants to win renown with its discoveries and inventions has to have a great many persons working in every branch of knowledge. One or a few men of learning and genius mean nothing today, or nearly nothing, because everything is now done by large numbers. I would like to offer the following simile: Inventions and discoveries are like a lottery; not every player wins, but from among the many players a few must win. The point is not that John or Paul, because they want to make an invention and because they work for it, shall make an invention; but where thousands want an invention and work for it, the invention must appear, as surely as an unsupported rock must fall to the ground.[53]
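Prus' arithmetic can be checked directly. The short Python sketch below (using modern binomial-coefficient notation; the figures are simply Prus' own illustrative numbers, not data from any study) reproduces the totals quoted above.

```python
from math import comb

objects = 100

# Combinations of 100 objects taken two, three and four at a time,
# as in Prus' thought experiment:
total = comb(objects, 2) + comb(objects, 3) + comb(objects, 4)
print(total)                        # 4,087,875 -- "over 4 million"

print(total / 100 / 365.25)         # ~112 years for one person at 100 per day
print(total / 1000)                 # ~4,088 combinations each in a group of 1,000
print(total / 1000 / 10 / 365.25)   # ~1.1 years at 10 per day each
```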
But, asks Prus, "What force drives [the] toilsome, often frustrated efforts [of the investigators]? What thread will clew these people through hitherto unexplored fields of study?"[37][55]
[T]he answer is very simple: man is driven to efforts, including those of making discoveries and inventions, by needs; and the thread that guides him is observation: observation of the works of nature and of man.[37]
I have said that the mainspring of all discoveries and inventions is needs. In fact, is there any work of man that does not satisfy some need? We build railroads because we need rapid transportation; we build clocks because we need to measure time; we build sewing machines because the speed of [unaided] human hands is insufficient. We abandon home and family and depart for distant lands because we are drawn by curiosity to see what lies elsewhere. We forsake the society of people and we spend long hours in exhausting contemplation because we are driven by a hunger for knowledge, by a desire to solve the challenges that are constantly thrown up by the world and by life![37]
Needs never cease; on the contrary, they are always growing. While the pauper thinks about a piece of bread for lunch, the rich man thinks about wine after lunch. The foot traveler dreams of a rudimentary wagon; the railroad passenger demands a heater. The infant is cramped in its cradle; the mature man is cramped in the world. In short, everyone has his needs, and everyone desires to satisfy them, and that desire is an inexhaustible source of new discoveries, new inventions, in short, of all progress.[56]
But needs are general, such as the needs for food, sleep and clothing; and special, such as needs for a new steam engine, a new telescope, a new hammer, a new wrench. To understand the former needs, it suffices to be a human being; to understand the latter needs, one must be a specialist—an expert worker. Who knows better than a tailor what it is that tailors need, and who better than a tailor knows how to find the right way to satisfy the need?[57]
Now let us consider how observation can lead man to new ideas; and to that end, as an example, let us imagine how, more or less, clay products came to be invented.[57]
Suppose that somewhere there lived on clayey soil a primitive people who already knew fire. When rain fell on the ground, the clay turned doughy; and if, shortly after the rain, a fire was set on top of the clay, the clay under the fire became fired and hardened. If such an event occurred several times, the people might observe and thereafter remember that fired clay becomes hard like stone and does not soften in water. One of the primitives might also, when walking on wet clay, have impressed deep tracks into it; after the sun had dried the ground and rain had fallen again, the primitives might have observed that water remains in those hollows longer than on the surface. Inspecting the wet clay, the people might have observed that this material can be easily kneaded in one’s fingers and accepts various forms.[58]
Some ingenious persons might have started shaping clay into various animal forms […] etc., including something shaped like a tortoise shell, which was in use at the time. Others, remembering that clay hardens in fire, might have fired the hollowed-out mass, thereby creating the first [clay] bowl.[59]
After that, it was a relatively easy matter to perfect the new invention; someone else could discover clay more suitable for such manufactures; someone else could invent a glaze, and so on, with nature and observation at every step pointing out to man the way to invention. […][59]
[This example] illustrates how people arrive at various ideas: by closely observing all things and wondering about all things.[59]
Take another example. [S]ometimes, in a pane of glass, we find disks and bubbles, looking through which we see objects more distinctly than with the naked eye. Suppose that an alert person, spotting such a bubble in a pane, took out a piece of glass and showed it to others as a toy. Possibly among them there was a man with weak vision who found that, through the bubble in the pane, he saw better than with the naked eye. Closer investigation showed that bilaterally convex glass strengthens weak vision, and in this way eyeglasses were invented. People may first have cut glass for eyeglasses from glass panes, but in time others began grinding smooth pieces of glass into convex lenses and producing proper eyeglasses.[60]
The art of grinding eyeglasses was known almost 600 years ago. A couple of hundred years later, the children of a certain eyeglass grinder, while playing with lenses, placed one in front of another and found that they could see better through two lenses than through one. They informed their father about this curious occurrence, and he began producing tubes with two magnifying lenses and selling them as a toy. Galileo, the great Italian scientist, on learning of this toy, used it for a different purpose and built the first telescope.[61]
This example, too, shows us that observation leads man by the hand to inventions. This example again demonstrates the truth of gradualness in the development of inventions, but above all also the fact that education amplifies man’s inventiveness. A simple lens-grinder formed two magnifying glasses into a toy—while Galileo, one of the most learned men of his time, made a telescope. As Galileo’s mind was superior to the craftsman’s mind, so the invention of the telescope was superior to the invention of a toy.[61] [...]
The three laws [that have been discussed here] are immensely important and do not apply only to discoveries and inventions, but they pervade all of nature. An oak does not immediately become an oak but begins as an acorn, then becomes a seedling, later a little tree, and finally a mighty oak: we see here the law of gradualness. A seed that has been sown will not germinate until it finds sufficient heat, water, soil and air: here we see the law of dependence. Finally, no animal or plant, or even stone, is something homogeneous and single but is composed of various organs: here we see the law of combination.[62]
Prus holds that, over time, the multiplication of discoveries and inventions has improved the quality of people's lives and has expanded their knowledge. "This gradual advance of civilized societies, this constant growth in knowledge of the objects that exist in nature, this constant increase in the number of tools and useful materials, is termed progress, or the growth of civilization."[63] Conversely, Prus warns, "societies and people that do not make inventions or know how to use them, lead miserable lives and ultimately perish."[64]
Prus' laws of "gradualness", "dependence", and especially of "combination"—the last, stating that "An invention or discovery is conditional on the prior existence of certain known discoveries and inventions"—can be illustrated by the discovery, more than a century later, of the existence of dark matter:
The story of how this [discovery] came about is typical of major revolutions in our understanding of nature. It involved a series of baby steps, missteps, and hard work, as well as the growing convergence of two fields of physics that on the surface couldn't seem farther apart: particle physics, the study of the dynamics of the very small, and cosmology, the study of the dynamics of the universe on its largest scales.[65]
Sleeping Beauties of science
A 2016 Scientific American report highlights the role of rediscovery in science. Indiana University Bloomington researchers combed through 22 million scientific papers published over the previous century and found dozens of "Sleeping Beauties"—studies that sat dormant for years before getting noticed.[66]
The top finds, which languished the longest and later received the most intense attention from scientists, came from the fields of chemistry, physics, and statistics. The dormant findings were awakened by scientists from other disciplines, such as medicine, in search of fresh insights, and by the newly acquired ability to test once-theoretical postulations.[66]
According to Qing Ke, an informatics graduate student who worked on the project, Sleeping Beauties will likely become even more common in the future because of the increasing accessibility of scientific literature.[66]
The Scientific American report lists the top 15 Sleeping Beauties: 7 in chemistry, 5 in physics, 2 in statistics, and 1 in metallurgy.[66] Examples include:
- Herbert Freundlich's "Concerning Adsorption in Solutions" (1906), the first mathematical model of adsorption, when atoms or molecules adhere to a surface. Today both environmental remediation and decontamination in industrial settings rely heavily on adsorption.[66]
- William S. Hummers and Richard E. Offeman, "Preparation of Graphitic Oxide", Journal of the American Chemical Society, vol. 80, no. 6 (March 20, 1958), p. 1339, introduced Hummers' Method, a technique for making graphite oxide. Recent interest in graphene's potential has brought renewed attention to the 1958 paper. Graphite oxide could serve as a reliable intermediate for the 2-D material.[66]
- J[ohn] Turkevich, P. C. Stevenson, J. Hillier, "A Study of the Nucleation and Growth Processes in the Synthesis of Colloidal Gold", Discuss. Faraday Soc., 1951, 11, pp. 55–75, explains how to suspend gold nanoparticles in liquid. It owes its awakening to medicine, which now employs gold nanoparticles to detect tumors and deliver drugs.[66]
- A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Physical Review, vol. 47 (May 15, 1935), pp. 777–780. This famous thought experiment in quantum physics—now known as the EPR paradox, after the authors' surname initials—was discussed theoretically when it first came out. It was not until the 1970s that physics had the experimental means to test quantum entanglement.[66]
Multiple discovery
Historians and sociologists have remarked on the occurrence, in science, of "multiple independent discovery". The American sociologist Robert K. Merton (1910–2003) defined such "multiples" as instances in which similar discoveries are made by scientists working independently of each other.[67] "Sometimes the discoveries are simultaneous or almost so; sometimes a scientist will make a new discovery which, unknown to him, somebody else has made years before."[68][69]
Commonly cited examples of multiple independent discovery are the 17th-century independent formulation of calculus by Isaac Newton, Gottfried Wilhelm Leibniz and others, described by A. Rupert Hall;[70] the 18th-century discovery of oxygen by Carl Wilhelm Scheele, Joseph Priestley, Antoine Lavoisier and others; and the theory of evolution of species, independently advanced in the 19th century by Charles Darwin and Alfred Russel Wallace.[71] Many more examples of multiple discovery have been identified.
Merton contrasted a "multiple" with a "singleton" — a discovery that has been made uniquely by a single scientist or group of scientists working together.[72] He believed that it is multiple discoveries, rather than unique ones, that represent the common pattern in science.[73]
Multiple discoveries in the history of science provide evidence for evolutionary models of science and technology, such as memetics (the study of self-replicating units of culture), evolutionary epistemology (which applies the concepts of biological evolution to study of the growth of human knowledge), and cultural selection theory (which studies sociological and cultural evolution in a Darwinian manner).
A recombinant-DNA-inspired "paradigm of paradigms" has been posited that describes a mechanism of "recombinant conceptualization": a new concept arises through the crossing of pre-existing concepts and facts. This is what is meant when one says that a scientist or artist has been "influenced by" another — etymologically, that a concept of the latter's has "flowed into" the mind of the former. Of course, as Freeman Dyson points out, not every new concept will be viable:[24] adapting social Darwinist Herbert Spencer's phrase, only the fittest concepts survive.[74]
It has been argued that, in regard to multiple discovery, science and art are similar.[75][76] When two scientists independently make the same discovery, their papers are not word-for-word identical, but the core ideas in the papers are the same; likewise, two novelists may independently write novels with the same core themes, though their novels are not identical word-for-word. The paradigm of recombinant conceptualization[77]—more broadly, of recombinant occurrences—that explains multiple discovery in science and the arts, also elucidates the phenomenon of historic recurrence, wherein similar events are noted in the histories of countries widely separated in time and geography; it is the recurrence of patterns that lends a degree of prognostic power—and, thus, additional scientific validity—to the findings of history.[78]
The phenomenon of multiple independent discoveries and inventions can be viewed as a corollary to Bolesław Prus' three laws of gradualness, dependence, and combination (see "Discoveries and inventions", above).[79] Prus' laws of gradualness and dependence may, in turn, be seen as corollaries to his law of combination, since both imply that a given scientific or technological advance remains impossible until the theories, facts, or technologies that must be combined to produce it have become available.
Multiple independent discovery and invention, like discovery and invention generally, have been fostered by the evolution of means of communication: roads, vehicles, sailing vessels, writing, printing, institutions of education, telegraphy, and mass media, including the internet. Gutenberg's invention of printing (which itself involved a number of discrete inventions) substantially facilitated the transition from the Middle Ages to modern times. All these communication developments have catalyzed and accelerated the process of recombinant conceptualization, and thus also of multiple independent discovery and invention.
Sociology of science
Politics
Emerging in the wake of World War II and the Manhattan Project that produced the world's first nuclear weapons, Big Science has provoked debate about its value. In 1961 Alvin M. Weinberg, director of Oak Ridge National Laboratory, argued that Big Science was "inject[ing] a journalistic flavor" into research, "which is fundamentally in conflict with the scientific method"—a situation in which "the spectacular rather than the perceptive becomes the scientific standard." He also worried that, with huge sums of money available to researchers, "one sees evidence of scientists' spending money instead of thought."[80]
Big Science has long been the norm in physics, which requires massive particle accelerators such as the Large Hadron Collider at CERN near Geneva. In biology, Big Science debuted in 1990 with the Human Genome Project, a 13-year, $3-billion effort co-funded by the U.S. National Institutes of Health and the U.S. Department of Energy to sequence human DNA. In the early 2010s, neuroscience became a domain of Big Science. Almost concurrently with the European Union's Human Brain Project, the United States announced the potentially multibillion-dollar BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative. Major new brain-research initiatives were also announced by Israel, Canada, Australia, New Zealand, Japan, and China.[81]
Thomas R. Insel, director of the National Institute of Mental Health, one of several agencies organizing the U.S.'s BRAIN Initiative (others include the National Science Foundation and the Defense Advanced Research Projects Agency), says that policy-makers and scientists were inspired to urge expansion of brain research by concern about the spread and cost of mental disorders, combined with excitement about new brain-manipulation technologies such as optogenetics.[82]
The early history of the European Union's Human Brain Project makes an instructive case study in the mismanagement of a Big Science project. The Human Brain Project was inspired by neuroscientist Henry Markram's vision of reverse-engineering the circuitry of the human brain. In a 2009 TED talk, he first presented to the general public his vision of mathematically simulating the brain's 86 billion neurons and 100 trillion synapses on a supercomputer. He said it could be done within 10 years and suggested that such a mathematical model might even be capable of consciousness. In various talks, interviews, and articles, he suggested that a mathematical brain model would produce breakthroughs such as simulation-driven drug discovery, the replacement of certain kinds of animal experiments, and a better understanding of disorders such as Alzheimer's disease. He further expected the simulated brain to spin off technology for building new, faster computers and to yield robots with cognitive skills and possibly intelligence. Many neuroscientists were skeptical, but his vision seemed vindicated in January 2013 when the European Union awarded him $1.3 billion, spread over 10 years, to build his simulated brain.[83]
However, the Human Brain Project created a deep public schism among European neuroscientists, and in less than two years Markram lost his position in the project's executive leadership. A July 2014 open letter attacking the Project's science and organization gathered the signatures of more than 800 scientists. In March 2015, with the signatories threatening a boycott of what was supposed to be a Europe-wide collaboration, Markram initiated a mediation process to address the critics' concerns. A committee of 27 scientists reviewed both sides' arguments and, except for two dissenters, largely agreed with the critics. The mediators called for a massive overhaul of the Human Brain Project, including a new management structure and a change in scientific focus.[81]
Stefan Theil argues that, "as much dysfunction as there has been around the [Human Brain Project]'s Swiss headquarters, the ultimate source of the problem is [...] in Brussels. There, at the seat of the European Commission, the executive arm of the European Union, a system of Big Science funding that marries politics with scientific objectives, allows little transparency, and exercises insufficient control has enabled the mess that the HBP has become."[81]
If [Henry Markram's project of reverse-engineering the human brain's circuitry] were possible, mainstream neuroscientists say, reengineering the brain at the level of detail envisioned by [him] would tell us nothing about cognition, memory or emotion—just as copying the hardware in a computer, atom by atom, would tell us little about the complex software running on it. Others accused Markram of exaggerating the [Human Brain Project]'s potential breakthroughs. [...] Despite skepticism in the neuroscience community, Markram won over the people who really mattered: funders at the European Commission, who seem to have looked less closely at the proposal's scientific feasibility than at its potential economic and political payoff. "The project's genesis was that politicians wanted to do something for European industry to catch up," [...] says [Christoph Ebell, the Human Brain Project's executive director]. In 2009, driven by fear of falling further behind the U.S. in computers, digital services and other technologies, what is now the European Commission's Directorate General for Communications Networks, Content and Technology began creating a competition for "flagship" projects funded with at least 1 billion euros each. As much industrial policy as science, these initiatives were to "enable Europe to take the lead" in future and emerging technologies [...]. Markram's brain on a supercomputer—and his promises of what it would achieve for neuroscience, medicine, robotics and computer technology—was a good fit for a bureaucracy that believed a 10-year, top-down plan for "disruptive" innovation was possible.[84]
Because the Human Brain Project was envisioned as a showcase project outside the usual science-funding process—and because of the big budget that had to be justified—politicians, bureaucrats and even scientists had strong incentives to exaggerate its promises.[84]
As the Human Brain Project was being set up in 2013, the European Commission failed to insist on the usual checks and balances. According to the 2015 mediation report, the project's governance was riddled with conflicts of interest. Not only did Markram and two other scientists control the board of directors, and thus the distribution of funds among the consortium of 112 institutions, but Markram's and several other board members' projects were the beneficiaries of their own funding decisions. It was only after the neuroscientists' July 2014 open letter that the European Commission began to mention governance problems at the Human Brain Project. "Without the neuroscience community's revolt", writes Stefan Theil in October 2015, "it is not clear that the organizational changes at the HBP would be happening now."[85]
The U.S.'s BRAIN Initiative, announced in April 2013, at first met with skepticism similar to that directed at the European Union's Human Brain Project. But instead of proceeding with secret panels and confidential reviews, as the European project had, the U.S. National Institute of Mental Health put the initiative on hold, named a panel of 15 leading brain experts, and let the country's brain scientists define the project in a series of public workshops. A year's deliberations produced an ambitious interdisciplinary program to develop new technological tools that will enable researchers to better monitor, measure and stimulate the brain.[85]
The key difference between the European Union's Human Brain Project and the U.S.'s BRAIN Initiative is that the latter does not depend on a single scientific vision. Instead, many teams will compete for grants and lead innovation in different, unplanned directions. Competition takes place through the U.S. National Institute of Mental Health's peer-review process, which prevents the conflicts of interest that plagued the European Human Brain Project's decision-making. Peer review is not perfect; it tends to favor known scientific paradigms. "But the BRAIN Initiative's more competitive and transparent decision making", writes Stefan Theil, "is far removed from the political black box in Brussels that produced the [Human Brain Project]."[85]
The [U.S.] BRAIN Initiative has a good chance of succeeding because, despite its packaging as a moonshot-style megaproject, it is not so much Big Science as a model of distributed innovation under a central funding umbrella, with rules that encourage collaboration. The initiative's megaproject label is, perhaps, just clever PR to raise funds and galvanize support. "When I talk to members of Congress, they always want to know what the new idea is," Insel [director of the National Institute of Mental Health] says. "They don't want to spend money on more of the same." Media coverage also flocks to big new ideas. The result is that a Big Science project—or one packaged as such—is often an easier sell to politicians, their constituents and journalists. "There is a zeitgeist now of Big Science being more effective," says Zachary Mainen, head of systems neuroscience at the Lisbon-based Champalimaud Foundation and co-organizer of the [July 2014] open letter against the [Human Brain Project]. "But that doesn't mean you have to eliminate competition."[85]
Finance
Nathan Myhrvold, former Microsoft chief technology officer and founder of Microsoft Research, asserts that the financing of basic science cannot be left to the private sector—that "without government resources, basic science will grind to a halt."[86]
Myhrvold notes that Albert Einstein's general theory of relativity, published on 2 December 1915, did not spring full-blown from his brain in some eureka moment. He worked at it for years—finally driven to complete it by a rivalry with mathematician David Hilbert.[86]
Examine the... history of almost any iconic scientific discovery or technological invention—the lightbulb, the transistor, DNA, even the Internet—and you'll find that the famous names credited with the breakthrough were only a few steps ahead of a pack of competitors. Recently some writers and elected officials have used this phenomenon, called parallel innovation, to argue against the public financing of basic research. [...] Government [it has been asserted] should leave it to companies to finance the research they need. [Such] arguments are dangerously wrong. Without government support, most basic scientific research will never happen. This is most clearly true for the kind of pure research that has delivered [...] great intellectual benefits but no profits, such as the work that brought us the Higgs boson, or the understanding that a supermassive black hole sits at the center of the Milky Way, or the discovery of methane seas on the surface of Saturn's moon Titan. Company research laboratories used to do this kind of work: experimental evidence for the big bang was discovered at AT&T's Bell Labs, resulting in a Nobel Prize. Now those days are gone.
Even in applied fields, such as materials science and computer science, companies now understand that basic research is a form of charity—so they avoid it. Scientists at Bell Labs created the transistor, but that invention earned billions for Intel (and Microsoft). Engineers at Xerox PARC invented the modern graphical user interface, although Apple (and Microsoft) profited the most. IBM researchers pioneered the use of giant magnetoresistance to boost hard-disk capacity but soon lost the disk-drive business to Seagate and Western Digital.[86]
When Myhrvold created Microsoft Research, he and Bill Gates were clear that basic research was not their mission. Unless their researchers focused narrowly on innovations that could quickly be turned into revenues, the R&D budget could not be justified to their investors. "The business logic at work here has not changed. Those who believe profit-driven companies will altruistically pay for basic science that has wide-ranging benefits—but mostly to others and not for a generation—are naive."[86]
Myhrvold concludes:
If government were to leave it to the private sector to pay for basic research, most science would come to a screeching halt. What research survived would be done largely in secret, for fear of handing the next big thing to a rival. In that situation, Einstein might never have felt the need to finish his greatest work.[86]
Class
The scientific domain, like other societal domains, shows class distinctions. Sociologist Harriet Zuckerman writes:
[I]t is striking that more than half (forty-eight) of the ninety-two [Nobel] laureates [in sciences] who did their prize-winning research in the United States by 1972 had worked either as students, postdoctorates, or junior collaborators under older Nobel laureates.... What is more, these forty-eight future laureates worked under a total of seventy-one laureate masters.[87]
Zuckerman uses the cohort of Nobel laureates in the sciences in the United States to study and illustrate certain phenomena, while acknowledging that many Nobel-quality scientists have never received a Nobel prize and never will, due to the limited number of such prizes available.[88]
These scientists, like the "immortals" who happened not to have been included among the cohorts of forty in the French Academy, may be said to occupy the "forty-first chair" in science [...]. Scientists of the first rank who never won the Nobel prize include such giants as [Dmitri] Mendele[y]ev [1834–1907], whose Periodic Law and table of elements are known to every schoolchild, and Josiah Willard Gibbs [1839–1903], America's greatest scientist of the nineteenth century, who provided the foundations of modern chemical thermodynamics and statistical mechanics. They also include the bacteriologist Oswald T. Avery [1877–1955], who laid the groundwork for explosive advances in modern molecular biology, as well as all the mathematicians, astronomers, and earth and marine scientists of the first class who work in fields statutorily excluded from consideration for Nobel prizes.[88]
Zuckerman observes that "biological parents cannot choose their children any more than children can choose their biological parents. But in the social domain generally, and specifically in the domain of science and learning, there is an option."[89]
To some extent, students of promise can choose masters with whom to work and masters can choose among the cohorts of students who present themselves for study. This process of bilateral assortative selection is conspicuously at work among the ultra-elite of science. Actual and prospective members of that elite select their scientist parents and therewith their scientist ancestors just as later they select their scientist progeny and therewith their scientist descendants.[89]
[T]he lines of elite apprentices to elite masters who had themselves been elite apprentices, and so on indefinitely, often reach far back into the history of science, long before 1900, when [Alfred] Nobel's will inaugurated what now amounts to the International Academy of Sciences. As an example of the many long historical chains of elite masters and apprentices, consider the German-born English laureate Hans Krebs (1953), who traces his scientific lineage [...] back through his master, the 1931 laureate Otto Warburg. Warburg had studied with Emil Fis[c]her [1852–1919], recipient of a prize in 1902 at the age of 50, three years before it was awarded [in 1905] to his teacher, Adolf von Baeyer [1835–1917], at age 70. This lineage of four Nobel masters and apprentices has its own pre-Nobelian antecedents. Von Baeyer had been the apprentice of F[riedrich] A[ugust] Kekulé [1829–96], whose ideas of structural formulae revolutionized organic chemistry and who is perhaps best known for the often retold story about his having hit upon the ring structure of benzene in a dream (1865). Kekulé himself had been trained by the great organic chemist Justus von Liebig (1803–73), who had studied at the Sorbonne with the master J[oseph] L[ouis] Gay-Lussac (1778–1850), himself once apprenticed to Claude Louis Berthollet (1748–1822). Among his many institutional and cognitive accomplishments, Berthollet helped found the École Polytechnique, served as science advisor to Napoleon in Egypt, and, more significant for our purposes here, worked with [Antoine] Lavoisier [1743–94] to revise the standard system of chemical nomenclature.[90]
From this summary, it appears that [Nobel] laureates are only continuing a long-standing historical pattern for replenishing the scientific ultra-elite.[90]
Sexual bias
Claire Pomeroy, M.D., M.B.A.—president of the Albert and Mary Lasker Foundation, dedicated to advancing medical research—writing in Scientific American, points out that women scientists continue to be subjected to sexual harassment and to discrimination in professional advancement.[91]
Although the percentage of doctorates awarded to women in life sciences [in the United States] increased from 15 to 52 percent between 1969 and 2009, only about a third of assistant professors and less than a fifth of full professors in biology-related fields in 2009 were female. Women make up only 15 percent of permanent department chairs in medical schools and barely 16 percent of medical school deans. [...] The problem is not only outright sexual harassment—it is a culture of exclusion and unconscious bias that leaves many women feeling demoralized, marginalized and unsure. In one study, science faculty were given identical résumés in which the names and genders of two applicants were swapped; both male and female faculty judged the male applicant to be more competent and offered him a higher salary.
Unconscious bias also appears in the form of "microassaults" that women scientists [...] endure daily. This is the endless barrage of purportedly insignificant sexist jokes, insults and put-downs that accumulate over the years and undermine confidence and ambition. Each time it is assumed that the only woman in the lab group will play the role of recording secretary, each time a research plan becomes finalized in the men's lavatory between conference sessions, each time a woman is not invited to go out for a beer after the plenary lecture to talk shop, the damage is reinforced.
When I speak to groups of women scientists, I often ask them if they have ever been in a meeting where they made a recommendation, had it ignored, and then heard a man receive praise and support for making the same point a few minutes later. Each time the majority of women in the audience raise their hands. Microassaults are especially damaging when they come from a high school science teacher, college mentor, university dean or a member of the scientific elite who has been awarded a prestigious prize—the very people who should be inspiring and supporting the next generation of scientists.[91]
Sagan effect
Largely as a result of his growing popularity, astronomer and science popularizer Carl Sagan, creator of the 1980 PBS TV series Cosmos, came to be ridiculed by his scientist peers; he was denied tenure at Harvard University in the 1960s and membership in the National Academy of Sciences in the 1990s. The eponymous "Sagan effect" persists: as a group, scientists still discourage individual investigators from engaging with the public unless they are already well-established senior researchers.[92]
The operation of the Sagan effect deprives society of the full range of expertise needed to make informed decisions about complex questions, including genetic engineering, climate change, and energy alternatives. Fewer scientific voices mean fewer arguments to counter antiscience or pseudoscientific discussion. The Sagan effect also creates the false impression that science is the domain of older white men (who dominate the senior ranks), thereby tending to discourage women and minorities from considering science careers.[92]
A number of factors contribute to the Sagan effect's durability. At the height of the Scientific Revolution in the 17th century, many researchers emulated the example of Isaac Newton, who dedicated himself to physics and mathematics and never married. These scientists were viewed as pure seekers of truth who were not distracted by more mundane concerns. Similarly, today anything that takes scientists away from their research, such as having a hobby or taking part in public debates, can undermine their credibility as researchers.[93]
Another, more prosaic factor in the Sagan effect's persistence may be professional jealousy.[93]
However, there appear to be signs that engaging with the rest of society is becoming less hazardous to a career in science. So many people now have social-media accounts that becoming a public figure is no longer as unusual for scientists as it once was. Moreover, as traditional funding sources stagnate, going public sometimes opens new, unconventional funding streams. A few institutions, such as Emory University and the Massachusetts Institute of Technology, may have begun to appreciate outreach as an area of academic activity alongside the traditional roles of research, teaching, and administration. Unusually among federal funding agencies, the National Science Foundation now officially favors popularization.[94]
See also
- Artificial intelligence
- Big Science
- Discovery (observation)
- Historic recurrence
- History of science
- History of technology
- Invention
- List of misnamed theorems
- List of multiple discoveries
- Matilda effect
- Matthew effect
- Multiple discovery
- Science
- Science and technology studies
- Science of science policy
- Science studies
- Science, technology and society
- Scientific method
- Sociology of science
Notes
- 1 2 Stefan Zamecki (2012). Komentarze do naukoznawczych poglądów Williama Whewella (1794–1866): studium historyczno-metodologiczne [Commentaries to the Logological Views of William Whewell (1794–1866): A Historical-Methodological Study]. Wydawnictwa IHN PAN. ISBN 978-83-86062-09-6. English-language summary: pp. 741–43.
- 1 2 Christopher Kasparek (1994). "Prus' Pharaoh: The Creation of a Historical Novel". The Polish Review. XXXIX (1): 45–46, note 3. JSTOR 25778765.
- ↑ "Science of Science Cyberinfrastructure Portal... at Indiana University". Also Maria Ossowska and Stanisław Ossowski, "The Science of Science", 1935, reprinted in Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, Boston, D. Reidel Publishing Company, 1982, ISBN 83-01-03607-9, pp. 82–95.
- ↑ Joseph Ben-David & Teresa A. Sullivan (1975). "Sociology of Science". Annual Review of Sociology. 1 (1): 203–222. doi:10.1146/annurev.so.01.080175.001223.
- ↑ This meaning of "logology" is distinct from "the study of words", as the term was introduced by Kenneth Burke in The Rhetoric of Religion: Studies in Logology (1961), which sought to find a universal theory and methodology of language. Burke, Kenneth (1970). The Rhetoric of Religion: Studies in Logology. University of California Press. ISBN 9780520016101. In introducing the book, Burke writes: "If we defined 'theology' as 'words about God', then by 'logology' we should mean 'words about words'". Burke's "logology", in this theological sense, has been cited as a useful tool of sociology. Bentz, V.M.; Kenny, W. (1997). ""Body-As-World": Kenneth Burke's Answer to the Postmodernist Charges against Sociology". Sociological Theory. 15 (1): 81–96. doi:10.1111/0735-2751.00024.
- ↑ Bohdan Walentynowicz, "Editor's Note", Polish Contributions to the Science of Science, edited by Bohdan Walentynowicz, Dordrecht, D. Reidel Publishing Company, 1982, ISBN 83-01-03607-9, p. XI.
- ↑ Klemens Szaniawski, "Preface", Polish Contributions to the Science of Science, p. VIII.
- ↑ Maria Ossowska and Stanisław Ossowski concluded that, while the singling out of a certain group of questions into a separate, "autonomous" discipline might be insignificant from a theoretical standpoint, it is not so from a practical one: "A new grouping of [questions] lends additional importance to the original [questions] and gives rise to new ones and [to] new ideas. The new grouping marks out the direction of new investigations; moreover, it may exercise an influence on university studies [and on] the found[ing] of chairs, periodicals and societies." Maria Ossowska and Stanisław Ossowski, "The Science of Science", reprinted in Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, pp. 88–91.
- ↑ Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, passim.
- ↑ Florian Znaniecki, "Przedmiot i zadania nauki o wiedzy" ("The Subject Matter and Tasks of the Science of Knowledge"), Nauka Polska (Polish Science), vol. IV (1923), no. 1.
- ↑ Florian Znaniecki, "The Subject Matter and Tasks of the Science of Knowledge" (English translation), Polish Contributions to the Science of Science, pp. 1–2.
- ↑ Maria Ossowska and Stanisław Ossowski, "The Science of Science", originally published in Polish as "Nauka o nauce" ("The Science of Science") in Nauka Polska (Polish Science), vol. XX (1935), no. 3.
- ↑ Maria Ossowska and Stanisław Ossowski, "The Science of Science", reprinted in Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, p. 83.
- ↑ Bohdan Walentynowicz, Editor's Note, in Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, p. XI.
- ↑ Maria Ossowska and Stanisław Ossowski, "The Science of Science", reprinted in Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, pp. 84–85.
- ↑ Maria Ossowska and Stanisław Ossowski, "The Science of Science", in Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, p. 86.
- ↑ Maria Ossowska and Stanisław Ossowski, "The Science of Science", in Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, pp. 86–87.
- ↑ Maria Ossowska and Stanisław Ossowski, "The Science of Science", in Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, pp. 87–88, 95.
- ↑ Bohdan Walentynowicz, "Editor's Note", Polish Contributions to the Science of Science, p. XII.
- ↑ Michael Shermer, "Scientia Humanitatis: Reason, empiricism and skepticism are not virtues of science alone", Scientific American, vol. 312, no. 6 (June 2015), p. 80.
- 1 2 3 Michael Shermer, "Scientia Humanitatis", Scientific American, vol. 312, no. 6 (June 2015), p. 80.
- ↑ Similarly rigorous scientific analysis is applied to the visual arts when investigating the authenticity of putative old-master paintings; and—as described in the Stanford University Cantor Arts Center exhibit, "500 Years of Italian Master Drawings from the Princeton University Art Museum," on show through 24 August 2015—in fixing the actual authorship of a disegno that clearly served as a preliminary sketch for a final work of established authorship. Similar genetic kinships have been rigorously demonstrated between works of literature; for example, Zygmunt Szweykowski showed that a preliminary sketch for Bolesław Prus' historical novel Pharaoh was his historical short story, "A Legend of Old Egypt", which in turn was inspired by the fatal 1887-88 illnesses of Germany's Kaiser Wilhelm I and his successor, Friedrich III. Zygmunt Szweykowski, "Geneza noweli 'Z legend dawnego Egiptu'" ("The Genesis of the Short Story, 'A Legend of Old Egypt'"), in Nie tylko o Prusie: szkice (Not Only about Prus: Sketches), pp. 256-61, 299-300.
- ↑ Freeman Dyson, "The Case for Blunders" (review of Mario Livio, Brilliant Blunders: From Darwin to Einstein—Colossal Mistakes by Great Scientists that Changed Our Understanding of Life and the Universe, Simon and Schuster), The New York Review of Books, vol. LXI, no. 4 (March 6, 2014), p. 4.
- 1 2 3 4 5 6 7 8 Freeman Dyson, "The Case for Blunders", The New York Review of Books, vol. LXI, no. 4 (March 6, 2014), p. 4.
- ↑ Freeman Dyson, "The Case for Blunders", The New York Review of Books, vol. LXI, no. 4 (March 6, 2014), pp. 6, 8.
- ↑ Freeman Dyson, "The Case for Blunders", The New York Review of Books, vol. LXI, no. 4 (March 6, 2014), p. 8.
- ↑ Thomas Nagel, "Listening to Reason" (a review of T.M. Scanlon, Being Realistic about Reasons, Oxford University Press, 132 pp.), The New York Review of Books, vol. LXI, no. 15 (October 9, 2014), p. 49.
- ↑ Jim Holt, "At the Core of Science" (a review of Steven Weinberg, To Explain the World: The Discovery of Modern Science, Harper, [2015], 416 pp., $28.99, [ISBN 978-0062346650]), The New York Review of Books, vol. LXII, no. 14 (September 24, 2015), p. 53.
- 1 2 3 4 Jim Holt, "At the Core of Science" (a review of Steven Weinberg, To Explain the World: The Discovery of Modern Science, Harper, 2015), The New York Review of Books, vol. LXII, no. 14 (September 24, 2015), p. 53.
- ↑ Jim Holt, "At the Core of Science" (a review of Steven Weinberg, To Explain the World: The Discovery of Modern Science, Harper, 2015), The New York Review of Books, vol. LXII, no. 14 (September 24, 2015), pp. 53–54.
- 1 2 3 4 5 6 7 8 Jim Holt, "At the Core of Science" (a review of Steven Weinberg, To Explain the World: The Discovery of Modern Science, Harper, 2015), The New York Review of Books, vol. LXII, no. 14 (September 24, 2015), p. 54.
- ↑ John R. Searle, “What Your Computer Can’t Know”, The New York Review of Books, 9 October 2014, p. 52.
- 1 2 3 4 5 6 7 8 9 John R. Searle, “What Your Computer Can’t Know”, The New York Review of Books, 9 October 2014, p. 52.
- ↑ John R. Searle, “What Your Computer Can’t Know”, The New York Review of Books, 9 October 2014, p. 53.
- 1 2 3 4 5 6 7 8 John R. Searle, “What Your Computer Can’t Know”, The New York Review of Books, 9 October 2014, p. 54.
- 1 2 3 4 John R. Searle, “What Your Computer Can’t Know”, The New York Review of Books, 9 October 2014, p. 55.
- 1 2 3 4 Bolesław Prus, On Discoveries and Inventions, p. 18.
- ↑ John R. Searle, “What Your Computer Can’t Know”, The New York Review of Books, 9 October 2014, pp. 54–55.
- ↑ Bolesław Prus, On Discoveries and Inventions: A Public Lecture Delivered on 23 March 1873 by Aleksander Głowacki [Bolesław Prus], Passed by the [Russian] Censor (Warsaw, 21 April 1873), Warsaw, Printed by F. Krokoszyńska, 1873, p. 12.
- ↑ Bolesław Prus, On Discoveries and Inventions, p. 3.
- 1 2 Bolesław Prus, On Discoveries and Inventions, p. 4.
- ↑ Bolesław Prus, On Discoveries and Inventions, pp. 3–4.
- ↑ Bolesław Prus, On Discoveries and Inventions, p. 12.
- ↑ Bolesław Prus, On Discoveries and Inventions, pp. 12–13.
- 1 2 Bolesław Prus, On Discoveries and Inventions, p. 13.
- ↑ Bolesław Prus, On Discoveries and Inventions, pp. 13–14.
- 1 2 3 4 Bolesław Prus, On Discoveries and Inventions, p. 14.
- ↑ Bolesław Prus, On Discoveries and Inventions, pp. 14–15.
- 1 2 3 Bolesław Prus, On Discoveries and Inventions, p. 15.
- ↑ Bolesław Prus, On Discoveries and Inventions, pp. 15–16.
- 1 2 Bolesław Prus, On Discoveries and Inventions, p. 16.
- ↑ Bolesław Prus, On Discoveries and Inventions, pp. 16–17.
- 1 2 Bolesław Prus, On Discoveries and Inventions, p. 17.
- ↑ Ludicrous as this metaphor for the process of invention may sound, it brings to mind some experiments that would soon be done by Prus' contemporary, the inventor Thomas Edison—nowhere more so than in his exhaustive search for a practicable light-bulb filament. (Edison's work with electric light bulbs also illustrates Prus' law of gradualness: many earlier inventors had previously devised incandescent lamps; Edison's was merely the first commercially practical incandescent light.)
- ↑ The reference to a thread appears to be an allusion to Ariadne's thread in the myth of Theseus and the Minotaur.
- ↑ Bolesław Prus, On Discoveries and Inventions, pp. 18–19.
- 1 2 Bolesław Prus, On Discoveries and Inventions, p. 19.
- ↑ Bolesław Prus, On Discoveries and Inventions, pp. 19–20.
- 1 2 3 Bolesław Prus, On Discoveries and Inventions, p. 20.
- ↑ Bolesław Prus, On Discoveries and Inventions, pp. 20–21.
- 1 2 Bolesław Prus, On Discoveries and Inventions, p. 21.
- ↑ Bolesław Prus, On Discoveries and Inventions, p. 22.
- ↑ Bolesław Prus, On Discoveries and Inventions, p. 5.
- ↑ Bolesław Prus, On Discoveries and Inventions, p. 24.
- ↑ Lawrence M. Krauss, "The Universe: 'The Important Stuff Is Invisible'", The New York Review of Books, vol. LXIII, no. 4 (March 10, 2016), p. 37.
- 1 2 3 4 5 6 7 8 Amber Williams, "Sleeping Beauties of Science: Some of the best research can slumber for years", Scientific American, vol. 314, no. 1 (January 2016), p. 80.
- ↑ Merton, Robert K. (1963). "Resistance to the Systematic Study of Multiple Discoveries in Science". European Journal of Sociology. 4 (2): 237–282. doi:10.1017/S0003975600000801. Reprinted in Robert K. Merton, The Sociology of Science: Theoretical and Empirical Investigations, Chicago, University of Chicago Press, 1973, pp. 371–82.
- ↑ Merton, Robert K. (1973). The Sociology of Science: Theoretical and Empirical Investigations. Chicago: University of Chicago Press. ISBN 0-226-52091-9.
- ↑ Merton's hypothesis is also discussed extensively by Harriet Zuckerman. Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, Free Press, 1979.
- ↑ Hall, A. Rupert (1980). Philosophers at War: The Quarrel between Newton and Leibniz. New York: Cambridge University Press. ISBN 0-521-22732-1.
- ↑ Tori Reeve, Down House: the Home of Charles Darwin, pp. 40–41.
- ↑ Robert K. Merton, On Social Structure and Science, p. 307.
- ↑ Robert K. Merton, "Singletons and Multiples in Scientific Discovery: a Chapter in the Sociology of Science," Proceedings of the American Philosophical Society, 105: 470–86, 1961. Reprinted in Robert K. Merton, The Sociology of Science: Theoretical and Empirical Investigations, Chicago, University of Chicago Press, 1973, pp. 343–70.
- ↑ Christopher Kasparek, "Prus' Pharaoh: the Creation of a Historical Novel," The Polish Review, vol. XXXIX, no. 1 (1994), pp. 45–46.
- ↑ Lamb and Easton, Multiple Discovery, chapter 9: "Originality in art and science."
- ↑ Christopher Kasparek, "Prus' Pharaoh: the Creation of a Historical Novel," pp. 45–46.
- ↑ Kasparek had earlier written about recombinant conceptualization in his review of Robert Olby, The Path to the Double Helix (Seattle, University of Washington Press, 1974), in Zagadnienia naukoznawstwa (Logology, or Science of Science), Warsaw, vol. 14, no. 3 (1978), pp. 461–63. Cited in Christopher Kasparek, "Prus' Pharaoh: the Creation of a Historical Novel," pp. 45–46.
- ↑ G.W. Trompf, The Idea of Historical Recurrence in Western Thought, from Antiquity to the Reformation, Berkeley, University of California Press, 1979, ISBN 0-520-03479-1, passim.
- ↑ Bolesław Prus, On Discoveries and Inventions, pp. 12–14.
- ↑ Stefan Theil, "Trouble in Mind: Two years in, a $1-billion-plus effort to simulate the human brain is in disarray. Was it poor management, or is something fundamentally wrong with Big Science?", Scientific American, vol. 313, no. 4 (October 2015), p. 38.
- 1 2 3 Stefan Theil, "Trouble in Mind", Scientific American, vol. 313, no. 4 (October 2015), p. 38.
- ↑ Stefan Theil, "Trouble in Mind", Scientific American, vol. 313, no. 4 (October 2015), pp. 38–39.
- ↑ Stefan Theil, "Trouble in Mind", Scientific American, vol. 313, no. 4 (October 2015), pp. 36, 38.
- 1 2 Stefan Theil, "Trouble in Mind", Scientific American, vol. 313, no. 4 (October 2015), p. 39.
- 1 2 3 4 Stefan Theil, "Trouble in Mind", Scientific American, vol. 313, no. 4 (October 2015), p. 42.
- 1 2 3 4 5 Nathan Myhrvold, "Even Genius Needs a Benefactor: Without government resources, basic science will grind to a halt", Scientific American, vol. 314, no. 2 (February 2016), p. 11.
- ↑ Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, New York, The Free Press, 1977, pp. 99–100.
- 1 2 Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, New York, The Free Press, 1977, p. 42.
- 1 2 Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, New York, The Free Press, 1977, p. 104.
- 1 2 Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, New York, The Free Press, 1977, p. 105.
- 1 2 Claire Pomeroy, "Academia's Gender Problem", Scientific American, vol. 314, no. 1 (January 2016), p. 11.
- 1 2 Susana Martinez-Conde, Devin Powell and Stephen L. Macknik, "The Plight of the Celebrity Scientist", Scientific American, vol. 315, no. 4 (October 2016), p. 65.
- 1 2 Susana Martinez-Conde, Devin Powell and Stephen L. Macknik, "The Plight of the Celebrity Scientist", Scientific American, vol. 315, no. 4 (October 2016), p. 66.
- ↑ Susana Martinez-Conde, Devin Powell and Stephen L. Macknik, "The Plight of the Celebrity Scientist", Scientific American, vol. 315, no. 4 (October 2016), p. 67.
Bibliography
- Freeman Dyson, "The Case for Blunders" (review of Mario Livio, Brilliant Blunders: From Darwin to Einstein—Colossal Mistakes by Great Scientists that Changed Our Understanding of Life and the Universe, Simon and Schuster), The New York Review of Books, vol. LXI, no. 4 (March 6, 2014), pp. 4–8.
- A. Rupert Hall, Philosophers at War: The Quarrel between Newton and Leibniz, New York, Cambridge University Press, 1980, ISBN 0-521-22732-1.
- Jim Holt, "At the Core of Science" (a review of Steven Weinberg, To Explain the World: The Discovery of Modern Science, Harper, [2015], 416 pp., $28.99, [ISBN 978-0062346650]), The New York Review of Books, vol. LXII, no. 14 (September 24, 2015), pp. 53–54.
- Steven Johnson, Where Good Ideas Come From: The Natural History of Innovation, New York, Riverhead Books, 2010, ISBN 978-1-59448-771-2.
- Christopher Kasparek, "Prus' Pharaoh: the Creation of a Historical Novel," The Polish Review, vol. XXXIX, no. 1 (1994), pp. 45–50.
- Christopher Kasparek, review of Robert Olby, The Path to the Double Helix (Seattle, University of Washington Press, 1974), in Zagadnienia naukoznawstwa (Logology, or Science of Science), Warsaw, vol. 14, no. 3 (1978), pp. 461–63.
- Q[ing] Ke; et al. (2015). "Defining and identifying Sleeping Beauties in science". Proc. Natl. Acad. Sci. USA 112: 7426–7431. doi:10.1073/pnas.1424329112.
- Lawrence M. Krauss, "The Universe: 'The Important Stuff Is Invisible'", The New York Review of Books, vol. LXIII, no. 4 (March 10, 2016), pp. 37–38, 40.
- David Lamb and S.M. Easton, Multiple Discovery: The Pattern of Scientific Progress, Amersham, Avebury Press, 1984, ISBN 0-86127-025-8.
- Susana Martinez-Conde, Devin Powell and Stephen L. Macknik, "The Plight of the Celebrity Scientist", Scientific American, vol. 315, no. 4 (October 2016), pp. 64–67.
- Robert K. Merton, On Social Structure and Science, edited and with an introduction by Piotr Sztompka, University of Chicago Press, 1996.
- Robert K. Merton, The Sociology of Science: Theoretical and Empirical Investigations, Chicago, University of Chicago Press, 1973.
- Nathan Myhrvold, "Even Genius Needs a Benefactor: Without government resources, basic science will grind to a halt", Scientific American, vol. 314, no. 2 (February 2016), p. 11.
- Thomas Nagel, "Listening to Reason" (a review of T.M. Scanlon, Being Realistic about Reasons, Oxford University Press, 132 pp.), The New York Review of Books, vol. LXI, no. 15 (October 9, 2014), pp. 47–49.
- Maria Ossowska and Stanisław Ossowski, "The Science of Science", reprinted in Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, Dordrecht, Holland, D. Reidel Publishing Company, 1982, pp. 82–95.
- Bolesław Prus, On Discoveries and Inventions: A Public Lecture Delivered on 23 March 1873 by Aleksander Głowacki [Bolesław Prus], Passed by the [Russian] Censor (Warsaw, 21 April 1873), Warsaw, Printed by F. Krokoszyńska, 1873.
- Tori Reeve, Down House: the Home of Charles Darwin, London, English Heritage, 2009.
- John R. Searle, “What Your Computer Can’t Know” (review of Luciano Floridi, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality, Oxford University Press, 2014; and Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014), The New York Review of Books, vol. LXI, no. 15 (October 9, 2014), pp. 52–55.
- Michael Shermer, "Scientia Humanitatis: Reason, empiricism and skepticism are not virtues of science alone", Scientific American, vol. 312, no. 6 (June 2015), p. 80.
- Klemens Szaniawski, "Preface", Polish Contributions to the Science of Science, Dordrecht, Holland, D. Reidel Publishing Company, 1982, ISBN 83-01-03607-9, pp. VII–X.
- Zygmunt Szweykowski, Nie tylko o Prusie: szkice (Not Only about Prus: Sketches), Poznań, Wydawnictwo Poznańskie, 1967.
- Stefan Theil, "Trouble in Mind: Two years in, a $1-billion-plus effort to simulate the human brain is in disarray. Was it poor management, or is something fundamentally wrong with Big Science?", Scientific American, vol. 313, no. 4 (October 2015), pp. 36–42.
- G.W. Trompf, The Idea of Historical Recurrence in Western Thought, from Antiquity to the Reformation, Berkeley, University of California Press, 1979, ISBN 0-520-03479-1.
- Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, Dordrecht, Holland, D. Reidel Publishing Company, 1982, ISBN 83-01-03607-9.
- Bohdan Walentynowicz, "Editor's Note", Polish Contributions to the Science of Science, Dordrecht, Holland, D. Reidel Publishing Company, 1982, ISBN 83-01-03607-9, pp. XI–XII.
- Florian Znaniecki, "The Subject Matter and Tasks of the Science of Knowledge" (English translation), in Bohdan Walentynowicz, ed., Polish Contributions to the Science of Science, Dordrecht, Holland, D. Reidel Publishing Company, 1982, ISBN 83-01-03607-9, pp. 1–81.
- Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, New York, The Free Press, 1977.
External links
- "500 Years of Italian Master Drawings from the Princeton University Art Museum", at Stanford University's Cantor Arts Center, through 24 August 2015