Dr Geoffrey Dean & Research Sceptical of Astrology
The pitfalls of trying to prove a negative.

Flawed Experiments being passed off as Failed Experiments

Many sceptics claim that astrology has consistently failed over many years in ‘thousands of scientific tests’. This popular myth has been duplicated on many websites and even in published reports. (Jones 2011)[1] I have asked numerous critics of astrology, including Dr Geoffrey Dean, to cite their best single test. The answer is usually vague: "There are so many, I cannot think of a particular one that stands out." Here we look at the extensive work of Dean, who is widely cited by sceptical websites.

Geoffrey Dean - dedicated and prolific researcher and writer

Dr Geoffrey Dean, from Perth, Australia, is an analytical chemist and a one-time astrologer who became a sceptic. In 1977, Dean compiled Recent Advances in Natal Astrology: A Critical Review 1900-1976. (Dean 1977) Though many of the experiments have been superseded, the book remains a seminal tome for astrological research. Dean has since become astrology's most dedicated and relentless critic and a CSICOP fellow, and given that he lectures on astrology, he could be described as a professional sceptic.
What is most remarkable about Dean is his prolific output: papers, endless debates in journals, the creation of CSICOP's preferred astrology website, and contributions over two decades to sceptical books and journals, notably CSICOP's Skeptical Inquirer. This is a working career dedicated to the impossible task of proving that there is nothing 'out there'! No astrologer has ever had the resources and time to produce even a fraction of his work. What drives him to do this remains a mystery. It is certainly not for science, as scientific evidence in support of astrology would be a huge coup for science.

Dean's most famous experiments - good science or smoke and mirrors?

  1. The 'Unpublished' Unaspected Planets Study

    In Recent Advances (Dean 1977), Dean outlined his own promising two-year study of 20-40 cases for each of ten unaspected planets compared with controls. Meanwhile, another independent analysis of 'several dozen examples of each unaspected planet' was conducted by E.A. Moore in New Jersey. (Moore 1976) According to Dean, "In general their findings are in good agreement. Both workers found that the planetary principle is unchanged by lack of major aspects...". However, despite its great potential, Dean's original study has remained in the file drawer and has yet to be published in a peer-reviewed scientific journal. (Dean 1975)

  2. Dean's Phantom Time-Twin Study:

    Dean's study involving 2,101 people born in London between 3 and 9 May 1958 appears to have great potential. Though he refers to it in his paper on psi (Dean 2003 p.188) [2] (Dean, forthcoming), Dean has yet to publish his results and will not share this government data. As I write, Dean has sat on these data for seven years, prompting some to wonder if there are unreported significant patterns in the time-twin data. Here is yet another study confined to the file drawer.[2]

  3. Dean's Flawed Meta Analysis:

    How is it possible to assess all the astrological experiments? Only a meta-analysis enables a quantitative review and synthesis of multiple studies. The most common approach is to test the statistical significance of the combined results across studies, known as a combined test. The alternative method, which Dean selected, is to measure the magnitude of the effects across the studies, known as the effect size. (Dean 2013)
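The difference between the two approaches can be sketched in a few lines of Python (a minimal illustration with made-up numbers, not Dean's data): a combined test pools the significance of the individual studies, while an effect-size meta-analysis pools their magnitudes.

```python
from statistics import NormalDist

nd = NormalDist()

def stouffer_combined_p(p_values):
    """Combined test: pool one-sided p-values using Stouffer's method."""
    zs = [nd.inv_cdf(1 - p) for p in p_values]
    return 1 - nd.cdf(sum(zs) / len(zs) ** 0.5)

def weighted_mean_effect(effects, sample_sizes):
    """Effect-size meta-analysis: sample-size-weighted mean effect."""
    return sum(e * n for e, n in zip(effects, sample_sizes)) / sum(sample_sizes)

# Three hypothetical studies, individually marginal:
combined = stouffer_combined_p([0.04, 0.03, 0.20])                  # pooled significance
mean_es = weighted_mean_effect([0.10, 0.30, 0.05], [100, 100, 200])  # pooled magnitude
```

The two summaries can disagree: many small but consistent results may combine to high significance while the pooled effect size stays modest, which is why the choice of summary matters.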

    Statistician Professor Gene V. Glass, who pioneered the technique of meta-analysis, identified four criticisms of meta-analysis design. (Glass 1983) As we will see, Dean managed to breach all four in his largest meta-analytical study:
    1. Mixing Apples and Oranges.
      “Logical conclusions cannot be drawn by comparing and aggregating studies that include different measuring techniques, definitions of variables and subjects because they are too dissimilar.” (Wolf 1986)
      a.) Vedic Astrology In Dean’s meta-analysis of astrologers’ ability to match birth charts to their owners, the largest and least effective (in astrological terms) study was by Narlikar (2009). However, this was a test of the ability of 200 Vedic (Indian) astrologers to identify ‘mentally retarded’ children. Narlikar stated in his study that Vedic astrology is “fundamentally different from both Chinese and Western astrology”. If Dean was going to include Vedic astrology, why did he not include Jeffrey Armstrong's test with Michael Shermer, editor of Skeptic magazine?
      b.) Chinese 'Astrology' Also included is a test by Rudolf Smit (2000) of blind matching of horoscopes by eight astrologers recruited over the Internet. “Most of them used Chinese Astrology”.
      c.) Unrelated to hypothesis As Dean’s analysis seeks to measure how well astrologers can match birth charts, it should be based on what astrologers claim. Predicting intellectual disability, accidental death, suicide or inclination to murder (which together account for over a quarter of the tests) is not part of current Western astrological practice. Of course, these subjects are worthy of research. Some have shown high effect sizes in favour of an astrological hypothesis, but they are exploratory and should not be used to judge the practice of astrology.
    2. Many Sub-Standard Tests.
      "Results of meta-analyses are uninterpretable because results from “poorly” designed studies are included along with results from “good” studies."
      a.) Flawed A number of studies that have been shown to be flawed were included, such as McGrew and McFall (1990), which was flawed even in the opinion of Dean (Berzins 1992), or inappropriate, such as James Randi's test of a single astrologer (1981).
      b.) Misreporting The original flawed results from Carlson are included. The authors list the effect size as 0.017, but Ertel (2009) has shown it to be 0.15. Also, the separate rating test of 100 charts in the Carlson study, which was statistically significant, is not included; Ertel states the effect size is 0.10 with p = .037 (2009). Some of the reported results are unpublished and based on personal communication: Press (1977), Neher (1987) and Wunder (2004).
      c.) Anecdotal Eight of the sources were unpublished and mostly reliant on personal communication with Dean (Berzins, Dean on tour 1982, Fourie, Gauquelin, Neher, Press, Smit 2000, Wunder). This meant that seventeen tests could be classified as anecdotal and certainly not subject to peer review.
    3. Duplication.
      "Multiple results from the same study are often used which may bias or invalidate the meta-analysis and make the results appear more reliable than they really are, because the results are not independent."
      • The 69 results come from only 44 independent tests, inflating the apparent number of results to roughly 157% of the number of independent tests, which makes the meta-analysis appear more reliable than it is and biases the outcome.
      • The lack of success in matching charts with suicides in the New York Suicide Study is repeated eight times using minor variations (Press 1977).
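How far dependent duplicates can inflate apparent reliability is easy to see with a toy calculation (hypothetical p-values, using Fisher's method, which assumes its inputs are independent):

```python
import math

def fisher_combined_p(p_values):
    """Fisher's method for combining p-values assumed to be independent.
    The statistic -2 * sum(ln p) is chi-square with 2k degrees of freedom."""
    x = -2 * sum(math.log(p) for p in p_values)
    k = len(p_values)
    # Closed-form survival function for a chi-square with even df (2k).
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

p_once = fisher_combined_p([0.15])            # one unremarkable result: p = 0.15
p_duplicated = fisher_combined_p([0.15] * 8)  # the same result entered 8 times
```

Eight dependent copies of a single non-significant result masquerade as a combined p below 0.05, purely because the method was told they were independent.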
    4. Publication Bias.

      “Publication bias is the tendency on the parts of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings. ... Prevention of publication bias is important both from the scientific perspective (complete dissemination of knowledge) and from the perspective of those who combine results from a number of similar studies (meta-analysis).” (Dickersin 1990)
      Meta-Analyses suffer from publication bias. While some astrologers are strongly motivated to produce positive results, very few do any systematic studies that would be acceptable for publication in a journal. Many were educated in the humanities and lack training in science. In addition, there is no funding for astrological research.

      On the other hand, no matter how honourable their intentions, scientists who produce evidence that favours astrology are disinclined to report their results. A case in point is Professor Cajochen, who sat for four years on his study showing a correlation between sleep and lunar phase. He stated that he feared being considered a 'lunatic'.[3] This reluctance is stronger for researchers who are connected with organisations dedicated to debunking the paranormal. Dean himself has, possibly unwittingly, contributed to this file drawer effect by not publishing at least two of his own studies.

      This withholding of results, or reluctance to publish, is partly down to a taboo against fringe subjects and partly because scientists are warned that "the more extraordinary a claim, the heavier is the burden of proof demanded." ~ Marcello Truzzi. This failure to report research, the "file drawer effect", can distort a meta-analysis.
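One standard way to quantify the file drawer effect is Rosenthal's 'fail-safe N' (Rosenthal 1979): the number of unpublished null studies that would have to be sitting in file drawers to cancel a combined significant result. A minimal sketch with hypothetical z-scores:

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: the number of additional studies averaging
    z = 0 needed to pull the Stouffer-combined z below one-tailed
    significance at alpha = .05 (critical z = 1.645)."""
    total = sum(z_scores)
    return max(0.0, (total / z_alpha) ** 2 - len(z_scores))

# Three hypothetical positive studies:
drawer_size = fail_safe_n([2.0, 2.5, 1.8])  # between 11 and 12 null studies
```

A small fail-safe N means a handful of unreported studies could overturn the combined finding, which is why withheld results matter so much to a meta-analysis.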

    Why did Dean choose to present his data in terms of effect size rather than the most common measure, statistical significance? A probable reason is that a growing number of astrological studies (Gauquelin, the Redhead studies, Carlson as reviewed, Timm & Köberl, Müller, Fuzeau-Braesch, Ruis, Tarvainen et al.) are confirming high levels of significance favouring astrology. This has left Dean's large studies quite isolated and starting to look like anomalies. One way around this is to make effect size the arbiter of validity. Duplicating results and including small, anecdotal or flawed studies serves to disguise and dilute these large significant studies.

    For many years, Dean's meta-analysis was published without citation. I speculated that given that Dean has been involved in almost every sceptical 'investigation' into astrology since 1985, his list would include faulty studies and as a result "it is not unreasonable to assume that the Meta-Analyses suffer from GIGO (Garbage In, Garbage Out) or the fallacy that Many Wrongs do not make a Right."

    When the list of studies was finally published in Astrology under Scrutiny (2013), of which Geoffrey Dean is principal compiler, my prediction turned out to be correct. However, I had underestimated the extent of the problems. Not only were many low-quality studies included (by Dean’s own admission), as I had anticipated, but there were also studies of techniques fundamentally different from Western astrology (Chinese and Vedic), studies whose hypotheses do not correspond to the claims of typical astrologers, omissions, and misreported significant effect sizes (Carlson). Yet Dean rigidly and unreasonably insisted that “Astrologers are unable to match birth charts to their owners better than guessing!” One has to ask: what would it take for a dyed-in-the-wool sceptic to review his unscientific beliefs? There are still sceptical sites proclaiming this as their best evidence against astrology.[4]


  4. Can astrology predict Extraversion and Neuroticism?

    Dean's biggest experiment was entitled "Can astrology predict E and N?", published in Correlation (Dean 1985). He used an impressive original sample of 1,198 test subjects, mostly born in the Southern Hemisphere, who had all completed the Eysenck Personality Inventory (EPI). (Eysenck 1975) He selected and studied the charts of 288 subjects whose EPI scores for Extraversion (E) and Neuroticism (N) were extreme. Having failed to find significance in the results, he then had 45 astrologers attempt to blind match a smaller selection of 160 extreme cases. Dean claimed that the results were no better than chance. However, despite his conclusion, the tests failed to address the practice of astrology.
    1. Fundamental Problem of comparing Astrology with Psychological Profiles
      • Are self-compiled psychological tests accurate?
        The accuracy of a self-reported test relies on the honesty and self-awareness of the respondent. Typical questions are not specific, and answers often depend on context. Anyone aware of Eysenck's lie-detector questions can optimize their answers for social desirability, or to seek or avoid being labelled as a stereotype. Experience with the Carlson test shows that subjects were unable to identify their own self-reporting test results, suggesting that they may not be reliable. The problem is that these tests lack the objectivity and incisiveness of astrology.[5]
      • There is no standard among many different Psychological Profile techniques
        In this test the Eysenck Personality Inventory [EPI] is assumed to be the objective 'gold standard' against which character is measured. However, there is still no consensus among psychologists. In addition, the EPI is just one of many quite different psychological tests. Moreover, the EPI is also one of a number of psychological tests devised by Eysenck and in retrospect it can be seen as a work in progress in view of Eysenck's later updates.
    2. The EPI is an unreliable test for measuring astrology
      • The Myers Briggs Type Indicator is closer to astrology
        The more popular Myers Briggs Type Indicator (MBTI) is closer to astrology than the Eysenck Personality Inventory (EPI), having originated from the writings of Carl Jung, who had studied astrology. The MBTI also allows scope for both extraversion and introversion, rather than Eysenck's dualistic model of polarity. However, even the MBTI is not entirely faithful to Jung's original concept of introversion, as there is no mention of introspection or reflectiveness.
      • EPI revealed more about age and culture than personality
        Though Eysenck's EPI claims universality, constancy and stability, the actual results in this test varied considerably by culture: Australians were significantly more extravert and neurotic than New Zealanders. Even more of a problem was that age dominated the study more than character did. The youngest subjects, who were university students supercharged on their hormones, inevitably scored about double that of the oldest subjects in both E and N! The mean ages were over five years apart for both E+ and N+ when compared to E- and N-. Dean covered up this flaw by mocking the astrologers for performing worse than the age trend. Sure ... the astrologers could have cheated by rating extraversion and neuroticism simply by a subject's age. But it was the capability of astrology to measure personality regardless of age that was being tested. Clearly, a measuring system that revealed more about age and culture than personality was inappropriate as a test for astrology.
      • Eysenck's E & N typologies fail to match the 4 Elements despite apparent similarities.
        Eysenck's definitions of Extraversion and Neuroticism differed greatly from astrological tradition and the ancient model of the four temperaments, even though there is a superficial resemblance. For example, a careful examination of Eysenck's traits reveals that Earth can be neurotic (N+), with traits like pessimistic, sober, reserved, quiet and unsociable, and Air can be introvert (E-), with qualities like peaceful, thoughtful and even-tempered. Dean, however, following traditional attributions, worked on the opposite assumption. His fundamental mis-attribution would undoubtedly also have misled the astrologers (who were even less familiar with the EPI), and this made blind matching somewhat haphazard.
    3. No prior research or claims relating astrology to the EPI that could be tested.
      This research was designed as a test of the validity of astrology. However, there had been no prior research or study of the EPI and astrology. No astrologer had made any claims relating Eysenck's system (EPI) to astrology, so any test would have been based on general comments and conjecture about 'extroversion' and emotional stability that were disconnected from Eysenck's very particular typologies. And despite his initial failure to find results, Dean persisted in setting up the astrologers to fail at what he knew he was unable to do. So rather than a realistic test of astrology, this experiment appears to be the creation of a straw-man argument.
    4. Why treating small samples of outliers & anomalies as typical can be misleading:
      By testing only the extreme results (1/15th) in a large sample of self-completed personality questionnaires, instead of the standard 1/3rd, many issues arose. The remaining extreme subjects were mainly outliers and anomalies. By treating them as typical subjects for astrologers, the data became misleading due to an exclusion bias.
      • Risk of Sampling Bias, Exclusion Bias & Cherry Picking
        According to Dean, "to make each direction clear-cut", he reduced the sample size to 40 in each of 4 categories (High v Low Extraversion and High v Low Neuroticism) by taking the 6.66% most extreme cases. (Dean 1986)

        Why does this matter? If you took the last fifteenth of a traditional bottle of wine, you would get the dregs. Would it be fair to test a wine expert on this sample rather than on a typical sample?

        Normal practice in psychology would be to take the extreme 33.3% on either side of the continuum. This would avoid giving such emphasis to the outliers and anomalies. Now, if Dean had reviewed the Eysenck Personality Inventory (EPI) results (Eysenck 1975) and chosen the extremes knowing that the data was not typical, then the data was cherry-picked. If, as is more likely, it was an arbitrary choice or done to keep the sample size manageable, then there is a problem with sampling bias, and exclusion bias in particular, since typical or average examples of Neuroticism and Extraversion were removed by taking only extreme cases. There is no doubt that extreme data is valuable for research. However, in this case Dean was using the untypical to judge typical practice. This sampling error calls into question the conclusions that he drew from the experiment.
      • Small samples mainly comprised of outliers and anomalies
        Dean broke up his original large sample (~1.2k) in two ways. He used a small sample of 288 for his own research and 160 to test the astrologers. The 288 were divided into 8 groups of 36 subjects of a certain typology (e.g. E+N-). He then split each group of 36 into two groups of 18, which removed all possibility of significant results. By filtering out 76%-87% of 'normal' cases and keeping only extremes, each small group was dominated by outliers and anomalies. Outliers are often the result of a measurement error or, in a personality test, psychopathy or a hoax response where spoof answers are given to convey a ridiculous extreme personality. So when he refined and divided up his samples, Dean enabled these rogue responses to have an exaggerated impact on the results. By doing this, any astrological effects were masked or could be dismissed as random. This practice is contrary to Gauquelin's and other studies of astrology, which have shown that large samples (10k+) are required to overcome artifacts and to demonstrate the effects of astrology.
      • Extremes are more likely to express their "Shadow Aspect"
        In Jungian and other branches of psychology, extreme self-definitions of personality (the outliers and anomalies) are often a cover for the opposite traits. This results in what is known as the shadow personality.[6] An example of a shadow personality documented by research: homophobic individuals have been shown to be aroused by male homosexual stimuli despite being in denial or unaware of this.[7] So their self-reported descriptions of their character would be the polar opposite of their nature.
    The fact remains that any subjective, age and culture-dependent psychological profile based on self-reporting will be an unsuitable match with an objective, life-time, cross-cultural astrological analysis.
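The disproportionate weight of rogue responses in a narrow extreme cut can be demonstrated with a toy simulation (hypothetical scores, not the EPI data): a handful of extreme 'hoax' responses make up a far larger share of a 1/15th tail than of the conventional extreme third.

```python
import random

random.seed(42)

# 1,190 genuine questionnaire scores plus 10 hypothetical hoax responses
# pinned at an absurd extreme (spoof answers giving a ridiculous profile).
scores = [random.gauss(0, 1) for _ in range(1190)] + [6.0] * 10

def rogue_share(num, den):
    """Share of hoax responses within the top num/den fraction of scores."""
    k = len(scores) * num // den
    top = sorted(scores, reverse=True)[:k]
    return sum(1 for s in top if s >= 6.0) / k

narrow_tail = rogue_share(1, 15)  # an extreme ~6.7% cut, as in Dean's design
broad_tail = rogue_share(1, 3)    # the conventional extreme third
```

The ten hoaxes land at the top of both cuts, so they make up five times as large a share of the narrow tail (10 of 80) as of the broad one (10 of 400).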

  5. The Carlson Double Blind Astrology Test (1985) - which now supports astrology

    Dr Dean happened to be in California when Shawn Carlson performed his astrology test in the early 1980s (Carlson 1985), some eight thousand miles from his home in Perth, Australia. Dean was able to provide advice at the time and later by mail.[8] The test was overseen, sponsored and published by members of CSICOP, a group considered by some scientists to be dedicated to debunking metaphysical claims.[9] The tests have since been criticised by three professors, including Hans Eysenck (Eysenck 1986), for a faulty conclusion (Type II error) and design (data selection bias, sampling bias and mathematical error). (Currey 2011) Professor Suitbert Ertel has shown that where astrologers were able to rate their performance, they performed to a statistically significant level that cannot be dismissed as chance (p=.037). (Ertel 2009)

Why are so many astrology tests flawed?

In their research, Eysenck and Nias came across “study after study where the whole experiment had to be faulted because of quite elementary errors in the choice or interpretation of psychological measuring instruments, errors which would be obvious to a first-year student of psychology”. (Eysenck 1982)

"Most astrological research is marred by errors in methodology and statistical treatment. It would seem that such methodological errors are made not only by astrologers, but equally by critics who attempt to disprove astrological claims." ~ Professor Hans Eysenck (Eysenck 1983)
Dean and Mather take a more upbeat view (perhaps referring to their own tests), commenting: "During the 1990s ... thanks to computers and journals like Correlation, the quality of research far exceeded anything reported in Recent Advances twenty years earlier." (Dean 2000)
In the field of astrology, there are considerably more fatally flawed tests than real evidence. There are many reasons for this. There is no budget for testing astrology, and most astrologers are more motivated by the study and application of astrology than by the challenge of providing and defending scientific proof. So most tests are run by sceptics with budgets in fields like psychology, who design quantitative tests when the data requires qualitative analysis that would be better addressed by those who understand astrology. There are also real procedural hurdles to jump.
In his critical review, Dean did concede one exception: "Gauquelin has covered every possible non-astrological source of error so thoroughly that his results seem beyond doubt. ... [Gauquelin's results] support some of the most fundamental astrological concepts of all, ..." (Dean 1977) However, Dean now considers that Gauquelin's replicated results may be down to an artifact.

References

  • ^ Carlson, Shawn (1985) A Double Blind Test of Astrology. Nature, December 1985 Vol.318, pp.418-425.
  • ^ Currey, Robert (2011): U-turn in Carlson's Astrology test, Correlation. Vol.27 (2), July 2011
  • ^ Dean, Geoffrey (1975) Unaspektierte Planeten, Kosmobiologisches Jahrbuch 1977, Ebertin Verlag, Aalen, pp.111-133
  • ^ Dean, G.A. & Mather, Arthur (1977) Recent Advances in Natal Astrology. A Critical Review 1900-1976. Analogic, Subiaco, Australia
  • ^ Dean, G. (1985), ‘Can astrology predict E and N? 2: the whole chart’, Correlation, 5 (2), pp.2–24.
  • ^ Dean, G. (1986), ‘Can astrology predict E and N? 3: discussion and further research’, Correlation, 6 (2), pp. 7–52. Includes meta-analyses of astrological studies.
  • ^ Dean, G., Mather, A. & Kelly, I.W. (1996), ‘Astrology’, in The Encyclopedia of the Paranormal, ed. G. Stein (Amherst, NY: Prometheus Books), pp. 47–99. Includes meta-analyses, effect size comparisons and artifacts.
  • ^ Dean, G. & Mather, A. (2000) Patron of Research, Tribute to Charles Harvey Astrological Journal 42(6), pp.45-46, Nov/Dec 2000
  • ^ Dean, G. & Kelly, Ivan (2001) ‘Does astrology work? Astrology and skepticism 1975–2000’, in Skeptical Odysseys, ed.
  • ^ Dean, G. & Kelly, I.W. (2003) Is Astrology relevant to Consciousness and Psi? The Journal of Consciousness Studies, 10, No. 6–7, 2003, pp.175–198
  • ^ Dean, G., Heukelom, W., Smit, R. Terpstra, B. Nias, D. Mather, A. (2013) Astrology under Scrutiny - Close encounters with science Wout Heukelom and Cygnea van der Hooning.
  • ^ Dickersin, K. (1990) The existence of publication bias and risk factors for its occurrence JAMA 263 (10) pp.1385–1389. March 1990
  • ^ Ertel, Suitbert (2009) Appraisal of Shawn Carlson's Renowned Astrology Tests, Journal of Scientific Exploration, Vol.23, #2.pp.125-137
  • ^ Eysenck, Hans Jürgen & Eysenck, Sybil B. G. (1975) Manual of the Eysenck Personality Questionnaire. London: Hodder and Stoughton
  • ^ Eysenck, H. & Nias, D.K.B. (1982) Astrology, Science or Superstition St Martins Press, London
  • ^ Eysenck, H. (1983) Methodological Errors by Critics of Astrological Claims. Opening lecture at 3rd Institute of Psychiatry Conference, May 1983.
  • ^ Eysenck, H. (1986) Critique of “A Double-Blind Test of Astrology”, Astro-Psychological (1986) Problems, Vol.4 (1), January 1986. Eysenck wrote “The conclusion does not follow from the data”.
  • ^ Glass, Gene V., McGaw, Barry & Smith, Mary Lee (1986) Meta-analysis in social research, Sage Library of Social Research, Sage Publications
  • ^ Jones, Steve (2011) BBC Trust review of impartiality and accuracy of the BBC’s coverage of science.[1]
  • ^ Kruskal, W. (1960) Some notes on wild observations, Technometrics, University of Chicago.
  • ^ Moore, E.A. (1976) From personal communication with Dean based on printed material used in the Moore School of Astrology on unaspected planets. New Jersey 1974.
  • ^ Ripley, Brian D. 2004. Robust statistics
  • ^ Robert Rosenthal (1979) "The file drawer problem and tolerance for null results" Psychological Bulletin 86 (3): pp.638–641

Footnotes

  1. ^ BBC Trust review of impartiality and accuracy of the BBC’s coverage of science (July 2011), with an independent assessment by Professor Steve Jones and content research from Imperial College London: "Astrology is drivel because it flies in the face of four centuries of evidence, from Galileo to the latest space probe." (p.60) Professor Jones was probably not aware that Galileo was an astrologer when he compiled his detailed report, presumably paid for by the licence fee, entitled "Getting the best out of the BBC for licence fee payers". And of course, like Professor Cox, Jones argues by unsupported assertion. The question is: what is this so-called evidence? How does a space probe disprove astrology? Was it launched under inauspicious planets?
  2. ^ Dean G. & Kelly, I. W. (2003). "Is Astrology Relevant to Consciousness and Psi?". Journal of Consciousness Studies 10 (6–7): p 188 - 190.
    From personal correspondence between Robert Currey 19 July 2008 to 8 September 2008: Dr Dean claimed that he is still working on the paper. In 2008, I requested permission to publish these exchanges but Dean has yet to reply.
  3. ^ Choi, Charles (2013) Bad Sleep? Blame the Moon. Livescience, July 25, 2013. Professor Cajochen: "It took me more than four years until I decided to publish the results, because I did not believe it myself. I was really skeptical about the finding, and I would love to see a replication."
  4. ^ The original meta-analysis was published on-line (~2008) and was presumably (though it does not say) an update compiled from previous meta-analyses: Dean, G. (1986); Dean, G., Mather, A. & Kelly, I.W. (1996); Dean, G. & Kelly, I.W. (2001). The most up-to-date meta-analyses are published in Astrology Under Scrutiny (2013), with references on pages 354-355.
  5. ^ The Eysenck Personality Questionnaire for Introversion/Extraversion has different parameters. I would consider the EPQ to be an inferior measure compared with astrology. For example, practicality is ranked against reflectiveness. Which is the extraverted quality? As an astrologer, I would not equate practicality or reflectiveness with extraversion.
  6. ^ The Shadow is an unconscious aspect of the personality which is not recognised by the conscious ego and can be rejected. "Everyone carries a shadow, and the less it is embodied in the individual's conscious life, the blacker and denser it is." ~ Jung, C.G. (1938). "Psychology and Religion." Psychology and Religion: West and East. p.131.
  7. ^ Adams, H.E., Wright, L.W. Jr & Lohr, B.A. (1996) Is homophobia associated with homosexual arousal? Journal of Abnormal Psychology, 105 (3), pp.440-445, August 1996. Department of Psychology, University of Georgia, Athens. "Homophobia is apparently associated with homosexual arousal that the homophobic individual is either unaware of or denies."
  8. ^ Dean (2010). In personal email correspondence between Robert Currey and Geoffrey Dean 15 July 2010, Dean commented: "I was in California at the time Carlson was doing his experiment, was able to discuss it with him in person and subsequently by mail, and was able to meet some of the astrologers involved in it."
  9. ^ Sheldrake, Rupert (2004) Distorted Visions. Letter published in the Times Higher Education Supplement. December 17 2004. "CSICOP is an ideologically motivated debunking organisation..."
  10. ^ American Heritage® Medical Dictionary (2004) Houghton Mifflin Company.

Glossary

  • GIGO stands for Garbage In, Garbage Out. The expression originated to explain how invalid data entered into a computer program results in invalid output.
  • Cherry picking, suppressing evidence, or the fallacy of incomplete evidence is the act of pointing to individual cases or data that seem to confirm a particular position, while ignoring a significant portion of related cases or data that may contradict that position. It is argument by selective observation: in this instance, counting the misses and forgetting the hits.
  • Sampling bias in statistics is where a sample is collected in such a way that some members of the intended population are less likely to be included than others. In the Carlson experiment, the sample was from a homogeneous population, which hindered the task of differentiation. Sampling bias can also take the form of the selection of studies to support a particular hypothesis, as appears to have occurred in Dean's meta-analyses.
  • Data Selection bias can occur in several ways:
    1. When data is partitioned with knowledge of the contents of the partitions, and then analysed with tests designed for blindly chosen partitions. In the Carlson test and CSICOP's replication of the Gauquelin studies, samples were subdivided into smaller samples so that significant results were dampened down.
    2. When "bad" data is rejected on arbitrary grounds, instead of following previously stated or generally agreed criteria.
    3. When outliers are either rejected or selected exclusively; doing either on purely statistical grounds can bias the results.
  • Type II Error is the error of failing to reject a false null hypothesis or wrongly accepting a false null hypothesis.
  • Exclusion bias results from exclusion of particular groups from the sample.
  • Publication bias is the practice of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings. One aspect of this is the file drawer effect where studies are conducted but either abandoned or not reported because the results do not conform to the experimenter's expectations or intentions. A small number of papers left in a researcher's drawer can result in a significant bias. (Rosenthal 1979)
  • Outliers are observations that are numerically distant from the rest of the data and can indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. "They (Outliers) can play havoc with standard statistical methods." (Ripley 2004) "It is a dangerous oversimplification to discuss apparently wild observations in terms of inclusion in, or exclusion from, a more or less conventional formal analysis. An apparently wild (or otherwise anomalous) observation is a signal that says: 'Here is something from which we may learn a lesson, perhaps of a kind not anticipated beforehand, and perhaps more important than the main object of the study.'" (Kruskal 1960)
  • Fallacy Of Composition is assuming that a whole has the same simplicity as its constituent parts. In fact, a great deal of science is the study of emergent properties.
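The sample-partitioning effect described under Data Selection bias above can be sketched numerically (an illustrative effect size of 0.2 on unit-variance data, assumed for the example):

```python
def z_statistic(effect_size, n):
    """z statistic for a mean effect measured on n unit-variance subjects:
    z = effect_size * sqrt(n), so splitting the sample shrinks every z."""
    return effect_size * n ** 0.5

# A modest effect in one pooled sample of 300 subjects is significant...
pooled_z = z_statistic(0.2, 300)   # above the one-tailed .05 threshold of 1.645
# ...but the same effect judged in ten sub-samples of 30 never is:
sub_z = z_statistic(0.2, 30)       # below 1.645 in every sub-sample
```

Judging each partition separately can thus hide an effect that the pooled sample would show, which is the dampening mechanism alleged above.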


Robert Currey
www.twitter.com/RobertCurrey
Though Dean included a Vedic study by a sceptical researcher and studies from a BBC TV programme, he omitted this study of Vedic astrologer Jeffrey Armstrong by Michael Shermer from his meta-analysis.

    Why Dean encountered such problems in testing astrology:

  1. Accurate objective data is extremely difficult to obtain in sufficient quantities.
  2. Isolating the huge number of variables involving human behaviour and astrology is an immense challenge.
  3. Replicating the unique conditions of human beings and planetary patterns is almost impossible.
  4. The Experimenter Effect[10] shows that the behaviour, personality and conscious and unconscious expectations of the experimenter will bias the hypothesis, criteria, selection of data, format, results and conclusions. This is not chemistry: when you study human behaviour, the experimenter is part of the experiment. This leads on to questions of an Observer Effect, whereby apparently random selections, such as a control group, may be affected by the involvement and observation of the researcher.
  5. Statistics perform well in physics, chemistry or molecular biology. However, when you work with more varied and complex data, results can be skewed, misrepresented and manipulated. Just think of the controversy over projections based on objective climate data.
    How a test might be improved.


 EQUINOX © 2010 · www.astrology.co.uk