Is There a Standard Method for Judging Experiments in Meta-Analysis?

  • Thread starter GFX
In summary, the conversation highlights the topic of telekinesis and its relation to quantum theory. While some believe in the potential for telekinesis, others view it as a pseudoscientific area that should be approached with a critical eye. The conversation also mentions the potential for unexpected discoveries in the pursuit of supernatural phenomena, and the use of science for various purposes, including entertainment. Ultimately, there is no solid evidence to support the existence of telekinesis, and any claims should be approached with skepticism.
  • #1
GFX
I've read a few articles about telekinetics and all those mind sciences, and I am just wondering whether you guys believe in them or not. The articles I've read are extremely convincing and said that they have to do with quantum theory. Being only 14, I don't know much about quantum theory. I mainly want to know your opinion on telekinesis and stuff like that.
 
  • #2
My opinion is that most of these mysto-physics are pseudoscientific areas and should always be looked at with the most critical eye.

However, even if quite useless in the short term, these studies of supernatural "phenomena" may be of importance in the long run. Just think of alchemy: those guys spent hundreds of years in their quest to make gold out of rubbish. Although the quest was doomed, the hunt led to numerous discoveries which ultimately made up the foundations of modern chemistry.

So even if some maniac wastes his/her life trying to prove the connection between quantum mechanics and flying rocks, he/she still has the chance to stumble across something of importance. If not, talented people can focus on the important stuff without having to bother about the more doubtful areas of science.

I haven't seen the articles you're referring to, but do keep in mind that most things can be made to look convincing in the media. I'm also not a master of quantum physics, but I strongly doubt that someone can move anything directly by thought, and even if so, I doubt that it will have anything to do with quantum mechanics.

Remember that science can be used for many purposes. I've tried to use relativistic arguments when I was late for class... that didn't work.
 
Last edited:
  • #3
A couple of people in the circus do telekinesis. Let me tell you, it's all an act.
 
  • #4
I will go with bozo... I think psychokinesis (why do people insist on calling it telekinesis?) is just a bunch of lies, just like a lot of other 'mind-phenomena' are.

Like fortune tellers and horrorscopers - want to have some fun? Find yourself a fortune teller in your local downtown mall and, when you go in, show absolutely no emotion. That's what they base everything on: your reactions. If you show no reactions (my psychology teacher told a story of doing this himself), it really messes them up. They just don't know what to do!
 
  • #5
I think that if anybody could do telekinesis, we'd probably know about it. Let's face it, telekinesis would be an incredibly powerful weapon; even without much control over it, it would still be quite easy to use it to cause heart attacks, fatal brain hemorrhages, etc. Now there's no reason to believe that telekinetic powers should be limited to people who intend to use them for nice purposes (the laws of physics have no concept of morality; for instance, Newton's law of gravitation doesn't suddenly stop being true when you try to drop a ten-ton block of lead on somebody's head). So sooner or later we should expect that some psychopath with telekinetic powers would come along and use them to perpetrate the psychic equivalent of a shooting massacre. I think we'd notice an incident where everybody who comes near such a person suddenly dies of a fatal stroke.
This is not to say that telekinesis is necessarily impossible - merely that, given our current knowledge of consciousness, it is highly unlikely that anybody can do it.
 
  • #6
KingNothing said:
(why do people insist on calling it telekinesis?)

Both terms are correct.


As for the original post, it's perfectly possible that quantum theory could explain such phenomena and that such phenomena are real; and when I say perfectly possible, I mean I cannot definitively prove otherwise. However, there is no real evidence to suggest that this explanation is any better than any other explanation. They are all equally speculative, and equally unscientific, despite any reference to modern theories.
 
  • #7
franznietzsche said:
Both terms are correct.


As for the original post, it's perfectly possible that quantum theory could explain such phenomena and that such phenomena are real; and when I say perfectly possible, I mean I cannot definitively prove otherwise. However, there is no real evidence to suggest that this explanation is any better than any other explanation. They are all equally speculative, and equally unscientific, despite any reference to modern theories.


Well, when you look at it, is there any REAL evidence to justify the heavily belaboured quantum theory? I am afraid that it, much like Einstein's relativity, has been overused and abused by dreamers and screamers, pseudoscientists and the real McCoy alike, to accommodate whatever latest theory has the scientific community's knickers in a knot.
 
  • #8
GFX said:
I've read a few articles about telekinetics and all those mind sciences, and I am just wondering whether you guys believe in them or not. The articles I've read are extremely convincing and said that they have to do with quantum theory. Being only 14, I don't know much about quantum theory. I mainly want to know your opinion on telekinesis and stuff like that.

With the possible exception of some very, very slight statistical aberrations in laboratory results, there is no reliable evidence that anyone can do this. A few scientists are claiming that enough of these slight statistical anomalies have now been seen to qualify as scientific evidence, but I know of no major journals that have published this conclusion.

This all relates to experiments in which the subject attempts to affect the roll of a die or some other random event. No dramatic examples of this sort of thing are known. For example, no one has ever demonstrated the ability to levitate a chair.

Our mentor Hypnagogue knows about this, I think... I will try to dig up anything significant.
 
Last edited:
  • #9
Something else to mention is that seeing objects move by themselves isn't uncommon in hallucinations. I have read stories of people seeing things move in hallucinations caused by sleep deprivation, simple partial seizures, general psychotic conditions, and drugs. Stories from people in such states of mind add to the lore about telekinesis. I can remember that back in the 60s and 70s people who took drugs were very loyal to the belief that drugs allowed for a glimpse into an alternate "reality", and they felt these experiences were authentic. There were no end of stories from people who claimed the drugs allowed them to "do things with the mind" they couldn't otherwise do.
 
  • #10
RE: "The articles I've read are extremely convincing and said that they have to do with quantum theory. Me, being only 14, don't know much about quantum theory."

How could it be so convincing if it relied on science you don't understand?

RE: "A few scientists are claiming that enough of these slight statistical anomalies are now seen to so as to qualify as scientific evidence, but I know of no major journals that have published this conclusion."

This is the voodoo they call meta-analysis. When scientists trot out meta-analysis, head for the hills.
 
  • #11
JohnDubYa said:
This is the voodoo they call meta-analysis. When scientists trot out meta-analysis, head for the hills.

Can you elaborate? How does "meta-analysis" differ from qualified statistical analysis?
 
  • #12
Ivan Seeking said:
Can you elaborate? How does "meta-analysis" differ from qualified statistical analysis?
Hypnagogue described meta-analysis in detail in my thread "Eyes on the back of your head".
 
  • #14
JohnDubYa said:
This is the voodoo they call meta-analysis. When scientists trot out meta-analysis, head for the hills.

Let me ask the question this way. What precisely is your objection to the analyses already done?
 
  • #15
Assuming we are talking about the same thing, a meta-analysis combines the results of numerous studies to try to achieve statistical significance. There are numerous problems with it:

1. To be accurate, those experiments that produced negative results would have to be published at the same frequency as those with positive results. This is almost never true; in fact, it isn't even close. Those who claim that a huge number of such "file drawer" experiments would have had to be performed to discount the results of a meta-analysis are simply wrong. I can generate a completely null result by having one file-drawer experiment for each published experiment: if one experiment showed that heads appeared 52% of the time in a sample of 200 throws, then a file-drawer experiment of 200 throws showing heads appearing 48% of the time would produce a combined null result (see the numerical sketch at the end of this post). If anything, there are probably far more file-drawer experiments than published experiments.

2. The individual studies do not have the same designs. Sure, the experimenters reject those studies that have sufficiently dissimilar designs (as if you can really define "similar"). But all this does is give them one subjective means of throwing out experiments that they know will not help their cause.

3. The studies used in a meta-analysis are hand-picked, thus losing objectivity. Because meta-analysis studies are done on experiments involving very faint (probably nonexistent) phenomena, any subjectivity mixed into the process overshadows the mathematical results.

In other words, when performing experiments to discover phenomena that are extremely hard to detect, precise control conditions are vital. But meta-analysis incorporates even more subjectivity than the individual experiments.

4. You lose accountability when the individual studies were performed by researchers who did not cooperate with each other. How do you know that 10% of the researchers didn't cheat?

5. You cannot replicate the study. Suppose a meta-analysis produces evidence for an effect. How would you replicate the study?

Here is my opinion: If you don't have statistical significance, you ain't got squat. Meta-analyses are performed by researchers who simply will not take "no" for an answer because their own ideology depends on a "yes." (And "yes" always seems to be what the meta-analysis produces.)

Now, are there legitimate reasons to use a meta-analysis? Sure. It can sometimes point to phenomena that MIGHT exist and, therefore, could be a source for future experimentation. But taken alone it isn't evidence of anything. (In my opinion, of course. Others disagree. I will leave it up to the members to decide if they want to believe in its efficacy. If so, I suggest they leave their wallets at home.)
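To put numbers on the file-drawer point in item 1, here is a minimal sketch in Python (my own illustration, not from any published analysis) showing how one unpublished 48% experiment exactly cancels one published 52% experiment:

```python
# My own illustration of point 1 above: one published experiment at 52%
# heads and one unpublished "file drawer" experiment at 48% heads, each
# with 200 throws, pool to an exact null result.
from math import sqrt

def pooled_z(heads_counts, throw_counts):
    """Z-score of the pooled heads count against a fair-coin expectation."""
    heads = sum(heads_counts)
    throws = sum(throw_counts)
    expected = throws / 2                 # fair coin: half heads
    sd = sqrt(throws * 0.5 * 0.5)         # binomial standard deviation
    return (heads - expected) / sd

print(f"published alone:  z = {pooled_z([104], [200]):+.2f}")           # +0.57
print(f"with file drawer: z = {pooled_z([104, 96], [200, 200]):+.2f}")  # +0.00
```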
 
  • #16
Thought-provoking stuff, JDY. My only problem with your analysis is that you seem to be suggesting that meta-analyses tend to be biased in favour of the phenomenon under scrutiny. This may be true - and my experience is limited - but I know of at least one meta-analysis that was fairly clearly weighted against the phenomenon being investigated...

...in fact I have just grabbed an example: Newell et al. (2002), in their investigation of psychological treatments for cancer (published in a mainstream journal), coded the results of trials as nonsignificant unless more than 50% of measures of the same variable were found to be significant; e.g. if a psychometric measure of anxiety found patients to be significantly less anxious, but cortisol levels (stress hormones) had not significantly decreased, then the overall measure was counted as nonsignificant.

In general the standard of proof is raised for non-conventional research, so I am surprised to hear you say that this is not generally the case in meta-analyses. Tell me it ain't so, JDY?
 
  • #17
When performing experiments, scientists are often out to discover a phenomenon. When the phenomenon is not discovered, the value of the research drops tremendously.

It would be nice to live in a world where negative results count for as much as positive results. However, if you perform a study to find a link between (say) electromagnetic fields and cancer, your study has a much greater chance of being published if you find a positive connection between the two.

It isn't just the researchers at fault. The publications don't want to publish oodles of "Didn't Find It" articles.

So scientific research is inherently biased in favor of positive results (which is why the list of unsafe foods increases constantly). Simply put, positive results are more newsworthy and, therefore, more "worthy" for publication.

We can counteract this bias to a large extent by demanding replication before making assumptions. (But the media make the assumptions regardless.) But meta-analysis has no provision for replication.

Meta-analysis simply takes the biased results from numerous experiments and gleans an outcome. The Emperor's Nose fallacy (see Feynman) now takes over. You don't get more meaningful results simply by increasing the amount of faulty data.

My biggest complaint with meta-analysis is simple: it introduces bias, contrary to one of the main goals of statistics. What good is any statistical method that actually INTRODUCES bias?

RE: "In general the standard of proof is raised for non-conventional research,"

To my knowledge, the measure of statistical significance is not changed for non-conventional research. They simply cannot meet the goal.

One more thought: In order for a result to be convincing, the result has to be deduced from clean statistical methods free from serious objections. Since meta-analysis has so many problems associated with it, what good is it?
 
  • #18
People are now starting to catch on:

http://heartdisease.about.com/library/weekly/aa092000a.htm

"Heart Disease / Cardiology: Should calcium blockers be avoided in hypertension?

By DrRich

A recent meta-analysis strongly suggests that the use of calcium channel blockers in the treatment of high blood pressure significantly increases the risk of both heart attacks and heart failure. The report has engendered a firestorm of protest from the pharmaceutical industry and from several hypertension experts.

The report was presented in August, 2000 at the Congress of the European Society of Cardiology by Dr. Curt Furberg, Professor of Public Health Services at the Wake Forest University School of Medicine, and well-known expert on the effectiveness of cardiovascular drugs. Furberg’s study found that patients taking calcium blockers had a 27% increase in heart attacks, and a 26% increase in heart failure, as compared to patients taking other kinds of drugs for hypertension. Furberg concluded that, when treating hypertension, doctors should avoid calcium blockers whenever possible, using diuretics, beta blockers, and ACE inhibitors instead.

Responses to this report from hypertension experts and from the pharmaceutical industry were instantaneous. In press releases and in interviews with news services, various leading lights in the hypertension community described Furberg’s report as being “unbalanced,” “outrageous,” “unscientific,” “inflammatory,” and “inappropriate.” The terminology they used in private conversation, some say, was less complimentary.

The International Society of Hypertension released their own meta-analysis (the WHO-ISH study) on August 24, and found no problem with long-acting calcium blockers. [How could a sound statistical method produce two completely different answers? Answer: It can't. Meta-analysis is not sound.]

Pfizer, which sells amlodipine (the best selling calcium blocker), pointed out in a press release that only 190 of the 27,000 patients in Furberg’s study were taking their drug.

All critics agreed that the big problem with Furberg’s data was the meta-analysis he used.

What's a meta-analysis?

A meta-analysis is a relatively new statistical technique which combines the data from several clinical trials that all address a particular clinical question, in an attempt to estimate an overall answer to that question.

The reason meta-analyses are necessary is that similar clinical trials addressing a particular question frequently give different answers. The meta-analysis is a means of trying to come up with an overall best answer, given the available data from all the trials. When properly done, meta-analyses can give important insights to clinical questions, insights that can be gained in no other way.

There are many inherent problems with meta-analyses, however. By selecting which trials to include in the meta-analysis and which to exclude, by weighing which outcomes are the most appropriate to measure, and by making many other decisions in choosing the methodology for performing the meta-analysis, often (it is said), the one who performs that analysis gets to choose the outcome. The difficulty in performing a legitimate meta-analysis has led some to remark that meta-analysis is to statistical analysis what meta-physics is to physics. [BINGO!]

As a result, when the results of the meta-analysis do not agree with a particular expert’s preconceived notion, it is always extremely easy for that expert to zero in on several aspects of the meta-analysis he/she disagrees with, and that, he/she can pronounce, completely invalidates the entire study. Better yet, critics can do their own meta-analysis to counter the one they don’t like.

Thus, if meta-analyses are to be used at all, they ought ideally to be (a) performed by individuals who do not have a particular prejudice as to the outcome, and (b) performed by experts who understand the field. Unfortunately, these two criteria are often mutually exclusive. [Especially in paranormal research.]

The political-economic dynamics of the calcium blocker meta-analyses

The world calcium blocker market is estimated to be about $6 billion. Drug companies potentially have a lot to lose if these drugs fall out of favor.

The WHO-ISH meta-analysis, presented by the International Society of Hypertension and suggesting no problem with long-acting calcium blockers, was paid for by the pharmaceutical industry. Notably, the WHO-ISH study did not include heart failure as an outcome, despite the fact that calcium blockers are known to cause this problem. Furthermore, one would be hard pressed to identify a “hypertension expert” who wasn’t significantly compromised by professional and/or financial relationships with pharmaceutical companies. Indeed, it is close relationships with drug companies that often determine who is recognized as an expert, since drug companies sponsor much of the research and many of the speaking engagements that give experts their visibility. One thus ought to attach at least a few grains of salt to the WHO-ISH meta-analysis, and to the opinions expressed by many of the experts complaining about the Furberg study.

In contrast, Furberg’s study had no external funding. Furberg himself is a highly regarded investigator, thought to be relatively independent of drug company money. However, Furberg has been at significant odds with prominent players in the hypertension community for at least 5 years for his attacks on calcium blockers, and, one might argue (and some have), has a reputation to protect. His latest study tends to vindicate his efforts for the past several years – another outcome, in other words, would not have vindicated those efforts. Hence the characterization of his study as “unbalanced,” “inflammatory,” etc."
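The combining step the article describes is, in its standard fixed-effect form, an inverse-variance weighted average across trials. Here is a minimal sketch (the trial numbers are invented for illustration and are NOT the Furberg or WHO-ISH data):

```python
# Sketch of the fixed-effect (inverse-variance) pooling the quoted article
# alludes to; the three (log relative risk, standard error) pairs below
# are invented and are NOT data from the Furberg or WHO-ISH analyses.
from math import exp, log, sqrt

trials = [(log(1.30), 0.15), (log(1.10), 0.20), (log(1.45), 0.25)]

weights = [1 / se**2 for _, se in trials]  # inverse-variance weights
pooled = sum(w * lrr for (lrr, _), w in zip(trials, weights)) / sum(weights)
se_pooled = sqrt(1 / sum(weights))

lo, hi = exp(pooled - 1.96 * se_pooled), exp(pooled + 1.96 * se_pooled)
print(f"pooled relative risk = {exp(pooled):.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Note that the weights, like the decisions about which trials to include, are exactly where the analyst's judgment enters - which is the opening for the bias complained about above.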
 
Last edited:
  • #19
John, I salute you, and all who sail in you. A good point well made. I took your original point as 'meta-analyses are generally biased in favour of the topic analysed' (not a quote), whereas I was saying the bias was often in the opposite direction. The article you cite says that in fact you can make the figures say whatever you like, depending on your POV.

Nobody has said truer words than Disraeli on statistics. :smile:


JohnDubYa said:
RE: "In general the standard of proof is raised for non-conventional research,"
To my knowledge, the measure of statistical significance is not changed for non-conventional research. They simply cannot meet the goal.

To quote Coolican (1994) (an undergraduate psychology stats textbook): "If we are about to challenge a well-established theory or research finding by publishing results which contradict it, the convention is to achieve 1% significance before publication. A further reason for requiring 1% significance would be when the researcher only has a one-off chance to demonstrate an effect" (p.250).

1% as opposed to the usual 5%, i.e. the significance criterion is five times stricter (on the p-value, not on the size of the effect) when the result is non-replicable (e.g. most field and natural experiments) or lacks a sound theoretical basis (e.g. psi).
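To make the two conventions concrete, a quick sketch (my own illustration, not from Coolican): tightening alpha from 5% to 1% raises the two-tailed critical z-value from about 1.96 to about 2.58.

```python
# My own illustration of the two conventions: moving from 5% to 1%
# significance raises the two-tailed critical z-value from ~1.96 to ~2.58,
# a stricter evidence bar rather than a demand for a larger effect.
from statistics import NormalDist

for alpha in (0.05, 0.01):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    print(f"alpha = {alpha}: reject chance only if |z| > {z_crit:.2f}")
```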

If you were to be blunt about it, it could easily be said that this convention is designed to ignore subtle phenomena in favour of endorsing only the blindingly obvious! Is that the great adventure that we call science?
 
  • #20
Good catch on the last issue. I wasn't aware of the two different standards.
 
  • #21
So You People Think We Telekinetics Are Fake? Well I Got News For You Telekinesis Can Be Done By Anyone Even Einstein!
 
  • #22
If Einstein can still do it, James Randi should fork over more than a million dollars.
 
  • #23
Don't doubt anything you haven't tried doing. I have tried this for 3 months and got results. I can now move objects across the table. You try this for 5 years and see if it's fake. I can assure you it's real.
 
  • #24
Michaelpol said:
Don't doubt anything you haven't tried doing. I have tried this for 3 months and got results. I can now move objects across the table. You try this for 5 years and see if it's fake. I can assure you it's real.

Why can't you do this in front of scientists?
 
  • #25
I don't know any scientists, for one. Plus, if you get nervous, thoughts will fill your mind and prevent you from concentrating.
 
  • #26
Michaelpol said:
Plus, if you get nervous, thoughts will fill your mind and prevent you from concentrating.


Actually, that I can believe.

Where do you get your information about bugs?
 
  • #28
RE: "Dr. Dean Radin is director of the Consciousness Research Division of the Harry Reid Center for Environmental Studies at the University of Las Vegas, Nevada. Early indications from his experiments may weaken the DAT model. He placed portable random event generating machines with no feedback display at major events, such as the Academy Awards, the Super Bowl, and the O. J. Simpson trial, to see if the thoughts of many people concentrating in harmony could affect the physical world. The audiences were unwitting participants and never saw the machines. Radin found, however, that their combined intentions affected the machine's normally random output. Since the experiment offered no opportunity for precognition, a requirement for the DAT model, the recorded effects would seem to indicate psychokinetic powers at work."

Where's the control group?

It also appears that a double-blind study was not performed. If so, why the Hell not?

Frankly, I don't believe he ever performed any of these experiments. I certainly do not believe he performed all of them.
 
  • #29
JohnDubYa said:
RE: "Radin found, however, that their combined intentions affected the machine's normally random output. Since the experiment offered no opportunity for precognition, a requirement for the DAT model, the recorded effects would seem to indicate psychokinetic powers at work." QUOTE]

What about other explanations, e.g. equipment malfunction, mobile phones or other equipment being turned off or on, weather conditions, etc.? I'd like to hear from someone with technical knowledge of some other possibilities.

From the same source ( http://www.parascope.com/articles/0397/pk05.htm ):

"Jahn and Dunne undertook similar experiments, in one instance placing a portable REG at a theatrical event. ...Participants had told us that during two or three parts of the performance people would be in greater resonance," said Dunne. Because the performance ran eight times, we were able to show a strong correlation. PEAR calculated the odds against the findings being attributable to chance expectations at 2x10 to the 4th power".

1/ Correlation does not prove causation. However, the strong p-value suggested here would make further investigation warranted, to rule out alternative explanations if nothing else. After all, it was 24 instances in total, not just one.

2/ What else was different in the situation during the 2 or 3 periods mentioned, e.g. the audience being louder, vibrations due to sound (including infrasound), etc.?

I'm prepared to believe that these studies were actually carried out, but I'd like to see the full report, including details of statistical analysis, preferably in a peer-reviewed journal.

It's probably a good thing to withhold judgement until we have something a bit more tangible to chew on. But even then, at best we only ever really reach the position that something is extremely likely (e.g. that water boils at 100°C) or extremely unlikely (e.g. that at 100°C spirits are exorcised from water).
 
  • #30
"He placed portable random event generating machines with no feedback display at major events, such as the Academy Awards, the Super Bowl, and the O. J. Simpson trial, to see if the thoughts of many people concentrating in harmony could affect the physical world. The audiences were unwitting participants and never saw the machines. Radin found, however, that their combined intentions affected the machine's normally random output. Since the experiment offered no opportunity for precognition, a requirement for the DAT model, the recorded effects would seem to indicate psychokinetic powers at work."

This makes no sense; he says "many people concentrating in harmony". At the Academy Awards, each person is going to be thinking about their own little piece, so where is the "concentration in harmony"? Same thing with the Super Bowl and the O. J. Simpson trial. Where's the single focused concentration? You have two opposing sides at the Super Bowl, and the same thing at the Simpson trial.

So just how was the "normally random output" affected? Since there were large groups of people concentrating on different things, there was no single focus. I don't get it.
 
Last edited:
  • #31
RE: "I'm prepared to believe that these studies were actually carried out,"

I'm not. I think these studies were a giant fraud from the very get-go. We are presented with no description of the method, apparatus, results, goals... nothing. And terms are introduced with no clear definition (resonance?). It almost sounds like the type of writing that a student pulls out of his ass the night before a deadline.
 
  • #32
JohnDubYa said:
We are presented with no description of the method, apparatus, results, goals... nothing. And terms are introduced with no clear definition (resonance?)

You don't usually see the kind of detail you describe in the average magazine article. That's why I'd like to see the full article, if such a thing exists. I'll do a search if I get a chance later on.
 
  • #33
I have dug out an abstract that looks interesting, and some blurb on a book chapter. Not a lot to go on, but if anyone can get the full articles it might be worth a read.

Correlations of Continuous Random Data with Major World Events

Foundations of Physics Letters, December 2002, vol. 15, no. 6, pp. 537-550

Nelson R.D.[1]; Radin D.I.[2]; Shoup R.[3]; Bancel P.A.[4]

[1] Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, New Jersey 08544. rdnelson@princeton.edu [2] Institute of Noetic Sciences, Petaluma, California 94952 [3] Boundary Institute, Los Altos, California 94024 [4] 108, rue St Maur, Paris, France F-75011

Abstract:
The interaction of consciousness and physical systems is most often discussed in theoretical terms, usually with reference to the epistemological and ontological challenges of quantum theory. Less well known is a growing literature reporting experiments that examine the mind-matter relationship empirically. Here we describe data from a global network of physical random number generators that shows unexpected structure apparently associated with major world events. Arbitrary samples from the continuous, four-year data archive meet rigorous criteria for randomness, but pre-specified samples corresponding to events of broad regional or global importance show significant departures of distribution parameters from expectation. These deviations also correlate with a quantitative index of daily news intensity. Focused analyses of data recorded on September 11, 2001, show departures from random expectation in several statistics. Contextual analyses indicate that these cannot be attributed to identifiable physical interactions and may be attributable to some unidentified interaction associated with human consciousness.

Keywords: physical random systems; correlations in random data; quantum randomness; consciousness; global events; statistical anomalies

Document Type: Research article ISSN: 0894-9875



Nelson & Radin (2001). Statistically robust anomalous effects: Replication in random event generator experiments. In Rao, Koneru Ramakrishna (Ed). (2001). Basic research in parapsychology (2nd ed.). (pp. 89-93). Jefferson, NC, US: McFarland & Co, Inc., Publishers.
ISBN: 0786410086

(from the chapter) Discusses studies and meta-analyses of random event generators (REGs). It is argued that this database of more than 800 independent studies contains both weak and strong replications, and that meta-analytic procedures allow for the combination of results from which it is possible to draw conclusions regarding this class of experiments. A bibliographic search located 152 reports, beginning in 1959, of experiments meeting the circumscription constraints. 235 control and 597 experimental studies were examined. Each study was represented by a z score reflecting the deviation of results in the direction of intention, and an effect size was computed. A weight was assigned to each study based on 16 quality criteria. It is concluded that the REG database contains unequivocal evidence for a replicable statistical effect in a variety of specific protocols, all designed to assess an anomalous correlation of distribution parameters with the intentions of human observers. It is argued that the effect is robust, and that it is not significantly diluted by the incorporated adjustments for experiment quality and inhomogeneity, nor is it eliminated by incorporating an estimated file drawer of unreported nonsignificant studies.
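The blurb does not spell out the pooling rule, but REG meta-analyses of this kind typically report a weighted Stouffer-style combination of per-study z scores, roughly as in this sketch (the z values and quality weights below are invented for illustration):

```python
# Rough sketch of a weighted Stouffer-style combination of per-study z
# scores, the kind of pooling the chapter blurb describes; the z values
# and quality weights below are invented, not the 597-study database.
from math import sqrt

def stouffer(z_scores, weights):
    """Combined z = sum(w_i * z_i) / sqrt(sum(w_i^2))."""
    return (sum(w * z for z, w in zip(z_scores, weights))
            / sqrt(sum(w * w for w in weights)))

z_scores = [0.8, 1.5, -0.3, 2.1, 0.4]  # hypothetical per-study z scores
weights  = [1.0, 0.7, 0.9, 0.5, 1.0]   # hypothetical quality weights

print(f"combined z = {stouffer(z_scores, weights):.2f}")
```

The dispute in this thread is precisely about the inputs: which of the 800-odd studies make the list, and how the 16 quality criteria translate into weights.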
 
  • #34
RE: "Discusses studies and meta-analyses of random event generators (REGs)."

Meta-analyses? In other words, voodoo.

Here is the actual paper:

http://www.boundaryinstitute.org/articles/FoPL_nelson-pp.pdf [Broken]

Those teaching English can use this paper to demonstrate how the passive writing style produces muddy prose (which is perfect in situations where you want to obfuscate methods and results).

Figure 2 exemplifies my problems with such research papers. This figure shows a trend that apparently begins on September 11. Since the figure does not illustrate data gathered before September 5 or after September 15, there is no way to know if the trend on September 11 is unusual. Frankly, I see nothing profound in the figure whatsoever.

I could not find any mention of how the researchers bent over backwards to prevent bias, such as by using double-blind methods.

I think the operative word here is "glean." The researchers obtained data and, knowing what they wanted to find, were able to glean statistical results that confirmed their beliefs. If they had handed the data over to a disinterested observer with no clue as to the dates, I doubt he would have reached the same conclusion. We will never know, because they (apparently) never bothered to follow careful protocol.
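For what it's worth, the blind check I have in mind could be as simple as this sketch (my own illustration, not anything the authors did): compare the pre-specified window against many randomly placed windows of the same length, so nobody has to eyeball a trend.

```python
# My own illustration of the baseline check suggested above: is the
# deviation in one pre-specified window unusual compared with equally
# long windows drawn blindly from the rest of the record?
import random

random.seed(0)
data = [random.gauss(0, 1) for _ in range(10_000)]  # stand-in for REG output
WINDOW = 100

def window_score(start):
    """Absolute mean deviation over one window of the record."""
    chunk = data[start:start + WINDOW]
    return abs(sum(chunk) / WINDOW)

target = window_score(5_000)  # the pre-specified "event" window
baseline = [window_score(random.randrange(len(data) - WINDOW))
            for _ in range(1_000)]
p = sum(score >= target for score in baseline) / len(baseline)

print(f"fraction of random windows at least as deviant: {p:.3f}")
```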
 
Last edited by a moderator:
  • #35
I checked into the people doing the studies and, as I figured... it's pretty much unsubstantiated, the data are skewed, etc.

Here's an excerpt from Skeptic Report - An evening with Dean Radin

O.J.: A global event?


Radin gave several examples of how GCP had detected "global consciousness". One was the day O.J. Simpson was acquitted of double-murder. We were shown a graph where - no doubt about that - the data formed a nice ascending curve in the minutes after the pre-show started, with cameras basically waiting for the verdict to be read. And yes, there was a nice, ascending curve in the minutes after the verdict was read.

However, about half an hour before the verdict, there was a similar curve ascending for no apparent reason. Radin's quick explanation before moving on to the next slide?

"I don't know what happened there."

It was not to be the last time we heard that answer.

September 11th: A study in wishful thinking.


It was obvious that the terror attacks of that day should make a pretty good case for Global Consciousness (GC). On the surface, it did. There seemed to be a very pronounced effect on that day and in the time right after.

There were, however, several problems. The most obvious was that the changes began at 6:40am ET, when the attacks hadn't started yet. It can of course be argued when the attacks "started", but if the theory is based on a lot of people "focusing" on the same thing, the theory falls flat - at 6:40am, only the attackers knew about the upcoming event. Not even the CIA knew. Hardly enough to justify a "global" consciousness.


"Another serious problem with the September 11 result was that during the days before the attacks, there were several instances of the eggs picking up data that showed the same fluctuation as on September 11th. When I asked Radin what had happened on those days, the answer was:

"I don't know."

I then asked him - and I'll admit that I was a bit flabbergasted - why on Earth he hadn't gone back to see if similar "global events" had happened there since he got the same fluctuations. He answered that it would be "shoe-horning" - fitting the data to the result.

Checking your hypothesis against seemingly contradictory data is "shoe-horning"?

For once, I was speechless.

http://www.skepticreport.com/psychics/radin2002.htm [Broken]

And this, from Skepdic.com about "PEAR". Seems the results are seriously skewed.

After all, fraud, unconscious cheating, errors in calculation, software errors, and self-deception could all be considered "influence of human operators." So could the fact that "operator 10," believed to be a PEAR staff member, "has been involved in 15% of the 14 million trials, yet contributed to a full half of the total excess hits" (McCrone 1994). According to Dean Radin, the criticism that there "was any one person responsible for the overall results of the experiment... was tested and found to be groundless" (Radin 1997, 221). His source for this claim is a 1991 article by Jahn et al. in the Journal of Scientific Exploration, "Count population profiles in engineering anomalies experiments" (5:205-32). However, Jahn gives the data for his experiments in Margins of Reality: The Role of Consciousness in the Physical World (Harcourt Brace, 1988, pp. 352-353). John McCrone has done the calculations and found that "if [operator 10's] figures are taken out of the data pool, scoring in the 'low intention' condition falls to chance while 'high intention' scoring drops close to the .05 boundary considered weakly significant in scientific results."

http://www.skepdic.com/pear.html
 
Last edited by a moderator:
