It is well known that the traditional scientific method differs in many substantial ways from the analysis techniques used by "scientific" VS debaters.
One particular point of difference is the use of statistical techniques to analyze data. This shows most clearly in the reliance on selectively chosen figures rather than averages and weighted means. It also shows fairly clearly in the uncertainties reported, where a broad range is simply carried through from the raw measurement ranges rather than the random error being propagated in the normal statistical fashion.
I will confess that, in the name of easy comprehension, I have adopted the "naive" VS debaters' version of propagation of error rather than the statistical one on my main website (as even those VS debaters who should know better usually do), but I have been seriously considering switching to a rigorous application of the normal propagation of error.
Statistics, propagation of error, and the VS community
-
- Site Admin
- Posts: 2164
- Joined: Mon Aug 14, 2006 8:26 pm
- Contact:
- AnonymousRedShirtEnsign
- Jedi Knight
- Posts: 380
- Joined: Thu Aug 17, 2006 10:05 pm
- Location: Six feet under the surface of some alien world
AnonymousRedShirtEnsign wrote: One of the problems with using the scientific method in the Versus debate is the fact that one cannot test the hypothesis in controlled conditions. So we are basically limited to observations and predictions.

I'm not even talking about that part of the scientific method - just the basic statistical machinery that scientists use to analyze those observations and predictions.
Here, I'll give an example of what I'm talking about. Let's say I'm analyzing the energy involved in an explosion.
To make up the figures off the top of my head...
A block of steel with a density of 7.5-8 g/cc, pre-sliced, measured against a black background at 51x51 pixels and cubical so far as we know, is hit with a shockwave blast. A small splotch in the frame, known to be an exactly 2.1-meter-tall cyborg, takes up 6 pixels along its long axis.
The cube is seen to be made of two pieces, separated by the shockwave, the positions of which are measured to be 401 pixels away from the midpoint 5 frames after a frame in which the block was seen to be intact, at a framerate of 25 fps.
All of these pixel measurements are rounded up against the perfectly black background, so each figure is known only to the nearest pixel: the cyborg is somewhere between 5 and 6 pixels, the block between 50 and 51 pixels, the block has been separated for 4 to 5 frames, and so on.
Now, to do the physics quickly, where the dimension is l, the density d, and the velocity v=x/t, we have E=mv^2/2=0.5*l^3*d*x^2/t^2.
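The point estimate itself is just that formula evaluated at the midpoint of each measured range. A minimal sketch (the function name and variable names are mine; the figures and the 25 fps framerate come from the example above):

```python
# Point estimate of the blast energy using the midpoint of each range.
# The 2.1 m cyborg height and 25 fps framerate are given in the example.

CYBORG_HEIGHT_M = 2.1

def blast_energy(cyborg_px, block_px, sep_px, frames, density, fps=25.0):
    """E = 0.5 * l^3 * d * (x/t)^2 for the split block."""
    scale = CYBORG_HEIGHT_M / cyborg_px      # metres per pixel
    l = block_px * scale                     # cube edge length, m
    v = (sep_px * scale) / (frames / fps)    # separation speed, m/s
    return 0.5 * l**3 * density * v**2       # joules

# Midpoints: cyborg 5.5 px, block 50.5 px, separation 400.5 px,
# 4.5 frames elapsed, density 7750 kg/m^3 (= 7.75 g/cc).
e_mean = blast_energy(5.5, 50.5, 400.5, 4.5, 7750.0)
print(f"E = {e_mean / 1e12:.1f} TJ")   # roughly 20 TJ
```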
A standard statistical propagation of error gives 20.0 +/- 5.2 terajoules, calculated from the mean of each measurement. This (or, given the level of accuracy present here, 20 +/- 5 terajoules) is what a scientific paper would typically report as a final result.
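For anyone who wants to check a propagation without grinding through the partial derivatives, a Monte Carlo draw over the measurement ranges does the same job numerically. This sketch assumes each measurement is uniformly distributed over its range (a modelling choice of mine, so the spread it reports will not match the quoted +/- 5.2 TJ exactly, though it lands in the same neighbourhood):

```python
import random

CYBORG_HEIGHT_M = 2.1

def blast_energy(cyborg_px, block_px, sep_px, frames, density, fps=25.0):
    scale = CYBORG_HEIGHT_M / cyborg_px      # metres per pixel
    l = block_px * scale
    v = (sep_px * scale) / (frames / fps)
    return 0.5 * l**3 * density * v**2

random.seed(42)  # fixed seed so the run is repeatable
samples = [
    blast_energy(
        random.uniform(5.0, 6.0),        # cyborg height, px
        random.uniform(50.0, 51.0),      # block edge, px
        random.uniform(400.0, 401.0),    # separation, px
        random.uniform(4.0, 5.0),        # frames elapsed
        random.uniform(7500.0, 8000.0),  # density, kg/m^3
    )
    for _ in range(200_000)
]
mc_mean = sum(samples) / len(samples)
mc_std = (sum((e - mc_mean) ** 2 for e in samples) / len(samples)) ** 0.5
print(f"E = {mc_mean / 1e12:.1f} +/- {mc_std / 1e12:.1f} TJ")
```

Because the energy depends on inverse powers of the cyborg size and the elapsed time, the Monte Carlo mean sits a little above the simple midpoint estimate.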
If you are lucky, a VS debater will state the extreme possibilities as a range (10.2-42.2 terajoules). That is an over-generous statement of error, since it presumes that every measurement error simultaneously conspires towards either the maximum or the minimum.
More often, you will see only a maximum figure (42.2 terajoules), only a minimum (10.2 terajoules), or an arbitrary figure somewhere in between. Looking back over that, perhaps I should use an even simpler example...
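Those endpoints come from pushing every measurement to whichever extreme inflates or deflates the result. A sketch of that calculation (the exact endpoint values depend on which rounding conventions you adopt, so they may differ slightly from the figures quoted above, but the spread is the same):

```python
CYBORG_HEIGHT_M = 2.1

def blast_energy(cyborg_px, block_px, sep_px, frames, density, fps=25.0):
    scale = CYBORG_HEIGHT_M / cyborg_px      # metres per pixel
    l = block_px * scale
    v = (sep_px * scale) / (frames / fps)
    return 0.5 * l**3 * density * v**2

# Minimum: big cyborg (small scale), small block, short travel,
# long elapsed time, light steel.
e_min = blast_energy(6.0, 50.0, 400.0, 5.0, 7500.0)

# Maximum: small cyborg (large scale), big block, long travel,
# short elapsed time, dense steel.
e_max = blast_energy(5.0, 51.0, 401.0, 4.0, 8000.0)

print(f"{e_min / 1e12:.1f} - {e_max / 1e12:.1f} TJ")
```

Note how the maximum sits far more than one standard deviation from the midpoint estimate, which is exactly why quoting only one endpoint is misleading.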