The disappearance of darkness

Discussion in 'Digital Photography' started by Me, May 7, 2013.

  1. Me

    Trevor Guest

    "Chris Malcolm" <> wrote in message
    news:...
    >>> obvious "evidence-based" scientific experiments if you haven't taken
    >>> the care to make explicit the underlying assumptions and made sure
    >>> that the implied models hold.

    >
    >> An obvious contradiction.

    >
    > The problem is that this is only obvious at first glance.


    And second, and third... to a real scientist, anyway.
    (They are in short supply these days, with funding often relying on vested
    outcomes.) :-(


    >> An experiment is *not* "evidence based" if it
    >> relies on unproven assumptions and models.
    >> That's how pseudoscience works, of course.

    >
    > In that case a great deal of public medical policy is pseudo
    > science.


    Unfortunately true, especially when it comes to big Pharma.


    > It's also well known to be the case that in periods of what Kuhn
    > called "normal science" (in "The Structure of Scientific Revolutions")
    > the assumptions and models are usually not made explicit.


    And deliberately so when trying to mislead for vested interests.


    > Plus the
    > assumptions and models underlying a specific scientific paradigm are
    > only provisionally proved by the ongoing success of the paradigm. As
    > Popper pointed out what is often taken to have been scientifically
    > proved is in fact really only so far not disproved.


    Right, but it's usually easier to disprove something, at least, than it is to
    ever prove anything for all possible conditions.

    Trevor.
    Trevor, May 29, 2013

  2. Me

    Whisky-dave Guest

    On Wednesday, May 29, 2013 12:25:08 AM UTC+1, Alan Browne wrote:
    > On 2013.05.28 06:17 , Whisky-dave wrote:
    > > On Thursday, May 23, 2013 9:55:50 PM UTC+1, Alan Browne wrote:
    > >> On 2013.05.22 23:26 , Eric Stevens wrote:
    > >>> On Wed, 22 May 2013 21:19:48 -0400, wrote:
    > >>>> Next time you're near a hydro tower, listen to the hum of the wires and tower...
    > >>> What is a 'hydro tower'?
    > >> In Quebec 98% or so of power is hydro derived - the utility is called
    > >> "Hydro Quebec" so we say "hydro tower" (poles, service, lines, ...)
    > >> This is common in several areas in North America.
    > >>> Wires hum without electricity. It's just the wind.
    > >> Not at all. Power lines have an audible 120 Hz hum esp. when it is
    > >> humid. (would be 100 Hz in places with 50 Hz service). One place I ski
    > >> has 315 VAC power lines go right over the area immediately before the
    > >> chair lift and on very humid days it is quite loud.
    > >> The space between the wires and the ground is a dielectric - it is being
    > >> charged and discharged continuously (a power waste that does not occur
    > >> with HVDC). While the lines are aluminum they have a stainless steel
    > >> core and that vibrates with the continuous charge/discharge from the
    > >> line to the ground. (Or perhaps water droplets get charged and then
    > >> react to the field charging and discharging).
    > >> When it is really dry there is a crackling sound. (Corona discharge).
    > > So it's not really the wires that are making the noise/hum or crackling, it's the molecules in the air producing changes in pressure.
    > > I don't think you can hear current flowing through a wire if it were in a vacuum or in space :)
    > Whether the noise is the wire vibrating or the air vibrating, eventually
    > molecules do need to move to transmit noise (pressure waves).


    Energy needs to be propagated, and it's still not the sound of the electricity flowing in the wires, which is an important point.


    > > I seem to remember some definition of current from school physics, in that a current of 1 amp flowing through two parallel conductors 1 metre apart in a vacuum produces a force of 1 x 10^-7 newtons.
    > > This rapid changing of force is what makes the 'noise' by displacing the air.
    > As I stated elsewhere, the noise is very different depending on the
    > humidity. In dry air a 'crackling' sound, in humid, more of a hum.


    Yes, but still not the sound of electricity in wires; these sounds depend on the weather. Perhaps the tone of the sound can be used to measure humidity, but it's not a reliable way of measuring electricity.

    Obviously if electricity made a sound, audiophiles would need their cables encased in a vacuum, otherwise the sound would escape from the wires. ;-)
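    For reference, the half-remembered definition above is the classical (pre-2019 SI) ampere: two long parallel conductors one metre apart in vacuum, each carrying 1 A, experience a force of 2 x 10^-7 newtons per metre of length, from F/L = mu0*I1*I2/(2*pi*d). A minimal sketch in Python:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, N/A^2

def force_per_metre(i1, i2, d):
    """Force per metre of length between two long parallel conductors
    carrying currents i1 and i2 (amperes), separated by d (metres)."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

f = force_per_metre(1.0, 1.0, 1.0)
print(f)  # ~2e-07 N per metre: the classical definition of the ampere
```

    So the "1 x 10^-7 newtons" recalled above is off by a factor of two and is per metre of conductor; the point stands either way, since the alternating force is tiny and it is the driven air, not the current itself, that you hear.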


    > Currents cause a force where there is a magnetic field (the definition
    > you put up above). In turn this can make a noise. (That's how
    > loudspeakers work).


    Yep, it's the compression of the air that causes the sounds/noises, not the electricity.
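    The 120 Hz hum mentioned earlier (100 Hz on 50 Hz grids) drops out of exactly this mechanism: the magnetic force scales with the *square* of the current, and sin^2(x) = (1 - cos(2x))/2 oscillates at twice the line frequency. A quick numerical check in Python, counting force peaks over one current cycle:

```python
import math

SAMPLES = 10000  # points across one cycle of the line current

# Magnetic force between conductors scales as i(t)^2; for i(t) = sin(x),
# the identity sin^2(x) = (1 - cos(2x)) / 2 puts the force at twice the
# line frequency: 120 Hz on a 60 Hz grid, 100 Hz on a 50 Hz grid.
force = [math.sin(2 * math.pi * k / SAMPLES) ** 2 for k in range(SAMPLES)]

# Two force peaks per current cycle confirm the doubled frequency.
maxima = sum(1 for k in range(1, SAMPLES - 1)
             if force[k - 1] < force[k] >= force[k + 1])
print(maxima)  # -> 2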

    > Voltage causes sparking and discharges.


    Thought that was down to ionisation.
    Whisky-dave, May 29, 2013

  3. Me

    J. Clarke Guest

    In article <>, ozcvgtt02
    @sneakemail.com says...
    >
    > J. Clarke <> wrote:
    >
    > > Would you be kind enough to provide an example of an experiment in which
    > > "assumptions and models" affect the outcome?

    >
    > Experiment as in "the measured raw results" or as in "the
    > results after evaluating the measurements"?


    As in the published paper.
    J. Clarke, May 29, 2013
  4. In rec.photo.digital Wolfgang Weisselberg <> wrote:
    > J. Clarke <> wrote:


    >> Would you be kind enough to provide an example of an experiment in which
    >> "assumptions and models" affect the outcome?


    > Experiment as in "the measured raw results" or as in "the
    > results after evaluating the measurements"?


    You've got it!

    --
    Chris Malcolm
    Chris Malcolm, May 29, 2013
  5. In rec.photo.digital J. Clarke <> wrote:
    > In article <>, ozcvgtt02
    > @sneakemail.com says...
    >>
    >> J. Clarke <> wrote:
    >>
    >> > Would you be kind enough to provide an example of an experiment in which
    >> > "assumptions and models" affect the outcome?

    >>
    >> Experiment as in "the measured raw results" or as in "the
    >> results after evaluating the measurements"?


    > As in the published paper.


    Do those published papers count in which the authors later revised the
    conclusions they originally drew from their experimental results? Or
    those in which later reviewers, not the original authors, did the same
    thing?

    --
    Chris Malcolm
    Chris Malcolm, May 29, 2013
  6. Me

    J. Clarke Guest

    In article <>,
    says...
    >
    > In rec.photo.digital J. Clarke <> wrote:
    > > In article <>, ozcvgtt02
    > > @sneakemail.com says...
    > >>
    > >> J. Clarke <> wrote:
    > >>
    > >> > Would you be kind enough to provide an example of an experiment in which
    > >> > "assumptions and models" affect the outcome?
    > >>
    > >> Experiment as in "the measured raw results" or as in "the
    > >> results after evaluating the measurements"?

    >
    > > As in the published paper.

    >
    > Do those published papers count in which the authors later revised the
    > conclusions they originally drew from their experimental results? Or
    > those in which later reviewers, not the original authors, did the same
    > thing?


    Give us your example if you have one.
    J. Clarke, May 29, 2013
  7. In rec.photo.digital.slr-systems J. Clarke <> wrote:
    > In article <>,
    > says...
    >>
    >> In rec.photo.digital J. Clarke <> wrote:
    >> > In article <>, ozcvgtt02
    >> > @sneakemail.com says...
    >> >>
    >> >> J. Clarke <> wrote:
    >> >>
    >> >> > Would you be kind enough to provide an example of an experiment in which
    >> >> > "assumptions and models" affect the outcome?
    >> >>
    >> >> Experiment as in "the measured raw results" or as in "the
    >> >> results after evaluating the measurements"?

    >>
    >> > As in the published paper.

    >>
    >> Do those published papers count in which the authors later revised the
    >> conclusions they originally drew from their experimental results? Or
    >> those in which later reviewers, not the original authors, did the same
    >> thing?


    > Give us your example if you have one.


    The famous example is the Millikan-Ehrenhaft controversy, e.g.:

    http://www.umich.edu/~chemstu/content_weeks/F_06_Week4/Mullikan_Erenhaft.pdf

    http://bjps.oxfordjournals.org/content/56/4/681.short

    Both the above have many good further references.

    --
    Chris Malcolm
    Chris Malcolm, May 30, 2013
  8. Me

    J. Clarke Guest

    In article <>,
    says...
    >
    > In rec.photo.digital.slr-systems J. Clarke <> wrote:
    > > In article <>,
    > > says...
    > >>
    > >> In rec.photo.digital J. Clarke <> wrote:
    > >> > In article <>, ozcvgtt02
    > >> > @sneakemail.com says...
    > >> >>
    > >> >> J. Clarke <> wrote:
    > >> >>
    > >> >> > Would you be kind enough to provide an example of an experiment in which
    > >> >> > "assumptions and models" affect the outcome?
    > >> >>
    > >> >> Experiment as in "the measured raw results" or as in "the
    > >> >> results after evaluating the measurements"?
    > >>
    > >> > As in the published paper.
    > >>
    > >> Do those published papers count in which the authors later revised the
    > >> conclusions they originally drew from their experimental results? Or
    > >> those in which later reviewers, not the original authors, did the same
    > >> thing?

    >
    > > Give us your example if you have one.

    >
    > The famous example is the Millikan-Ehrenhaft controversy, e.g.:
    >
    > http://www.umich.edu/~chemstu/content_weeks/F_06_Week4/Mullikan_Erenhaft.pdf
    >
    > http://bjps.oxfordjournals.org/content/56/4/681.short
    >
    > Both the above have many good further references.


    And the upshot of that is that Millikan fudged the data, which is a no-no
    in any experiment, regardless of the "assumptions and models".
    J. Clarke, May 31, 2013
  9. In rec.photo.digital.slr-systems J. Clarke <> wrote:
    > In article <>,
    > says...
    >>
    >> In rec.photo.digital.slr-systems J. Clarke <> wrote:
    >> > In article <>,
    >> > says...
    >> >>
    >> >> In rec.photo.digital J. Clarke <> wrote:
    >> >> > In article <>, ozcvgtt02
    >> >> > @sneakemail.com says...
    >> >> >>
    >> >> >> J. Clarke <> wrote:
    >> >> >>
    >> >> >> > Would you be kind enough to provide an example of an experiment in which
    >> >> >> > "assumptions and models" affect the outcome?
    >> >> >>
    >> >> >> Experiment as in "the measured raw results" or as in "the
    >> >> >> results after evaluating the measurements"?
    >> >>
    >> >> > As in the published paper.
    >> >>
    >> >> Do those published papers count in which the authors later revised the
    >> >> conclusions they originally drew from their experimental results? Or
    >> >> those in which later reviewers, not the original authors, did the same
    >> >> thing?

    >>
    >> > Give us your example if you have one.

    >>
    >> The famous example is the Millikan-Ehrenhaft controversy, e.g.:
    >>
    >> http://www.umich.edu/~chemstu/content_weeks/F_06_Week4/Mullikan_Erenhaft.pdf
    >>
    >> http://bjps.oxfordjournals.org/content/56/4/681.short
    >>
    >> Both the above have many good further references.


    > And the upshot of that is that Millikan fudged the data, which is a nono
    > in any experiment, regardless of the "assumptions and models".


    Quite so, but that doesn't invalidate the general point about how
    assumptions and models affect the interpretation of experimental
    results, which is why this particular example has been discussed so
    much by historians and philosophers of science.

    --
    Chris Malcolm
    Chris Malcolm, May 31, 2013
  10. Me

    J. Clarke Guest

    In article <>,
    says...
    >
    > In rec.photo.digital.slr-systems J. Clarke <> wrote:
    > > In article <>,
    > > says...
    > >>
    > >> In rec.photo.digital.slr-systems J. Clarke <> wrote:
    > >> > In article <>,
    > >> > says...
    > >> >>
    > >> >> In rec.photo.digital J. Clarke <> wrote:
    > >> >> > In article <>, ozcvgtt02
    > >> >> > @sneakemail.com says...
    > >> >> >>
    > >> >> >> J. Clarke <> wrote:
    > >> >> >>
    > >> >> >> > Would you be kind enough to provide an example of an experiment in which
    > >> >> >> > "assumptions and models" affect the outcome?
    > >> >> >>
    > >> >> >> Experiment as in "the measured raw results" or as in "the
    > >> >> >> results after evaluating the measurements"?
    > >> >>
    > >> >> > As in the published paper.
    > >> >>
    > >> >> Do those published papers count in which the authors later revised the
    > >> >> conclusions they originally drew from their experimental results? Or
    > >> >> those in which later reviewers, not the original authors, did the same
    > >> >> thing?
    > >>
    > >> > Give us your example if you have one.
    > >>
    > >> The famous example is the Millikan-Ehrenhaft controversy, e.g.:
    > >>
    > >> http://www.umich.edu/~chemstu/content_weeks/F_06_Week4/Mullikan_Erenhaft.pdf
    > >>
    > >> http://bjps.oxfordjournals.org/content/56/4/681.short
    > >>
    > >> Both the above have many good further references.

    >
    > > And the upshot of that is that Millikan fudged the data, which is a nono
    > > in any experiment, regardless of the "assumptions and models".

    >
    > Quite so, but that doesn't invalidate the general point about how
    > assumptions and models affect the interpretation of experimental
    > results, which is why this particular example has been discussed so
    > much by historians and philosophers of science.


    The sort of interpretation you are discussing lies in the domain of
    theoretical physics, not experimental. If you want to see a wonderful
    example of a good experimentalist acting as a totally embarrassing
    theoretician, read "Creation's Tiny Mystery" by Robert V. Gentry.
    J. Clarke, May 31, 2013
  11. J. Clarke <> wrote:
    > In article <>,
    >> In rec.photo.digital.slr-systems J. Clarke <> wrote:
    >> > In article <>,
    >> >> In rec.photo.digital J. Clarke <> wrote:
    >> >> > In article <>, ozcvgtt02
    >> >> >> J. Clarke <> wrote:


    >> >> >> > Would you be kind enough to provide an example of an experiment in which
    >> >> >> > "assumptions and models" affect the outcome?


    >> >> >> Experiment as in "the measured raw results" or as in "the
    >> >> >> results after evaluating the measurements"?


    >> >> > As in the published paper.


    >> >> Do those published papers count in which the authors later revised the
    >> >> conclusions they originally drew from their experimental results? Or
    >> >> those in which later reviewers, not the original authors, did the same
    >> >> thing?


    >> > Give us your example if you have one.


    >> The famous example is the Millikan-Ehrenhaft controversy, e.g.:


    >> http://www.umich.edu/~chemstu/content_weeks/F_06_Week4/Mullikan_Erenhaft.pdf


    >> http://bjps.oxfordjournals.org/content/56/4/681.short


    >> Both the above have many good further references.


    > And the upshot of that is that Millikan fudged the data,


    BECAUSE he had "assumptions and models" he wanted to appease.
    The measured raw results were ... interpreted and filtered in
    the published paper.

    > which is a nono
    > in any experiment, regardless of the "assumptions and models".


    So, according to our current knowledge, is charge quantized
    (Millikan, fudged) or not (Ehrenhaft, not fudged)?

    -Wolfgang
    Wolfgang Weisselberg, May 31, 2013
  12. J. Clarke <> wrote:
    > In article <>, ozcvgtt02
    >> J. Clarke <> wrote:


    >> > Would you be kind enough to provide an example of an experiment in which
    >> > "assumptions and models" affect the outcome?


    >> Experiment as in "the measured raw results" or as in "the
    >> results after evaluating the measurements"?


    > As in the published paper.


    Almost all data that was used to conclude "facts" that were
    later found to be not that way.

    -Wolfgang
    Wolfgang Weisselberg, May 31, 2013
  13. Me

    J. Clarke Guest

    In article <>, ozcvgtt02
    @sneakemail.com says...
    >
    > J. Clarke <> wrote:
    > > In article <>,
    > >> In rec.photo.digital.slr-systems J. Clarke <> wrote:
    > >> > In article <>,
    > >> >> In rec.photo.digital J. Clarke <> wrote:
    > >> >> > In article <>, ozcvgtt02
    > >> >> >> J. Clarke <> wrote:

    >
    > >> >> >> > Would you be kind enough to provide an example of an experiment in which
    > >> >> >> > "assumptions and models" affect the outcome?

    >
    > >> >> >> Experiment as in "the measured raw results" or as in "the
    > >> >> >> results after evaluating the measurements"?

    >
    > >> >> > As in the published paper.

    >
    > >> >> Do those published papers count in which the authors later revised the
    > >> >> conclusions they originally drew from their experimental results? Or
    > >> >> those in which later reviewers, not the original authors, did the same
    > >> >> thing?

    >
    > >> > Give us your example if you have one.

    >
    > >> The famous example is the Millikan-Ehrenhaft controversy, e.g.:

    >
    > >> http://www.umich.edu/~chemstu/content_weeks/F_06_Week4/Mullikan_Erenhaft.pdf

    >
    > >> http://bjps.oxfordjournals.org/content/56/4/681.short

    >
    > >> Both the above have many good further references.

    >
    > > And the upshot of that is that Millikan fudged the data,

    >
    > BECAUSE he had "assumptions and models" he wanted to appease.
    > The measured raw results were ... interpreted and filtered in
    > the published paper.


    They were not "interpreted and filtered". Data points that did not
    support his viewpoint were rejected. That is not experimental science,
    that is religion. He's an example of a good theoretician who was not a
    good experimentalist.

    > > which is a nono
    > > in any experiment, regardless of the "assumptions and models".

    >
    > So, according to our current knowledge, is charge quantized
    > (Millikan, fudged) or not (Ehrenhaft, not fudged)?


    According to the current not fudged data, replicated many, many, many
    times, it is quantized.
    J. Clarke, Jun 1, 2013
  14. J. Clarke <> wrote:
    > In article <>, ozcvgtt02
    >> J. Clarke <> wrote:


    >> > And the upshot of that is that Millikan fudged the data,


    >> BECAUSE he had "assumptions and models" he wanted to appease.
    >> The measured raw results were ... interpreted and filtered in
    >> the published paper.


    > They were not "interpreted and filtered". Data points that did not
    > support his viewpoint were rejected.


    What do you think a filter/filtering something does?


    > That is not experimental science,
    > that is religion. He's an example of a good theoretician who was not a
    > good experimentalist.


    Religion is being a good theoretician?


    You know the old joke?
    A quack doctor (in the then Wild West) travelling from
    town to town treated a farmer for fever: "Eat sauerkraut".
    The farmer recovers. The quack doctor writes in his little
    black book: "Sauerkraut helps against fever."

    Another town, soon after. The smith has fever. So,
    consulting his little black book, he orders him to eat
    sauerkraut. The smith dies, however.

    The quack doctor corrects the entry in his little black
    book: "Sauerkraut helps against fever only for farmers."

    Can you spot the model and assumption that led the quack doctor
    to the "corrected" entry?


    >> > which is a nono
    >> > in any experiment, regardless of the "assumptions and models".


    >> So, according to our current knowledge, is charge quantized
    >> (Millikan, fudged) or not (Ehrenhaft, not fudged)?


    > According to the current not fudged data, replicated many, many, many
    > times, it is quantized.


    So basically Ehrenhaft was wrong.

    -Wolfgang
    Wolfgang Weisselberg, Jun 1, 2013
  15. Me

    J. Clarke Guest

    In article <>, ozcvgtt02
    @sneakemail.com says...
    >
    > J. Clarke <> wrote:
    > > In article <>, ozcvgtt02
    > >> J. Clarke <> wrote:

    >
    > >> > And the upshot of that is that Millikan fudged the data,

    >
    > >> BECAUSE he had "assumptions and models" he wanted to appease.
    > >> The measured raw results were ... interpreted and filtered in
    > >> the published paper.

    >
    > > They were not "interpreted and filtered". Data points that did not
    > > support his viewpoint were rejected.

    >
    > What do you think a filter/filtering something does?
    >
    >
    > > That is not experimental science,
    > > that is religion. He's an example of a good theoretician who was not a
    > > good experimentalist.

    >
    > Religion is being a good theoretician?


    No, picking your data to support your model is behaving like a religious
    zealot.

    > You know the old joke?
    > A quack doctor (in the then Wild West) travelling from
    > town to town treated a farmer for fever: "Eat sauerkraut".
    > The farmer recovers. The quack doctor writes in his little
    > black book: "Sauerkraut helps against fever."
    >
    > Another town, soon after. The smith has fever. So,
    > consulting his little black book, he orders him to eat
    > sauerkraut. The smith dies, however.
    >
    > The quack doctor corrects the entry in his little black
    > book: "Sauerkraut helps against fever only for farmers."
    >
    > Can you spot the model and assumption that led the quack doctor
    > to the "corrected" entry?


    As an experimentalist I'm interested only in the data, not the model.

    > >> > which is a nono
    > >> > in any experiment, regardless of the "assumptions and models".

    >
    > >> So, according to our current knowledge, is charge quantized
    > >> (Millikan, fudged) or not (Ehrenhaft, not fudged)?

    >
    > > According to the current not fudged data, replicated many, many, many
    > > times, it is quantized.

    >
    > So basically Ehrenhaft was wrong.


    His theory was wrong; his experiments, though, were simply imprecise.

    Science doesn't work by isolated experiment. We had two experimenters,
    one of whom fudged the data and the other of whom was sloppy, so they
    got different results. Later experimenters did not fudge the data and
    were not sloppy and found that their results were similar to those
    obtained by the one who did fudge. At that point his model was
    confirmed. If instead of fudging his data he had fixed his experiment,
    then he would not have had to fudge.
    J. Clarke, Jun 1, 2013
  16. J. Clarke <> wrote:
    > In article <>, ozcvgtt02
    >> J. Clarke <> wrote:
    >> > In article <>, ozcvgtt02
    >> >> J. Clarke <> wrote:


    >> > They were not "interpreted and filtered". Data points that did not
    >> > support his viewpoint were rejected.


    >> What do you think a filter/filtering something does?


    >> > That is not experimental science,
    >> > that is religion. He's an example of a good theoretician who was not a
    >> > good experimentalist.


    >> Religion is being a good theoretician?


    > No, picking your data to support your model is behaving like a religious
    > zealot.


    I'd offer that many *other* types of zealots do that.
    Politicians do, even a-religious, a-moral ones.
    Most people in a job interview select the parts to tell and
    to be silent about --- even when they say no untrue word and
    colour nothing --- trying to give a most positive impression
    of themselves. Are they religious zealots, too?


    >> You know the old joke?
    >> A quack doctor (in the then Wild West) travelling from
    >> town to town treated a farmer for fever: "Eat sauerkraut".
    >> The farmer recovers. The quack doctor writes in his little
    >> black book: "Sauerkraut helps against fever."


    >> Another town, soon after. The smith has fever. So,
    >> consulting his little black book, he orders him to eat
    >> sauerkraut. The smith dies, however.


    >> The quack doctor corrects the entry in his little black
    >> book: "Sauerkraut helps against fever only for farmers."


    >> Can you spot the model and assumption that led the quack doctor
    >> to the "corrected" entry?


    > As an experimentalist I'm interested only in the data, not the model.


    The data clearly says that Sauerkraut only helps farmers with
    fever! You *need* a model saying that usually the occupation
    (or to which gods they pray or if they wear their hair long or
    short) does not influence the illness, but that other factors
    (say, germs or impact trauma) do.

    Unless you have a model, you can't even check if the data
    supports it or create a test to check your model.

    >> >> > which is a nono
    >> >> > in any experiment, regardless of the "assumptions and models".


    >> >> So, according to our current knowledge, is charge quantized
    >> >> (Millikan, fudged) or not (Ehrenhaft, not fudged)?


    >> > According to the current not fudged data, replicated many, many, many
    >> > times, it is quantized.


    >> So basically Ehrenhaft was wrong.


    > His theory was wrong, his experiments though were simply imprecise.


    Obviously Ehrenhaft (I still can't get over his name! Translate
    it to English one day ...) *thought* his data was precise
    enough. Either he was a bad experimenter, or his experimental
    apparatus or process was faulty, or he didn't drop data that
    was clearly invalid (which is, by some definitions, fudging,
    yet not dropping it is also, arguably, fudging).


    > Science doesn't work by isolated experiment. We had two experimenters,
    > one of whom fudged the data and the other of whom was sloppy, so they
    > got different results. Later experimenters did not fudge the data and
    > were not sloppy and found that their results were similar to those
    > obtained by the one who did fudge. At that point his model was
    > confirmed. If instead of fudging his data he fixed his experiment then
    > he would not have had to fudge.


    In fact, it was a tiny bit more complicated than that.
    Following experimenters *also* fudged their data in sort of
    the same way Millikan did: Millikan gave too low a charge for
    the electron, and the experimenters who followed reported a
    slightly larger number. Those after them *also* reported a
    still slightly larger number, and the next ones yet STILL
    slightly larger numbers. And so on. (If they hadn't fudged,
    you'd not find an asymptotic approach to today's number, but
    values straddling the final number ...)

    Today the number has stabilized; it differs from Millikan's
    by more than 5 times the standard error he quoted on his data.

    -Wolfgang
    Wolfgang Weisselberg, Jun 7, 2013
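    For what it's worth, the quantization claim itself is straightforward to test given drop-charge data: if charge comes only in multiples of e, then dividing every measured drop charge by the right e lands near an integer. A minimal sketch in Python, using invented, hypothetical drop charges (not Millikan's actual numbers):

```python
# Hypothetical drop charges in coulombs, invented for illustration:
# each is close to an integer multiple (3, 5, 7, 4) of e = 1.602e-19 C,
# with a small simulated measurement error.
charges = [4.81e-19, 8.02e-19, 11.23e-19, 6.40e-19]

def quantization_residual(e, qs):
    """Mean squared distance of each q/e from the nearest integer."""
    return sum((q / e - round(q / e)) ** 2 for q in qs) / len(qs)

# Scan candidate elementary charges; the residual dips sharply where the
# candidate divides every drop charge nearly evenly.
candidates = [1.5e-19 + k * 1e-23 for k in range(2001)]
best_e = min(candidates, key=lambda e: quantization_residual(e, charges))
print(best_e)  # close to 1.602e-19 C
```

    Note the narrow scan window: e/2, e/3, ... fit the integer test just as well, so the data alone pins down e only up to an integer divisor --- a small example of how a background assumption (here, that each drop carries only a few elementary charges) enters even a "pure" measurement.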
