**>> Speed UP Your Computer 187% (avg.)

Discussion in 'Computer Information' started by Kirk Gregory Czuhai, Aug 17, 2004.

  1. In cooperation with students from the University of Louisville, the Physics
    Department and several other multidisciplinary departments collaborated to
    modify an edition of Microsoft(r) Outlook(r) as licensed to them by the
    Microsoft(r) Corporation.

    They were able to find a way to get even older PCs to download AND UPLOAD
    SIMULTANEOUSLY text and EVEN BINARY files, email, thousands of times
    faster - BELIEVE IT OR NOT - by just REWRITING some of the Microsoft
    operating code!!!

    But as this is the TRUTH, so help me DIE!!!

    every man has HIS PRICE! Microsoft found out about the FAST computer when
    it detected the VAST number of POSTS everywhere, and then STOLE THE PATCH
    code and erased the existing copies of it, threatened the STUDENTS and PAID
    THEM OFF, telling them to keep their mouths shut, hiring them, etc.,
    whatever - of course not getting caught stealing the code or wiping the
    files. AND OF COURSE, we know Microsoft has its friends in government, do
    we not?

    Just look at the posts in the newsgroups - how could you explain the recent
    number of them all?

    must be a super duper nin-compooper computer or something like that
    would you not agree?

    realsmartoday.com
    P.O. Box 270029
    Louisville Co. 80027


    To remove email
    http://realsmartoday.com/remove.html


    **** GOOD NEWS !!!

    Kirk Gregory Czuhai SAYS !!!

    He SAVED HIS NOTES ... AND ...
    HE SAYS THAT ...
    the superdooper pc computer can
    easily be re-constructed from them
    Kirk Gregory Czuhai NOW posts
    some of these notes here!
    more at:
    http://www.altelco.net/~churches/BlueRoses.htm

    NOTES:
    Kirk Gregory Czuhai

    Sunday, August 15, 2004
    PHYSICS ((IS)) just !!! SO MUCH PhUn !!!
    SINCE SO MANY OF you seem to be somewhat
    enamoured by some of the Physics articles
    i have written!
    the following BLOGS on my part i devote to
    "some" physics topics "JUST 4 U!!!"

    "I AIM TO PLEASE . . . +++===> "

    peace and love,
    and,
    love and peace,
    (kirk) kirk gregory czuhai


    posted by Kirk Gregory Czuhai @ 7:39 PM 0 comments

    steven jawbones gots it all figured out !!!
    Also available at http://math.ucr.edu/home/baez/week207.html

    July 25, 2004
    This Week's Finds in Mathematical Physics - Week 207
    John Baez

    I'm spending the summer in Cambridge, but last week I was in Dublin attending "GR17", which is short for the 17th International Conference on General Relativity and Gravitation:

    1) GR17 homepage, http://www.dcu.ie/~nolanb/gr17.htm

    This is where Stephen Hawking decided to announce his solution of the black hole information loss problem. Hawking is a media superstar right up there with Einstein and Michael Jackson, so when reporters heard about this, the ensuing hoopla overshadowed everything else in the conference. As soon as I arrived, one of the organizers complained to me that they'd had to spend 4000 pounds on a public relations firm to control the reporters and other riff-raff who would try to attend Hawking's talk.

    Indeed, there seemed to be more than the usual number of crackpots floating about, though I admit I haven't been to this particular series of conferences before - perhaps general relativity attracts such people? The public lecture by Penrose on the last day of the conference may have helped lure them in. He spoke on "Fashion, Faith and Fantasy in Theoretical Physics", and people by the door sold copies of his brand new thousand-page blockbuster:

    2) Roger Penrose, The Road To Reality: A Complete Guide to the Physical Universe, Jonathan Cape, 2004.

    (You may enjoy guessing which popular theories he classified under the three categories of fashion, faith and fantasy.) After his talk, *all* the questions were actually harangues from people propounding idiosyncratic theories of their own, and the question period was drawn to an abrupt halt in the middle of one woman's rant about fractal cosmology. But I bumped into the saddest example when I was having a
    chat with some colleagues at a local pub. A fellow with long curly grey locks and round horn-rimmed glasses sat down beside me. I'd seen him around the conference, so I said hello. He asked me if I'd like to hear about his theory; at this point my internal alarm bells started ringing. I told him I was busy, but said I'd take a look at his manuscript later.

    It turned out to describe an idea I'd never even dreamt of before: a heliocentric cosmology in which the planets move along circular orbits with epicycles a la Ptolemy! And his evidence comes from a neolithic British tomb called Newgrange. This tomb may have been aligned to let in the sun on the winter solstice, but some people doubt this, because it seems the alignment would have been slightly off back in 3200 BC when Newgrange was built. However, it's slightly off only if you work out the precession of the equinox using standard astronomy. If you use his theory, it lines up perfectly! Pretty cute. The only problem is that his paper contains no evidence for this claim. Instead, it's only a short note sketching the idea, followed by lengthy attachments containing his correspondence with the Dublin police. In these, he complained that people were trying to block his patent on a refrigerator that produces no waste heat. They were constantly flying airplanes over his house, and playing pranks like boiling water in his teakettle when he was away, trying to drive him insane.

    Anyway, on Wednesday the 21st the whole situation built to a head when Hawking gave his talk in the grand concert hall of the Royal Dublin Society. As we had been warned, the PR firm checked our badges at the door. Reporters with press badges were also allowed in, so the aisles were soon lined with cameras and recording equipment. I got there half an hour early to get a good seat, and while I was waiting, Jenny Hogan from the New Scientist asked if she could interview me for my reaction afterwards. In short, a thoroughly atypical physics talk!
    But you shouldn't imagine the mood as one of breathless anticipation. At least for the physicists present, a better description would be something like "skeptical curiosity". None of them seemed to believe that Hawking could suddenly shed new light on a problem that has been attacked from many angles for several decades. One reason is that Hawking's best work was done almost 30 years ago. A string theorist I know said that thanks to work relating anti-deSitter space and conformal field theory - the so-called "AdS-CFT" hypothesis - string theorists had become convinced that no information is lost by black holes. Thus, Hawking had been feeling strong pressure to fall in line and renounce his previous position, namely that information *is* lost. A talk announcing this would come as no big surprise.

    After a while Kip Thorne, John Preskill, Petros Florides and Hawking's grad student Christophe Galfard came on stage. Then, amid a burst of flashbulbs, Hawking's wheelchair gradually made its way down the aisle and up a ramp, attended by a nurse - possibly his wife, I don't know. He had been recently sick with pneumonia.

    Once Hawking was on stage, the conference organizer Petros Florides made an introduction, joking that while physicists believe no information can travel faster than light, this seems to have been contradicted by the speed with which the announcement of Hawking's talk spread around the globe. Then he recalled the famous bet that Preskill made with Hawking and Thorne. In case you don't know, John Preskill is a leader in quantum computation at Caltech. Kip Thorne is an expert on relativity, also at Caltech, one of the authors of the famous textbook "Gravitation", and now playing a key role in the LIGO project to detect gravitational waves. The bet went like this:

    Whereas Stephen Hawking and Kip Thorne firmly believe that information swallowed by a black hole is forever hidden from the outside universe, and can never be revealed even as the black hole evaporates and completely disappears,

    And whereas John Preskill firmly believes that a mechanism for the information to be released by the evaporating black hole must and will be found in the correct theory of quantum gravity,

    Therefore Preskill offers, and Hawking/Thorne accept, a wager that:

    When an initial pure quantum state undergoes gravitational collapse to form a black hole, the final state at the end of black hole evaporation will always be a pure quantum state.

    The loser(s) will reward the winner(s) with an encyclopedia of the winner's choice, from which information can be recovered at will.

    Stephen W. Hawking, Kip S. Thorne, John P. Preskill
    Pasadena, California, 6 February 1997

    It's signed by Thorne and Preskill, with a thumbprint of Hawking's. After a bit of joking around and an explanation of how the question session would work, Hawking began his talk.
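    Before the talk itself, it may help to spell out what the bet's "pure quantum state" condition means. The following is a standard quantum-mechanics sketch added here for orientation; the notation is mine, not Hawking's or Baez's:

```latex
% A pure state has density matrix \rho = |\psi\rangle\langle\psi|,
% so \operatorname{Tr}(\rho^2) = 1. Unitary evolution preserves purity:
\rho \;\to\; U \rho\, U^{\dagger},
\qquad
\operatorname{Tr}\!\left[(U \rho\, U^{\dagger})^{2}\right]
  = \operatorname{Tr}\!\left(\rho^{2}\right) = 1 .
% Information loss would instead take a pure state to a mixed one,
%   \rho \to \sum_i p_i |\psi_i\rangle\langle\psi_i|
% with \operatorname{Tr}(\rho^2) < 1, which no unitary map can do.
% That is exactly what the wager turns on.
```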
    Since it's fairly short and not too easy to summarize, I think I'll just quote the whole transcript, which I believe Sean Carroll got from the New York Times science reporter Dennis Overbye. I've made a few small corrections. There were also some slides, but you're not missing a lot by not seeing them. The talk was not easy to understand, so unless quantum gravity is your specialty you may feel like just skimming it to get the flavor, and then reading my attempt at a summary. The talk began with Hawking's trademark introduction, uttered as usual in his computer-generated voice:

    Can you hear me?

    I want to report that I think I have solved a major problem in theoretical physics, that has been around since I discovered that black holes radiate thermally, thirty years ago. The question is, is information lost in black hole evaporation? If it is, the evolution is not unitary, and pure quantum states decay into mixed states.

    I'm grateful to my graduate student Christophe Galfard for help in preparing this talk.

    The black hole information paradox started in 1967, when Werner Israel showed that the Schwarzschild metric was the only static vacuum black hole solution. This was then generalized to the no hair theorem: the only stationary rotating black hole solutions of the Einstein-Maxwell equations are the Kerr-Newman metrics. The no hair theorem implied that all information about the collapsing body was lost from the outside region apart from three conserved quantities: the mass, the angular momentum, and the electric charge.

    This loss of information wasn't a problem in the classical theory. A classical black hole would last for ever, and the information could be thought of as preserved inside it, but just not very accessible. However, the situation changed when I discovered that quantum effects would cause a black hole to radiate at a steady rate. At least in the approximation I was using, the radiation from the black hole would be completely thermal, and would carry no information. So what would happen to all that information locked inside a black hole that evaporated away and disappeared completely? It seemed the only way the information could come out would be if the radiation was not exactly thermal, but had subtle correlations. No one has found a mechanism to produce correlations, but most physicists believe one must exist. If information were lost in black holes, pure quantum states would decay into mixed states, and quantum gravity wouldn't be unitary.

    I first raised the question of information loss in '75, and the argument continued for years, without any resolution either way. Finally, it was claimed that the issue was settled in favour of conservation of information, by AdS/CFT. AdS/CFT is a conjectured duality between supergravity in anti-deSitter space and a conformal field theory on the boundary of anti-deSitter space at infinity. Since the conformal field theory is manifestly unitary, the argument is that supergravity must be information preserving. Any information that falls in a black hole in anti-deSitter space must come out again. But it still wasn't clear how information could get out of a black hole. It is this question I will address.

    Black hole formation and evaporation can be thought of as a scattering process. One sends in particles and radiation from infinity, and measures what comes back out to infinity. All measurements are made at infinity, where fields are weak, and one never probes the strong field region in the middle. So one can't be sure a black hole forms, no matter how certain it might be in classical theory. I shall show that this possibility allows information to be preserved and to be returned to infinity.

    I adopt the
    Euclidean approach, the only sane way to do quantum gravity non-perturbatively. [He grinned at this point.] In this, the time evolution of an initial state is given by a path integral over all positive definite metrics that go between two surfaces that are a distance T apart at infinity. One then Wick rotates the time interval, T, to the Lorentzian.

    The path integral is taken over metrics of all possible topologies that fit in between the surfaces. There is the trivial topology: the initial surface cross the time interval. Then there are the nontrivial topologies: all the other possible topologies. The trivial topology can be foliated by a family of surfaces of constant time. The path integral over all metrics with trivial topology can be treated canonically by time slicing. In other words, the time evolution (including gravity) will be generated by a Hamiltonian. This will give a unitary mapping from the initial surface to the final.

    The nontrivial topologies cannot be foliated by a family of surfaces of constant time. There will be a fixed point in any time evolution vector field on a nontrivial topology. A fixed point in the Euclidean regime corresponds to a horizon in the Lorentzian. A small change in the state on the initial surface would propagate as a linear wave on the background of each metric in the path integral. If the background contained a horizon, the wave would fall through it, and would decay exponentially at late time outside the horizon. For example, correlation functions decay exponentially in black hole metrics. This means the path integral over all topologically nontrivial metrics will be independent of the state on the initial surface. It will not add to the amplitude to go from initial state to final that comes from the path integral over all topologically trivial metrics. So the mapping from initial to final states, given by the path integral over all metrics, will be unitary.

    One might question the use in this argument of the concept of a quantum state for the gravitational field on an initial or final spacelike surface. This would be a functional of the geometries of spacelike surfaces, which is not something that can be measured in weak fields near infinity. One can measure the weak gravitational fields on a timelike tube around the system, but the caps at top and bottom go through the interior of the system, where the fields may be strong.

    One way of getting rid of the difficulties of caps would be to join the final surface back to the initial surface, and integrate over all spatial geometries of the join. If this was an identification under a Lorentzian time interval, T, at infinity, it would introduce closed timelike curves. But if the interval at infinity is the Euclidean distance, beta, the path integral gives the partition function for gravity at temperature 1/beta.

    The partition function of a system is the
    trace over all states, weighted with e^{-beta H}. One can then integrate beta along a contour parallel to the imaginary axis with the factor e^{-beta E_0}. This projects out the states with energy E_0. In a gravitational collapse and evaporation, one is interested in states of definite energy, rather than states of definite temperature.

    There is an infrared problem with this idea for asymptotically flat space. The Euclidean path integral with period beta is the partition function for space at temperature 1/beta. The partition function is infinite because the volume of space is infinite. This infrared problem can be solved by a small negative cosmological constant. It will not affect the evaporation of a small black hole, but it will change infinity to anti-deSitter space, and make the thermal partition function finite.

    The boundary at infinity is then a torus, S^1 cross S^2. The trivial topology, periodically identified anti-deSitter space, fills in the torus, but so also do nontrivial topologies, the best known of which is Schwarzschild anti-deSitter. Providing that the temperature is small compared to the Hawking-Page temperature, the path integral over all topologically trivial metrics represents self-gravitating radiation in asymptotically anti-deSitter space. The path integral over all metrics of Schwarzschild AdS topology represents a black hole and thermal radiation in asymptotically anti-deSitter.

    The boundary at infinity has topology S^1 cross S^2. The simplest topology that fits inside that boundary is the trivial topology, S^1 cross D^3, the three-disk. The next simplest topology, and the first nontrivial topology, is S^2 cross D^2. This is the topology of the Schwarzschild anti-deSitter metric. There are other possible topologies that fit inside the boundary, but these two are the important cases: topologically trivial metrics and the black hole. The black hole is eternal. It cannot become topologically trivial at late times.

    In view of this, one can understand why information is preserved in topologically trivial metrics, but exponentially decays in topologically nontrivial metrics. A final state of empty space without a black hole would be topologically trivial, and be foliated by surfaces of constant time. These would form a 3-cycle modulo the boundary at infinity. Any global symmetry would lead to conserved global charges on that 3-cycle. These would prevent correlation functions from decaying exponentially in topologically trivial metrics. Indeed, one can regard the unitary Hamiltonian evolution of a topologically trivial metric as the conservation of information through a 3-cycle.

    On the other hand, a nontrivial topology, like a black hole, will not have a final 3-cycle. It will not therefore have any conserved quantity that will prevent correlation functions from exponentially decaying. One is thus led to the remarkable result that late time amplitudes of the path integral over a topologically nontrivial metric are independent of the initial state. This was noticed by Maldacena in the case of asymptotically anti-deSitter in 3d, and interpreted as implying that information is lost in the BTZ black hole metric. Maldacena was able to show that topologically trivial metrics have correlation functions that do not decay, and have amplitudes of the right order to be compatible with a unitary evolution. Maldacena did not realize, however, that it follows from a canonical treatment that the evolution of a topologically trivial metric will be unitary.

    So in the end,
    everyone was right, in a way. Information is lost in topologically nontrivial metrics, like the eternal black hole. On the other hand, information is preserved in topologically trivial metrics. The confusion and paradox arose because people thought classically, in terms of a single topology for spacetime. It was either R^4, or a black hole. But the Feynman sum over histories allows it to be both at once. One cannot tell which topology contributed the observation, any more than one can tell which slit the electron went through, in the two slits experiment. All that observation at infinity can determine is that there is a unitary mapping from initial states to final, and that information is not lost.

    My work with Hartle showed the radiation could be thought of as tunnelling out from inside the black hole. It was therefore not unreasonable to suppose that it could carry information out of the black hole. This explains how a black hole can form, and then give out the information about what is inside it, while remaining topologically trivial. There is no baby universe branching off, as I once thought. The information remains firmly in our universe. I'm sorry to disappoint science fiction fans, but if information is preserved, there is no possibility of using black holes to travel to other universes. If you jump into a black hole, your mass-energy will be returned to our universe, but in a mangled form, which contains the information about what you were like, but in an unrecognisable state.

    There is a problem describing what happens, because strictly speaking the only observables in quantum gravity are the values of the field at infinity. One cannot define the field at some point in the middle, because there is quantum uncertainty in where the measurement is done. However, in cases in which there are a large number, N, of light matter fields coupled to gravity, one can neglect the gravitational fluctuations, because they are only one among N quantum loops. One can then do the path integral over all matter fields, in a given metric, to obtain the effective action, which will be a functional of the metric.

    One can add the classical Einstein-Hilbert action of the metric to this quantum effective action of the matter fields. If one integrated this combined action over all metrics, one would obtain the full quantum theory. However, the semiclassical approximation is to represent the integral over metrics by its saddle point. This will obey the Einstein equations, where the source is the expectation value of the energy momentum tensor of the matter fields in their vacuum state.

    The only way to calculate the effective action of the matter fields used to be perturbation theory. This is not likely to work in the case of gravitational collapse. However, fortunately we now have a non-perturbative method in AdS/CFT. The Maldacena conjecture says that the effective action of a CFT on a background metric is equal to the supergravity effective action of anti-deSitter space with that background metric at infinity. In the large N limit, the supergravity effective action is just the classical action. Thus the calculation of the quantum effective action of the
    matter fields is equivalent to solving the classical Einstein equations.

    The action of an anti-deSitter-like space with a boundary at infinity would be infinite, so one has to regularize. One introduces subtractions that depend only on the metric of the boundary. The first counter-term is proportional to the volume of the boundary. The second counter-term is proportional to the Einstein-Hilbert action of the boundary. There is a third counter-term, but it is not covariantly defined. One now adds the Einstein-Hilbert action of the boundary and looks for a saddle point of the total action. This will involve solving the coupled four- and five-dimensional Einstein equations. It will probably have to be done numerically.

    In this talk, I have argued that quantum gravity is unitary, and information is preserved in black hole formation and evaporation. I assume the evolution is given by a Euclidean path integral over metrics of all topologies. The integral over topologically trivial metrics can be done by dividing the time interval into thin slices and using a linear interpolation to the metric in each slice. The integral over each slice will be unitary, and so the whole path integral will be unitary.

    On the other hand, the path integral over topologically nontrivial metrics will lose information, and will be asymptotically independent of its initial conditions. Thus the total path integral will be unitary, and quantum mechanics is safe.

    It is great to solve a problem that has been troubling me for nearly thirty years, even though the answer is less exciting than the alternative I suggested. This result is not all negative, however, because it indicates that a black hole evaporates, while remaining topologically trivial. However, the large N solution is likely to be a black hole that shrinks to zero. This is what I suggested in 1975.

    In 1997, Kip Thorne and I bet John Preskill that information was lost in black holes. The loser or losers of the bet are to provide the winner or winners with an encyclopaedia of their own choice, from which information can be recovered with ease. I'm now ready to concede the bet, but Kip Thorne isn't convinced just yet. I will give John Preskill the encyclopaedia he has requested. John is all-American, so naturally he wants an encyclopaedia of baseball. I had great difficulty in finding one over here, so I offered him an encyclopaedia of cricket as an alternative, but John wouldn't be persuaded of the superiority of cricket. Fortunately, my assistant, Andrew Dunn, persuaded the publishers Sportclassic Books to fly a copy of "Total Baseball: The Ultimate Baseball Encyclopedia" to Dublin. I will give John the encyclopaedia now. If Kip agrees to concede the bet later, he can pay me back.

    After this, Kip Thorne ran a question and answer period, saying that he would alternate between questions from conference participants, which Hawking's grad student would answer, and questions from the press, which Hawking would answer - after Thorne checked Hawking's facial expressions to see whether he felt they were worth answering. First, a correspondent from the BBC asked Stephen Hawking what the significance of this result was for "life, the universe and everything". (Here I'm using John Preskill's humorous paraphrase.) Hawking
    agreed to answer this, and while he began laboriously composing a reply using the computer system on his wheelchair, his grad student Christophe Galfard fielded three questions from experts: Bill Unruh, Gary Horowitz and Robb Mann. I didn't find the replies terribly illuminating, except that when asked if information would be lost if we kept feeding the black hole matter to keep it from evaporating away, Galfard said "yes". Everyone afterwards commented on what a tough job it would be for a student to field questions in front of about 800 physicists and the international press.

    At this point Kip Thorne checked to see if Hawking was done composing his reply. He was not. To fill time, Thorne explained why he hadn't yet conceded the bet, saying that while the talk seemed convincing to him, he still wanted to see the details. He explained to the reporters a bit about how science was done: we don't just listen to Hawking and take his word for everything, we have to go off and calculate things ourselves. He told a nice story about how when Hawking first showed that black holes radiate, everyone with their own approach to quantum field theory on curved spacetime needed to redo this calculation their own way to be convinced - with Yakov Zeldovich, who'd gotten the game started by showing that energy could be extracted from *rotating* black holes in the form of radiation, being one of the very last to agree! Preskill chimed in by saying "I'll be honest - I didn't understand the talk", adding that he would need to see more details.

    After a bit more of this sort of thing, Hawking was ready to answer the BBC reporter's question. His answer was surprisingly short, and it went something like this (I can't find an exact quote): "This result shows that everything in the universe is governed by the laws of physics." A suitably grandiose answer for a grandiose question! One can imagine much better explanations of unitarity, but not very quick ones.

    At this point Kip Thorne solicited more questions from the press but said they should confine themselves to questions with yes-or-no answers. Jenny Hogan got off the first one, asking what Hawking would do now that he's solved this problem. Kip Thorne pointed out that this was not a yes-or-no question, but in the midst of the ensuing conversation Hawking shot off an unexpectedly rapid reply: "I don't know." Everyone laughed, and at this point the public question period was called to a close, though reporters were allowed to stay and pester Hawking some more.

    At the time Hawking's talk seemed very cryptic to me, but in the process of editing the above transcript it's become a lot clearer, so I'll try to give a quick explanation. I should start by saying that the jargon used in this talk, while doubtless obscure to most people, is actually quite standard and not very difficult to anyone who has spent some time studying the Euclidean path integral approach to quantum gravity. The problem is not the jargon so much as the lack of detail, which requires some imagination to fill in. When I first heard the talk, this lack of detail had me completely stumped. But now it makes a little more sense....

    He's studying the process of creating a black hole and letting it evaporate away. He's imagining studying this in the usual style of particle physics, as a "scattering experiment", where we throw in a bunch of particles and see what comes out. Here we throw in a bunch of particles, let them form a black hole, let the black hole evaporate away, and examine the particles (typically photons for
    the most part) that shoot out. The rules of the game in a "scattering
    experiment" are that we can only talk about what's going on "at infinity",
    meaning very far from where the black hole forms - or more precisely, where
    it mayor may not form! The advantage of this is that physics at infinity can
    be described without the full machinery of quantum gravity: we don't have to
    worry about quantum fluctuations of the geometry of spacetime messing up our
    ability to say where things are. The disadvantage is that we can't actually
    say for sure whether or not a black hole formed. Atleast this *seems* like a
    "disadvantage" at first - but a better term for it might be a "subtlety",
    since it's crucial for resolving the puzzle:Black hole formation and
    evaporation can be thought of as ascattering process. One sends in particles
    and radiation frominfinity, and measures what comes back out to infinity.
    Allmeasurements are made at infinity, where fields are weak, and onenever
    probes the strong field region in the middle. So one can't be sure a black
    hole forms, no matter how certain it might be inclassical theory. I shall
    show that this possibility allowsinformation to be preserved and to be
    returned to infinity.Now, the way Hawking likes to calculate things in this
    sort of problem is using a "Euclidean path integral". This is a rather
    controversial approach - hence his grin when he said it's the "only sane
    way" to do these calculations - but let's not worry about that. Suffice it
    to say that we replace the time variable "T" in all our calculations by
    "iT", do a bunch of calculations, and then replace "iT" by "T" again at
    the end. This trick is called "Wick rotation". In the middle of this
    process, we hope all our formulas involving the geometry of 4d *spacetime*
    have magically become formulas involving the geometry of 4d *space*. The
    answers to physical questions are then expressed as integrals over all
    geometries of 4d space that satisfy some conditions depending on the
    problem we're studying. This integral over geometries also includes a sum
    over topologies. That's what Hawking means by this:

    I adopt the Euclidean approach, the only sane way to do quantum gravity
    non-perturbatively. In this, the time evolution of an initial state is
    given by a path integral over all positive definite metrics that go between
    two surfaces that are a distance T apart at infinity. One then Wick rotates
    the time interval, T, to the Lorentzian. The path integral is taken over
    metrics of all possible topologies that fit in between the surfaces.

    Unfortunately, nobody knows how to define these integrals. However,
    physicists like Hawking are usually content to compute them in a
    "semiclassical approximation". This means integrating not over all
    geometries, but only those that are close to some solution of the
    classical equations of general relativity. We can then do a clever
    approximation to get a closed-form answer. (Nota bene: here I'm talking
    about the equations of general relativity on 4d *space*, not 4d spacetime.
    That's because we're in the middle of this Wick rotation trick.)

    Actually, I'm
    oversimplifying a bit. We don't get "the answer" to our physics question
    this way: we get one answer for each solution of the equations of general
    relativity that we deem relevant to the problem at hand. To finish the
    job, we should add up all these partial answers to get the total answer.
    But in practice this last step is always too hard: there are too many
    topologies, and too many classical solutions, to keep track of them all.
    So what do we do?
    We just add up a few of the answers, cross our fingers, and hope for the
    best! If this procedure offends you, go do something easy like math.

    In the problem at hand here, Hawking focuses on two classical solutions,
    or more precisely two classes of them. One describes a spacetime with no
    black hole, the other describes a spacetime with a black hole which lasts
    forever. Each one gives a contribution to the semiclassical approximation
    of the integral over all geometries. To get answers to physical questions,
    he needs to sum over *both*. In principle he should sum over infinitely
    many others, too, but nobody knows how, so he's probably hoping the crux
    of the problem can be understood by considering just these two.

    He says that if you just do the integral over geometries near the
    classical solution where there's no black hole, you'll find -
    unsurprisingly - that no information is lost as time passes.

    He also says that if you do the integral over geometries near the
    classical solution where there is a black hole, you'll find -
    surprisingly - that the answer is *zero* for a lot of questions you can
    measure the answers to far from the black hole. In physics jargon, this is
    because a bunch of "correlation functions decay exponentially". So, when
    you add up both answers to see if information is lost in the real problem,
    where you can't be sure if there's a black hole or not, you get the same
    answer as if there were no black hole! So in the end, everyone was right,
    in a way.

    Information is lost in topologically nontrivial metrics, like the eternal
    black hole. On the other hand, information is preserved in topologically
    trivial metrics. The confusion and paradox arose because people thought
    classically, in terms of a single topology for spacetime. It was either
    R^4, or a black hole. But the Feynman sum over histories allows it to be
    both at once. One cannot tell which topology contributed the observation,
    any more than one can tell which slit the electron went through, in the
    two-slit experiment. All that observation at infinity can determine is
    that there is a unitary mapping from initial states to final, and that
    information is not lost.

    The mysterious part is why
    the geometries near the classical solution where there's a black hole
    don't contribute at all to information loss, even though they do
    contribute to other important things, like the Hawking radiation. Here I'd
    need to see an actual calculation. Hawking gives a nice hand-wavy
    topological argument, but that's not enough for me.

    Since this issue is long enough already and I want to get it out soon, I
    won't talk about other things that happened at this conference - nor will
    I talk about the conference on n-categories earlier this summer! I just
    want to say a few elementary things about the topology lurking in
    Hawking's talk... since some mathematicians may enjoy it. As he
    points out, the answers to a bunch of questions diverge unless we put our
    black hole in a box of finite size. A convenient way to do this is to
    introduce a small negative cosmological constant, which changes our
    default picture of spacetime from Minkowski spacetime, which is
    topologically R^4, to anti-deSitter spacetime, which is topologically
    R x D^3 after we add the "boundary at infinity". Here R is time and the
    3-disk D^3 is space. This is a Lorentzian manifold with boundary, but when
    we do Wick rotation we get a Riemannian manifold with boundary having the
    same topology. However, when we are doing Euclidean path integrals at
    nonzero temperature, we should replace the time line R here by a circle
    whose radius is the reciprocal of that temperature. (Take my word for it!)
    So now our Riemannian manifold with boundary is S^1 x D^3. This is what
    Hawking uses to handle the geometries without a black hole.

    The boundary of this manifold is S^1 x S^2. But there's another obvious
    manifold with this boundary, namely D^2 x S^2. And this corresponds to the
    geometries with a black hole! This is cute because we see it all the time
    in surgery theory. In fact I commented on Hawking's use of this idea a
    long time ago, in "week67". In his talk, Hawking points out that S^1 x D^3
    has a nontrivial 3-cycle in it if we use relative cohomology and work
    relative to the boundary S^1 x S^2. But, D^2 x S^2 does not. When
    spacetime is n-dimensional, conservation laws usually come from
    integrating closed (n-1)-forms over cycles that correspond to "space", so
    we get interesting conservation laws when there are nontrivial
    (n-1)-cycles. Here Hawking is using this to argue for conservation of
    information when there's no black hole - namely for S^1 x D^3 - but not
    when there is, namely for D^2 x S^2. All this is fine and dandy; the hard
    part is to see why the case when there *is* a black hole doesn't screw
    things up! This is where his allusions to "exponentially decaying
    correlation functions" come in - and this is where I'd like to see more
    details.

    I guess a good place to start is Maldacena's papers on the black hole in
    3d spacetime - the so-called Banados-Teitelboim-Zanelli or "BTZ" black
    hole. This is a baby version of the problem, one dimension down from the
    real thing, where everything should get much simpler. For the original BTZ
    paper, try:

    3) Maximo Banados, Marc Henneaux, Claudio Teitelboim, and Jorge Zanelli,
    Geometry of the 2+1 black hole, available as gr-qc/9302012.

    Maldacena's papers can also be found on the physics arXiv, but I'm not
    sure which one Hawking is referring to, so I'll wait until someone tells
    me before adding a link to that one. Sometime I will also add links to a
    bunch of photos taken at this conference - including photos of the plaque
    under the bridge where Hamilton wrote his defining relations for the
    quaternions!

    -----------------------------------------------------------------------

    Previous issues of "This Week's Finds" and other expository articles on
    mathematics and physics, as well as some of my research papers, can be
    obtained at http://math.ucr.edu/home/baez/

    For a table of contents of all the issues of This Week's Finds, try
    http://math.ucr.edu/home/baez/twf.html

    A simple jumping-off point to the old issues is available at
    http://math.ucr.edu/home/baez/twfshort.html

    If you just want the latest issue, go to
    http://math.ucr.edu/home/baez/this.week.html


    posted by Kirk Gregory Czuhai @ 7:33 PM 0 comments

    Saturday, August 14, 2004
    dumb polack?
    Son comes to his father.
    "Dad," he asks, "do you know where Poland is?"
    "I don't know son, but it couldn't be far away,
    'cos we have that dumb Polack at work, and he once said
    that it only takes him 15 minutes to get to work."
    --------------
    How can one tell a good Thai restaurant?
    Thai people eat there.
    How can one tell a good French restaurant?
    French people eat there.
    How can one tell a good Japanese restaurant?
    Japanese people eat there.
    How can one tell a really bad Polish restaurant?
    Americans eat there.
    2nd punchline:
    Is it really bad because they eat there,
    or do they eat there because it's really bad?
    --------------
    enjoy...
    peace and love,
    (kirk) kirk gregory czuhai


    posted by Kirk Gregory Czuhai @ 9:14 PM 0 comments

    physics PHUN 4u !!!
    -------------------------
    A theoretical physics FAQ
    -------------------------
    http://www.mat.univie.ac.at/~neum/physics-faq.txt

    Here are answers to some frequently asked questions from theoretical
    physics. They were collected from my answers to postings to the newsgroup
    sci.physics.research. Of course they refer only to a tiny part of
    theoretical physics, and they are only as good as my understanding of the
    matter. This doesn't mean that they are poor... But if you have
    suggestions for improvements, please write me at
    If you have questions, please post them to the newsgroup
    sci.physics.research (http://www.lns.cornell.edu/spr)!

    Happy Reading!
    Arnold Neumaier
    University of Vienna
    http://www.mat.univie.ac.at/~neum/

    Abbreviations:
    QM = quantum mechanics.
    QFT = quantum field theory.
    QED = quantum electrodynamics.
    s.p.r = sci.physics.research (newsgroup).
    Strings like quant-ph/0303047 refer to electronic documents in the
    e-Print archive (see http://xxx.lanl.gov/).

    -----------------
    Table of Contents
    -----------------
    (The labels may change with time as answers to further questions will be
    added. So, to quote part of the FAQ, refer to the title of a section and
    not to its label.)

    1a. Are electrons pointlike/structureless?
    1b. What are 'bare' and 'dressed' particles?
    1c. How meaningful are single Feynman diagrams?
    1d. How real are 'virtual particles'?
    1e. What is the meaning of 'on-shell' and 'off-shell'?
    1f. Virtual particles and Coulomb interaction
    1g. Are virtual particles and decaying particles (resonances) the same?
    1h. Can particles go backward in time?
    1i. What about particles faster than light (tachyons)?
    2a. Summing divergent series
    2b. Nonperturbative computations in QFT
    2c. Functional integrals, Wightman functions, and rigorous QFT
    2d. Is there a rigorous interacting QFT in 4 dimensions?
    2e. Is QED consistent?
    2f. Bound states in relativistic QFT
    2g. Why bother about rigor in physics?
    2h. Why normal ordering?
    3a. Is there a multiparticle relativistic quantum mechanics?
    3b. Localization and position operators
    3c. Representations of the Poincare group, spin and gauge invariance
    4a. A concise formulation of the measurement problem of QM
    4b. The double slit experiment
    4c. The Stern-Gerlach experiment
    4d. The minimal interpretation
    4e. The preferred basis problem
    4f. Does decoherence solve the measurement problem?
    4g. Which interpretation of quantum mechanics is most consistent?
    4h. What about relativistic measurement theory?
    5a. Random numbers in probability theory
    5b. How meaningful are probabilities of single events?
    5c. How do probabilities apply in practice?
    5d. Priors and entropy in probability theory
    6a. What are bras and kets?
    6b. What is the meaning of the entries of a density matrix?
    7a. What is the tetrad formalism?
    7b. Energy in general relativity
    7c. Difficulties in quantizing gravity
    7d. Is quantum mechanics compatible with general relativity?
    7e. Why do gravitons have spin 2?
    8a. Theoretical challenges close to experimental data
    98a. Background needed for theoretical physics
    99a. Acknowledgments

    --------------------------------------
    Are electrons pointlike/structureless?
    --------------------------------------
    Both
    electrons and neutrinos are considered to be pointlike as bare particles,
    because of the way they appear in the standard model. But physical,
    relativistic particles are not pointlike. An intuitive argument for this
    is the fact that their localization to a region significantly smaller than
    the de Broglie wavelength would need energies larger than that needed to
    create particle-antiparticle pairs, which changes the nature of the
    system. (See also this FAQ about localization, and Foldy's papers quoted
    there.)

    On a more formal, quantitative level, the physical, dressed particles have
    nontrivial form factors, due to the renormalization necessary to give
    finite results in QFT. Nontrivial form factors give rise to a positive
    charge radius. In his book

    S. Weinberg, The quantum theory of fields, Vol. I, Cambridge University
    Press, 1995,

    Weinberg defines and explicitly computes in (11.3.33) the 'charge radius'
    of a physical electron. But his formula is not fully satisfying since it
    is not fully renormalized (infrared divergence: the expression contains a
    fictitious photon mass, and diverges if this goes to zero). (28) in
    hep-ph/0002158 = Physics Reports 342, 63-26 (2001) handles this using a
    binding energy dependent cutoff, which makes the electron charge radius
    depend on its surroundings. The paper

    L.L. Foldy, Neutron-electron interaction, Rev. Mod. Phys. 30, 471-481
    (1958)

    discusses the extendedness of the electron in a phenomenological way. On
    the numerical side, I only found values for the charge radius of the
    neutrinos, computed from the standard model to 1-loop order. The values
    are about 4-6 x 10^-14 cm for the three neutrino species. See (7.12) in
    Phys. Rev. D 62, 113012 (2000).
    http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1992PhDT.......130L gives
    in an abstract of a 1982 thesis of Anzhi Lai an electron charge radius of
    ~ 10^{-16} cm (But I haven't seen the thesis.)

    The "form" of an elementary
    particle is described by its form factor, which is a well-defined physical
    function (though at present computable only in perturbation theory)
    describing how the (spin 0, 1/2, or 1) particle's response to an external
    classical electromagnetic field deviates from the Klein-Gordon, Dirac, or
    Maxwell equations, respectively. In Foldy's paper, the form factors are
    encoded in the infinite sum in (16). The sum is usually considered in the
    momentum domain; then one simply gets two k-dependent form factors, where k
    represents the 4-momentum transferred in the interaction. These form factors
    can be calculated in a good approximation perturbatively from QFT, see for
    example Peskin and Schroeder's book. An extensive discussion of form factors
    of Dirac particles and their relation to the radial density function is in
    D. R. Yennie, M. M. Levy and D. G. Ravenhall, Electromagnetic Structure of
    Nucleons, Rev. Mod. Phys. 29, 144-157 (1957)

    and

    R. G. Sachs, High-Energy Behavior of Nucleon Electromagnetic Form Factors,
    Phys. Rev. 126, 2256-2260 (1962).

    For proton and neutron form factors, see hep-ph/0204239 and
    hep-ph/0303054.

    ----------------------------------------
    What are 'bare' and 'dressed' particles?
    ----------------------------------------
    A bare electron is the formal entity discussed in textbooks when they do
    perturbative quantum electrodynamics. The intuitive picture generally
    given is that a bare electron is surrounded by a cloud of virtual photons
    and virtual electron-positron pairs to make up a physical, 'dressed'
    electron.
    Only the latter is real and observable. The former is a formal caricature
    of the latter, with paradoxical properties (infinite mass, etc.). On a
    more substantial level, the observable electrons are produced from the
    bare electrons by a process called renormalization, which modifies the
    propagators by self-energy terms and the currents by form factors. As the
    name says, the latter define the 'form' of a particle. (In the above
    picture, it would correspond to the shape of the virtual cloud, though it
    is better to avoid giving the virtual particles too much meaning.) The
    dressed object is the renormalized, physical object, described
    perturbatively as the bare object 'clothed' by the cloud of virtual
    particles. The dressed interaction is the 'screened' physical interaction
    between these dressed objects.

    To draw an analogy in nonrelativistic QM, think of nuclei as bare atoms,
    electrons as virtual particles, atoms as dressed nuclei, and the residual
    interaction between atoms, computed in the Born-Oppenheimer approximation,
    as the dressed interaction. Thus, for Argon atoms, the dressed interaction
    is something close to a Lennard-Jones potential, while the bare
    interaction is Coulomb repulsion. This is the situation physicists had in
    mind when they invented the notions of bare and dressed particles. Of
    course, it is only an analogy, and should not be taken very seriously. It
    just explains the intuition about the terminology used.
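    The Argon analogy is easy to play with numerically. The sketch below uses
    purely illustrative parameters in reduced units (nothing is fitted to real
    Argon): the "bare" Coulomb repulsion is monotone, while the "dressed"
    Lennard-Jones potential has the attractive well that lets two atoms bind:

```python
# "Bare" vs "dressed" interaction in the Argon analogy.
# All parameters are illustrative only (reduced units, not real Argon).

def coulomb(r, q=18.0):
    # bare interaction: Coulomb repulsion of two charge-18 nuclei;
    # monotone and purely repulsive at every separation r
    return q * q / r

def lennard_jones(r, eps=1.0, sigma=1.0):
    # dressed interaction: effective atom-atom potential with
    # short-range repulsion and a long-range attractive tail
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

# The dressed potential has a minimum at r = 2^(1/6) * sigma with depth
# -eps (a bound dimer) -- something bare repulsion alone can never give.
r_min = 2.0 ** (1.0 / 6.0)
print(coulomb(r_min), lennard_jones(r_min))
```

    The crude numbers don't matter; the point of the analogy is that dressing
    can turn a purely repulsive bare interaction into an effective interaction
    with bound states.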
    The electrons in QM are real, physical electrons that can be isolated. The
    reason is that they are good eigenstates of the Hamiltonian. On the other
    hand, virtual particles don't have this nice attribute since the
    relativistic Hamiltonian H from field theory contains creation and
    annihilation operators which mess things up. The bare particles correspond
    to 1-particle states in the Hilbert space (though that is not quite true
    since there is no good Hilbert space picture in conventional interacting
    QFT). Multiplying them with H introduces terms with other particle numbers,
    hence a bare particle can never be an eigenstate of H, and thus never be
    observable in the way a nonrelativistic particle is. The eigenstates of the
    relativistic Hamiltonian are, instead, complicated multibody states
    consisting of a superposition of states with any number of particles and
    antiparticles, just subject to the restriction that the total quantum
    numbers come out right. These are the dressed particles.

    -------------------------------------------
    How meaningful are single Feynman diagrams?
    -------------------------------------------
    The
    standard model is a theory defined in terms of a Lagrangian. To get
    computable output, Feynman graph techniques are used. But individual Feynman
    graphs are meaningless (often infinite); only the sum of all terms of a
    given order can be given - after a process called renormalization - a
    well-defined (finite) meaning. This is well-known; so no-one treats the
    Feynman graphs as real. What is taken as real is the final outcome of the
    calculations, which can be compared with measurements.

    ---------------------------------
    How real are 'virtual particles'?
    ---------------------------------
    All language is only an
    approximation to reality, which simply is. But to do science we need to
    classify the aspects of reality that appear to have more permanence, and
    consider them as real. Nevertheless, all concepts, including 'real' have a
    fuzziness about them, unless they are phrased in terms of rigorous
    mathematical models (in which case they don't apply to reality itself but
    only to a model of reality).

    In the informal way I use the notion, 'real' in theoretical physics means
    a concept or object that
    - is independent of the computational scheme used to extract information
      from a theory,
    - has a reasonably well-defined and consistent formal basis, and
    - does not give rise to misleading intuition.

    This does not give a clear definition of real, of course. But it makes
    charge distributions and inputs and outputs of (theoretical models of)
    scattering experiments something real, while making bare particles and
    virtual particles artifacts of perturbation theory.
    'Real' real particles are slightly different from 'mathematical' real
    particles. Note that whenever we observe a system we make a number of
    idealizations that serve to identify the objects in reality with the
    mathematical concepts we are using to describe them. Then we calculate
    something, and at the end we retranslate it into reality. If our initial
    idealization was good enough and our theory is good enough, the final
    result will match reality well. Modern QED and other field theories are
    based on the theory developed for modeling scattering events. Now scattering
    events take a very short time compared to the lifetime of the objects
    involved before and after the event. Therefore, we represent a prepared beam
    of particles hitting a target as a single particle hitting another single
    particle, and whenever this in fact happens, we observe end products, e.g.
    in a wire chamber. Strictly speaking (i.e., in a fuller model of reality),
    we'd have to use a multiparticle (statistical mechanics) setting, but this
    is never done since it does not give better information and the added
    complications are formidable. As long as we prepare the particles long
    (compared to the scattering time) before they scatter and observe them long
    enough afterwards, they behave essentially as in and out states,
    respectively. (They are not quite free, because of the electromagnetic
    self-field they generate; this gives rise to the infrared problem in QED
    and can be corrected by using coherent states.) The preparation and
    detection of the particles is outside this model, since it would produce
    only minute corrections to the scattering event. But to treat it would
    require enlarging the system to include source and detector, which makes
    the problem completely different. Therefore at the level appropriate to a
    scattering
    event, the 'real' real particles are modeled by 'mathematical' in/out
    states, which therefore are also called 'real'. On the other hand,
    'mathematical' virtual particles have nothing to do with observations, hence
    have no counterpart in reality; therefore they are called 'virtual'. The
    figurative virtual objects in QFT are there only because of the well-known
    limitations of the foundations of QFT. In a nonperturbative setting they
    wouldn't occur at all. This can be seen by comparing with QM. One could also
    do nonrelativistic QM with virtual objects but no one does so (except
    sometimes in motivations for QFT), because it does not add value to a
    well-understood theory. Virtual particles are an artifice of perturbation
    theory that give an intuitive (but if taken too far, misleading)
    interpretation for Feynman diagrams. More precisely, a virtual photon, say,
    is an internal photon line in one of the Feynman diagrams. But there is
    nothing real associated with it. Detectable photons are always real,
    'dressed' photons. Virtual particles, and the Feynman diagrams they appear
    in, are just a visual tool for keeping track of the different terms in a
    formal expansion of scattering amplitudes into multi-dimensional integrals
    involving multiple Green's functions - the virtual particle momenta
    represent the integration variables. They have no meaning at all outside
    these integrals. Thus they cease to exist mathematically once one changes
    the formula for computing a scattering amplitude. Therefore virtual
    particles are somehow analogous to virtual integers k obtained by computing
    log(1-x) = - sum_k x^k/k by expansion into a Taylor series. Since we can
    compute the logarithm in many other ways, it is ridiculous to attach to k
    any intrinsic meaning. But ... ... in QFT, we have no good ways to compute
    scattering amplitudes without at least some form of expansion (unless we use
    the lowest order of some approximation method), which makes virtual
    particles look a little more real. But the analogy to the Taylor series
    shows that it's best not to look at them that way. (For a very informal view
    of QED in terms of clouds of virtual particles see
    http://groups.google.com/groups?selm= and
    the later mails in this thread.) A sign of the irreality of virtual
    particles is the fact that when you do partial resummations of diagrams,
    many of the virtual particles disappear. A fully nonperturbative theory
    would sum everything, and no virtual particles would be present anymore.
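    The log(1-x) analogy from above can be made concrete in a few lines of
    Python: the individual terms x^k/k (the "virtual integers" k) exist only
    inside the chosen expansion, but the sum converges to the same function no
    matter how it is computed:

```python
import math

# log(1-x) = -(x + x^2/2 + x^3/3 + ...): the "virtual integers" k live
# only inside this particular expansion scheme.

def log1m_series(x, terms):
    # partial sum of the Taylor series, truncated after `terms` terms
    return -sum(x ** k / k for k in range(1, terms + 1))

x = 0.5
exact = math.log(1.0 - x)
for n in (5, 20, 50):
    # each partial sum depends on the expansion; the limit does not
    print(n, log1m_series(x, n), exact)
```

    Computing the same logarithm by any other method gives the same number
    with no k's anywhere in sight, which is the whole point of the analogy.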
    Thus virtual particles are entirely a consequence of looking at QFT in a
    perturbative way rather than nonperturbatively. In the standard covariant
    Feynman approach, energy (cp_0) and momentum (\p; the backslash indicates
    'boldface') is conserved, and virtual particles are typically off-shell
    (i.e., they do not satisfy the equation p^2 = p_0^2 - \p^2 = m^2 for
    physical particles). To see this, try to model a vertex in which an electron
    (mass m_e) absorbs a photon (mass 0). One cannot keep the incoming electron
    and photon and the outgoing photon on-shell (satisfying p^2 = m^2) without
    violating the energy-momentum balance. However, when working in light front
    quantization, one keeps all particles on-shell, and instead has energy and
    momentum nonconservation (removed formally by adding an additional
    'spurion'). The effect of this is that the virtual particle structure of the
    theory is changed completely: For example, the physical vacuum and the bare
    vacuum now agree, while in the standard approach, the vacuum looks like a
    highly complicated medium made up from infinitely many bare particles....
    But physical particles must still be dressed, though less heavily than in
    the traditional Feynman approach. Clearly concepts such as virtual particles
    that depend so much on the method of quantization cannot be regarded as
    being real. See also earlier discussions on s.p.r. such as
    http://www.lns.cornell.edu/spr/2003-06/msg0051674.html also
    http://www.lns.cornell.edu/spr/1999-02/msg0014762.html and followups; maybe
    http://www.lns.cornell.edu/spr/2003-05/msg0051023.html is also of interest.
    [For a longwinded alternative view of virtual particles that I do _not_
    share but rather find misleading, see
    http://www.desy.de/user/projects/Physics/Quantum/virtual_particles.html]

    --------------------------------------------------
    What is the meaning of 'on-shell' and 'off-shell'?
    --------------------------------------------------
    This applies
    only to relativistic particles. A particle of mass m is on-shell if its
    momentum p satisfies p^2 (= p_0^2-p_1^2-p_2^2-p_3^2) = m^2, and off-shell
    otherwise. The 'mass shell' is the manifold of momenta p with p^2=m^2.
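    In units with c = 1 this is a one-line check. The toy Python sketch below
    (illustrative numbers, not a real kinematics calculation) also shows the
    vertex obstruction mentioned earlier: an electron that absorbs an on-shell
    photon is forced off its own mass shell:

```python
# Units with c = 1; a 4-momentum is (E, px, py, pz), and "on-shell"
# means E^2 - |p|^2 = m^2.  Toy numbers, not a real kinematics fit.

def p_squared(p):
    E, px, py, pz = p
    return E * E - (px * px + py * py + pz * pz)

m_e = 0.511  # electron mass in MeV

electron = (m_e, 0.0, 0.0, 0.0)   # electron at rest: on-shell
photon = (1.0, 1.0, 0.0, 0.0)     # 1 MeV photon along x: on-shell, massless

# Energy-momentum conservation at the absorption vertex forces the
# outgoing electron to carry the summed 4-momentum...
out = tuple(a + b for a, b in zip(electron, photon))

# ...which does NOT satisfy p^2 = m_e^2: the outgoing line is off-shell,
# i.e. "virtual".
print(p_squared(electron), p_squared(photon), p_squared(out), m_e ** 2)
```

    So at least one line at such a vertex must go off-shell, which is exactly
    why internal lines in Feynman diagrams are "virtual".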
    Observable (i.e., physical) particles are asymptotic states (scattering
    states) described (modulo unresolved mathematical difficulties) by free
    fields based on the dispersion relation p^2=m^2, and hence are necessarily
    on-shell. Off-shell particles only arise in intermediate perturbative
    calculations; they are necessarily 'virtual'. The situation is muddled by
    the fact that one has to distinguish (formal) bare mass and (physical)
    dressed mass; the above is valid only for the dressed mass. Moreover, the
    mass shell loses its meaning in external fields, where, instead, a so-called
    'gap equation' appears.

    -----------------------------------------
    Virtual particles and Coulomb interaction
    -----------------------------------------
    Virtual objects have strange properties. For example, the Coulomb
    interaction between two electrons is mediated by virtual photons faster than
    the speed of light, with imaginary masses. (This is often made palatable by
    invoking a time-energy uncertainty relation, which would allow particles to
    go off-shell. But there is no time operator in QFT, so the analogy to
    Heisenberg's uncertainty relation for position and momentum is highly
    dubious.) Strictly speaking, the Coulomb interaction is simply the Fourier
    transform of the photon propagator 1/q^2, followed by a nonrelativistic
    approximation. It has nothing at all to do with virtual particle
    exchanges --- except if you do perturbation theory. But then it is no
    surprise that it already influences the tree level. By a hand-waving
    argument (equate the Born approximations) this gives the nonrelativistic
    correspondence. But to get the Coulomb interaction as part of the
    Schroedinger equation, you need to sum all ladder diagrams with
    0,1,2,3,...,n,... exchanged photons arranged in the form of a ladder. Then
    one
    needs to approximate the resulting Bethe-Salpeter equation. These are
    nonperturbative techniques. (The computations are still done at few loops
    only, which means that questions of convergence never enter.) Virtual
    photons mediating the Coulomb repulsion between electrons have spacelike
    momenta and hence would proceed faster than light if there were any reality
    to them. But there cannot be; you'd need infinitely many of them, and
    infinitely many virtual electron-positron pairs (and then superpositions of
    any numbers of these) to match exactly a real, dressed object or
    interaction.

    -------------------------------------------------------------------
    Are virtual particles and decaying particles (resonances) the same?
    -------------------------------------------------------------------
    A
    very sharp resonance has a long lifetime relative to a scattering event,
    hence behaves like a particle in scattering. It is regarded as a real object
    if it lives long enough that its trace in a wire chamber is detectable, or
    if its decay products are detectable at places significantly different from
    the place where it was created. On the other hand, a very broad resonance
    has a very short lifetime and cannot be differentiated well from the
    scattering event producing it; so the idealization defining the scattering
    event is no longer valid, and one would not regard the resonance as a
    particle. Of course, there is an intermediate grey regime where different
    people apply different judgment. This can be seen, e.g., in discussions
    concerning the tables of the Particle Data Group. The only difference
    between a short-lived particle and a stable particle is the fact that the
    stable particle has a real rest mass, while the mass m of the resonance has
    a small imaginary part. Note that states with complex masses can be handled
    well in a rigged Hilbert space (= Gelfand triple) formulation of quantum
    mechanics. Resonances appear as so-called Siegert (or Gamow) states. A good
    reference on resonances (not well covered in textbooks) is V.I. Kukulin et
    al., Theory of Resonances, Kluwer, Dordrecht 1989. For rigged Hilbert spaces
    (treated in Appendix A of Kukulin), see also quant-ph/9805063 and for its
    functional analysis ramifications, K. Maurin, General Eigenfunction
    Expansions and Unitary Representations of Topological Groups, PWN Polish
    Sci. Publ., Warsaw 1968. But a very short-living particle is usually not the
    same as a virtual particle. Instead, it is a complicated, nearly bound state
    of other particles. On the other hand, virtual particles are essentially
    always elementary. (There are exceptions when deriving Bethe-Salpeter
    equations and the like for the approximate calculations of bound states and
    resonances, where one creates an effective theory in which the latter are
    treated as elementary.) The difference can also be seen in the mathematical
    representation. In an effective theory where the resonance (e.g., the
    neutron or a meson) is regarded as an elementary object, the resonance again
    appears in in/out states as a real particle, with complex on shell momentum
    satisfying p^2=m^2, but in internal Feynman diagrams as a virtual particle
    with real mass, almost always off-shell, i.e., violating this equation.
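    The on-shell/off-shell distinction used above can be made concrete in a few
    lines. This is a minimal illustrative sketch (units with c = 1, metric
    signature (+,-,-,-)); the function names are invented for this example, not
    taken from any library.

```python
# Minimal sketch (units with c = 1): the mass-shell condition p^2 = m^2,
# where p^2 = E^2 - |p_vec|^2 in the (+,-,-,-) Minkowski metric.
# Function names are illustrative, not from any library.

def minkowski_square(p):
    """p = (E, px, py, pz); returns E^2 - px^2 - py^2 - pz^2."""
    E, px, py, pz = p
    return E**2 - (px**2 + py**2 + pz**2)

def is_on_shell(p, m, tol=1e-9):
    """True if the four-momentum p satisfies p^2 = m^2 within tol."""
    return abs(minkowski_square(p) - m**2) <= tol

# A real (external-line) particle with m = 1 has E = sqrt(m^2 + |p|^2):
m = 1.0
p_real = ((1.0 + 3.0**2) ** 0.5, 3.0, 0.0, 0.0)
# An internal-line (virtual) momentum need not satisfy the condition:
p_virtual = (2.0, 3.0, 0.0, 0.0)   # p^2 = 4 - 9 = -5, even spacelike

print(is_on_shell(p_real, m))      # on shell
print(is_on_shell(p_virtual, m))   # off shell
```

    In this language, a real particle carries on-shell momentum; a virtual line
    in a Feynman diagram carries whatever momentum conservation at the vertices
    forces on it, on-shell or not.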
    However, there are some unstable elementary particles like the weak gauge
    bosons. Usually, you observe a 4-fermion interaction and the gauge bosons
    are virtual. But at high energy = very short scales, you can in principle
    observe the gauge bosons and make them real. Now I don't know if they were
    observed as particle tracks or only as resonances (i.e. indirect evidence
    from 4-fermion cross sections). And I don't know how people actually model
    this situation. Maybe there are experts who can provide further details on
    this. In any case, from a mathematical point of view, you must choose the
    framework. Either one works in a Hilbert space, then masses are real and
    there are no unstable particles (since these 'are' poles on the so-called
    'unphysical' sheet); in this case, there are no asymptotic gauge bosons and
    all are therefore virtual. Or one works in a rigged Hilbert space and deforms
    the inner product; this makes part of the 'unphysical' sheet visible; then
    the gauge bosons have complex masses and there exist unstable particles
    corresponding to in/out gauge bosons which are real. The modeling framework
    therefore decides which language is appropriate.
    ----------------------------------
    Can particles go backward in time?
    ----------------------------------
    In the old relativistic QM (for
    example in Volume 1 of Bjorken and Drell) antiparticles are viewed as
    particles traveling backward in time. This is based on a consideration of
    the solutions of the Dirac equation and the idea of a filled sea of
    negative-energy solutions in which antiparticles appear as holes (though
    this picture only works for fermions since it requires an exclusion
    principle). One can go some way with this view, but more sophisticated stuff
    requires the QFT picture (as in Volume 2 of Bjorken and Drell and most
    modern treatments). In relativistic QFT, all particles (and antiparticles)
    travel forward in time, corresponding to timelike or lightlike momenta.
    (Only 'virtual' particles may have unrestricted momenta; but these are
    unobservable artifacts of perturbation theory.) The need for antiparticles
    is in QFT instead revealed by the fact that they are necessary to construct
    operators with causal (anti)commutation relations, in connection with the
    spin-statistic theorem. See, e.g., Volume 1 of Weinberg's QFT book. Thus
    talking about particles traveling backward in time, the Dirac sea, and holes
    as positrons is outdated; today it does more harm than
    good.
    --------------------------------------------------
    What about particles faster than light (tachyons)?
    --------------------------------------------------
    Tachyons are hypothetical particles whose speed exceeds the speed of light. Special
    relativity demands that such particles have imaginary rest mass (negative
    m^2), and hence can never be brought to rest (or below the speed of light);
    unlike ordinary particles, they speed up as they lose energy. Charged
    tachyons produce Cerenkov radiation which has never been observed. (However,
    Cerenkov radiation is indeed observed when fast particles enter a dense
    medium in which the speed of light is smaller than the particle's speed.
    Relativity only demands that no particle with real mass is faster than the
    speed of light in vacuum.) Neutrinos are uncharged and have a squared mass
    of zero or very close to zero, and hence could possibly be tachyons.
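    The kinematics behind 'they speed up as they lose energy' can be checked
    numerically. A small sketch, assuming units with c = 1 and writing the
    imaginary rest mass as m^2 = -mu^2 with mu > 0, so that E^2 = p^2 - mu^2
    and v = p/E = sqrt(1 + mu^2/E^2) > 1:

```python
import math

# Illustrative tachyon kinematics (units c = 1): with m^2 = -mu^2, mu > 0,
# the speed is v = p/E = sqrt(1 + mu^2/E^2) > 1, growing as E shrinks.
mu = 1.0
for E in (10.0, 1.0, 0.1):
    v = math.sqrt(1.0 + (mu / E) ** 2)
    print(f"E = {E:5.1f}  ->  v = {v:.4f} (in units of c)")
```

    The speed always exceeds 1 and diverges as the energy goes to zero, which
    is the sense in which a tachyon can never be slowed below the speed of
    light.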
    Recently observed neutrino oscillations confirmed a small squared mass
    difference between at least two species of neutrinos. This does not yet
    settle the sign of m^2 for any species. Direct measurements of m^2 have
    experimental errors still compatible with m^2=0. For data see
    http://cupp.oulu.fi/neutrino/ The initial interest in tachyons stopped
    around 1980, when it was clear that the QFT of tachyons would be very
    different from standard QFT, and that experiment didn't demand their
    existence. In fact, the theory of symmetry breaking demands that tachyons do
    _not_ exist: When a relativistic field theory is deformed in a way that the
    square of the mass (pole of the S-matrix) of some physical particle would
    cross zero, the old physical vacuum becomes unstable and induces a phase
    transition to a new physical vacuum in which all particles have real
    nonnegative mass. This would happen already at tiny negative m^2, and is
    believed to be the cause of inflation in the early universe. (Of course, the
    exact mechanism is not known since it would require a nonperturbative
    definition of QFT. But classical and semiclassical computations strongly
    suggest the correctness of this picture.) Expanding a theory (such as the
    standard model) around an unstable state (e.g., the Higgs with a local
    maximum at vanishing vacuum expectation) formally produces a bare tachyon.
    This does not contradict the above assertion. Asymptotic power series
    expansions around maxima (especially those with tiny or vanishing
    convergence radius) make meaningless assertions about the behavior of a
    function near one of its minima. Since physical particles arise from field
    excitations near the global minimum of the effective energy, perturbations
    around the maximum are unphysical. An expansion around an unstable state
    gives no significant information, unless one has a system that actually _is_
    close to such an unstable state (as perhaps the very early universe). But in
    that case there are no relevant excitations (tachyons), since the whole
    process (inflation) of motion towards a more stable state proceeds so
    rapidly that excitations do not form and everything can be analyzed
    semiclassically. The physical Higgs field is far away from the unstable
    maximum, and its particle excitations have a positive real mass, hence are
    not tachyons.
    Below are some references about tachyons; the more important papers are
    marked by an asterisk.
    * G. Feinberg, Possibility of Faster-Than-Light Particles,
      Phys. Rev. 159, 1089 (1967).
    J. Dhar and E.C.G. Sudarshan, Quantum Field Theory of Interacting Tachyons,
      Phys. Rev. 174, 1808-1815 (1968).
    M. Glück, Note on Causal Tachyon Fields, Phys. Rev. 183, 1514 (1969).
    D.G. Boulware, Unitarity and Interacting Tachyons,
      Phys. Rev. D 1, 2426 (1970).
    * B. Schroer, Quantization of m^2<0 Field Equations,
      Phys. Rev. D 3, 1764 (1971).
    G. Feinberg, Lorentz invariance of tachyon theories,
      Phys. Rev. D 17, 1651 (1978).
    C. Schwartz, Some improvements in the theory of faster-than-light particles,
      Phys. Rev. D 25, 356 (1982).
    M.B. Davis, M.N. Kreisler, and T. Alväger, Search for Faster-Than-Light
      Particles, Phys. Rev. 183, 1132 (1969).
    * L.W. Jones, A review of quark search experiments,
      Rev. Mod. Phys. 49, 717 (1977).
      [Section IIIG reviews the vain search for tachyons.]
    The Wikipedia entry for tachyons, http://en.wikipedia.org/wiki/Tachyon
    gives some more explanations.
    http://www.weeklyscientist.com/ws/articles/tachyons.htm speculates about
    connections between tachyons and inflation, but has some links with further
    useful information.
    ------------------------
    Summing divergent series
    ------------------------
    Most perturbation series in QFT are believed
    to be asymptotic only, hence divergent. Strong arguments (which haven't
    lost their persuasive power in half a century) supporting the view that one
    should expect the divergence of the QED (and other relativistic QFTs) power
    series for S-matrix elements, for all nonzero values of alpha (and independent of
    energy) are given in F.J. Dyson, Divergence of perturbation theory in
    quantum electrodynamics, Phys. Rev. 85 (1952), 613--632. However, one can
    still extract information by resummation techniques. With experimental
    results you just have numbers, and not infinite series, so questions of
    convergence do not occur. On the other hand, if you know only a finite
    number of terms of an infinite series, the result can be, strictly speaking,
    anything. But usually one applies some extrapolation algorithm (e.g., the
    epsilon or eta algorithm) to get a meaningful guess for the limit, and
    estimates the error by doing the same several times, keeping a variable
    number of terms. The difference between consecutive results can count as a
    reasonable (though not foolproof) error estimate of these results.
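    As an illustration of such an extrapolation algorithm, here is a sketch of
    Wynn's epsilon algorithm (one standard choice; the series and number of
    terms are chosen for this example) applied to the slowly convergent
    alternating harmonic series 1 - 1/2 + 1/3 - ... = ln 2:

```python
import math

# Sketch of Wynn's epsilon algorithm: eps_{-1}^(n) = 0, eps_0^(n) = S_n,
# eps_{k+1}^(n) = eps_{k-1}^(n+1) + 1/(eps_k^(n+1) - eps_k^(n)).
# Even-order columns hold the accelerated estimates of the limit.

def wynn_epsilon(partial_sums):
    """Return the top even-order epsilon-table entry for the given sums."""
    prev = [0.0] * (len(partial_sums) + 1)   # epsilon_{-1} column (zeros)
    curr = list(partial_sums)                # epsilon_0 column: the sums
    for _ in range(len(partial_sums) - 1):
        nxt = [prev[j + 1] + 1.0 / (curr[j + 1] - curr[j])
               for j in range(len(curr) - 1)]
        prev, curr = curr, nxt
    return curr[0] if len(partial_sums) % 2 == 1 else prev[0]

sums, s = [], 0.0
for k in range(1, 12):
    s += (-1) ** (k + 1) / k
    sums.append(s)

print(abs(sums[-1] - math.log(2)))            # raw partial sum: error ~ 4e-2
print(abs(wynn_epsilon(sums) - math.log(2)))  # accelerated: far smaller error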
    Similarly, given a finite number of coefficients of a power series, one can
    use Pade approximation to find an often excellent approximation of the
    'intended' function, although of course, a finite series says, strictly
    speaking, nothing about the limit of the sequence. But to have reliable
    bounds you need to know an exact definition of what you are approximating,
    and work from there. One can study these things quite well with functions
    which have known asymptotic expansions (e.g., Watson's lemma). In many cases
    (and under well-defined conditions), the resulting infinite series is Borel
    summable. To sum
       f(x) = sum_k a_k x^k,                (1)
    if it is divergent or very slowly convergent, you can instead sum its
    Borel transform
       Bf(x) = sum_k (a_k/k!) x^k,          (2)
    which obviously converges much faster (and if not yet, you could repeat
    the procedure). Under certain assumptions on f, stronger than simply
    asserting that (1) is an asymptotic expansion for f (but including the case
    where (1) has a positive radius of convergence), one can show that f can be
    reconstructed from Bf by means of an integral transform. In certain cases,
    where nonperturbative QM applies, one can show that the nonperturbative
    result satisfies the properties needed to show that Borel summation of the
    perturbative expansion reproduces the nonperturbative result. See also the
    thread Re: unsolved problems in QED starting with
    http://www.lns.cornell.edu/spr/2003-03/msg0049669.html
    -----------------------------------
    Nonperturbative computations in QFT
    -----------------------------------
    There is a well-defined theory for
    computing contributions to the S-matrix in QED (and other renormalizable
    field theories) by perturbation theory. There is also much more which uses
    handwaving arguments and appeals to analogy to compute approximations to
    nonperturbative effects. Examples are: - relating the Coulomb interaction
    and corrections to scattering amplitudes and then using the nonrelativistic
    Schroedinger equation, - computing Lamb shift contributions (now usually
    done in what is called the NRQED expansion), - Bethe-Salpeter and
    Schwinger-Dyson equations obtained by resumming infinitely many diagrams.
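    In the simplest caricature, resumming an infinite class of diagrams is just
    summing a geometric series: a chain of identical self-energy insertions in
    a propagator. The sketch below uses plain numbers in place of the free
    propagator G0 and the self-energy insertion S; it is a toy illustration of
    formal summation, not a real QFT computation.

```python
# Toy sketch: a chain of identical self-energy insertions is a geometric
# series,  G = G0 + G0*S*G0 + G0*(S*G0)^2 + ... = G0 / (1 - S*G0),
# with G0 and S plain numbers standing in for propagator and self-energy.

def resummed(G0, S):
    return G0 / (1.0 - S * G0)

def truncated(G0, S, n_terms):
    return sum(G0 * (S * G0) ** n for n in range(n_terms))

G0, S = 2.0, 0.2             # |S*G0| = 0.4 < 1: the series converges
print(resummed(G0, S))       # closed form
print(truncated(G0, S, 30))  # partial sums approach it

# For |S*G0| >= 1 the term-by-term series diverges, yet the closed form
# still makes sense; this is the sense in which a 'formal' summation can
# be meaningful even where the series itself is not.
G0, S = 2.0, 1.0             # S*G0 = 2: divergent series, finite resummed value
print(resummed(G0, S))       # -> -2.0
```

    The resummed expression is what survives the formal manipulation; the
    divergence of the term-by-term series is exactly the situation described
    below for bound states.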
    The use of 'nonperturbative' and 'expansion' together sounds paradoxical,
    but is common terminology in QFT. The term 'perturbative' refers to results
    obtained directly from renormalized Feynman graph evaluations. From such
    calculations, one can obtain certain information (tree level interactions,
    form factors, self energies) that can be used together with standard QM
    techniques to study nonperturbative effects - generally assuming without
    clear demonstrations that this transition to QM is allowed. Of course,
    although usually called 'nonperturbative', these techniques also use
    approximations and expansions. The most conspicuous high accuracy
    applications (e.g. the Lamb shift) are highly nonperturbative. But on a
    rigorous level, so far only the perturbative results (coefficients of the
    expansion in coupling constants) have any validity. Although the
    perturbation series in QED are believed to be asymptotic only, one can get
    highly accurate approximations for quantities like the Lamb shift. However,
    the Lamb shift is a nonperturbative effect of QED. One uses an expansion in
    the fine structure constant, in the ratio electron mass/proton mass, and in
    1/c (well, different methods differ somewhat). Starting e.g., with Phys.
    Rev. Lett. 91, 113005 (2003) you'd be able to track the literature.
    Perturbative results are also often improved by partial summation of
    infinite classes of related diagrams. This is a standard approach to go some
    way towards a nonperturbative description. Of course, the series diverges
    (in case of a bound state it _must_ diverge, already in the simplest,
    nonrelativistic examples!), but the summation is done on a formal level (as
    everything in QFT) and only the result reinterpreted in a numerical way. In
    this way one can get in the ladder approximation Schroedinger's equation,
    and in other approximations Bethe-Salpeter equations, etc. See Volume 1 of
    Weinberg's QFT book.
    ----------------------------------------------------------
    Functional integrals, Wightman functions, and rigorous QFT
    ----------------------------------------------------------
    QFT assumes
    the existence of interacting (operator distribution valued) fields Phi(x)
    with certain properties, which imply the existence of distributions
    W(x_1,...,x_n) = <0|Phi(x_1)...Phi(x_n)|0>. But the right hand side makes no
    rigorous sense in traditional QFT as found in most text books, except for
    free fields. Axiomatic QFT therefore tries to construct the W's - called the
    Wightman functions - directly such that they have the properties needed to
    get an S-matrix (Haag-Ruelle theory), whose perturbative expansion can be
    compared with the nonrigorous mainstream computations. This can be done
    successfully for many 2D theories and for some 3D theories, but not, so far,
    in the physically relevant case of 4D. To construct something means to prove
    its existence as a mathematically well-defined object. Usually this is done
    by giving a construction as a sort of limit, and proving that the limit is
    well-defined. (This is different from solving a theory, which means
    computing numerical properties, often approximately, occasionally - for
    simple problems - in closed analytic form.) To compare it to something
    simpler: In mathematics one constructs the Riemann integral of a continuous
    function over a finite interval by some kind of limit, and later the
    solution of an initial value problem for ordinary differential equations by
    using this and a fixed point theorem. This shows that each (nice enough)
    initial value problem is uniquely solvable. But it tells very little of its
    properties, and in practice no one uses this construction to calculate
    anything. But it is important as a mathematical tool since it shows that
    calculus is logically consistent. Such a logical consistency proof of any 4D
    interacting QFT is presently still missing. Since logical consistency of a
    theory is important, the first person who finds such a proof will become
    famous - it means inventing new conceptual tools that can handle this
    currently intractable problem. Wightman functions are the moments of a
    linear functional on some algebra generated by field operators, and just as
    linear functionals on ordinary function spaces are treated in terms of
    Lebesgue integration theory (and its generalization), so Wightman linear
    functionals are naturally treated by functional integration. The 'only'
    problem is that the latter behaves much more poorly from a rigorous point of
    view than ordinary integration. Wightman functions are the moments of a
    positive state on noncommutative polynomials in the quantum field Phi, while
    time-ordered correlation functions are the moments of a complex measure on
    commutative polynomials in the classical field Phi. In both cases, we have a
    linear functional, and the linearity gives rise to an interpretation in
    terms of a functional integral. The exponential kernel in Feynman's path
    integral formula for the time-ordered correlation functions comes from the
    analogy between (analytically continued) QFT and statistical mechanics, and
    the Wightman functions can also be described in a similar analogy, though
    noncommutativity complicates matters. The main formal reason for this is
    that a Wick theorem holds both in the commutative and the noncommutative
    case. For rigorous quantum field theory one essentially avoids the path
    integral, because it is difficult to give it a rigorous meaning when the
    action is not quadratic. Instead, one only keeps the notion that an integral
    is a linear functional, and constructs rigorously useful linear functionals
    on the relevant algebras of functions or operators. In particular, one can
    define Gaussian functionals (e.g., using the Wick theorem as a definition,
    or via coherent states); these correspond exactly to path integrals with a
    quadratic action. If one looks at a Gaussian functional as a functional on
    the algebra of fields appearing in the action (without derivatives of
    fields), one gets - after time-ordering the fields - the traditional path
    integral view and the time-ordered correlation functions. If one looks at it
    as a functional on the bigger algebra of fields and their derivatives, one
    gets - after rewriting the fields in terms of creation and annihilation
    operators - the canonical quantum field theory view with Wightman functions.
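    The commutative version of the Wick theorem mentioned above can be
    illustrated in the simplest possible setting, a single Gaussian variable:
    every moment is a sum over pairings of the covariance. A sketch (the
    function names and the crude quadrature are ad hoc for this example):

```python
import math

# Wick theorem for a mean-zero Gaussian in one variable: the n-th moment is
# the number of pairings times sigma^n, i.e.
#   <x^(2n)> = (2n-1)!! * sigma^(2n),   <x^odd> = 0.

def wick_moment(n, sigma):
    """n-th moment via pairings (double factorial)."""
    if n % 2 == 1:
        return 0.0
    pairings = 1
    for k in range(1, n, 2):     # (n-1)!! = 1*3*5*...*(n-1)
        pairings *= k
    return pairings * sigma ** n

def numeric_moment(n, sigma, steps=40000, cutoff=10.0):
    """Crude trapezoid integration of x^n against the Gaussian density."""
    a, b = -cutoff * sigma, cutoff * sigma
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        x = a + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * x ** n * math.exp(-x * x / (2 * sigma ** 2))
    return total * h / (sigma * math.sqrt(2 * math.pi))

sigma = 1.5
print(wick_moment(4, sigma))     # 3 * sigma^4
print(numeric_moment(4, sigma))  # agrees closely
```

    A Gaussian functional on a field algebra works the same way, with the
    covariance replaced by the free propagator and the sum running over
    pairings of field arguments.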
    The algebra is generated by the operators a(f) and a^*(f), where f has
    compact support, but normally ordered expressions of the form S = integral
    dx : L(Phi(x), Nabla Phi(x)) : make sense weakly (i.e., as quadratic forms).
    The art and difficulty is to find well-defined functionals that formally
    match the properties of the functionals 'defined' loosely in terms of path
    integrals. This requires a lot of functional analysis, and has been
    successfully done only in dimensions d<4. For an overview, see: A.S.
    Wightman, Hilbert's sixth problem: Mathematical treatment of the axioms of
    physics, in: Mathematical Developments Arising From Hilbert Problems, edited
    by F. Browder, (American Mathematical Society, Providence, R.I.) 1976,
    pp. 147-240.
    ----------------------------------------------------
    Is there a rigorous interacting QFT in 4 dimensions?
    ----------------------------------------------------
    In spite of
    many attempts (and though numerous uncontrolled approximations are routinely
    computed), no one has so far succeeded in rigorously constructing a single
    QFT in 4D which has nontrivial scattering. Not even QED is a mathematical
    object, although it is the theory that was able to reproduce experiments
    (Lamb shift) with an accuracy of 1 in 10^12, and with less accuracy already
    in 1948. But till today no one knows how to formulate the theory in such a
    way that the relevant objects whose approximations are calculated and
    compared with experiment are logically well-defined. See, e.g., the S.P.R.
    threads http://groups.google.com/groups?q=Unsolved problems in QED
    http://groups.google.com/groups?q=What is well-defined in QED
    This probably explains the high price tag of 1,000,000 US dollars promised
    for a solution to one of the Clay millennium problems, which asks for a valid
    construction for d=4 quantum Yang-Mills theories that is strong enough to
    prove correlation inequalities corresponding to the existence of a mass gap.
    The problem is to explain rigorously why the mass spectrum for compact Yang
    Mills QFT begins at a positive mass, while the classical version has a
    continuous spectrum beginning at 0. The state of the art at the time the
    problem was crowned by a prize is given in
    www.claymath.org/Millennium_Prize_Problems/Yang-Mills_Theory/_objects/Official_Problem_Description.pdf
    I don't think significant progress has been
    published since then. Yang-Mills theories are (perhaps erroneously) believed
    to be the simplest (hopefully) tractable case, being asymptotically complete
    while not having the extra difficulties associated with matter fields.
    (There are only gluons, no quarks or leptons.) Of course, one would like to
    show rigorously that QED is consistent. But QED has certain problems (the
    Landau pole, see below) that are absent in so-called asymptotically free
    theories, of which Yang-Mills is the simplest. Note that rigorous
    interacting relativistic theories in 2D and 3D exist; see, e.g., Glimm and
    Jaffe's ''Quantum Physics: A Functional Integral Point of View''. This book
    is quite difficult on first reading. Volume 3 of Thirring's Course in
    Mathematical Physics (which only deals with nonrelativistic QM but in a
    reasonably rigorous way) might be a good preparation to the functional
    analysis needed. A more leisurely introduction of the physical side of the
    matter is in Elcio Abdalla, M. Christina Abdalla, Klaus D. Rothe
    Non-Perturbative Methods in 2 Dimensional Quantum Field Theory World
    Scientific, 1991, revised 2nd. ed. 2001.
    http://www.wspc.com/books/physics/4678.html The book is about rigorous
    results, with a focus on solvable models. Note that 'solvable' means in this
    context 'being able to find a closed analytic expression for all S-matrix
    elements'. These solvable models are to QFT what the hydrogen atom is to
    quantum mechanics. The helium atom is no longer 'solvable' in the present
    sense, though of course very accurate approximate calculations are possible.
    Unfortunately, solvable models appear to be restricted to 2 dimensions. The
    deeper reason for the observation that dimension d=2 is special seems to be
    that in 2D the light cone is just a pair of lines. Thus space and time look
    completely alike, and by a change of variables (light front quantization),
    one can disentangle things nicely and find a good Hamiltonian description.
    This is no longer the case in higher dimensions. (But 4D light front
    quantization, using a tangent plane to the light cone, is well alive as an
    approximate technique, e.g., to get numerical results from QCD.) Thus, while
    2D solvable models pave the way to get some rigorous understanding of the
    concepts, they are no substitute for the functional analytic techniques
    needed to handle the non-solvable models such as Phi^4 theory.
    ------------------
    Is QED consistent?
    ------------------
    Many
    physicists think that QED cannot be a consistent theory, although it gives
    the most accurate predictions modern physics has to offer, namely that of
    the Lamb shift. But there is a phenomenon called the Landau pole that
    indicates that at extremely large energies (far beyond the range of physical
    validity of QED) something might go wrong with QED. This is probably why
    Yang-Mills and not QED was chosen as the model theory for the millennium
    prize. Since the existence of the Landau pole is confirmed only in low order
    perturbation theory and in lattice calculations (hep-lat/9801004 and
    hep-th/9712244), this observation currently has no rigorous mathematical
    substance. Moreover, the quality of the computed approximations is a strong
    indication that there should be a consistent mathematical foundation (for
    not too high energies), although it hasn't been found yet. There is no
    indication at all that at the energies where QED suffices to describe our
    world (with electrons and nuclei considered elementary particles), it should
    be inconsistent. To prove or disprove this rigorously therefore remains
    another unsolved (and for physics more important) problem. Perturbative QED
    is only a rudimentary version of the 'real QED'; this can be seen from the
    fact that Scharf's results on the external field case are much stronger (he
    constructs the S-matrix in his book) than those for QED proper (where he only shows the
    existence of the power series in alpha, but not their convergence). The
    quest for 'existence' of QED is the quest for a framework where the formulas
    make sense nonperturbatively, and where the power series in alpha is a
    Taylor expansion of a (presumably nonanalytic) function of alpha that is
    mathematically well-defined for alpha around 1/137 and not too high energy.
    This is still open. More precisely: Probably the QED S-matrix exists
    nonperturbatively for alpha <= 1/137 and input energies <= some number
    E_limit(alpha) larger than the physical validity of pure QED. What is needed
    is a mathematical proof that the QED S-matrix exists for 0

    -------------------------
    A theoretical physics FAQ
    -------------------------
    http://www.mat.univie.ac.at/~neum/physics-faq.txt
    Here are answers to some
    frequently asked questions from theoretical physics. They were collected
    from my answers to postings to the newsgroup sci.physics.research. Of course
    they refer only to a tiny part of theoretical physics, and they are only as
    good as my understanding of the matter. This doesn't mean that they are
    poor... But if you have suggestions for improvements, please write me at
    If you have questions, please post them to the
    newsgroup sci.physics.research (http://www.lns.cornell.edu/spr)! Happy
    Reading! Arnold Neumaier University of Vienna
    http://www.mat.univie.ac.at/~neum/ Abbreviations: QM = quantum mechanics.
    QFT = quantum field theory. QED = quantum electrodynamics. s.p.r =
    sci.physics.research (newsgroup). Strings like quant-ph/0303047 refer to
    electronic documents in the e-Print archive (see
    http://xxx.lanl.gov/).
    -----------------
    Table of Contents
    -----------------
    (The labels may change with time as answers to further questions will be
    added. So, to quote part of the FAQ, refer to the title of a section and
    not to its label.)
    1a. Are electrons pointlike/structureless?
    1b. What are 'bare' and 'dressed' particles?
    1c. How meaningful are single Feynman diagrams?
    1d. How real are 'virtual particles'?
    1e. What is the meaning of 'on-shell' and 'off-shell'?
    1f. Virtual particles and Coulomb interaction
    1g. Are virtual particles and decaying particles (resonances) the same?
    1h. Can particles go backward in time?
    1i. What about particles faster than light (tachyons)?
    2a. Summing divergent series
    2b. Nonperturbative computations in QFT
    2c. Functional integrals, Wightman functions, and rigorous QFT
    2d. Is there a rigorous interacting QFT in 4 dimensions?
    2e. Is QED consistent?
    2f. Bound states in relativistic QFT
    2g. Why bother about rigor in physics?
    2h. Why normal ordering?
    3a. Is there a multiparticle relativistic quantum mechanics?
    3b. Localization and position operators
    3c. Representations of the Poincare group, spin and gauge invariance
    4a. A concise formulation of the measurement problem of QM
    4b. The double slit experiment
    4c. The Stern-Gerlach experiment
    4d. The minimal interpretation
    4e. The preferred basis problem
    4f. Does decoherence solve the measurement problem?
    4g. Which interpretation of quantum mechanics is most consistent?
    4h. What about relativistic measurement theory?
    5a. Random numbers in probability theory
    5b. How meaningful are probabilities of single events?
    5c. How do probabilities apply in practice?
    5d. Priors and entropy in probability theory
    6a. What are bras and kets?
    6b. What is the meaning of the entries of a density matrix?
    7a. What is the tetrad formalism?
    7b. Energy in general relativity
    7c. Difficulties in quantizing gravity
    7d. Is quantum mechanics compatible with general relativity?
    7e. Why do gravitons have spin 2?
    8a. Theoretical challenges close to experimental data
    98a. Background needed for theoretical physics
    99a. Acknowledgments
    --------------------------------------
    Are electrons pointlike/structureless?
    --------------------------------------
    Both
    electrons and neutrinos are considered to be pointlike as bare particles,
    because of the way they appear in the standard model. But physical,
    relativistic particles are not pointlike. An intuitive argument for this is
    the fact that their localization to a region significantly smaller than the
    de Broglie wavelength would need energies larger than that needed to create
    particle-antiparticle pairs, which changes the nature of the system. (See
    also this FAQ about localization, and Foldy's papers quoted there.) On a
    more formal, quantitative level, the physical, dressed particles have
    nontrivial form factors, due to the renormalization necessary to give finite
    results in QFT. Nontrivial form factors give rise to a positive
    charge radius. In his book S. Weinberg, The quantum theory of fields, Vol.
    I, Cambridge University Press, 1995, Weinberg defines and explicitly
    computes in (11.3.33) the 'charge radius' of a physical electron. But his
    formula is not fully satisfying since it is not fully renormalized (infrared
    divergence: the expression contains a fictitious photon mass, and diverges
    if this goes to zero). (28) in hep-ph/0002158 = Physics Reports 342, 63-26
    (2001) handles this using a binding energy dependent cutoff, which makes the
    electron charge radius depend on its surrounding. The paper L.L. Foldy,
    Neutron-electron interaction, Rev. Mod. Phys. 30, 471-481 (1958). discusses
    the extendedness of the electron in a phenomenological way. On the numerical
    side, I only found values for the charge radius of the neutrinos, computed
    from the standard model to 1-loop order. The values are about 4-6 x 10^-14 cm
    for the three neutrino species. See (7.12) in Phys. Rev. D 62, 113012 (2000)
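    For scale, one can compare such charge-radius figures with the electron's
    reduced Compton wavelength hbar/(m c), below which the localization
    argument above says pointlike localization loses meaning. A back-of-envelope
    sketch (the rounded constants are assumptions of this example, not taken
    from the cited papers):

```python
# Back-of-envelope scale comparison with rounded physical constants.
# The reduced Compton wavelength hbar/(m*c) sets the scale below which
# single-particle localization arguments break down.
hbar = 1.0546e-34   # J*s
c = 2.9979e8        # m/s
m_e = 9.1094e-31    # kg

lambda_C = hbar / (m_e * c)        # ~3.86e-13 m = 3.86e-11 cm
print(f"reduced Compton wavelength: {lambda_C * 100:.3e} cm")
```

    The ~10^{-16} cm charge-radius figure quoted below is thus several orders
    of magnitude smaller than this scale.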
    http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1992PhDT.......130L gives in
    an abstract of a 1982 thesis of Anzhi Lai an electron charge radius of ~
    10^{-16} cm. (But I haven't seen the thesis.) The "form" of an elementary
    particle is described by its form factor, which is a well-defined physical
    function (though at present computable only in perturbation theory)
    describing how the (spin 0, 1/2, or 1) particle's response to an external
    classical electromagnetic field deviates from the Klein-Gordon, Dirac, or
    Maxwell equations, respectively. In Foldy's paper, the form factors are
    encoded in the infinite sum in (16). The sum is usually considered in the
    momentum domain; then one simply gets two k-dependent form factors, where k
    represents the 4-momentum transferred in the interaction. These form factors
    can be calculated in a good approximation perturbatively from QFT, see for
    example Peskin and Schroeder's book. An extensive discussion of form factors
    of Dirac particles and their relation to the radial density function is in
    D. R. Yennie, M. M. Levy and D. G. Ravenhall, Electromagnetic Structure of
    Nucleons, Rev. Mod. Phys. 29, 144-157 (1957). and R. G. Sachs High-Energy
    Behavior of Nucleon Electromagnetic Form Factors Phys. Rev. 126, 2256-2260
    (1962) For proton and neutron form factors, see hep-ph/0204239 and
hep-ph/0303054

----------------------------------------
What are 'bare' and 'dressed' particles?
----------------------------------------

A bare
    electron is the formal entity discussed in textbooks when they do
    perturbative quantum electrodynamics. The intuitive picture generally given
    is that a bare electron is surrounded by a cloud of virtual photons and
    virtual electron-positron pairs to make up a physical, 'dressed' electron.
    Only the latter is real and observable. The former is a formal caricature of
    the latter, with paradoxical properties (infinite mass, etc.). On a more
    substantial level, the observable electrons are produced from the bare
    electrons by a process called renormalization, which modifies the
    propagators by self-energy terms and the currents by form factors. As the
    name says, the latter define the 'form' of a particle. (In the above
    picture, it would correspond to the shape of the virtual cloud, though it is
better to avoid giving the virtual particles too much meaning.) The
    dressed object is the renormalized, physical object, described
    perturbatively as the bare object 'clothed' by the cloud of virtual
    particles. The dressed interaction is the 'screened' physical interaction
between these dressed objects. To draw an analogy in nonrelativistic QM, think
    of nuclei as bare atoms, electrons as virtual particles, atoms as dressed
    nuclei and the residual interaction between atoms, computed in the
    Born-Oppenheimer approximation, as the dressed interaction. Thus, for Argon
    atoms, the dressed interaction is something close to a Lennard-Jones
    potential, while the bare interaction is Coulomb repulsion. This is the
    situation physicists had in mind when they invented the notions of bare and
    dressed particles. Of course, it is only an analogy, and should not be taken
    very seriously. It just explains the intuition about the terminology used.
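The analogy can be made quantitative in a few lines. A minimal sketch in reduced units (the function names and the specific numbers are mine, chosen only for illustration):

```python
# Bare vs. dressed interaction in the argon analogy from the text.
# "Bare": Coulomb repulsion of the nuclei; "dressed": the screened,
# Lennard-Jones-like interaction between whole atoms. Reduced units
# (well depth epsilon = 1, size parameter sigma = 1).

def dressed_lj(r, epsilon=1.0, sigma=1.0):
    """Dressed Ar-Ar interaction: 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def bare_coulomb(r, q1q2=1.0):
    """Bare nucleus-nucleus repulsion, in units with 1/(4 pi eps0) = 1."""
    return q1q2 / r

# The dressed potential has an attractive well (minimum at r = 2**(1/6)*sigma,
# depth -epsilon), while the bare potential is monotonically repulsive.
r_well = 2.0 ** (1.0 / 6.0)
print(dressed_lj(r_well))                     # close to -1.0 (well depth -epsilon)
print(bare_coulomb(0.5) > bare_coulomb(2.0))  # repulsion only grows at short range
```

The qualitative point is exactly the one in the text: the dressed interaction looks nothing like the bare one.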
    The electrons in QM are real, physical electrons that can be isolated. The
    reason is that they are good eigenstates of the Hamiltonian. On the other
    hand, virtual particles don't have this nice attribute since the
    relativistic Hamiltonian H from field theory contains creation and
    annihilation operators which mess things up. The bare particles correspond
    to 1-particle states in the Hilbert space (though that is not quite true
    since there is no good Hilbert space picture in conventional interacting
    QFT). Multiplying them with H introduces terms with other particle numbers,
    hence a bare particle can never be an eigenstate of H, and thus never be
    observable in the way a nonrelativistic particle is. The eigenstates of the
    relativistic Hamiltonian are, instead, complicated multibody states
    consisting of a superposition of states with any number of particles and
    antiparticles, just subject to the restriction that the total quantum
    numbers come out right. These are the dressed
particles.

-------------------------------------------
How meaningful are single Feynman diagrams?
-------------------------------------------

The
    standard model is a theory defined in terms of a Lagrangian. To get
    computable output, Feynman graph techniques are used. But individual Feynman
    graphs are meaningless (often infinite); only the sum of all terms of a
    given order can be given - after a process called renormalization - a
    well-defined (finite) meaning. This is well-known; so no-one treats the
    Feynman graphs as real. What is taken as real is the final outcome of the
    calculations, which can be compared with
measurements.

---------------------------------
How real are 'virtual particles'?
---------------------------------

All language is only an
    approximation to reality, which simply is. But to do science we need to
    classify the aspects of reality that appear to have more permanence, and
    consider them as real. Nevertheless, all concepts, including 'real' have a
    fuzziness about them, unless they are phrased in terms of rigorous
    mathematical models (in which case they don't apply to reality itself but
    only to a model of reality). In the informal way I use the notion, 'real' in
theoretical physics means a concept or object that
- is independent of the computational scheme used to extract information
  from a theory,
- has a reasonably well-defined and consistent formal basis,
- does not give rise to misleading intuition.
This does not give a clear definition of real, of
    course. But it makes charge distributions and inputs and outputs of
    (theoretical models of) scattering experiments something real, while making
    bare particles and virtual particles artifacts of perturbation theory.
    'Real' real particles are slightly different from 'mathematical' real
    particles. Note that whenever we observe a system we make a number of
    idealizations that serve to identify the objects in reality with the
    mathematical concepts we are using to describe them. Then we calculate
something, and at the end we retranslate it into reality. If our initial
idealization was good enough and our theory is good enough, the final
    result will match reality well. Modern QED and other field theories are
    based on the theory developed for modeling scattering events. Now scattering
    events take a very short time compared to the lifetime of the objects
    involved before and after the event. Therefore, we represent a prepared beam
    of particles hitting a target as a single particle hitting another single
    particle, and whenever this in fact happens, we observe end products, e.g.
    in a wire chamber. Strictly speaking (i.e., in a fuller model of reality),
    we'd have to use a multiparticle (statistical mechanics) setting, but this
    is never done since it does not give better information and the added
    complications are formidable. As long as we prepare the particles long
    (compared to the scattering time) before they scatter and observe them long
    enough afterwards, they behave essentially as in and out states,
    respectively. (They are not quite free, because of the electromagnetic
    self-field they generate, this gives rise to the infrared problem in QED and
    can be corrected by using coherent states.) The preparation and detection of
    the particles is outside this model, since it would produce only minute
    corrections to the scattering event. But to treat it would require to
    increase the system to include source and detector, which makes the problem
    completely different. Therefore at the level appropriate to a scattering
    event, the 'real' real particles are modeled by 'mathematical' in/out
    states, which therefore are also called 'real'. On the other hand,
    'mathematical' virtual particles have nothing to do with observations, hence
    have no counterpart in reality; therefore they are called 'virtual'. The
    figurative virtual objects in QFT are there only because of the well-known
    limitations of the foundations of QFT. In a nonperturbative setting they
    wouldn't occur at all. This can be seen by comparing with QM. One could also
    do nonrelativistic QM with virtual objects but no one does so (except
    sometimes in motivations for QFT), because it does not add value to a
    well-understood theory. Virtual particles are an artifice of perturbation
theory that gives an intuitive (but, if taken too far, misleading)
    interpretation for Feynman diagrams. More precisely, a virtual photon, say,
    is an internal photon line in one of the Feynman diagrams. But there is
    nothing real associated with it. Detectable photons are always real,
    'dressed' photons. Virtual particles, and the Feynman diagrams they appear
    in, are just a visual tool of keeping track of the different terms in a
    formal expansion of scattering amplitudes into multi-dimensional integrals
    involving multiple Green's functions - the virtual particle momenta
    represent the integration variables. They have no meaning at all outside
these integrals. Thus they go out of mathematical existence as soon as one
    changes the formula for computing a scattering amplitude. Therefore virtual
particles are somehow analogous to the 'virtual integers' k obtained by computing
-log(1-x) = sum_k x^k/k by expansion into a Taylor series. Since we can
    compute the logarithm in many other ways, it is ridiculous to attach to k
    any intrinsic meaning. But ... ... in QFT, we have no good ways to compute
    scattering amplitudes without at least some form of expansion (unless we use
    the lowest order of some approximation method), which makes virtual
    particles look a little more real. But the analogy to the Taylor series
    shows that it's best not to look at them that way. (For a very informal view
    of QED in terms of clouds of virtual particles see
    http://groups.google.com/groups?selm= and
    the later mails in this thread.) A sign of the irreality of virtual
    particles is the fact that when you do partial resummations of diagrams,
    many of the virtual particles disappear. A fully nonperturbative theory
    would sum everything, and no virtual particles would be present anymore.
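The Taylor-series analogy used above can be made concrete in a few lines (function name and tolerance are mine):

```python
import math

# The 'virtual integers' k in the expansion -log(1-x) = sum_{k>=1} x^k / k
# exist only inside one particular computational scheme; computing the same
# number another way (math.log) involves no k at all.

def log1m_series(x, terms=200):
    """-log(1-x) from its Taylor expansion, valid for |x| < 1."""
    return sum(x ** k / k for k in range(1, terms + 1))

x = 0.3
series_value = log1m_series(x)
direct_value = -math.log(1.0 - x)
print(abs(series_value - direct_value) < 1e-12)  # -> True
```

Both routes give the same number; only one of them ever mentions k.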
    Thus virtual particles are entirely a consequence of looking at QFT in a
    perturbative way rather than nonperturbatively. In the standard covariant
    Feynman approach, energy (cp_0) and momentum (\p; the backslash indicates
    'boldface') is conserved, and virtual particles are typically off-shell
    (i.e., they do not satisfy the equation p^2 = p_0^2 - \p^2 = m^2 for
    physical particles). To see this, try to model a vertex in which an electron
    (mass m_e) absorbs a photon (mass 0). One cannot keep the incoming electron
    and photon and the outgoing photon on-shell (satisfying p^2 = m^2) without
    violating the energy-momentum balance. However, when working in light front
    quantization, one keeps all particles on-shell, and instead has energy and
    momentum nonconservation (removed formally by adding an additional
    'spurion'). The effect of this is that the virtual particle structure of the
    theory is changed completely: For example, the physical vacuum and the bare
    vacuum now agree, while in the standard approach, the vacuum looks like a
    highly complicated medium made up from infinitely many bare particles....
But physical particles must still be dressed, though less heavily than in
    the traditional Feynman approach. Clearly concepts such as virtual particles
    that depend so much on the method of quantization cannot be regarded as
    being real. See also earlier discussions on s.p.r. such as
    http://www.lns.cornell.edu/spr/2003-06/msg0051674.html also
    http://www.lns.cornell.edu/spr/1999-02/msg0014762.html and followups; maybe
    http://www.lns.cornell.edu/spr/2003-05/msg0051023.html is also of interest.
    [For a longwinded alternative view of virtual particles that I do _not_
    share but rather find misleading, see
http://www.desy.de/user/projects/Physics/Quantum/virtual_particles.html]

--------------------------------------------------
What is the meaning of 'on-shell' and 'off-shell'?
--------------------------------------------------

This applies
    only to relativistic particles. A particle of mass m is on-shell if its
    momentum p satisfies p^2 (= p_0^2-p_1^2-p_2^2-p_3^2) = m^2, and off-shell
    otherwise. The 'mass shell' is the manifold of momenta p with p^2=m^2.
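The vertex argument from the preceding section can be checked numerically. A minimal sketch in units with c = 1 (the photon energy of 1 MeV is an arbitrary choice of mine):

```python
# An on-shell electron at rest absorbs an on-shell photon; the outgoing
# 4-momentum then violates p^2 = m^2, i.e. the outgoing line is off-shell.
# Units with c = 1; metric signature (+,-,-,-).

def minkowski_sq(p):
    """p^2 = p_0^2 - px^2 - py^2 - pz^2."""
    return p[0] ** 2 - sum(x ** 2 for x in p[1:])

m_e = 0.511                      # electron mass in MeV
electron = (m_e, 0.0, 0.0, 0.0)  # on-shell: p^2 = m_e^2
photon = (1.0, 1.0, 0.0, 0.0)    # on-shell: p^2 = 0
outgoing = tuple(a + b for a, b in zip(electron, photon))

# The off-shellness p_out^2 - m_e^2 equals 2 * m_e * E_photon, nonzero for
# any nonzero photon energy: energy-momentum conservation forces the
# outgoing line off-shell.
print(minkowski_sq(outgoing) - m_e ** 2)  # nonzero (= 2 * m_e * E_photon)
```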
    Observable (i.e., physical) particles are asymptotic states (scattering
    states) described (modulo unresolved mathematical difficulties) by free
    fields based on the dispersion relation p^2=m^2, and hence are necessarily
    on-shell. Off-shell particles only arise in intermediate perturbative
    calculations; they are necessarily 'virtual'. The situation is muddled by
    the fact that one has to distinguish (formal) bare mass and (physical)
    dressed mass; the above is valid only for the dressed mass. Moreover, the
    mass shell loses its meaning in external fields, where, instead, a so-called
'gap equation' appears.

-----------------------------------------
Virtual particles and Coulomb interaction
-----------------------------------------

    Virtual objects have strange properties. For example, the Coulomb
    interaction between two electrons is mediated by virtual photons faster than
    the speed of light, with imaginary masses. (This is often made palatable by
invoking a time-energy uncertainty relation, which would allow particles to
    go off-shell. But there is no time operator in QFT, so the analogy to
    Heisenberg's uncertainty relation for position and momentum is highly
dubious.) Strictly speaking, the Coulomb interaction is simply the Fourier
transform of the photon propagator 1/q^2, followed by a nonrelativistic
approximation (in three dimensions, the Fourier transform of 1/q^2 is
proportional to 1/r, i.e., the Coulomb potential). It has nothing at all to
do with virtual particle exchanges --- except if you do perturbation
theory; but then it is no surprise that it already shows up at tree level.
By a hand-waving
    argument (equate the Born approximations) this gives the nonrelativistic
    correspondence. But to get the Coulomb interaction as part of the
    Schroedinger equation, you need to sum all ladder diagrams with
    0,1,2,3,...,n,... exchanged photons arranged in form of a ladder. Then one
    needs to approximate the resulting Bethe-Salpeter equation. These are
    nonperturbative techniques. (The computations are still done at few loops
    only, which means that questions of convergence never enter.) Virtual
    photons mediating the Coulomb repulsion between electrons have spacelike
    momenta and hence would proceed faster than light if there were any reality
    to them. But there cannot be; you'd need infinitely many of them, and
    infinitely many virtual electron-positron pairs (and then superpositions of
    any numbers of these) to match exactly a real, dressed object or
interaction.

-------------------------------------------------------------------
Are virtual particles and decaying particles (resonances) the same?
-------------------------------------------------------------------

A
    very sharp resonance has a long lifetime relative to a scattering event,
    hence behaves like a particle in scattering. It is regarded as a real object
    if it lives long enough that its trace in a wire chamber is detectable, or
    if its decay products are detectable at places significantly different from
    the place where it was created. On the other hand, a very broad resonance
    has a very short lifetime and cannot be differentiated well from the
    scattering event producing it; so the idealization defining the scattering
    event is no longer valid, and one would not regard the resonance as a
    particle. Of course, there is an intermediate grey regime where different
    people apply different judgment. This can be seen, e.g., in discussions
    concerning the tables of the Particle Data Group. The only difference
between a short-lived particle and a stable particle is the fact that the
    stable particle has a real rest mass, while the mass m of the resonance has
    a small imaginary part. Note that states with complex masses can be handled
    well in a rigged Hilbert space (= Gelfand triple) formulation of quantum
mechanics. Resonances appear as so-called Siegert (or Gamow) states. A good
    reference on resonances (not well covered in textbooks) is V.I. Kukulin et
    al., Theory of Resonances, Kluwer, Dordrecht 1989. For rigged Hilbert spaces
    (treated in Appendix A of Kukulin), see also quant-ph/9805063 and for its
    functional analysis ramifications, K. Maurin, General Eigenfunction
    Expansions and Unitary Representations of Topological Groups, PWN Polish
Sci. Publ., Warsaw 1968. But a very short-lived particle is usually not the
    same as a virtual particle. Instead, it is a complicated, nearly bound state
of other particles. On the other hand, virtual particles are essentially
    always elementary. (There are exceptions when deriving Bethe-Salpeter
    equations and the like for the approximate calculations of bound states and
    resonances, where one creates an effective theory in which the latter are
    treated as elementary.) The difference can also be seen in the mathematical
    representation. In an effective theory where the resonance (e.g., the
    neutron or a meson) is regarded as an elementary object, the resonance again
appears in in/out states as a real particle, with complex on-shell momentum
    satisfying p^2=m^2, but in internal Feynman diagrams as a virtual particle
    with real mass, almost always off-shell, i.e., violating this equation.
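The complex mass mentioned above directly encodes the lifetime. A sketch, in units with hbar = c = 1:

```latex
% Resonance with complex mass m_c = m - i\Gamma/2 (units \hbar = c = 1):
% the time evolution of the corresponding state decays exponentially.
\[
  e^{-i m_c t} \;=\; e^{-i m t}\, e^{-\Gamma t/2},
  \qquad
  \bigl| e^{-i m_c t} \bigr|^2 \;=\; e^{-\Gamma t},
  \qquad
  \tau \;=\; \frac{1}{\Gamma}.
\]
```

A sharp resonance (small Gamma) thus lives long, matching the criterion for particle-like behavior given above.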
    However, there are some unstable elementary particles like the weak gauge
    bosons. Usually, you observe a 4-fermion interaction and the gauge bosons
are virtual. But at high energies (= very short scales), you can in principle
    observe the gauge bosons and make them real. Now I don't know if they were
    observed as particle tracks or only as resonances (i.e. indirect evidence
    from 4-fermion cross sections). And I don't know how people actually model
    this situation. Maybe there are experts who can provide further details on
    this. In any case, from a mathematical point of view, you must choose the
    framework. Either one works in a Hilbert space, then masses are real and
    there are no unstable particles (since these 'are' poles on the so-called
    'unphysical' sheet); in this case, there are no asymptotic gauge bosons and
all are therefore virtual. Or one works in a rigged Hilbert space and deforms
    the inner product; this makes part of the 'unphysical' sheet visible; then
    the gauge bosons have complex masses and there exist unstable particles
    corresponding to in/out gauge bosons which are real. The modeling framework
    therefore decides which language is
appropriate.

----------------------------------
Can particles go backward in time?
----------------------------------

In the old relativistic QM (for
    example in Volume 1 of Bjorken and Drell) antiparticles are viewed as
    particles traveling backward in time. This is based on a consideration of
    the solutions of the Dirac equation and the idea of a filled sea of
    negative-energy solutions in which antiparticles appear as holes (though
    this picture only works for fermions since it requires an exclusion
    principle). One can go some way with this view, but more sophisticated stuff
    requires the QFT picture (as in Volume 2 of Bjorken and Drell and most
    modern treatments). In relativistic QFT, all particles (and antiparticles)
    travel forward in time, corresponding to timelike or lightlike momenta.
    (Only 'virtual' particles may have unrestricted momenta; but these are
    unobservable artifacts of perturbation theory.) The need for antiparticles
    is in QFT instead revealed by the fact that they are necessary to construct
    operators with causal (anti)commutation relations, in connection with the
    spin-statistic theorem. See, e.g., Volume 1 of Weinberg's QFT book. Thus
    talking about particles traveling backward in time, the Dirac sea, and holes
as positrons is outdated; today it is more misleading than helpful.

--------------------------------------------------
What about particles faster than light (tachyons)?
--------------------------------------------------

Tachyons are
    hypothetical particles with speed exceeding the speed of light. Special
    relativity demands that such particles have imaginary rest mass (negative
m^2), and hence can never be brought to rest (or below the speed of light);
unlike ordinary particles, they speed up as they lose energy: with
m^2 = -mu^2, the dispersion relation E^2 = p^2 - mu^2 (in units with c=1)
gives speed v = p/E = sqrt(1 + mu^2/E^2) > 1, which grows as E decreases.
Charged tachyons would produce Cerenkov radiation, which has never been
observed. (However,
    Cerenkov radiation is indeed observed when fast particles enter a dense
    medium in which the speed of light is smaller than the particle's speed.
Relativity only demands that no particle with real mass is faster than the
    speed of light in vacuum.) Neutrinos are uncharged and have a squared mass
    of zero or very close to zero, and hence could possibly be tachyons.
    Recently observed neutrino oscillations confirmed a small squared mass
    difference between at least two species of neutrinos. This does not yet
    settle the sign of m^2 for any species. Direct measurements of m^2 have
    experimental errors still compatible with m^2=0. For data see
    http://cupp.oulu.fi/neutrino/ The initial interest in tachyons stopped
    around 1980, when it was clear that the QFT of tachyons would be very
    different from standard QFT, and that experiment didn't demand their
    existence. In fact, the theory of symmetry breaking demands that tachyons do
    _not_ exist: When a relativistic field theory is deformed in a way that the
    square of the mass (pole of the S-matrix) of some physical particle would
    cross zero, the old physical vacuum becomes unstable and induces a phase
    transition to a new physical vacuum in which all particles have real
    nonnegative mass. This would happen already at tiny negative m^2, and is
    believed to be the cause of inflation in the early universe. (Of course, the
    exact mechanism is not known since it would require a nonperturbative
    definition of QFT. But classical and semiclassical computations strongly
    suggest the correctness of this picture.) Expanding a theory (such as the
    standard model) around an unstable state (e.g., the Higgs with a local
    maximum at vanishing vacuum expectation) formally produces a bare tachyon.
    This does not contradict the above assertion. Asymptotic power series
    expansions around maxima (especially those with tiny or vanishing
    convergence radius) make meaningless assertions about the behavior of a
    function near one of its minima. Since physical particles arise from field
    excitations near the global minimum of the effective energy, perturbations
    around the maximum are unphysical. An expansion around an unstable state
    gives no significant information, unless one has a system that actually _is_
close to such an unstable state (as perhaps the very early universe). But in
    that case there are no relevant excitations (tachyons), since the whole
    process (inflation) of motion towards a more stable state proceeds so
    rapidly that excitations do not form and everything can be analyzed
    semiclassically. The physical Higgs field is far away from the unstable
    maximum, and its particle excitations have a positive real mass, hence are
not tachyons. Below are some references about tachyons; the more important
papers are marked by an asterisk.

* G. Feinberg, Possibility of Faster-Than-Light Particles, Phys. Rev. 159, 1089 (1967).
  J. Dhar and E. C. G. Sudarshan, Quantum Field Theory of Interacting Tachyons, Phys. Rev. 174, 1808-1815 (1968).
  M. Glück, Note on Causal Tachyon Fields, Phys. Rev. 183, 1514 (1969).
  D. G. Boulware, Unitarity and Interacting Tachyons, Phys. Rev. D 1, 2426 (1970).
* B. Schroer, Quantization of m^2<0 Field Equations, Phys. Rev. D 3, 1764 (1971).
  G. Feinberg, Lorentz invariance of tachyon theories, Phys. Rev. D 17, 1651 (1978).
  C. Schwartz, Some improvements in the theory of faster-than-light particles, Phys. Rev. D 25, 356 (1982).
  M. B. Davis, M. N. Kreisler, and T. Alväger, Search for Faster-Than-Light Particles, Phys. Rev. 183, 1132 (1969).
* L. W. Jones, A review of quark search experiments, Rev. Mod. Phys. 49, 717 (1977). [Section IIIG reviews the vain search for tachyons.]

The Wikipedia entry for tachyons, http://en.wikipedia.org/wiki/Tachyon,
gives some more explanations.
http://www.weeklyscientist.com/ws/articles/tachyons.htm speculates about
connections between tachyons and inflation, but has some links with further
useful information.

------------------------
Summing divergent series
------------------------

Most perturbation series in QFT are believed to be asymptotic only, hence
divergent. Strong arguments (which have not lost their persuasive power in
half a century) supporting the view that one should expect the divergence
of the QED (and other relativistic QFTs) power series for S-matrix
elements, for all values of alpha (and independent of energy), are given in
F.J. Dyson, Divergence of perturbation theory in quantum electrodynamics,
Phys. Rev. 85 (1952), 613--632. However, one can still extract information
by resummation techniques. With experimental
    results you just have numbers, and not infinite series, so questions of
convergence do not occur. On the other hand, if you know only a finite
number of terms of an infinite series, the limit can be, strictly speaking,
    anything. But usually one applies some extrapolation algorithm (e.g., the
    epsilon or eta algorithm) to get a meaningful guess for the limit, and
    estimates the error by doing the same several times, keeping a variable
    number of terms. The difference between consecutive results can count as a
    reasonable (though not foolproof) error estimate of these results.
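The extrapolation idea can be illustrated with the simplest member of this family, the Shanks transformation (the epsilon algorithm iterates essentially this step); the example series and the tolerances are my own choices:

```python
import math

# Accelerate the slowly convergent alternating series
# log(2) = 1 - 1/2 + 1/3 - ... with the Shanks transformation:
# S_n = (s_{n+1} s_{n-1} - s_n^2) / (s_{n+1} + s_{n-1} - 2 s_n).

def shanks(s):
    """One Shanks step applied to a list of partial sums."""
    return [
        (s[n + 1] * s[n - 1] - s[n] ** 2)
        / (s[n + 1] + s[n - 1] - 2 * s[n])
        for n in range(1, len(s) - 1)
    ]

partial_sums, total = [], 0.0
for k in range(1, 12):
    total += (-1) ** (k + 1) / k
    partial_sums.append(total)

accelerated = shanks(shanks(partial_sums))  # two iterated Shanks steps

raw_error = abs(partial_sums[-1] - math.log(2))
acc_error = abs(accelerated[-1] - math.log(2))
print(raw_error, acc_error)  # the accelerated error is far smaller
```

Comparing results with different numbers of retained terms gives the rough error estimate described above.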
    Similarly, given a finite number of coefficients of a power series, one can
    use Pade approximation to find an often excellent approximation of the
    'intended' function, although of course, a finite series says, strictly
    speaking, nothing about the limit of the sequence. But to have reliable
    bounds you need to know an exact definition of what you are approximating,
    and work from there. One can study these things quite well with functions
    which have known asymptotic expansions (e.g., Watson's lemma). In many cases
    (and under well-defined conditions), the resulting infinite series is Borel
summable. To sum

   f(x) = sum_k a_k x^k,           (1)

if it is divergent or very slowly convergent, you can sum instead its Borel
transform

   Bf(x) = sum_k a_k/k! x^k,       (2)

which obviously converges much faster (if not yet, you could probably
repeat the procedure). Under certain assumptions on f, stronger than simply
    asserting that (1) is an asymptotic expansion for f (but including the case
    where (1) has a positive radius of convergence), one can show that f can be
    reconstructed from Bf by means of some integral transform. In certain cases,
    where nonperturbative QM applies, one can show that the nonperturbative
    result satisfies the properties needed to show that Borel summation of the
    perturbative expansion reproduces the nonperturbative result. See also the
    thread Re: unsolved problems in QED starting with
http://www.lns.cornell.edu/spr/2003-03/msg0049669.html

-----------------------------------
Nonperturbative computations in QFT
-----------------------------------

There is a well-defined theory for
    computing contributions to the S-matrix in QED (and other renormalizable
    field theories) by perturbation theory. There is also much more which uses
    handwaving arguments and appeals to analogy to compute approximations to
    nonperturbative effects. Examples are: - relating the Coulomb interaction
    and corrections to scattering amplitudes and then using the nonrelativistic
    Schroedinger equation, - computing Lamb shift contributions (now usually
    done in what is called the NRQED expansion), - Bethe-Salpeter and
    Schwinger-Dyson equations obtained by resumming infinitely many diagrams.
    The use of 'nonperturbative' and 'expansion' together sounds paradoxical,
    but is common terminology in QFT. The term 'perturbative' refers to results
    obtained directly from renormalized Feynman graph evaluations. From such
    calculations, one can obtain certain information (tree level interactions,
    form factors, self energies) that can be used together with standard QM
    techniques to study nonperturbative effects - generally assuming without
    clear demonstrations that this transition to QM is allowed. Of course,
    although usually called 'nonperturbative', these techniques also use
approximations and expansions. The most conspicuous high-accuracy
    applications (e.g. the Lamb shift) are highly nonperturbative. But on a
    rigorous level, so far only the perturbative results (coefficients of the
    expansion in coupling constants) have any validity. Although the
    perturbation series in QED are believed to be asymptotic only, one can get
    highly accurate approximations for quantities like the Lamb shift. However,
    the Lamb shift is a nonperturbative effect of QED. One uses an expansion in
    the fine structure constant, in the ratio electron mass/proton mass, and in
    1/c (well, different methods differ somewhat). Starting e.g., with Phys.
    Rev. Lett. 91, 113005 (2003) you'd be able to track the literature.
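The "resumming infinitely many diagrams" mentioned above can be sketched in its simplest form, the geometric summation of repeated self-energy insertions Sigma(p) in a propagator (a formal manipulation; signs depend on conventions):

```latex
% Formal geometric resummation of self-energy insertions \Sigma(p):
% an infinite class of diagrams collapses into a shifted propagator.
\[
  \frac{1}{p^2 - m^2}
  \sum_{k=0}^{\infty} \left( \frac{\Sigma(p)}{p^2 - m^2} \right)^{k}
  \;=\; \frac{1}{p^2 - m^2 - \Sigma(p)} .
\]
```

The series on the left diverges near the mass shell, but the closed form on the right remains meaningful there; this is the pattern behind the resummations discussed next.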
    Perturbative results are also often improved by partial summation of
    infinite classes of related diagrams. This is a standard approach to go some
    way towards a nonperturbative description. Of course, the series diverges
    (in case of a bound state it _must_ diverge, already in the simplest,
    nonrelativistic examples!), but the summation is done on a formal level (as
    everything in QFT) and only the result reinterpreted in a numerical way. In
    this way one can get in the ladder approximation Schroedinger's equation,
    and in other approximations Bethe-Salpeter equations, etc. See Volume 1 of
    Weinberg's QFT
book.

----------------------------------------------------------
Functional integrals, Wightman functions, and rigorous QFT
----------------------------------------------------------

QFT assumes
    the existence of interacting (operator distribution valued) fields Phi(x)
    with certain properties, which imply the existence of distributions
W(x_1,...,x_n) = <0|Phi(x_1)...Phi(x_n)|0>. But the right hand side makes no
    rigorous sense in traditional QFT as found in most text books, except for
    free fields. Axiomatic QFT therefore tries to construct the W's - called the
    Wightman functions - directly such that they have the properties needed to
    get an S-matrix (Haag-Ruelle theory), whose perturbative expansion can be
    compared with the nonrigorous mainstream computations. This can be done
    successfully for many 2D theories and for some 3D theories, but not, so far,
    in the physically relevant case of 4D. To construct something means to prove
    its existence as a mathematically well-defined object. Usually this is done
    by giving a construction as a sort of limit, and proving that the limit is
    well-defined. (This is different from solving a theory, which means
    computing numerical properties, often approximately, occasionally - for
    simple problems - in closed analytic form.) To compare it to something
    simpler: In mathematics one constructs the Riemann integral of a continuous
    function over a finite interval by some kind of limit, and later the
solution of an initial value problem for ordinary differential equations by
    using this and a fixed point theorem. This shows that each (nice enough)
    initial value problem is uniquely solvable. But it tells very little of its
    properties, and in practice no one uses this construction to calculate
    anything. But it is important as a mathematical tool since it shows that
calculus is logically consistent. Such a logical consistency proof of any 4D
    interacting QFT is presently still missing. Since logical consistency of a
    theory is important, the first person who finds such a proof will become
    famous - it means inventing new conceptual tools that can handle this
    currently intractable problem. Wightman functions are the moments of a
    linear functional on some algebra generated by field operators, and just as
    linear functionals on ordinary function spaces are treated in terms of
    Lebesgue integration theory (and its generalization), so Wightman linear
    functionals are naturally treated by functional integration. The 'only'
    problem is that the latter behaves much more poorly from a rigorous point of
    view than ordinary integration. Wightman functions are the moments of a
    positive state on noncommutative polynomials in the quantum field Phi, while
    time-ordered correlation functions are the moments of a complex measure on
    commutative polynomials in the classical field Phi. In both cases, we have a
    linear functional, and the linearity gives rise to an interpretation in
    terms of a functional integral. The exponential kernel in Feynman's path
    integral formula for the time-ordered correlation functions comes from the
    analogy between (analytically continued) QFT and statistical mechanics, and
    the Wightman functions can also be described in a similar analogy, though
    noncommutativity complicates matters. The main formal reason for this is
    that a Wick theorem holds both in the commutative and the noncommutative
    case. For rigorous quantum field theory one essentially avoids the path
    integral, because it is difficult to give it a rigorous meaning when the
    action is not quadratic. Instead, one only keeps the notion that an integral
    is a linear functional, and constructs rigorously useful linear functionals
    on the relevant algebras of functions or operators. In particular, one can
    define Gaussian functionals (e.g., using the Wick theorem as a definition,
    or via coherent states); these correspond exactly to path integrals with a
    quadratic action. If one looks at a Gaussian functional as a functional on
    the algebra of fields appearing in the action (without derivatives of
    fields), one gets - after time-ordering the fields - the traditional path
    integral view and the time-ordered correlation functions. If one looks at it
    as a functional on the bigger algebra of fields and their derivatives, one
    gets - after rewriting the fields in terms of creation and annihilation
    operators - the canonical quantum field theory view with Wightman functions.
    The algebra is generated by the operators a(f) and a^*(f), where f has
    compact support, but normally ordered expressions of the form S = integral
    dx : L(Phi(x), Nabla Phi(x)) : make sense weakly (i.e., as quadratic forms).
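Since the Wick theorem is what makes Gaussian functionals computable, a tiny illustration may help: for a single centered Gaussian variable, the n-th moment is a sum over all pairings of n "legs", each pair contributing one factor of the covariance. A minimal sketch in plain probability (not field theory; the function names are mine):

```python
def pairings(indices):
    """Enumerate all perfect pairings of an even-sized list of indices."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for rest_pairs in pairings(remaining):
            yield [(first, partner)] + rest_pairs

def gaussian_moment(n, sigma2=1.0):
    """<x^n> for a centered Gaussian via Wick's theorem:
    sum over all pairings, each pair contributing the covariance sigma2.
    Odd moments vanish (no pairing exists)."""
    if n % 2:
        return 0.0
    return sum(sigma2 ** (n // 2) for _ in pairings(list(range(n))))

# Reproduces the (n-1)!! pattern: <x^2> = 1, <x^4> = 3, <x^6> = 15
print([gaussian_moment(k) for k in (2, 4, 6)])  # [1.0, 3.0, 15.0]
```

For a free field the same combinatorics applies with sigma2 replaced by the two-point function, which is why Gaussian functionals correspond exactly to quadratic actions.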
    The art and difficulty is to find well-defined functionals that formally
    match the properties of the functionals 'defined' loosely in terms of path
    integrals. This requires a lot of functional analysis, and has been
    successfully done only in dimensions d<4. For an overview, see: A.S.
    Wightman, Hilbert's sixth problem: Mathematical treatment of the axioms of
    physics, in: Mathematical Developments Arising From Hilbert Problems, edited
    by F. Browder, (American Mathematical Society, Providence, R.I.) 1976,
pp.147-240.

----------------------------------------------------
Is there a rigorous interacting QFT in 4 dimensions?
----------------------------------------------------

In spite of
    many attempts (and though numerous uncontrolled approximations are routinely
    computed), no one has so far succeeded in rigorously constructing a single
    QFT in 4D which has nontrivial scattering. Not even QED is a mathematical
    object, although it is the theory that was able to reproduce experiments
    (Lamb shift) with an accuracy of 1 in 10^12, and with less accuracy already
    in 1948. But till today no one knows how to formulate the theory in such a
    way that the relevant objects whose approximations are calculated and
    compared with experiment are logically well-defined. See, e.g., the S.P.R.
    threads http://groups.google.com/groups?q=Unsolved problems in QED
http://groups.google.com/groups?q=What is well-defined in QED This probably
explains the high price tag of 1,000,000 US dollars promised for a solution
to one of the Clay millennium problems, which asks to find a valid
    construction for d=4 quantum Yang-Mills theories that is strong enough to
    prove correlation inequalities corresponding to the existence of a mass gap.
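Schematically (a heuristic gloss, not the official problem statement), a mass gap m > 0 means the vacuum two-point function decays exponentially,

```latex
\langle \Omega,\ \Phi(x)\,\Phi(0)\,\Omega \rangle \;\sim\; C\, e^{-m|x|}
\qquad (|x| \to \infty,\ m > 0),
```

or equivalently that the spectrum of the Hamiltonian in the vacuum sector is contained in {0} union [m, infinity).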
The problem is to explain rigorously why the mass spectrum for compact
Yang-Mills QFT begins at a positive mass, while the classical version has a
    continuous spectrum beginning at 0. The state of the art at the time the
    problem was crowned by a prize is given in
www.claymath.org/Millennium_Prize_Problems/Yang-Mills_Theory/_objects/Official_Problem_Description.pdf
I don't think significant progress has been
    published since then. Yang-Mills theories are (perhaps erroneously) believed
to be the simplest (hopefully) tractable case, being asymptotically free
    while not having the extra difficulties associated with matter fields.
    (There are only gluons, no quarks or leptons.) Of course, one would like to
    show rigorously that QED is consistent. But QED has certain problems (the
    Landau pole, see below) that are absent in so-called asymptotically free
    theories, of which Yang-Mills is the simplest. Note that rigorous
    interacting relativistic theories in 2D and 3D exist; see, e.g., Glimm and
    Jaffe's ''Quantum Physics: A Functional Integral Point of View''. This book
    is quite difficult on first reading. Volume 3 of Thirring's Course in
    Mathematical Physics (which only deals with nonrelativistic QM but in a
    reasonably rigorous way) might be a good preparation to the functional
    analysis needed. A more leisurely introduction of the physical side of the
    matter is in Elcio Abdalla, M. Christina Abdalla, Klaus D. Rothe
    Non-Perturbative Methods in 2 Dimensional Quantum Field Theory World
Scientific, 1991; revised 2nd ed., 2001.
    http://www.wspc.com/books/physics/4678.html The book is about rigorous
    results, with a focus on solvable models. Note that 'solvable' means in this
    context 'being able to find a closed analytic expression for all S-matrix
    elements'. These solvable models are to QFT what the hydrogen atom is to
    quantum mechanics. The helium atom is no longer 'solvable' in the present
    sense, though of course very accurate approximate calculations are possible.
    Unfortunately, solvable models appear to be restricted to 2 dimensions. The
    deeper reason for the observation that dimension d=2 is special seems to be
that in 2D the light cone is just a pair of lines. Thus space and time look
    completely alike, and by a change of variables (light front quantization),
    one can disentangle things nicely and find a good Hamiltonian description.
    This is no longer the case in higher dimensions. (But 4D light front
    quantization, using a tangent plane to the light cone, is well alive as an
    approximate technique, e.g., to get numerical results from QCD.) Thus, while
    2D solvable models pave the way to get some rigorous understanding of the
    concepts, they are no substitute for the functional analytic techniques
    needed to handle the non-solvable models such as Phi^4
theory.

------------------
Is QED consistent?
------------------

Many
    physicists think that QED cannot be a consistent theory, although it gives
    the most accurate predictions modern physics has to offer, namely that of
    the Lamb shift. But there is a phenomenon called the Landau pole that
    indicates that at extremely large energies (far beyond the range of physical
    validity of QED) something might go wrong with QED. This is probably why
Yang-Mills and not QED was chosen as the model theory for the millennium
    prize. Since the existence of the Landau pole is confirmed only in low order
perturbation theory and in lattice calculations (hep-lat/9801004 and
hep-th/9712244), this observation currently has no rigorous mathematical
substance. Moreover, the quality of the computed approximations is a strong
    indication that there should be a consistent mathematical foundation (for
    not too high energies), although it hasn't been found yet. There is no
    indication at all that at the energies where QED suffices to describe our
    world (with electrons and nuclei considered elementary particles), it should
be inconsistent. To show this rigorously, or to disprove it, therefore remains
another unsolved (and for physics more important) problem. Perturbative QED
is only a rudimentary version of the 'real QED', as can be seen from the fact
that Scharf's results on the external field case are much stronger (he constructs
    in his book the S-matrix) than those for QED proper (where he only shows the
    existence of the power series in alpha, but not their convergence). The
    quest for 'existence' of QED is the quest for a framework where the formulas
    make sense nonperturbatively, and where the power series in alpha is a
    Taylor expansion of a (presumably nonanalytic) function of alpha that is
    mathematically well-defined for alpha around 1/137 and not too high energy.
This is still open. More precisely: Probably the QED S-matrix exists
nonperturbatively for alpha <= 1/137 and input energies <= some number
E_limit(alpha) larger than the physical validity of pure QED. What is needed
is a mathematical proof that the QED S-matrix exists for 0 < alpha <= 1/137
and energies below E_limit(alpha).
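The Landau pole mentioned above comes from the one-loop leading-log running of the coupling. The following sketch uses the standard one-loop formula with a single electron loop (constants are approximate; the variable names are mine) to show both how slowly alpha grows with energy and how absurdly high the pole scale is:

```python
import math

ALPHA = 1 / 137.035999   # low-energy fine-structure constant
M_E = 0.000511           # electron mass in GeV

def running_alpha(Q):
    """One-loop leading-log QED running (single electron loop):
    alpha(Q) = ALPHA / (1 - (ALPHA / (3*pi)) * ln(Q^2 / M_E^2)).
    The denominator vanishes at the Landau pole."""
    return ALPHA / (1.0 - (ALPHA / (3.0 * math.pi)) * math.log(Q**2 / M_E**2))

# Scale where the one-loop denominator vanishes: far beyond any physical energy.
landau_pole = M_E * math.exp(3.0 * math.pi / (2.0 * ALPHA))

print(running_alpha(100.0))  # ~0.0074: barely above ALPHA even at 100 GeV
print(landau_pole)           # ~1e277 GeV
```

At every energy where QED is actually used, the running coupling is still tiny; the pole is a statement about the extrapolation of low-order perturbation theory to fantastic energies, which is why it casts doubt on QED only 'in principle' and not in its domain of physical validity.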




    --
    Inventor of the Exhaust Pipe.

    http://www.altelco.net/~lovekgc/BigFranklin.doc

    Too much Smoke and Mirrors?
    0000 year of OUR LORD !!!!

    http://www.altelco.net/~lovekgc/Jesus0000.htm

    *** New International SPAM email
    Blocking FREE web based SERVICE:

    http://www.altelco.net/~lovekgc/kgcLOVE.htm

    My Very Beautiful Poetry at

    http://www.altelco.net/~lovekgc/Awarm.doc

    http://www.altelco.net/~lovekgc/kgcpoetry.doc

    peace and love,

    (kirk) kirk gregory czuhai

    http://www.altelco.net/~churches/BlueRoses.htm

    do SEE AT LEAST!!! --->

    http://www.altelco.net/~lovekgc/algebrahelp.htm

    just to aid Alan Green Span, Practice Tree Conservation!

    FREE BEER -->

    http://www.altelco.net/~lovekgc/beer.htm



     
    Kirk Gregory Czuhai, Aug 17, 2004
    #1

  2. Kirk Gregory Czuhai

    ric Guest

    Wizard wrote:

    > What a bunch of horseshit!


    [...major snip]

    So is quoting back OVER TWO THOUSAND lines of text! Moron!
     
    ric, Aug 18, 2004
    #2

  3. Kirk Gregory Czuhai

    Wizard Guest

    Pissoff!

    ric wrote:
    >
    > Wizard wrote:
    >
    > > What a bunch of horseshit!

    >
    > [...major snip]
    >
    > So is quoting back OVER TWO THOUSAND lines of text! Moron!
     
    Wizard, Aug 18, 2004
    #3
