Gravity

In the late eighteenth century an experiment was conducted in a stone house on the outskirts of London to measure the mass of the Earth from the gravitational attraction between spheres of known mass. The work was carried out by Henry Cavendish, who watched as a small lead barbell suspended from a fibre twisted minutely under the gravitational tug of two bowling-ball-sized weights. The size of the twist revealed the strength of gravity between the two known masses, the barbell and the balls, and hence the value of the gravitational constant of the Universe, G. Because Cavendish also knew the strength of the Earth's surface acceleration, he could then deduce the planet's mass. G is one of the fundamental quantities in nature, yet its exact value remains poorly known: while constants such as the charge of the electron have been resolved to seven decimal places, physicists are unable to measure G accurately beyond the third decimal place.

"It (the lack of resolution) grates on me like a burr in the saddle,"

Science December 1998
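
To see why Cavendish's measurement amounted to "weighing the Earth", note that the surface acceleration g is related to the planet's mass M and radius R by g = GM/R^2. The short sketch below uses modern approximate values purely for illustration; these are not Cavendish's own figures.

```python
# Minimal sketch: once G is known, the Earth's mass follows from the
# surface acceleration via g = G*M/R^2, so M = g*R^2/G.
# The constants below are modern approximate values, for illustration only.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
g = 9.81         # surface gravitational acceleration, m/s^2
R = 6.371e6      # mean radius of the Earth, m

M_earth = g * R**2 / G
print(f"mass of the Earth ~ {M_earth:.2e} kg")   # ~5.97e+24 kg
```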

Research aimed at finding an accurate value for G has continued for many decades. To the dismay of the various teams involved, instead of converging on a final answer, their measurements have produced significantly different values.

"You might say we've had negative progress,"

says Barry Taylor, a physicist at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. However, at the November 1998 meeting held to honour the Cavendish anniversary, the 45 members of the "Big G" community had a pleasant surprise. Six groups using a variety of techniques weighed in with new values of G, and they were all in rough agreement. Though the numbers appear to be converging, there remains some skepticism among members of the scientific community, because the new data points diverge from the earlier results (see figure 1). The explanation generally offered for this divergence is simply the continued refinement of the empirical measurements.

The problem of measuring G is that it is not possible to shield out the gravitational attraction of other objects. Riley Newman, a physicist at the University of California, Irvine, likes to tell the story of how his group once discovered a peculiar early morning wiggle in gravity's strength. After months of head-scratching, a graduate student leaving the lab at 3 a.m. was literally doused by the answer: The sprinkler system on the surrounding lawn, set on a regular clock, was soaking the ground with enough water to create an additional nearby mass, which skewed the morning readings.
Science December 1998

The official value for G, chosen by an international panel in 1986, comes from a 1982 measurement by Gabe Luther, now at Los Alamos National Laboratory in New Mexico, and William Towler of the University of Virginia. They deduced their value using a setup not dissimilar to Cavendish's. In the Luther-Towler experiment (in blue at the top of figure 1) a tiny barbell was hung from a long fibre made of quartz or tungsten. When disturbed, the barbell would rotate lazily back and forth about once every 6 minutes, driven by the fibre's resistance to twisting. Two large tungsten balls placed nearby added a gravitational pull on the barbell, a force that shifted the swing time by a split second. By measuring that difference, Luther and Towler resolved G with an estimated accuracy of better than a hundredth of a percent. This was accepted as canon until 1994, when field leaders at the German standards laboratory, the PTB in Braunschweig, announced a value of G claimed to be equally accurate (the far right blue plot in figure 1). Instead of suspending test weights from a delicate fibre, they floated them on a layer of mercury. That allowed the researchers to use larger masses and generate a stronger pull. To the surprise of the community, the German value came out considerably above the Luther-Towler number, a full half-percent larger. Things got worse in 1995, when New Zealand's MSL came out with a number that significantly undercut the accepted value. Given that the newest values are converging, it seems likely that the debate will soon be resolved.
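
A rough sketch of the "time of swing" idea behind such measurements is given below. The real experiment's parameters are not quoted here; every number is an invented placeholder chosen only to give a period of roughly six minutes.

```python
import math

# Sketch of the time-of-swing method: the attractor balls contribute a
# small gravitational term k_g to the fibre's torsion constant k, which
# shifts the oscillation period T = 2*pi*sqrt(I/k) by a fraction of a
# second.  G is deduced from that shift.  All values are placeholders.
I   = 5.0e-5    # moment of inertia of the barbell, kg m^2 (assumed)
k   = 1.5e-8    # torsion constant of the fibre, N m/rad (assumed)
k_g = 1.0e-11   # effective gravitational "spring" of the balls (assumed)

T_alone = 2 * math.pi * math.sqrt(I / k)
T_balls = 2 * math.pi * math.sqrt(I / (k + k_g))
print(f"period without balls: {T_alone:.2f} s")    # ~363 s, about 6 minutes
print(f"period with balls:    {T_balls:.2f} s")
print(f"shift: {T_alone - T_balls:.2f} s")         # the 'split second'
```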

[Figure 1: published values of G from the various experiments, plotted over time]

A gravitational field is not, as is often stated, a force, but rather the warping of space-time around a mass or energy. This warping extends outward from the object, theoretically to infinity; however, its strength falls off as an inverse square function, becoming rapidly weaker with distance from the source. (Gribbin 1996) In quantum descriptions, the gravitational field is communicated between objects by a hypothetical boson known as the graviton, which carries the field in a manner analogous to the way photons carry the electromagnetic field. (Kaufmann 1991) For a long time there has been debate over whether the Universal Gravitational Constant is truly constant, or whether it varies slowly with time. So far, claims of such variation appear to be unsupported empirically.
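
As a quick illustration of the inverse-square fall-off (the numbers below are only illustrative):

```python
# The field strength from a mass M at distance r goes as G*M/r^2, so
# doubling the distance cuts the strength to a quarter.
G = 6.674e-11
M = 5.97e24                      # mass of the Earth, kg
R = 6.371e6                      # one Earth radius, m
for r in (R, 2 * R, 4 * R):
    print(f"r = {r:.2e} m  ->  g = {G * M / r**2:.2f} m/s^2")
# 9.82, 2.45 and 0.61 m/s^2 respectively
```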

Paul A. M. Dirac, among the greatest theoretical physicists of the twentieth century, if not of all time, once suggested that the universal gravitational constant, G, weakens with time in proportion to the age of the universe. Dirac's theory implied that the force of gravity would be only half as strong in 10 billion years as it is today. This became possible to test empirically in the late 1960s. The Moon's distance from the Earth is known to be slowly increasing because of its tidal interaction with the Earth; if gravity were growing weaker with time, the Moon would recede from the Earth even faster than conventional theory predicts. Using laser pulses reflected from mirrors placed on the Moon by Apollo astronauts, scientists showed that, though the Moon is indeed slowly receding, it is doing so only at the expected rate, not the faster one Dirac's theory would imply.

Robert Ehrlich George Mason University

Scientific American Web Site

Robert Ehrlich, of the physics department at George Mason University, responds to a question in Scientific American regarding possible changes in the Gravitational Constant by explaining:

"Scientists do continually try to measure the constants of physics, but the usual motivation is to get more precise values, rather than to check whether those values have changed. If you had a technique to measure the speed of light that was no better than one that was previously used, you might not bother to make another measurement unless there were some reason to believe the previous value was in error. Moreover, if you did measure the speed of light and got an anomalous value, the odds are good that you would conclude that you had made a mistake in the measurement, rather than that the speed of light 'hiccupped' that particular day. State-of-the-art measurement techniques are extremely complex and require all sorts of checks to be sure they have been performed correctly. If in fact the constants of nature were changing, it would be very difficult to know how precise your experiment needed to be to detect such a change, unless you had some way of estimating the expected rate of change. Random, occasional hiccups in the constants almost certainly would go undetected."

Scientific American Web Site

However, recent developments in an unrelated field may call into question some previously accepted theory concerning gravitational acceleration, and perhaps even the Gravitational Constant itself. Though these results do not suggest a notable decrease of G with time, exceptionally accurate measurements by JPL no longer correspond to accepted values (New Scientist No 2151). JPL in Pasadena has been working on the anomalous results for over a decade and has so far been unable to find an answer. Data from space probes appear to point to an unknown gravitational attraction within our solar system: close observation of the spacecraft trajectories reveals perturbations that some scientists are seriously considering as evidence that our current theories of gravity are wrong.

John Anderson of NASA's Jet Propulsion Laboratory in Pasadena:

"We've been working on this problem for several years, and we accounted for everything we could think of,"

The Pioneer 10 probe has been sending signals back to Earth since its launch in 1972. By studying the redshift of the returning radio waves, one can calculate the probe's speed. Unusually, Pioneer 10 seems to be slowing more quickly than it should.
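
A minimal sketch of the Doppler relation involved is shown below. The downlink frequency and speed are assumed, illustrative figures, not actual tracking data; two-way transponder tracking, as used in practice, roughly doubles the shift.

```python
# One-way Doppler: a source receding at speed v (v << c) shifts a carrier
# of frequency f0 down by roughly df = f0 * v / c.
c  = 2.998e8     # speed of light, m/s
f0 = 2.3e9       # assumed S-band downlink frequency, Hz
v  = 12.0e3      # assumed recession speed of the probe, m/s

df = f0 * v / c
print(f"one-way Doppler shift ~ {df/1e3:.1f} kHz")   # ~92 kHz
```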

Anderson notes that the signals from Pioneer 10 are far from "clean". The movement of the Earth around the Sun periodically alters the apparent redshift of the waves. The probe also occasionally corrects its course so that its antenna remains pointing towards the Earth. Scientists believe that all these factors have been accounted for.

However, in 1987 the team working on the Pioneer 10 data found a systematic anomaly that appeared to imply the probe was experiencing an additional deceleration towards the Sun. The disagreement is only about 80 billionths of a centimeter per second squared but, as JPL notes, to scientists used to working with extreme precision it is a glaring discrepancy.
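
To see why so small a figure still matters, it helps to integrate it over a year of tracking; the back-of-the-envelope sketch below does just that.

```python
# 80 billionths of a cm/s^2 is 8e-10 m/s^2.  Accumulated over a year it
# produces an easily measurable velocity and position offset.
a_anom = 8e-10              # anomalous deceleration, m/s^2
year   = 3.156e7            # seconds in one year

dv = a_anom * year          # velocity change after one year
dx = 0.5 * a_anom * year**2 # position offset after one year
print(f"extra slowing per year:   {dv*100:.1f} cm/s")   # ~2.5 cm/s
print(f"position offset per year: {dx/1e3:.0f} km")     # ~400 km
```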

Pioneer 10's gauges show no unexpected loss of fuel, so a leak could not be responsible for the deceleration, and it is not believed that the interstellar medium could cause such a large effect. Thermal radiation from the spacecraft's power source was thought to be too weak, and to be emitted in all directions, so it would not produce the kind of deceleration observed. The team also examined the possibility of gravitational attraction from an unknown asteroid, but this too was ruled out.

What gives the findings credence is that the effect is not limited to a single spacecraft. If the phenomenon were observed only on Pioneer 10, it would most likely be an error or a problem with that probe. However, Pioneer 11, launched in 1973, is also slowing at about the same rate. The Ulysses probe, launched in 1990 and originally sent to slingshot over Jupiter, appears to display similar anomalies, as does data from Galileo.

New Scientist 12 September 1998 No. 2151 page 4


The problem with what the JPL team is suggesting is that it is not observed to affect anything else in the solar system:

Clifford Will, a physicist at Washington University in St Louis states:

"They're extremely good at what they do, but I think there's some kind of systematic effect that has corrupted the data."

John Ries, a planetary scientist at the University of Texas at Austin, expressed skepticism, noting that even a small extra acceleration would have affected the planets over a period of 4½ billion years. Anderson and his colleagues remain cautious and have not ruled out a systematic error. More recent work seems to agree with Ries: it has been suggested that the retardation of distant spacecraft documented in New Scientist (12 September 1998, No. 2151, page 4) may have a more straightforward explanation.

Since John Anderson of NASA's Jet Propulsion Laboratory in Pasadena came forward with evidence that probes moving away from the Sun are slowing down, two scientists have suggested alternative explanations.

All the spacecraft are powered by plutonium-based radioisotope thermoelectric generators (RTGs). Resistance in the spacecraft's circuits turns some of the electrical power produced by the RTGs into heat, and all of the probes are fitted with louvered fin heat sinks to radiate this excess heat away.

According to Edward Murphy, an astronomer at Johns Hopkins University in Baltimore, Maryland, the heat sinks face away from the Sun, so the departing photons impart a small recoil in accordance with Newton's Third Law. This recoil acts as a retardation on the overall speed, slowing the probes down.

Comparing the radiation recoil with the observed retardation, he concluded:

"It's pretty close, and within observational errors,"

Jonathan Katz of Washington University in St Louis, Missouri, also blames heat--in this case, the heat wasted because of the RTGs' inefficiency at turning thermal energy into electricity. He points out that the satellites have large antennas that point to the Earth, and that the RTGs sit just off to the side. "The radiation can bounce off the back of the antenna and push the spacecraft towards Earth," he says.
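
A rough sketch of the photon-recoil arithmetic behind both suggestions is shown below; the spacecraft mass is an assumed round figure, used only to indicate the scale of directed thermal power that would be required.

```python
# Radiation carried away in one direction with power P pushes back on the
# craft with force F = P/c, so producing a deceleration a on a craft of
# mass m needs a directed radiated power P = m * a * c.
c      = 2.998e8    # speed of light, m/s
m      = 250.0      # assumed spacecraft mass, kg
a_anom = 8e-10      # anomalous deceleration, m/s^2

P_needed = m * a_anom * c
print(f"directed thermal power required: {P_needed:.0f} W")   # ~60 W
```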

Anderson, who as discussed above had already considered and ruled out a heat effect as the cause of the deceleration, remains unconvinced by the new arguments. "You can't get the force you need," he says.

From New Scientist, 17 October 1998

Gravity and Antigravity

The ships of the Federation employ an artificial gravity system to allow normal movement along their decks. This is accomplished through a network of small gravity generators, divided into four regions: two within the saucer section and two in the stardrive section. All four networks are tied into the Inertial Dampening System to provide effective "shock absorption" during movement. The networks employ a series of graviton generators that create short-lived graviton pulses, allowing the creation of artificial gravity in a manner similar to the tractor beam emitters.

Early human-built interstellar ships used rotating centrifuges to mimic the effects of gravity, though more modern Starfleet vessels use antigravity. Even the very early ships of the Federation, including the Daedalus class launched prior to 2167, only six years after the founding of the United Federation of Planets, would appear to have been equipped with antigravity generators, given the absence of rotating centrifuges in their hull geometries. The theory of antigravity is still debated in the scientific community, and claims periodically arise from scientists who believe they have discovered the principles behind nullifying gravity.

Sternbach and Okuda 1991
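
By way of comparison, the rotating-centrifuge approach mentioned above relies only on centripetal acceleration, a = w^2 * r. The small sketch below, with an assumed ring radius, shows the kind of spin rate involved.

```python
import math

# Spin rate needed for 1 g of centripetal acceleration at the rim of a
# rotating ring: a = w^2 * r, so w = sqrt(a / r).  The radius is assumed.
g = 9.81      # target acceleration, m/s^2
r = 50.0      # assumed ring radius, m

w   = math.sqrt(g / r)             # angular speed, rad/s
rpm = w * 60 / (2 * math.pi)
print(f"spin rate for 1 g at r = {r:.0f} m: {rpm:.1f} rpm")   # ~4.2 rpm
```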

Andrew Trupin, in an interview with Scientific American, stated his belief that though it is possible to produce an equal and opposite magnetic force to counterbalance the pull of gravity, gravity itself is not a force but a warping of space-time, and nullifying it would therefore require the realignment of space-time itself. This cannot be achieved with magnetic fields, which can, however, give the illusion of antigravity. At present no such "force" is known that can undo the warping of space-time.

Scientific American Web Site

Steinn Sigurdsson of the Institute of Astronomy at Cambridge University addressed the general feasibility of counteracting the pull of gravity:

"One can imagine three ways of countering gravity."

Solutions in relativity allow for the existence of repulsive forces related to gravity. A curious solution exists in which infinite ‘walls’ of high density are postulated, existing under very high surface tension. Such walls would repel all matter with a constant acceleration. It has been conjectured by some theorists that finite pieces of such walls could exist in the real universe and provide local repulsion. Such objects have been invoked to explain some puzzles of cosmology, although most physicists consider conventional explanations to be more likely. In a related vein, a universal repulsive ‘force’ is also postulated to have existed during the era of inflation. In this view, the early universe swelled enormously because of a repulsive 'force' pervading the vacuum.

Smoot 1995

Creating Antigravity

In 1948 the Dutch physicist Hendrik Casimir put forward the first evidence for the existence of something resembling antigravity. Casimir, who was born in The Hague in 1909, developed his theory from 1942 onwards while working for the electrical company Philips. The Casimir effect involves two parallel metal plates placed very close together with nothing in between them. The basis of the effect is that what one would normally call empty space is actually a seething mass of virtual particles, a phenomenon known as zero-point energy.


Scientific American December 1997

Photons, the particles responsible for carrying the electromagnetic force, are among the most common particles created in the vacuum. Virtual photons are common partly because a photon is its own antiparticle, and partly because photons have no rest mass. Photons of different energies are associated with electromagnetic waves of different wavelengths, with shorter wavelengths corresponding to greater energy.


These virtual-particle fluctuations give the vacuum an energy. Unfortunately, this energy is ordinarily the same everywhere, and so it cannot be detected or used: energy can only be used to do work, and thereby make its presence known, if there is a difference in energy from one place to another.

However, though the above is true for virtual particles in an unbounded vacuum, between two electrically conducting plates electromagnetic waves can only form certain stable patterns. Such waves can only vibrate in certain ways: the allowed vibrations are analogous to the fundamental note of a string of a particular length and its harmonics, or overtones.

Quite simply, no photon corresponding to a wavelength greater than the separation between the plates can fit into the gap. This means that some of the activity of the vacuum is suppressed in the gap between the plates, while the usual activity goes on outside. The result is that in each cubic centimeter of space there are fewer virtual photons bouncing around between the plates than there are outside, and so the plates feel a force pushing them together. Several experiments have been carried out to measure the strength of the Casimir force between two plates, using both flat and curved plates made of various kinds of material. The force has been measured over a range of plate gaps, down to as little as 1.4 nanometers, and closely matches Casimir's prediction.

Scientific American December 1997
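
For ideal, perfectly conducting parallel plates, Casimir's prediction for the attractive pressure is pi^2 * hbar * c / (240 * d^4), where d is the plate separation. The sketch below evaluates it at an illustrative separation of 100 nanometres.

```python
import math

# Casimir pressure between ideal parallel plates: P = pi^2*hbar*c/(240*d^4).
# The separation d below is an illustrative choice.
hbar = 1.0546e-34   # reduced Planck constant, J s
c    = 2.998e8      # speed of light, m/s
d    = 100e-9       # plate separation, m (100 nanometres)

P = math.pi**2 * hbar * c / (240 * d**4)
print(f"Casimir pressure at d = 100 nm: {P:.1f} Pa")   # ~13 Pa
```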

Further Research Into Zero Point Motion

Zero-point motion in a Bose-Einstein condensate (BEC) was quantitatively measured for the first time in May 1999, allowing researchers, in effect, to study matter at a temperature of absolute zero. Even materials cooled to absolute zero never cease all motion: because of the quantum mechanical principle of indeterminacy, they always retain a slight residual motion. MIT researcher Wolfgang Ketterle measured this "zero-point motion" in a sodium BEC, a collection of gas atoms that are collectively in the lowest possible energy state. According to Ketterle,

"the condensate has no entropy and behaves like matter at absolute zero." 

The MIT physicists measured the atoms' motion by exploiting the fact that atoms absorb light at slightly lower or higher frequencies when they are moving away from or towards the light source. To determine these Doppler shifts, the researchers used a technique known as Bragg scattering, in which atoms absorb photons of one energy from a laser beam and are stimulated by a second laser to emit photons of another energy, shifted upward or downward depending on the atoms' motion towards or away from the lasers. Measuring the range of energies of the emitted photons allowed the researchers to determine the range of momentum values in the condensate. Multiplying this measured momentum spread (delta p) by the size of the condensate (delta x) gave an answer of approximately h-bar (Planck's constant divided by 2 pi), the minimum value allowed by Heisenberg's uncertainty relation and quantum physics. While earlier BECs surely harboured this zero-point motion, previous measurements of BEC momentum spreads were made with exploding condensates whose energies were hundreds of times larger than the zero-point energy.

Physical Review Letters, 7 June 1999
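
A back-of-the-envelope version of the Heisenberg argument is sketched below; the condensate size is an assumed, illustrative figure.

```python
# Heisenberg-limited ("zero-point") momentum spread: dp ~ hbar / dx for a
# condensate of size dx.  Dividing by the sodium mass gives the tiny
# velocity spread the Bragg-scattering measurement had to resolve.
hbar = 1.0546e-34    # reduced Planck constant, J s
m_Na = 3.82e-26      # mass of a sodium-23 atom, kg
dx   = 10e-6         # assumed condensate size, m

dp = hbar / dx
dv = dp / m_Na
print(f"minimum momentum spread: {dp:.2e} kg m/s")
print(f"corresponding velocity spread: {dv*1e3:.2f} mm/s")   # ~0.3 mm/s
```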

The Existence of Antigravity

The uses of antigravity would be far reaching, and relate to the discussion on an accompanying page, where antigraviton fields are considered as a means of holding the "throat" of Einstein-Rosen bridges open. In 1987, Morris and Thorne drew attention to such possibilities. In the same paper, they concluded that:

"one should not blithely assume the impossibility of the exotic material that is required for the throat of a traversable wormhole."

Gribbin Homepage

When attempting to determine cosmological distances, astronomers have techniques to map the parameters of the Universe. Many of these techniques rely on the use of light or other electromagnetic waves. Commonly used are "standard candles": sources of known intrinsic brightness whose apparent brightness can be compared as a function of redshift. It was Henrietta Leavitt's 1912 discovery of the relation between the period and luminosity of Cepheid variable stars that allowed the construction of the first standard-candle techniques. The idea is that certain events will always have the same intrinsic energy output once certain specifics are known; Type 1a supernovae, for example, follow set rules that can be determined from their colour and light curves. By using Type 1a supernovae it was believed that distant galaxies could be charted, and from them the distances and recession speeds of objects could be mapped. However, in late 1997 it was realised that certain supernova events were far fainter than they should have been. Distant supernovae observed by Perlmutter and colleagues towards the end of that year gave a result that contradicted the accepted model of the Universe: it suggested that the expansion rate was actually increasing rather than decreasing (Nature 391 51). Further research by other groups confirmed these results, suggesting the objects were further away, and receding faster, than had previously been believed. Two teams of astronomers, one led by Saul Perlmutter from the Lawrence Berkeley National Laboratory, the other by Alex Filippenko of the University of California at Berkeley, have found that nearly all Type 1a supernovae appear to be at least 15% further away than the standard model of the Universe predicts. The problem is that the Universe appears to be expanding more quickly than it should, and the suggestion is that some force is pulling the fabric of space apart: something like a fundamental force that works only on the very largest scales to push the Universe apart.
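
A minimal sketch of the standard-candle arithmetic is given below; the peak absolute magnitude and the observed apparent magnitude are assumed, illustrative numbers.

```python
import math

# Standard candle: if every Type 1a supernova peaks at (roughly) the same
# absolute magnitude M, the observed apparent magnitude m gives the
# distance through m - M = 5*log10(d_pc) - 5.
M = -19.3     # assumed peak absolute magnitude of a Type 1a
m = 24.0      # assumed observed apparent magnitude

d_pc = 10 ** ((m - M + 5) / 5)      # inferred distance in parsecs
print(f"inferred distance: {d_pc/1e9:.1f} Gpc")                 # ~4.6 Gpc

# Being 15% further away than expected corresponds to extra dimming of:
print(f"extra dimming for 15% greater distance: {5*math.log10(1.15):.2f} mag")
```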

Many astrophysicists believe that the Universe underwent a period of incredibly rapid expansion shortly after the big bang. One consequence of this "inflationary" model is that the Universe is "flat" with a value of Omega equal to one. However, when all the visible mass in the Universe is added up, it is much less than that needed to give a flat Universe. This is one of the main motivations of the search for invisible or "dark" matter in the Universe.

Some cosmologists believe that the acceleration is caused by quantum effects, which result in a non-zero cosmological constant, Lambda. If the sum of Lambda and Omega equals 1, then the Universe will remain "flat", as predicted by inflation theory.

Some groups have tried to measure Lambda by studying gravitational lensing. If Lambda is non-zero, then astronomers should see more lensing events than if Lambda were zero. According to Matthias Bartelmann of the Max Planck Institute for Astrophysics in Garching, Germany, computer simulations can predict the number of lensing events you should see for different values of Lambda. His results approximately match observations of the gravitational lensing of radio galaxies carried out by Chris Kochanek and colleagues from the Harvard-Smithsonian Center for Astrophysics in the US (Astrophys J. 495 157). Their results place an upper limit of 0.7 on Lambda and a lower limit of 0.3 on Omega. Both these figures match the supernova data.
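
The flatness condition referred to above can be written Omega + Lambda = 1; the quoted lensing limits are at least consistent with it, as this trivial check (using only the published bounds, not new data) shows.

```python
# Flatness condition from inflation: Omega + Lambda = 1.
# The lensing limits quoted above (Lambda <= 0.7, Omega >= 0.3) are
# consistent with a flat Universe in which both bounds are saturated.
Lambda_max = 0.7
Omega_min  = 0.3
print(f"Lambda_max + Omega_min = {Lambda_max + Omega_min:.1f}")   # 1.0
```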

However, more recently (21 August 1999) warnings have been issued about a newly noticed source of bias in tracking supernovae. While the bias does not necessarily affect the arguments for a cosmological constant, it suggests that astronomers have probably not yet pinpointed every possible source of error. The significance of the supernova results had already been thrown into question when astronomers discovered that the most distant of these supernovae might be inherently dimmer (New Scientist, 17 July, p 4). Now Dale Howell, a graduate student at the University of Texas at Austin, points out another possible problem in a paper to appear later this year in The Astrophysical Journal.

Howell says that the central regions of galaxies on older photographic plates are usually overexposed, so supernovae are less likely to be discovered there. In contrast, more modern digital photography exposes galaxies evenly.

This is a problem because supernovae in the center of galaxies vary much more in brightness than those on the edge. This leads to a selection bias: closer supernovae discovered via photography might have different properties from faraway supernovae found by more modern methods.

"It makes it easier still to imagine that there's some systematic error," says Adam Riess, a supernova spotter at the University of California at Berkeley. However, Riess says that he and his team have attempted to allow for such a bias in their measurements. 

(New Scientist 21 August 1999)

Einstein-Rosen Bridges