Wednesday, November 9, 2011

The Theory of Everything






The Theory of Everything is a term for the ultimate theory of the universe—a set of equations capable of describing all phenomena that have been observed, or that will ever be observed. It is the modern incarnation of the reductionist ideal of the ancient Greeks, an approach to the natural world that has been fabulously successful in bettering the lot of mankind and continues in many people's minds to be the central paradigm of physics. A special case of this idea, and also a beautiful instance of it, is the equation of conventional nonrelativistic quantum mechanics, which describes the everyday world of human beings—air, water, rocks, fire, people, and so forth. The details of this equation are less important than the fact that it can be written down simply and is completely specified by a handful of known quantities: the charge and mass of the electron, the charges and masses of the atomic nuclei, and Planck's constant. For experts we write

H |Ψ⟩ = iℏ ∂|Ψ⟩/∂t ,      (1)

where the many-body Hamiltonian is

H = −Σj (ℏ²/2m) ∇j² − Σα (ℏ²/2Mα) ∇α² − Σj,α Zα e² / |rj − Rα| + Σj<k e² / |rj − rk| + Σα<β Zα Zβ e² / |Rα − Rβ| .      (2)

The symbols Zα and Mα are the atomic number and mass of the αth nucleus, Rα is the location of this nucleus, e and m are the electron charge and mass, rj is the location of the jth electron, and ℏ is Planck's constant.
Less immediate things in the universe, such as the planet Jupiter, nuclear fission, the sun, or isotopic abundances of elements in space are not described by this equation, because important elements such as gravity and nuclear interactions are missing. But except for light, which is easily included, and possibly gravity, these missing parts are irrelevant to people-scale phenomena. Eqs. (1) and (2) are, for all practical purposes, the Theory of Everything for our everyday world.
However, it is obvious glancing through this list that the Theory of Everything is not even remotely a theory of every thing. We know this equation is correct because it has been solved accurately for small numbers of particles (isolated atoms and small molecules) and found to agree in minute detail with experiment. However, it cannot be solved accurately when the number of particles exceeds about 10. No computer existing, or that will ever exist, can break this barrier because it is a catastrophe of dimension. If the amount of computer memory required to represent the quantum wavefunction of one particle is N, then the amount required to represent the wavefunction of k particles is N^k. It is possible to perform approximate calculations for larger systems, and it is through such calculations that we have learned why atoms have the size they do, why chemical bonds have the length and strength they do, why solid matter has the elastic properties it does, why some things are transparent while others reflect or absorb light. With a little more experimental input for guidance it is even possible to predict atomic conformations of small molecules, simple chemical reaction rates, structural phase transitions, ferromagnetism, and sometimes even superconducting transition temperatures. But the schemes for approximating are not first-principles deductions but are rather art keyed to experiment, and thus tend to be the least reliable precisely when reliability is most needed, i.e., when experimental information is scarce, the physical behavior has no precedent, and the key questions have not yet been identified. There are many notorious failures of alleged ab initio computation methods, including the phase diagram of liquid ³He and the entire phenomenology of high-temperature superconductors. Predicting protein functionality or the behavior of the human brain from these equations is patently absurd. So the triumph of the reductionism of the Greeks is a Pyrrhic victory: We have succeeded in reducing all of ordinary physical behavior to a simple, correct Theory of Everything only to discover that it has revealed exactly nothing about many things of great importance.
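To make the catastrophe of dimension concrete, here is a minimal Python sketch of the memory estimate; the grid size N = 1000 per particle is an illustrative assumption, not a figure from the text.

# Memory needed to store a many-body wavefunction on a naive grid.
# One particle needs N complex amplitudes; k particles need N**k.
BYTES_PER_AMPLITUDE = 16   # one double-precision complex number
N = 1000                   # assumed grid points per particle (illustrative)
for k in (1, 2, 5, 10):
    amplitudes = N ** k
    print(f"{k:2d} particles: {amplitudes:.1e} amplitudes, "
          f"{amplitudes * BYTES_PER_AMPLITUDE / 1e9:.1e} GB")

Ten particles already demand on the order of 10²² gigabytes, which is why the ten-particle barrier is a matter of principle rather than of hardware.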
In light of this fact it strikes a thinking person as odd that the parameters e, ℏ, and m appearing in these equations may be measured accurately in laboratory experiments involving large numbers of particles. The electron charge, for example, may be accurately measured by passing current through an electrochemical cell, plating out metal atoms, and measuring the mass deposited, the separation of the atoms in the crystal being known from x-ray diffraction. Simple electrical measurements performed on superconducting rings determine to high accuracy the quantum of magnetic flux hc/2e. A version of this phenomenon also is seen in superfluid helium, where coupling to electromagnetism is irrelevant. Four-point conductance measurements on semiconductors in the quantum Hall regime accurately determine the quantity e²/h. The magnetic field generated by a superconductor that is mechanically rotated measures e/mc. These things are clearly true, yet they cannot be deduced by direct calculation from the Theory of Everything, for exact results cannot be predicted by approximate calculations. This point is still not understood by many professional physicists, who find it easier to believe that a deductive link exists and has only to be discovered than to face the truth that there is no link. But it is true nonetheless. Experiments of this kind work because there are higher organizing principles in nature that make them work. The Josephson quantum is exact because of the principle of continuous symmetry breaking. The quantum Hall effect is exact because of localization. Neither of these things can be deduced from microscopics, and both are transcendent, in that they would continue to be true and to lead to exact results even if the Theory of Everything were changed. Thus the existence of these effects is profoundly important, for it shows us that for at least some fundamental things in nature the Theory of Everything is irrelevant. P. W. Anderson's famous and apt description of this state of affairs is “more is different”.
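These combinations are easy to evaluate numerically; a minimal sketch using the exact SI (CODATA 2018) values, in which the flux quantum appears as h/2e rather than the Gaussian-units hc/2e:

# Constant combinations that bulk-matter experiments pin down exactly (SI units).
h = 6.62607015e-34   # Planck's constant, J*s
e = 1.602176634e-19  # elementary charge, C
print(f"flux quantum h/2e      = {h / (2 * e):.6e} Wb")  # ~2.067834e-15 Wb
print(f"Hall conductance e^2/h = {e**2 / h:.6e} S")      # ~3.874046e-5 S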
The emergent physical phenomena regulated by higher organizing principles have a property, namely their insensitivity to microscopics, that is directly relevant to the broad question of what is knowable in the deepest sense of the term. The low-energy excitation spectrum of a conventional superconductor, for example, is completely generic and is characterized by a handful of parameters that may be determined experimentally but cannot, in general, be computed from first principles. An even more trivial example is the low-energy excitation spectrum of a conventional crystalline insulator, which consists of transverse and longitudinal sound and nothing else, regardless of details. It is rather obvious that one does not need to prove the existence of sound in a solid, for it follows from the existence of elastic moduli at long length scales, which in turn follows from the spontaneous breaking of translational and rotational symmetry characteristic of the crystalline state. Conversely, one therefore learns little about the atomic structure of a crystalline solid by measuring its acoustics.
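To see how little microscopic input the sound spectrum needs, here is a small Python sketch estimating a longitudinal sound speed from just two macroscopic numbers; the steel values are assumed textbook figures, not from the text.

import math
# Thin-rod longitudinal sound speed from one elastic modulus and the density:
# v = sqrt(E / rho). Microscopic detail enters only through these two numbers.
E = 200e9      # Pa, Young's modulus of a typical steel (assumed)
RHO = 7850.0   # kg/m^3, density of steel (assumed)
print(f"sound speed ~ {math.sqrt(E / RHO):.0f} m/s")   # ~5000 m/s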
The crystalline state is the simplest known example of a quantum protectorate, a stable state of matter whose generic low-energy properties are determined by a higher organizing principle and nothing else. There are many of these, the classic prototype being the Landau Fermi liquid, the state of matter represented by conventional metals and normal ³He. Landau realized that the existence of well-defined fermionic quasiparticles at a Fermi surface was a universal property of such systems independent of microscopic details, and he eventually abstracted this to the more general idea that low-energy elementary excitation spectra were generic and characteristic of distinct stable states of matter. Other important quantum protectorates include superfluidity in Bose liquids such as ⁴He and the newly discovered atomic condensates, superconductivity, band insulation, ferromagnetism, antiferromagnetism and the quantum Hall states. The low-energy excited quantum states of these systems are particles in exactly the same sense that the electron in the vacuum of quantum electrodynamics is a particle: They carry momentum, energy, spin, and charge, scatter off one another according to simple rules, obey Fermi or Bose statistics depending on their nature, and in some cases are even “relativistic,” in the sense of being described quantitatively by Dirac or Klein-Gordon equations at low energy scales. Yet they are not elementary, and, as in the case of sound, simply do not exist outside the context of the stable state of matter in which they live. These quantum protectorates, with their associated emergent behavior, provide us with explicit demonstrations that the underlying microscopic theory can easily have no measurable consequences whatsoever at low energies. The nature of the underlying theory is unknowable until one raises the energy scale sufficiently to escape protection.
Thus far we have addressed the behavior of matter at comparatively low energies. But why should the universe be any different? The vacuum of space-time has a number of properties (relativity, renormalizability, gauge forces, fractional quantum numbers) that ordinary matter does not possess, and this state of affairs is alleged to be something extraordinary distinguishing the matter making up the universe from the matter we see in the laboratory. But this is incorrect. It has been known since the early 1970s that renormalizability is an emergent property of ordinary matter either in stable quantum phases, such as the superconducting state, or at particular zero-temperature phase transitions between such states called quantum critical points. In either case the low-energy excitation spectrum becomes more and more generic and less and less sensitive to microscopic details as the energy scale of the measurement is lowered, until in the extreme limit of low energy all evidence of the microscopic equations vanishes away. The emergent renormalizability of quantum critical points is formally equivalent to that postulated in the standard model of elementary particles right down to the specific phrase “relevant direction” used to describe measurable quantities surviving renormalization. At least in some cases there is thought to be an emergent relativity principle in the bargain. The rest of the strange agents in the standard model also have laboratory analogues. Particles carrying fractional quantum numbers and gauge forces between these particles occur as emergent phenomena in the fractional quantum Hall effect. The Higgs mechanism is nothing but superconductivity with a few technical modifications. Dirac fermions, spontaneous breaking of CP, and topological defects all occur in the low-energy spectrum of superfluid ³He.
Whether the universe is near a quantum critical point is not known one way or the other, for the physics of renormalization blinds one to the underlying microscopics as a matter of principle when only low-energy measurements are available. But that is exactly the point. The belief on the part of many that the renormalizability of the universe is a constraint on an underlying microscopic Theory of Everything rather than an emergent property is nothing but an unfalsifiable article of faith. But if proximity to a quantum critical point turns out to be responsible for this behavior, then just as it is impossible to infer the atomic structure of a solid by measuring long-wavelength sound, so might it be impossible to determine the true microscopic basis of the universe with the experimental tools presently at our disposal. The standard model and models based conceptually on it would be nothing but mathematically elegant phenomenological descriptions of low-energy behavior, from which, until experiments or observations could be carried out that fall outside its region of validity, very little could be inferred about the underlying microscopic Theory of Everything. Big Bang cosmology is vulnerable to the same criticism. No one familiar with violent high-temperature phenomena would dare to infer anything about Eqs. (1) and (2) by studying explosions, for they are unstable and quite unpredictable from one experiment to the next. The assumption that the early universe should be exempt from this problem is not justified by anything except wishful thinking. It could very well turn out that the Big Bang is the ultimate emergent phenomenon, for it is impossible to miss the similarity between the large-scale structure recently discovered in the density of galaxies and the structure of styrofoam, popcorn, or puffed cereals.
Self-organization and protection are not inherently quantum phenomena. They occur equally well in systems with temperatures or frequency scales of measurement so high that quantum effects are unobservable. Indeed the first experimental measurements of critical exponents were made on classical fluids near their liquid-vapor critical points. Good examples would be the spontaneous crystallization exhibited by ball bearings placed in a shallow bowl, the emission of vortices by an airplane wing, finite-temperature ferromagnetism, ordering phenomena in liquid crystals, or the spontaneous formation of micelle membranes. To this day the best experimental confirmations of the renormalization group come from measurements of finite-temperature critical points. As is the case in quantum systems, these classical ones have low-frequency dynamic properties that are regulated by principles and independent of microscopic details. The existence of classical protectorates raises the possibility that such principles might even be at work in biology.
What do we learn from a closer examination of quantum and classical protectorates? First, that these are governed by emergent rules. This means, in practice, that if you are locked in a room with the system Hamiltonian, you can't figure the rules out in the absence of experiment and the hand-shaking between theory and experiment. Second, one can follow each of the ideas that explain the behavior of the protectorates we have mentioned as it evolved historically. In solid-state physics, the experimental tools available were mainly long-wavelength, so that one needed to exploit the atomic perfection of crystal lattices to infer the rules. Imperfection is always present, but time and again it was found that fundamental understanding of the emergent rules had to wait until the materials became sufficiently free of imperfection. Conventional superconductors, for which nonmagnetic impurities do not interfere appreciably with superconductivity, provide an interesting counterexample. In general it took a long time to establish that there really were higher organizing principles leading to quantum protectorates. The reason was partly materials, but also the indirectness of the information provided by experiment and the difficulty in consolidating that information, including throwing out the results of experiments that have been perfectly executed, but provide information on minute details of a particular sample, rather than on global principles that apply to all samples.
Some protectorates have prototypes for which the logical path to microscopics is at least discernible. This helped in establishing the viability of their assignment as protectorates. But we now understand that this is not always the case. For example, superfluid ³He, heavy-fermion metals, and cuprate superconductors appear to be systems in which all vestiges of this link have disappeared, and one is left with nothing but the low-energy principle itself. This problem is exacerbated when the principles of self-organization responsible for emergent behavior compete. When more than one kind of ordering is possible the system decides what to do based on subtleties that are often beyond our ken. How can one distinguish between such competition, as exists, for example, in the cuprate superconductors, and a “mess”? The history of physics has shown that higher organizing principles are best identified in the limiting case in which the competition is turned off, and the key breakthroughs are almost always associated with the serendipitous discovery of such limits. Indeed, one could ask whether the laws of quantum mechanics would ever have been discovered if there had been no hydrogen atom. The laws are just as true in the methane molecule and are equally simple, but their manifestations are complicated.
The fact that the essential role played by higher organizing principles in determining emergent behavior continues to be disavowed by so many physical scientists is a poignant comment on the nature of modern science. To solid-state physicists and chemists, who are schooled in quantum mechanics and deal with it every day in the context of unpredictable electronic phenomena such as organogels, Kondo insulators, or cuprate superconductivity, the existence of these principles is so obvious that it is a cliché not discussed in polite company. However, to other kinds of scientist the idea is considered dangerous and ludicrous, for it is fundamentally at odds with the reductionist beliefs central to much of physics. But the safety that comes from acknowledging only the facts one likes is fundamentally incompatible with science. Sooner or later it must be swept away by the forces of history.
For the biologist, evolution and emergence are part of daily life. For many physicists, on the other hand, the transition from a reductionist approach may not be easy, but should, in the long run, prove highly satisfying. Living with emergence means, among other things, focusing on what experiment tells us about candidate scenarios for the way a given system might behave before attempting to explore the consequences of any specific model. This contrasts sharply with the imperative of reductionism, which requires us never to use experiment, as its objective is to construct a deductive path from the ultimate equations to the experiment without cheating. But this is unreasonable when the behavior in question is emergent, for the higher organizing principles—the core physical ideas on which the model is based—would have to be deduced from the underlying equations, and this is, in general, impossible. Repudiation of this physically unreasonable constraint is the first step down the road to fundamental discovery. No problem in physics in our time has received more attention, and with less in the way of concrete success, than that of the behavior of the cuprate superconductors, whose superconductivity was discovered serendipitously, and whose properties, especially in the underdoped region, continue to surprise. As the high-Tc community has learned to its sorrow, deduction from microscopics has not explained, and probably cannot explain as a matter of principle, the wealth of crossover behavior discovered in the normal state of the underdoped systems, much less the remarkably high superconducting transition temperatures measured at optimal doping. Paradoxically, high-Tc continues to be the most important problem in solid-state physics, and perhaps physics generally, because this very richness of behavior strongly suggests the presence of a fundamentally new and unprecedented kind of quantum emergence.
In his book “The End of Science” John Horgan argues that our civilization is now facing barriers to the acquisition of knowledge so fundamental that the Golden Age of Science must be thought of as over. It is an instructive and humbling experience to attempt explaining this idea to a child. The outcome is always the same. The child eventually stops listening, smiles politely, and then runs off to explore the countless infinities of new things in his or her world. Horgan's book might more properly have been called the End of Reductionism, for it is actually a call to those of us concerned with the health of physical science to face the truth that in most respects the reductionist ideal has reached its limits as a guiding principle. Rather than a Theory of Everything we appear to face a hierarchy of Theories of Things, each emerging from its parent and evolving into its children as the energy scale is lowered. The end of reductionism is, however, not the end of science, or even the end of theoretical physics. How do proteins work their wonders? Why do magnetic insulators superconduct? Why is ³He a superfluid? Why is the electron mass in some metals stupendously large? Why do turbulent fluids display patterns? Why does black hole formation so resemble a quantum phase transition? Why do galaxies emit such enormous jets? The list is endless, and it does not include the most important questions of all, namely those raised by discoveries yet to come. The central task of theoretical physics in our time is no longer to write down the ultimate equations but rather to catalogue and understand emergent behavior in its many guises, including potentially life itself. We call this physics of the next century the study of complex adaptive matter. For better or worse we are now witnessing a transition from the science of the past, so intimately linked to reductionism, to the study of complex adaptive matter, firmly based in experiment, with its hope for providing a jumping-off point for new discoveries, new concepts, and new wisdom.

Tuesday, November 8, 2011

DARPA & Pentagon- Satellite launch from AirPlanes






U.S. military operations rely heavily upon satellites to spy on battlefields and coordinate friendly forces across the globe, but fast-changing ground conditions or enemy attacks on satellites can threaten to overwhelm the system. That's why the Pentagon has announced $164 million to turn airliners into airborne launch platforms that can send small satellites into orbit within 24 hours.
An airplane-based launch means that the U.S. military could swiftly deploy satellites from any normal airfield, rather than rely upon expensive and possibly vulnerable ground-based launch pads. The Pentagon's research agency, the Defense Advanced Research Projects Agency (DARPA), also anticipates slashing small satellite payload costs from more than $30,000 per pound to less than $10,000 per pound — making such launches three times cheaper.

Taking off from an airliner flying at 25,000 feet allows the theoretical space launch vehicle to start out above most of the atmosphere. It also adds a starting speed boost to the space launch vehicle, and allows designers to create a larger, more efficient rocket nozzle.
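A rough isothermal-atmosphere estimate shows how much air a 25,000-foot start avoids; the 8.5 km scale height is a standard assumed value, not from the article:

import math
# Isothermal estimate of how much atmosphere lies below the release altitude.
# Pressure (and the overlying air mass) falls roughly as exp(-h/H).
H_KM = 8.5                       # assumed mean scale height of the atmosphere
h_km = 25_000 * 0.3048 / 1000    # 25,000 ft in kilometres (~7.6 km)
below = 1 - math.exp(-h_km / H_KM)
print(f"~{below:.0%} of the atmosphere lies below {h_km:.1f} km")   # ~59%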
DARPA wants the program to demonstrate at least 12 launches of 100-pound payloads to low Earth orbit, with each launch costing about $1 million. Launches could start as soon as 2015, according to DARPA's official announcement of the program on Nov. 4.
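Those targets are easy to cross-check with one line of arithmetic:

# Sanity check of DARPA's target: ~$1M per launch for a 100 lb payload.
cost_per_launch = 1_000_000   # dollars
payload_lb = 100
print(f"${cost_per_launch / payload_lb:,.0f} per pound")   # $10,000/lb,
# about a third of the >$30,000/lb quoted for today's small-satellite launches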
The U.S. military has shown past interest in having the capability to quickly put new satellites into orbit. Its attempts to create flexible orbital spies include the reusable Air Force space plane, called the X-37B, which is currently on its second test flight above the Earth.
Satellite replacements might also be needed in case the existing satellite network becomes disabled or compromised. Hackers have demonstrated their ability to interfere with U.S. government satellites, and countries such as Russia and China possess systems capable of shooting down or disabling satellites.
But if the new program succeeds, the U.S. military could put new satellites or satellite replacements into any orbit without the limitations of fixed geographical launch pads. Anyone hoping to stop such launches would have to consider almost any airfield as a possible launch site.

Thursday, November 3, 2011

AEHF-1 (Advanced Extremely High Frequency) Satellite.




The U.S. Air Force has completed the initial activation of its first jam-proof Advanced Extremely High Frequency military communications satellite, and begun on-orbit testing.
Ground terminals at Schriever AFB, Colo. and MIT/Lincoln Labs, Mass. logged in to the Lockheed Martin-built satellite as part of its startup process. The Space and Missile Systems Center at Los Angeles AFB plans to transfer responsibility for the satellite to 14th Air Force early next year.
“By the end of November we should have completed sufficient testing to confidently make the decision on whether to ship and subsequently launch SV-2 in April 2012,” says Dave Madden, director of SMC’s Milsatcom Systems Directorate.
Worth more than $1 billion and built on Lockheed Martin’s A2100 satellite bus, AEHF-1 required 14 months to reach orbit more than 22,000 mi. above Earth, after its Aug. 14, 2010, launch.
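That “22,000 mi.” is the geosynchronous altitude, which follows from Kepler's third law; a quick Python sketch with standard constants:

import math
# Geosynchronous altitude from Kepler's third law: a = (mu * T^2 / (4 pi^2))^(1/3).
MU = 398_600.4418    # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6_378.137  # km, equatorial radius
T = 86_164.1         # s, one sidereal day
a = (MU * T**2 / (4 * math.pi**2)) ** (1 / 3)
alt = a - R_EARTH
print(f"{alt:,.0f} km = {alt / 1.609344:,.0f} mi")   # ~35,786 km, ~22,236 mi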
Though the launch was nominal, foreign object debris in the propulsion system was later found to be the culprit that prevented the liquid apogee engine from burning properly and propelling the satellite higher into orbit.
These problems forced Lockheed to forfeit $15 million of its available remaining award fee at the time.
The total cost of the recovery mission, including assembling a team comprising the nation’s top orbital scientists and conducting painstaking forensics and reviews on the satellite prelaunch, was estimated at $25 million.
Total cost for development, which began in 2002, and buying six AEHF satellites is at least $13.5 billion.

Friday, October 28, 2011

National Polar-orbiting Operational Environmental Satellite System.

The NPP Project is a joint effort of the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Integrated Program Office (IPO), the National Oceanic and Atmospheric Administration (NOAA) and NASA.
The NPP spacecraft will be an Earth-observing satellite carrying four instruments into a polar, sun-synchronous, 824 km orbit. NPP will be launched on a Delta II launch vehicle. The design lifetime of the NPP spacecraft is 5 years.
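An 824 km sun-synchronous orbit implies a specific inclination, recoverable from the standard J2 nodal-precession formula; a sketch with textbook constants:

import math
# Inclination of a circular sun-synchronous orbit from J2 nodal precession:
# dRAAN/dt = -1.5 * n * J2 * (Re/a)^2 * cos(i), set equal to 360 deg per year.
MU = 398_600.4418   # km^3/s^2
RE = 6_378.137      # km
J2 = 1.08263e-3
a = RE + 824.0                               # km, NPP's orbital radius
n = math.sqrt(MU / a**3)                     # mean motion, rad/s
target = 2 * math.pi / (365.2422 * 86_400)   # required precession rate, rad/s
cos_i = -target / (1.5 * n * J2 * (RE / a) ** 2)
print(f"inclination: {math.degrees(math.acos(cos_i)):.1f} deg")   # ~98.7 deg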
Instruments
The following instruments will be a part of the NPP spacecraft:
  • The Visible-Infrared Imager Radiometer Suite (VIIRS) instrument is a multispectral scanning radiometer with a 3000 km swath width. It derives its heritage from the Advanced Very High Resolution Radiometer (AVHRR), the Operational Linescan System (OLS), the Moderate Resolution Imaging Spectroradiometer (MODIS), and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS).
  • The Cross-track Infrared Sounder (CrIS) instrument is a Michelson interferometer. Its heritage is the High Resolution Infrared Sounder (HIRS), the Atmospheric Infrared Sounder (AIRS), and the Infrared Atmospheric Sounding Interferometer (IASI) radiometer. It is co-registered with the Advanced Technology Microwave Sounder (ATMS) and is designed to work in conjunction with it.
  • The ATMS instrument is a passive microwave radiometer with a swath width of 2300 km. Its heritage is the Advanced Microwave Sounding Unit (AMSU) A1/A2 and the Humidity Sounder for Brazil (HSB).
  • The Ozone Mapping and Profiler Suite (OMPS) measures solar scattered radiation to map the vertical and horizontal distribution of ozone in the Earth's atmosphere using a nadir ultraviolet (UV) sensor and limb-scanning UV/visible (VIS) sensors. Its heritage is the Solar Backscatter Ultraviolet (SBUV)/2 radiometer, the Total Ozone Mapping Spectrometer (TOMS), the Shuttle Ozone Limb Scatter Experiment (SOLSE) and the Limb Ozone Retrieval Experiment (LORE).


The NPP Mission Success is determined by its capability to provide continuation of a group of earth system observations initiated by the Earth Observing System (EOS) Terra, Aqua and Aura missions. The NPP Mission Success is also judged by its ability to reduce the risks associated with its advanced observational capabilities as they are transitioned from the NASA research program into the NPOESS operational program in support of both the Department of Defense (DoD) and NOAA. These include pre-operational risk reduction demonstration and validation for selected NPOESS instruments and algorithms, as well as ground data processing, archive and distribution. Together these data records will fulfill the U.S. Climate Change Research Program (CCRP) objectives of understanding the earth's climate system and its variability on a decadal basis.
The specific NASA science criteria are:
1. Continue vertical temperature and moisture profiles of the Earth's atmosphere with accuracy, extent, and frequency consistent with those made with the Aqua satellite sensors.
2. Continue a record of sea surface temperature with accuracy, extent and frequency consistent with those made with Terra and Aqua sensors.
3. Continue a record of surface biophysical and climatic parameters with accuracy, extent and frequency consistent with those made with Terra and Aqua sensors.
4. Continue a record of cloud and aerosol properties with accuracy, extent, and frequency consistent with those made with Terra and Aqua sensors.
5. Continue a record of ozone total column abundance and vertical profile with accuracy, extent, and frequency consistent with those made with previous US spacecraft making comparable measurements.

Tuesday, October 18, 2011

Nano Satellite






LOGAN, Utah — There is big news on the small satellite front. From super-secret agencies and the U.S. military to academia and private firms, as well as world space agencies and NASA, ultra-small satellites are the big thing.
In sizing up "smallsats," there is a range of classifications in the less-than-500-kilogram department, be they minisatellites, microsatellites, nanosatellites, picosatellites, palm-size CubeSats, even the diminutive femtosatellite, weighing in at less than 100 grams.
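The mass bands behind those names are conventions rather than official standards; a small Python sketch using commonly quoted ranges (the exact cutoffs vary by source):

# Commonly quoted smallsat mass classes, in kilograms (boundaries vary by source).
SMALLSAT_CLASSES = {
    "minisatellite": (100, 500),
    "microsatellite": (10, 100),
    "nanosatellite": (1, 10),      # a 1U CubeSat (~1.3 kg) sits here
    "picosatellite": (0.1, 1),
    "femtosatellite": (0.0, 0.1),  # under 100 grams, as quoted in the text
}

def classify(mass_kg: float) -> str:
    for name, (low, high) in SMALLSAT_CLASSES.items():
        if low <= mass_kg < high:
            return name
    return "not a smallsat (>= 500 kg)"

print(classify(1.3))   # nanosatellite
print(classify(150))   # minisatellite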
Cornell University has begun to delve into a postage stamp-size "satellite on a chip" design, called Sprite, envisioning a swarm of these tiny probes exploring planetary atmospheres for organic compounds.

Call them a powerful force in the universe. Smallsats have already shown their ability to monitor disasters, study Earth’s environment and support agriculture, cartography and earth science missions.



Smallsats are part of the solution, where they used to be a distraction, said Matt Bille, an associate with Booz Allen Hamilton in Colorado Springs, Colo.
So, what does this foretell? "The knowledge of how to make and use smallsats has passed the tipping point," Bille told SPACE.com. "It exists worldwide and has fostered a global generation of satellite builders and engineers. It used to be only a few organizations could build a satellite. Now, a smart teenager with a CubeSat kit and a soldering iron is a space agency. We’ve only begun to grasp the implications of that."
"What this means for the future is that use of smallsats and satellites in general will only increase. The proliferation of smallsat capabilities has unleashed the most powerful force in the universe — human creativity," Bille said.
That was the message from Bille, joined by about 1,100 participants who gathered here Aug. 8-11 at Utah State University. The meeting was used to reflect upon 25 years of smallsat progress and what’s ahead — a gathering of experts convened by Utah State University and the American Institute of Aeronautics and Astronautics. 




Low-cost high-tech
Looking back over the last few decades and gazing forward was Siegfried Janson, a senior scientist at The Aerospace Corp. in Los Angeles.
Janson flagged the onslaught of advances in micro- and nanoelectronics, microelectromechanical (MEM) systems, solar cell technologies, global positioning systems, and the Internet itself. Toss in for good measure personal computers, he said, stuffed with multiple processors and graphics cards, pepped up with more and more memory.
All that low-cost high tech has allowed small teams to blueprint, build and fly progressively smaller satellites with ever-increasing capacity, Janson told the audience.
Janson anticipates a wider diversity of missions flown by highly capable small satellites, such as formation flying to create large virtual antennas that make enhanced imaging from space possible.
Collaboration
"Advancement of the technologies is no longer the primary issue," said Pat Patterson, chairman of the smallsat conference and director of the strategic and military space division at Utah State University’s Space Dynamics Laboratory. "It’s still the mission that matters. It has to give the customer some value," he told SPACE.com.
"It’s kind of all coming together," Patterson said, pointing to smallsat attitude- control devices, batteries and solar cells, new ways to beat the heat and cold of space, coupled with smaller, lower-costing launchers.
Collaboration is the key, said Doug Sinclair, owner of Sinclair Interplanetary in Toronto, Canada. He advised that universities building CubeSats need to focus on what they do best and rely on other groups to supply other resources.
"For instance, exchange a radio for a computer. Both groups end up with a CubeSat, but now they’ve got much better odds of succeeding," Sinclair said.
Common utility
As for what’s the future of smallsats, there will be growth, new missions, and new ways of working together, said Bille, expressing his own views and not speaking for Booz Allen Hamilton policy. "CubeSats are like the personal computer of this industry."
"Perhaps just like personal computing and cell phones that have common utility among individual consumers today, smallsats will also follow that trend," said Tom Hunsaker, a Booz Allen Hamilton colleague of Bille's. Hunsaker’s personal crystal ball predicts networked satellites with individual IP addresses, controlled through the Internet and providing individualized positioning, communications, social and multimedia capability.
Bille concluded: “The age of microspacecraft is on solid ground now. There’s a definite trend toward putting small things together to do big accomplishments.”

Thursday, October 13, 2011

Boeing- CST-100 & GEOINT



According to a September 15th press release, Boeing is continuing to advance its design for the CST-100 spacecraft under NASA's Commercial Crew Development Space Act Agreement. In mid-July, Boeing released several artist's renderings of its CST-100 (Crew Space Transportation) spacecraft, which will deliver passengers to both the ISS and the Bigelow Aerospace Orbital Space Complex. The CST-100 is a bit larger than Apollo but smaller than Orion, and is able to launch on several different rockets, including the Atlas, Delta and Falcon. The price of a seat on the CST-100 hasn't been set, and Space Adventures co-founder Eric Anderson stated that the company isn't ready to talk about pricing yet. He did say, however, that the pricing would be competitive with the Russian Soyuz flights that Space Adventures currently uses. To give you an idea, Canadian billionaire Guy Laliberte ponied up about $40 million for his trip to the International Space Station, and had to undergo almost 200 days of intense training to qualify for the spaceflight. Extracts from Guy's blog describing that training, along with videos, demonstrations and recordings from past passengers, can be found on the Space Adventures website.
So, although the pricing will most likely be far too much for an average individual to reserve a seat, it's like anything else—pricing eventually goes down as more players enter the game. Perhaps in a couple of decades, pricing might be a bit more reasonable, and we'll all be headed into orbit.

The Company's newest technology 

The Boeing Company [NYSE: BA] will demo its geospatial data management technologies for the Intelligence Community, defense and national security customers at the GEOINT Symposium, October 16-19 in San Antonio, Texas. The Boeing exhibit will introduce the company’s newest “Human Geography” technology. 

“The Boeing Human Geography solution provides community data in categories such as political ideology, ethnicity, cultural habits, language, education and health care — and how these have contributed to the intelligence picture,” said Dewey Houck, Intelligence Systems Group vice president. “It offers historical trends and patterns to help give the analyst a holistic understanding of nations and regions by broadening and deepening their analytic expertise.” In addition to Human Geography, the Boeing exhibit at Booth 313 also will feature the following technologies: 
  • TAC – An analytical tool that enables real-time collaborative analysis through the persistent querying of streaming and stored data, giving users immediate access to data relevant to their topic of interest
  • 3-D Ladar – A mapping capability that uses laser light technology to produce a precise 3-D image of the terrain. The laser radar, or ladar, weighs less than 20 pounds (8 kg), enabling multi-platform use and supporting a variety of surveillance and sensing applications
  • SAR Agility – The Synthetic Aperture Radar (SAR) image analysis tool draws on the power of mass-market Graphic Processing Units (GPUs) to provide real-time processing and user interaction, resulting in fast and comprehensive extraction of actionable information from complex SAR imagery
Boeing also will showcase its comprehensive, web-based GEOINT source-discovery solution. This solution allows online, on-demand access to search across internal and external data sources, as well as different classification levels, using Boeing eXMeritus HardwareWall and a variety of industry standard protocols and messaging formats. 

Friday, September 16, 2011

Keyhole-7 GAMBIT & Keyhole-9 HEXAGON






It’s been super-secret for so many years, but for one day only on Saturday (Sept. 17), some of the United States' once-clandestine spy satellites will be seen by public eyes for the first time.
The buzz in space security circles is that the National Reconnaissance Office (NRO) will apparently lift the veil of silence on its hush-hush early spysat hardware — space-based James Bondish satellites that performed highly classified, intelligence-gathering duties.
The odds are that NRO's GAMBIT and HEXAGON space surveillance programs of the 1960s will be the spotlighted projects. A HEXAGON satellite will be on display Saturday at the Smithsonian Institution's National Air and Space Museum's Steven F. Udvar Hazy Center, in Chantilly, Va.
At 60 feet (18 meters) long and 10 feet (3 m) wide, the HEXAGON satellites were the largest spy satellites the United States ever launched into space. The satellites took photographs of the Soviet Union and other targets around the world from 1971 to the early 1980s, according to a Smithsonian announcement.
It's all part of the 50th anniversary celebration of the NRO this year. A curtain-raising reception is slated for Saturday at the Udvar Hazy Center — just down the road from the U.S. space spy agency's headquarters.
Reportedly, the event will be a packed house of government civilian, military, and industry attendees. The commemoration is co-hosted by the NRO, the Smithsonian and the American Institute of Aeronautics and Astronautics (AIAA).
"The commemorative event is being held to recognize and celebrate the collective contributions that the NRO's people and innovative technologies have made to our nation’s security in supporting policy decisions, intelligence activities, and military operations around the world," an AIAA announcement stated. "As part of this event, the NRO plans to highlight examples of those contributions by unveiling two legacy satellite reconnaissance systems."
The artifacts being unveiled will be on display during the gala.

For its part, the NRO clearly has a secret legacy to stand on.
"When the United States needs eyes and ears in critical places where no human can reach — be it over the most rugged terrain or through the most hostile territory — it turns to the National Reconnaissance Office," reads a statement on the NRO website.
The NRO is the U.S. government agency in charge of designing, building, launching and maintaining America's vital intelligence satellites. From its inception as a hush-hush entity in 1961 to its out-in-the-open declassification in 1992, the NRO has been geared to crank out reconnaissance support for the intelligence community and Department of Defense.
"We are unwavering in our dedication to fulfilling our vision: Vigilance From Above," asserts the NRO website. "Develop. Acquire. Launch. Operate." [Most Destructive Space Weapons Concepts]
Amazing things
In the Sept. 12 edition of "The Space Review," noted military space historian Dwayne Day pointed out in an article entitled "Flashlights in the Dark" that the decades-long secrecy over two of NRO's Cold War era satellite programs — the Keyhole-7 GAMBIT and Keyhole-9 HEXAGON — is being cast off.
"GAMBIT was started in 1960, with a first launch in 1963. HEXAGON started in 1966, with a first launch in 1971. Both programs operated until the mid-1980s. Both used cameras that recorded their images on film that was parachuted to Earth," Day explained in his article. "Both were highly successful. And both represented the pinnacle of American Cold War intelligence collection technology, unmatched in capabilities by any other nation."
"Back in 1995, President Clinton signed an executive order that declassified the first reconnaissance satellite, named CORONA, and called for a review of the two programs that followed it," Day told SPACE.com. [Related: New Telescope Spots Space Hazards for Military Satellites]
"That review started in the latter 1990s, but then got put on hold. Probably the reason that these programs are finally being declassified is mostly because the National Reconnaissance Office is celebrating its fiftieth anniversary … and also because the Obama administration is more open to some declassification of historical programs, more so than the Bush administration," Day said.
Day explained that what people don't realize is that while NASA was doing amazing things in space in the 1960s and later, there was a whole other military space program that was also accomplishing amazing things.
"The NRO was operating intelligence satellites that were astounding in their capabilities. These satellites helped reveal what the Soviet Union was doing so that they could not surprise us. And they also made it possible to verify the arms control treaties that the superpowers signed in the early 1970s," Day said.
Beyond Corona
Bruce Carlson was appointed as the 17th Director of the NRO in June 2009.
Speaking last month before a gathering of small satellite developers at Utah State University, he noted that the CORONA program was developed as a work-around to an image transmission problem and provided a vital capability.
The CORONA program continued until 1972 and achieved a number of notable firsts, Carlson said:
  • First to recover objects from orbit
  • First to deliver intelligence information from a satellite
  • First to produce stereoscopic satellite photography
  • First to employ multiple re-entry vehicles, and
  • First satellite reconnaissance program to pass the 100-mission mark — 145 satellites were launched under the CORONA program.
"This year marks the 50th Anniversary of the NRO. During that time, our mission has transitioned from a mission focused on the USSR to a diverse and widely dispersed mission which includes international terrorists, drug traffickers, peacekeeping and humanitarian relief operations … to name a few," Carlson said.
Logical next step
According to Jeffrey Richelson, a senior fellow of the National Security Archive who has written extensively about the NRO and space reconnaissance, NRO throwing light on deep secrets is refreshing news.
“Gambit and Hexagon declassification is the logical next step if the NRO was going to do anything really significant for the 50th anniversary,” Richelson told SPACE.com.
Hexagon declassification was envisioned in President Bill Clinton’s 1995 executive order, and Keyhole-7 and Keyhole-9 (known in shorthand as KH) mapping products were declassified in 2002.
"This may largely complete the process — or at least begin to complete the process — of declassifying the G and H programs — if KH-8 data and product is declassified along with KH-7. But even if it is G/KH-7 and H, it would be significant,” Richelson said. “Certainly the declassification has the potential for providing a lot of new historical data about the satellite imagery effort."

Saturday, September 10, 2011

Gravity Recovery and Interior Laboratory (GRAIL)

Delta II launches with moon-bound GRAIL spacecraft

September 10th, 2011 by William Graham
The Delta II rocket launched on its 150th flight on Saturday, departing from Cape Canaveral’s Space Launch Complex 17 for the final time on the second of two launch opportunities – at 9:08am Eastern (13:08 UTC). The rocket is carrying NASA’s two GRAIL spacecraft, which will be used to study the Moon’s gravitational field.
Delta II/GRAIL:
GRAIL, or Gravity Recovery And Interior Laboratory, is a two-spacecraft mission being flown as part of NASA’s Discovery programme. It is expected to yield a better understanding of the Moon’s internal structure and thermal evolution. This will allow scientists to formulate a model of the Moon’s formation which can also be applied to terrestrial planets.
The principal scientific objectives of the GRAIL mission are to produce a map of the Moon’s lithosphere, to allow scientists to understand the Moon’s thermal evolution, and the evolution of breccia within the Moon’s crust, and to determine more details of the Moon’s interior, particularly the size of the Moon’s core, and the structure beneath impact basins.
The two spacecraft are identical, apart from the positioning of star trackers and instruments to allow the spacecraft to fly with their antennae pointing towards each other. They were built by Lockheed Martin, based around a bus developed for the USA-165, or XSS-11, satellite; a technology demonstration spacecraft operated by NASA and the United States Air Force, which was launched in 2005. Each GRAIL spacecraft has a mass of 307 kilograms, including 106 kilograms of hydrazine fuel.
The spacecraft are each equipped with two 1.9 square metre, 520-cell, solar arrays, which will generate at least 700 watts of power. The solar arrays will charge a 30 amp-hour lithium ion battery in each spacecraft, which will be used to store power for when the spacecraft are not in sunlight. Propulsion of each spacecraft will be provided by an MR-106L monopropellant engine, capable of generating 22 newtons of thrust.
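For a rough sense of the power margin, here is a sketch of the eclipse endurance implied by those figures; the 28 V bus voltage is an assumed typical value, not from the article:

# Rough eclipse endurance from the quoted battery and load figures.
BATTERY_AMP_HOURS = 30.0
BUS_VOLTAGE = 28.0    # volts, assumed typical small-spacecraft bus
LOAD_WATTS = 700.0    # the arrays' minimum quoted output
energy_wh = BATTERY_AMP_HOURS * BUS_VOLTAGE   # ~840 Wh stored
print(f"~{energy_wh / LOAD_WATTS:.1f} h at a full {LOAD_WATTS:.0f} W load")  # ~1.2 h
# Actual eclipse loads would be lower, so real endurance is longer.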
The spacecraft are three-axis stabilised, with reaction wheels and eight warm gas thrusters, each capable of producing 0.9 newtons of thrust, being used aboard each spacecraft for attitude control. Sun and star trackers and inertial measurement units will allow the spacecraft to determine their orientation. The spacecraft carry avionics systems which are derived from those developed for the Mars Reconnaissance Orbiter, which was launched in 2005.
Each spacecraft carries two transponders operating in the IEEE S band (NATO E band), which will be used to relay data to the ground and to upload commands to the spacecraft. A further S band transponder, the Time-Transfer Assembly, will be used to transmit signals between the spacecraft to synchronise their onboard chronometers.
Two IEEE X band (NATO I or J band) transponders, the Radio Science Beacon, will be used to transmit signals to Earth for Doppler ranging. Finally an IEEE Ka band (NATO K band) transponder, the Microwave Assembly, will be used to find the distance between the two spacecraft, and track their relative motion.
The Ka band transponder forms part of the Lunar Gravity Ranging System or LGRS, which is GRAIL’s primary instrument. LGRS consists of four elements. The Ultra-Stable Oscillator, or USO, will be used to generate an oscillating signal to synchronise the instruments. This signal will then be transmitted through both the Microwave Assembly (MWA) and Time-Transfer Assembly (TTA) antennae. The TTA broadcasts the signal as a ranging code, similar to those transmitted by Global Positioning Satellites. Finally, the data is collected by the Gravity Recovery Processor Assembly, or GPA, which processes it for transmission back to Earth.
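The precision of such a link is set by the carrier wavelength; a sketch assuming a ~32 GHz Ka-band carrier and millicycle phase tracking (both are illustrative assumptions, not figures from the article):

# Range sensitivity of an inter-satellite microwave phase measurement.
C = 299_792_458.0    # m/s, speed of light
F_KA = 32e9          # Hz, assumed Ka-band carrier frequency
PHASE_RES = 1e-3     # cycles, assumed trackable phase resolution
wavelength = C / F_KA   # ~9.4 mm
print(f"{wavelength * PHASE_RES * 1e6:.0f} micron range resolution")
# Micron-scale changes in the spacecraft separation trace out tiny
# variations in the lunar gravity field below.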
LGRS is derived from the K-Band Ranging (KBR) instrument aboard the Gravity Recovery And Climate Experiment, or GRACE, spacecraft, which were launched in March 2002. GRACE, like GRAIL, consists of two spacecraft using radio signals to map a gravitational field; however, it is studying Earth’s gravitational field instead of the Moon’s.
The two spacecraft also carry the Moon Knowledge Acquired by Middle school students, or MoonKAM, student outreach payload. This will be used to image areas of the Moon at the request of schoolchildren. A similar programme for Earth imagery, EarthKAM, has been operated aboard the International Space Station since 2001 and also flown on Space Shuttle missions STS-89 and STS-99. A prototype, KidSat, was also flown on STS-76, STS-81 and STS-86.
GRAIL is the eleventh mission to be launched as part of NASA’s Discovery programme, which was started in 1992. Discovery is a medium-class programme intended to study the Solar System; many of NASA’s recent planetary missions have been conducted as part of it. The first Discovery mission, NEAR, was launched in February 1996 to explore the Asteroid 433 Eros.
The next mission, Mars Pathfinder was launched in December 1996, placing a lander on Mars, and deploying the Sojourner rover. The third mission, Lunar Prospector, was launched in 1998 to study the lunar surface via spectroscopy. In 1999 the Stardust spacecraft was launched to return samples from the comet 81P/Wild 2. These samples were returned in January 2006, and the spacecraft subsequently performed an extended mission with a flyby of 9P/Tempel 1 this February, before finally being deactivated on 24 March.
The Genesis spacecraft, launched in August 2001, was the fifth Discovery mission. It collected a sample of solar wind, and returned it to Earth. During its return to Earth, its parachute failed to deploy, however some of its samples were still usable. The sixth mission, CONTOUR, was less successful. Intended to perform flybys of comets 2P/Encke and 73P/Schwassmann-Wachmann 3, CONTOUR was launched in July 2002. The spacecraft was destroyed due to a malfunction of an onboard kick motor which was intended to propel it out of Earth orbit towards its first comet encounter.
MESSENGER, the seventh mission, was launched in August 2004 and entered orbit around Mercury on 18 March this year. The eighth mission, Deep Impact, fired a probe into the comet 9P/Tempel 1, in order to study its composition. Launched in 2005, its impactor hit Tempel 1 in July of the same year, with the spacecraft then being used for an extended mission to 103P/Hartley 2.
The ninth mission of the programme, Dawn, was launched in September 2007, and entered orbit around the Asteroid 4 Vesta on 16 July this year. Following a year orbiting Vesta, it will depart for the dwarf planet Ceres, which it will also orbit. The tenth mission, Kepler, is a space telescope which is being used to look for exoplanets. It was launched in 2009.
GRAIL was launched by a Delta 356, which is a Delta II Heavy flying in the 7920H-10C configuration. It was the sixth and possibly last Delta II Heavy to be launched. Overall, its launch marked the 150th Delta II mission, and potentially its penultimate flight.
The Delta II 7920H-10C configuration consists of an Extra-Extended Long Tank Thor first stage with an RS-27A engine, fuelled by RP-1 propellant and liquid oxygen oxidiser.
Early in the ascent the first stage was augmented by nine GEM-46 solid rocket motors; six of which ignited at launch, and the other three shortly before the first six burnt out.
The second stage was a Delta-K, powered by an AJ-10-118K engine. The second stage is fuelled by Aerozine-50 propellant, with dinitrogen tetroxide being used as an oxidiser. The 7920H-10C configuration does not incorporate a third stage. A three metre, or ten foot, composite payload fairing encapsulated the spacecraft.
In the mid-1980s, launches of Delta rockets were winding down, with future payloads expected to fly aboard the Space Shuttle. Following the Challenger accident in 1986 this policy was reviewed, and in January 1987 the US Air Force ordered a new series of Delta rockets, primarily to launch Global Positioning Satellites. The Delta II made its maiden flight on 4 February 1989, in the 6925 configuration.
The 6000-series Delta IIs were built as an interim whilst the more capable 7000-series was in development. It used an Extra-Extended Long Tank Thor first stage powered by an RS-27 engine, a Delta-K second stage, and nine Castor-4A solid rocket motors. The 7000 series, which first flew in November 1990, introduced an uprated RS-27A engine, and GEM-40 solids.
The 6000-series made seventeen flights; three in the 6920 configuration and fourteen in the 6925 configuration. Its final flight was made on 24 July 1992, carrying the Geotail spacecraft to study Earth’s magnetosphere. Other 6000-series payloads included nine Block II GPS satellites, four commercial communications satellites, an SDI technology demonstration experiment and two astronomy satellites; EUVE and ROSAT.
The 7000 series has seen a wider variety of configurations, with the Delta II Lite programme resulting in the development of configurations with three or four solid rocket motors, and launches being made with two or three stages, and with two different types of third stage. In all, 127 have been launched; ten in the 7320 two-stage configuration with three solid rocket motors, thirteen in the 7420 configuration with four SRMs, and twenty seven in the 7920 configuration with nine SRMs.
The 7326 configuration made three flights and the 7426 made a single flight; these had three and four solid rocket motors respectively, and both had Star-37FM third stages. The 7425 configuration, with four solid rocket motors and a Star-48B third stage, made four flights. The most-launched configuration is the 7925, which features nine SRMs and a Star-48B upper stage, and has made sixty nine flights.
The Delta II Heavy has the same configuration as the 7000 series, except that it has more powerful solid rocket motors. The GEM-46 motors, which were originally developed for the Delta III, allow the rocket to carry a heavier payload into orbit.
The Delta II is statistically the most reliable rocket in service, having only failed twice. Both failures were of 7925 configuration rockets, and both were caused by problems with the solid rocket motors.
In August 1995, during the launch of Koreasat 1, one of the nine solids failed to separate after burning out. The rocket continued on to orbit; however, the additional mass of the spent solid rocket motor resulted in it reaching a lower orbit than had been planned. The satellite was able to raise its own orbit, but at the expense of a significant amount of fuel.
The second failure occurred in January 1997, during the launch of the first Block IIR Global Positioning Satellite, GPS IIR-1. Thirteen seconds into the flight, the rocket self-destructed following the structural failure of the number 2 solid rocket motor. Over 220 tonnes of debris fell within a kilometre of the launch pad, with one piece landing in the blockhouse car park, destroying twenty vehicles.
An investigation concluded that recent changes in equipment used to transport the solid rocket motors had resulted in pressure being put onto an area of the booster, and that this had caused a crack to form around six seconds after launch. The equipment was redesigned, and additional inspections were added for future launches. The Delta II has not failed in any of the ninety four launches it has made since then.
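On the published numbers, the track record works out as follows (counting the 1995 partial failure as a failure):

# Delta II record implied by the figures above.
flights = 150    # through this GRAIL launch
failures = 2     # Koreasat 1 (1995, partial) and GPS IIR-1 (1997, total)
print(f"success rate: {(flights - failures) / flights:.1%}")   # 98.7%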
Delta 356 could have launched GRAIL in one of two instantaneous launch windows available per day. The first of these windows opened at 8:29:45am Eastern (12:29:45 UTC), but was not taken due to unacceptable upper level winds. The second launch opportunity was, however, taken at 9:08am Eastern (13:08 UTC).
Had the vehicle launched during the first window, the rocket would have flown on an azimuth of 93 degrees. During the second window, an azimuth of 99 degrees was employed.
About two seconds before launch, the RS-27A main engine ignited, along with two LR-107-AN-11 vernier engines. About two tenths of a second before the scheduled liftoff time, six of the nine solid rocket motors ignited. At T-0, Delta 356 was released to begin its ascent into orbit. Twenty nine seconds into flight, the rocket was travelling at Mach 1, the speed of sound.
Around 79 seconds after liftoff, the three remaining solid rocket motors ignited. A second and a half later the six ground-lit solids were jettisoned in two groups of three, having expended their fuel. The three air-lit motors also burned for 80.5 seconds, before they too were jettisoned.
Around 263.2 seconds after launch the first stage depleted its fuel, and its main engine shut down, an event designated Main Engine Cutoff, or MECO. Shortly afterwards, Vernier Engine Cutoff, or VECO, occurred, when the two vernier engines also shut down. Eight seconds after MECO the first stage was jettisoned, and five and a half seconds after that the second stage's AJ-10 engine ignited. The payload fairing was jettisoned 4.3 seconds after second stage ignition.
Events after fairing separation occurred at slightly different times, depending on whether the 93 or 99 degree flight profile was flown. In the 93 degree profile, the first burn of the second stage was to last 153 seconds; on the 99 degree profile that was used, it was seven tenths of a second longer.
The first burn was followed by a long coast phase, lasting 58 minutes and 40.6 seconds for the 99 degree profile. After coasting, the second stage ignited for its second and final burn, lasting 271.7 seconds on the 99 degree profile.
Nine and a half minutes after the second burn was complete, separation of the first spacecraft, GRAIL-A, occurred. The upper stage was then manoeuvred, before the separation of GRAIL-B, which occurred eight and a quarter minutes after that of GRAIL-A. A RocketCam mounted on the second stage was used to verify that separation had taken place.
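Summing the intervals quoted above gives an approximate mission elapsed time for each event on the 99 degree profile. The Python sketch below reconstructs that timeline from the figures in this article; the event names are informal and the timings approximate.

# Approximate ascent timeline for the 99 degree profile, built purely
# from the intervals quoted in this article. Each entry is the time
# elapsed since the previous event, in seconds.
events = [
    ("MECO",                         263.2),
    ("First stage jettison",           8.0),
    ("Second stage ignition",          5.5),
    ("Fairing jettison",               4.3),
    ("SECO-1 (end of first burn)", 153.7 - 4.3),  # 153.7 s burn from ignition
    ("Restart (end of coast)",     58 * 60 + 40.6),
    ("SECO-2 (end of second burn)",  271.7),
    ("GRAIL-A separation",          9.5 * 60),
    ("GRAIL-B separation",         8.25 * 60),
]

t = 0.0
for name, dt in events:
    t += dt
    print(f"T+{t:7.1f} s ({t / 60:5.1f} min)  {name}")

Run this and GRAIL-A separation lands at roughly 80 minutes after liftoff, with GRAIL-B following at around 88 minutes, consistent with the account above.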
Shortly after separation, the GRAIL spacecraft will deploy their solar arrays as they pass into sunlight for the first time since leaving their carrier rocket. GRAIL will travel to the Moon on a low-energy trajectory, via the Sun-Earth Lagrange 1 point.
The spacecraft are expected to enter selenocentric, or lunar, orbit between 31 December and 1 January. The spacecraft will subsequently manoeuvre into lower orbits, before they are moved into formation to begin collecting scientific data. At the start of the scientific phase of the mission, the spacecraft will be in circular orbits at an altitude of 55 kilometres.
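For context, a circular orbit at that altitude circles the Moon in a little under two hours. The quick calculation below, in Python, shows this using standard lunar constants rather than mission data.

import math

GM_MOON = 4.9028e12   # m^3/s^2, lunar gravitational parameter (standard value)
R_MOON = 1_737_400    # m, mean lunar radius (standard value)
ALTITUDE = 55_000     # m, science orbit altitude quoted above

# Period of a circular orbit: T = 2*pi*sqrt(a^3 / GM)
a = R_MOON + ALTITUDE
period = 2 * math.pi * math.sqrt(a**3 / GM_MOON)
print(f"orbital period: {period / 60:.1f} minutes")  # roughly 113 minutes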
Scientific operations are expected to commence on 8 March next year, and last for 82 days. Decommissioning of the spacecraft will begin on 29 May, and the spacecraft are expected to impact the lunar surface in June.
Delta 356 was the last rocket planned to depart from Cape Canaveral’s Space Launch Complex 17. It launched from SLC-17B, SLC-17A having been closed in 2009. Launch Complex 17, as it was then designated, was built between August and December 1956 to accommodate tests of the Thor missile.
The first Thor launch occurred from LC-17B on 26 January 1957; however, it ended in failure when the rocket lost thrust and exploded on the launch pad. A second launch in April was erroneously destroyed by range safety after a faulty console caused the RSO to believe the rocket was flying in the wrong direction. The first successful launch occurred on 20 September, also from LC-17B.
Missile tests were made from LC-17B until the end of the 1950s, after which it began to be used for orbital launches. The first orbital launch to be made from the pad occurred on 13 April 1960, when a Thor-Ablestar launched Transit 1B. The last of ten Thor-Ablestar launches from the pad occurred in May 1962, after which Delta launches from LC-17B began.
The first Delta launch from LC-17B was of Delta 11, carrying Telstar 1, the first commercial communications satellite. The pad was subsequently used by Delta A, B, C, E1, G and C1 rockets between 1962 and 1969. Between 1963 and 1965, six suborbital flights were also launched from LC-17B, carrying ASSET reentry vehicles to demonstrate technology for the X-20 DynaSoar spacecraft.
Three of these launches used the single-stage Thor DSV-2F, and the other three used the two-stage Thor DSV-2G, which included a Delta upper stage; however, its launches are not officially listed as Delta launches. None of the six ASSET flights reached space; instead they flew shallower atmospheric flight profiles.
Delta launches from LC-17B resumed in September 1972, when the Delta 1000-series started using the pad. The 2000-series began to launch from the pad in 1974, with the last Delta 2000 launch from the complex occurring in 1979. From 1983 to 1989 it was used for Delta 3000-series launches, and the short-lived interim Delta 4000 series made both of its launches from LC-17B: the first on 27 August 1989 and the second on 12 June 1990.
Delta II launches from LC-17B began on 11 December 1989. On 8 January 1991 the first Delta II 7000-series launch from LC-17B orbited a NATO communications satellite. In the mid-1990s LC-17B received modifications to accommodate the Delta III rocket, and in 1997 it was redesignated Space Launch Complex 17.
The first Delta III launch occurred on 27 August 1998, carrying the Galaxy 10 satellite. The mission ended in failure after the vehicle’s solid rocket motors ran out of hydraulic fluid, resulting in a loss of control and the destruction of the rocket by range safety.
The second Delta III launch in May 1999 also failed, after the second stage engine’s combustion chamber ruptured, leaving the Orion 3 communications satellite in a useless low Earth orbit. A third launch with a mock-up satellite also underperformed, reaching a lower than planned orbit. After these failures the Delta III was retired.
Because of its modifications to accommodate the Delta III, SLC-17B is the only launch pad which can accommodate the Delta II Heavy. The first launch of the Delta II Heavy occurred on 10 June 2003, carrying the Spirit spacecraft bound for Mars. Launches of standard 7000-series Delta IIs continued throughout the period in which the Delta III and Delta II Heavy used the pad, with the most recent launch from the complex, in September 2009, carrying the two STSS-Demo satellites for the US military.
In total, GRAIL is the 164th launch to have been made from SLC-17B. Payloads launched from the pad in the past include Telstar 1, Syncom 1, Pioneers 8 and 9, Wind, NEAR, Mars Pathfinder, Mars Polar Lander, WMAP, the Opportunity rover, Spitzer, MESSENGER, Deep Impact, STEREO, THEMIS, Dawn, Fermi and Kepler.
The other pad in Space Launch Complex 17, SLC-17A, was used for 161 launches, beginning with a Thor test flight on 30 August 1957. The pad was used by Thor DM-18, Thor-Able, Thor-Delta and Thor DSV-2D rockets, followed by the Delta A, B, C, D, E, E1, G, L, M, M6, N, 2000 and 3000 series. From 1989 Delta II launches were made from the pad, using both the 6000 and 7000 series configurations. The final launch from the pad was of the last GPS IIR satellite, in August 2009.
The launch of GRAIL was the second of three planned Delta II launches this year. The next launch is the last currently on the manifest; however, components to produce five more rockets do exist.
These components are for the Delta II Heavy configuration; however, United Launch Alliance has stated that they can be converted to regular 7000-series rockets, which would be able to launch from Vandenberg Air Force Base without modifications to the launch pad. NASA is currently considering restoring the Delta II to its list of available launch systems after repeated failures of the Taurus-XL rocket.
The remaining Delta II launch is also United Launch Alliance’s next scheduled mission. It will carry the NPP weather satellite for NASA and NOAA, and is scheduled to launch from Vandenberg Air Force Base at the end of next month. Excess capacity on the rocket will be used to launch several small satellites.
(Images: ULA, NASA and L2 Historical)