Wednesday, February 4, 2009

Plane

Furniture is the mass noun for the movable objects which may support the human body (seating furniture and beds), provide storage, or hold objects on horizontal surfaces above the ground. Storage furniture (which often makes use of doors, drawers, and shelves) is used to hold or contain smaller objects such as clothes, tools, books, and household goods.

Furniture can be a product of artistic design and is considered a form of decorative art. In addition to furniture's functional role, it can serve a symbolic or religious purpose. Domestic furniture works to create, in conjunction with furnishings such as clocks and lighting, comfortable and convenient interior spaces. Furniture can be made from many materials, including metal, plastic, and wood. Furniture can be made using a variety of woodworking joints which often reflect the local culture.
Furniture has been a part of the human experience since the development of non-nomadic cultures. Evidence of furniture survives from the Neolithic Period and later antiquity in the form of paintings, such as the wall murals discovered at Pompeii, and sculpture; actual pieces have been excavated in Egypt and found in tombs in Ghiordes, in modern-day Turkey.

A range of unique stone furniture has been excavated in Skara Brae, a Neolithic village in Orkney, Scotland. The site dates from 3100-2500 BC, and due to a shortage of wood in Orkney, the people of Skara Brae were forced to build with stone, a readily available material that could be worked easily and turned into items for use within the household. Each house shows a high degree of sophistication and was equipped with an extensive assortment of stone furniture, ranging from cupboards, dressers and beds to shelves, stone seats and limpet tanks. The stone dresser was regarded as the most important piece, as it symbolically faces the entrance in each house and is therefore the first item seen when entering, perhaps displaying symbolic objects, including decorative artwork such as several Neolithic carved stone balls also found at the site.

The Classical World
Early furniture has been excavated from the 8th-century B.C. Phrygian tumulus, the Midas Mound, in Gordion, Turkey. Pieces found here include tables and inlaid serving stands. There are also surviving works from the 9th-8th-century B.C. Assyrian palace of Nimrud. The earliest surviving carpet, the Pazyryk Carpet, was discovered in a frozen tomb in Siberia and has been dated between the 6th and 3rd centuries B.C. Recovered Ancient Egyptian furniture includes a 3rd millennium B.C. bed discovered in the Tarkhan Tomb, a c. 2550 B.C. gilded set from the tomb of Queen Hetepheres, and a c. 1550 B.C. stool from Thebes. Ancient Greek furniture design beginning in the 2nd millennium B.C., including beds and the klismos chair, is preserved not only by extant works, but by images on Greek vases. The 1738 and 1748 excavations of Herculaneum and Pompeii introduced Roman furniture, preserved in the ashes of the 79 A.D. eruption of Vesuvius, to the eighteenth century.

The furniture of the Middle Ages was usually heavy, oak, and ornamented with carved designs. Along with the other arts, the Italian Renaissance of the fourteenth and fifteenth century marked a rebirth in design, often inspired by the Greco-Roman tradition. A similar explosion of design, and renaissance of culture in general, occurred in Northern Europe, starting in the fifteenth century. The seventeenth century, in both Southern and Northern Europe, was characterized by opulent, often gilded Baroque designs that frequently incorporated a profusion of vegetal and scrolling ornament. Starting in the eighteenth century, furniture designs began to develop more rapidly. Although there were some styles that belonged primarily to one nation, such as Palladianism in Great Britain, others, such as the Rococo and Neoclassicism were perpetuated throughout Western Europe.

19th Century

The nineteenth century is usually defined by concurrent revival styles, including Gothic, Neoclassicism, Rococo, and the Eastlake Movement. The design reforms of the late century introduced the Aesthetic movement and the Arts and Crafts movement. Art Nouveau was influenced by both of these movements.

Early North American
This design was in many ways rooted in necessity and emphasizes both form and materials. Early American chairs and tables are often constructed with turned spindles, and chair backs were often bent into shape with steam. Wood choices tend to be deciduous hardwoods, with a particular emphasis on the wood of edible or fruit-bearing trees such as cherry or walnut.

The first three-quarters of the twentieth century are often seen as the march towards Modernism. Art Deco, De Stijl, Bauhaus, Wiener Werkstätte, and Vienna Secession designers all worked to some degree within the Modernist idiom. Postmodern design, intersecting the Pop art movement, gained steam in the 1960s and 70s, promoted in the 80s by groups such as the Italy-based Memphis movement. Transitional furniture is intended to fill a place between Traditional and Modern tastes.

Asian furniture has a quite distinct history. The traditions out of Pakistan, China, India, and Japan are some of the best known, but places such as Korea, Mongolia, and the countries of South East Asia have unique facets of their own.

Chinese furniture is traditionally better known for more ornate pieces. The use of uncarved wood and bamboo and the use of heavy lacquers are well known Chinese styles. It is worth noting that China has an incredibly rich and diverse history, and architecture, religion, furniture and culture in general can vary widely from one dynasty to the next.

Traditional Japanese furniture is well known for its minimalist style, extensive use of wood, high-quality craftsmanship and reliance on wood grain instead of painting or thick lacquer. Japanese chests are known as Tansu, and are some of the most sought-after of Japanese antiques. The antiques available generally date back to the Tokugawa era.

Gun

In modern parlance, a gun is a projectile weapon using a hollow, tubular barrel with a closed end—the breech—as the means for directing the projectile as well as serving other purposes—an expansion chamber for propellant, stabilizing the projectile's trajectory, aiming, etc.—and assumes a generally flat trajectory for the projectile.

The generic form of a trigger-initiated, hand-held, and hand-directed tool with an extending bore has additionally been applied to implements resembling guns in either form or concept. Examples of this application include items such as staple guns, nail guns, and glue guns. Occasionally, this tendency is ironically reversed, as in the case of the American M3 submachine gun, which carries the nickname "Grease Gun".

The projectile may be a simple, single-piece item like a bullet, a casing containing a payload like a shotshell or explosive shell, or a complex projectile such as a sub-caliber projectile with sabot. The propellant may be air, an explosive solid, or an explosive liquid. Some variations, like the Gyrojet and certain other types, combine the projectile and propellant into a single item.

Most guns are described by the type of barrel used, the means of firing, the purpose of the weapon, the caliber, or the commonly accepted name for a particular variation.

Barrel types include rifled—a series of spiraled grooves or angles within the barrel—when the projectile requires an induced spin to stabilize it, and smoothbore when the projectile is stabilized by other means or when induced spin is undesired or unnecessary. Typically, interior barrel diameter and the associated projectile size are a means to identify gun variations. Barrel diameter is reported in several ways. The more conventional measure is the interior diameter of the barrel in decimal fractions of an inch or in millimeters. Some guns—such as shotguns—report the weapon's gauge or—as in some British ordnance—the weight of the weapon's usual projectile.
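
For illustration, the gauge system can be made concrete: an n-gauge bore has, by definition, the diameter of a round lead ball weighing 1/n of a pound. The short Python sketch below derives the bore diameter from that definition; the function name and constants are my own, not taken from any ballistics reference.

    # Illustrative sketch: an n-gauge bore has the diameter of a round
    # lead ball weighing 1/n pound.
    import math

    LEAD_DENSITY = 11_340.0      # kg/m^3, approximate density of lead
    POUND_KG = 0.45359237        # kilograms per pound

    def gauge_to_bore_diameter_mm(gauge: float) -> float:
        """Nominal bore diameter in millimetres for a given shotgun gauge."""
        ball_mass = POUND_KG / gauge              # mass of one lead ball
        ball_volume = ball_mass / LEAD_DENSITY    # volume of that ball
        diameter = (6.0 * ball_volume / math.pi) ** (1.0 / 3.0)
        return diameter * 1000.0

    # A 12-gauge bore works out to roughly 18.5 mm (about 0.73 in).
    print(round(gauge_to_bore_diameter_mm(12), 1))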

The term "cannon" is interchangeable with "gun" as a word borrowed from the French language during the early 15th century, from Old French canon, itself a borrowing from the Italian cannone, a "large tube", augmentative of Latin canna, "reed or cane".[1]

In military use, the term "gun" refers primarily to direct fire weapons that capitalize on their velocity for penetration or range. In modern parlance, these weapons are breech-loaded and built primarily for long range fire with a low or almost flat ballistic arc. A variation is the howitzer or gun-howitzer designed to offer the ability to fire both low or high-angle ballistic arcs. In this use, example guns include naval guns. A less strict application of the word is to identify one artillery weapon system or non-machine gun projectile armament on aircraft.

The word cannon is retained in some cases for the actual gun tube but not the weapon system. The title gunner is applied to the member of the team charged with operating, aiming, and firing a gun.

Autocannon are automatic guns designed primarily to fire shells and are mounted on a vehicle or other mount. Machine guns are similar, but usually designed to fire simple projectiles. In some calibers and some usages, these two definitions overlap.

A related military use of the word is in describing a gun-type fission weapon. In this instance, the "gun" is part of a nuclear weapon and contains an explosively propelled sub-critical slug of fissile material within a barrel to be fired into a second sub-critical mass in order to initiate the fission reaction. Potentially confused with this usage are small nuclear devices capable of being fired by artillery or recoilless rifle.

In civilian use, a related item used in agriculture is a captive bolt gun. Such captive piston guns are often used to humanely stun farm animals for slaughter.[2]

Shotguns are normally civilian weapons used primarily for hunting. These weapons are typically smooth bored and fire a shell containing small lead or steel balls. Variations use rifled barrels or fire other projectiles including solid lead slugs, a Taser XREP projectile capable of stunning a target, or other payloads. In military versions, these weapons are often used to burst door hinges or locks in addition to antipersonnel uses.


Ship

A ship /ʃɪp/ is a large vessel that floats on water. Ships are generally distinguished from boats based on size. Ships may be found on lakes, seas, and rivers and they allow for a variety of activities, such as the transport of persons or goods, fishing, entertainment, public safety, and warfare.

Ships and boats have developed alongside mankind. In major wars, and in day to day life, they have become an integral part of modern commercial and military systems. Fishing boats are used by millions of fishermen throughout the world. Military forces operate highly sophisticated vessels to transport and support forces ashore. Commercial vessels, nearly 35,000 in number, carried 7.4 billion tons of cargo in 2007.[1]

These vessels were also key in history's great explorations and scientific and technological development. Navigators such as Zheng He spread such inventions as the compass and gunpowder. Ships have been used for such purposes as colonization and the slave trade, and have served scientific, cultural, and humanitarian needs.

As Thor Heyerdahl demonstrated with his tiny boat the Kon-Tiki, it is possible to achieve great things with a simple log raft. From Mesolithic canoes to today's powerful nuclear-powered aircraft carriers, ships tell the history of humankind.

Ships can usually be distinguished from boats based on size and the ship's ability to operate independently for extended periods.[2] A commonly used rule of thumb is that if one vessel can carry another, the larger of the two is a ship.[3] As dinghies are common on sailing yachts as small as 35 feet (11 m), this rule of thumb is not foolproof. In a more technical and now rare sense, the term ship refers to a sailing ship with at least 3 square-rigged masts and a full bowsprit.

A number of large vessels are traditionally referred to as boats. Submarines are a prime example.[4] Other types of large vessels which are traditionally called boats are the Great Lakes freighter, the riverboat, and the ferryboat. Though large enough to carry their own boats and heavy cargoes, these vessels are designed for operation on inland or protected coastal waters.

History

The history of boats parallels the human adventure. The first known boats date back to the Neolithic Period, about 10,000 years ago. These early vessels had limited function: they could move on water, but little more. They were used mainly for hunting and fishing. The oldest dugout canoes found by archaeologists were often cut from coniferous tree logs using simple stone tools.

By around 3000 BC, Ancient Egyptians already knew how to assemble planks of wood into a ship hull.[5] They used woven straps to lash the planks together,[5] and reeds or grass stuffed between the planks helped to seal the seams.[5] The Greek historian and geographer Agatharchides had documented ship-faring among the early Egyptians: "During the prosperous period of the Old Kingdom, between the 30th and 25th centuries B. C., the river-routes were kept in order, and Egyptian ships sailed the Red Sea as far as the myrrh-country."[6]

At about the same time, people living near Kongens Lyngby in Denmark invented the segregated hull, which allowed the size of boats to gradually be increased. Boats soon developed into keel boats similar to today's wooden pleasure craft.

The first navigators began to use animal skins or woven fabrics as sails. Affixed to the top of a pole set upright in a boat, these sails gave early ships range. This allowed men to explore widely, permitting, for example, the settlement of Oceania about 3,000 years ago.

The ancient Egyptians were perfectly at ease building sailboats. A remarkable example of their shipbuilding skills was the Khufu ship, a vessel 143 feet (44 m) in length entombed at the foot of the Great Pyramid of Giza around 2,500 BC and found intact in 1954. According to Herodotus, the Egyptians made the first circumnavigation of Africa around 600 BC.

The Phoenicians and Greeks gradually mastered navigation at sea aboard triremes, exploring and colonizing the Mediterranean via ship. Around 340 BC, the Greek navigator Pytheas of Massalia ventured from Greece to Western Europe and Great Britain.[7]

Before the introduction of the compass, celestial navigation was the main method for navigation at sea. In China, early versions of the magnetic compass were being developed and used in navigation between 1040 and 1117.[8] The true mariner's compass, using a pivoting needle in a dry box, was invented in Europe no later than 1300.[9][10]

Through the Renaissance

Until the Renaissance, navigational technology remained comparatively primitive. This absence of technology didn't prevent some civilizations from becoming sea powers. Examples include the maritime republics of Genoa and Venice, and the Byzantine navy. The Vikings used their knarrs to explore North America, trade in the Baltic Sea and plunder many of the coastal regions of Western Europe.

Towards the end of the fourteenth century, ships like the carrack began to develop towers on the bow and stern. These towers decreased the vessel's stability, and in the fifteenth century, caravels became more widely used. The towers were gradually replaced by the forecastle and sterncastle, as in the carrack Santa María of Christopher Columbus. This increased freeboard allowed another innovation: the freeing port, and the artillery associated with it.
A Japanese atakebune from the 16th century

In the sixteenth century, the use of freeboard and freeing ports became widespread on galleons. The English modified their vessels to maximize their firepower and demonstrated the effectiveness of their doctrine in 1588 by defeating the Spanish Armada.

At this time, ships were developing in Asia in much the same way as Europe. Japan used defensive naval techniques in the Mongol invasions of Japan in 1281. It is likely that the Mongols of the time took advantage of both European and Asian shipbuilding techniques. In Japan, during the Sengoku era from the fifteenth to seventeenth century, the great struggle for feudal supremacy was fought, in part, by coastal fleets of several hundred boats, including the atakebune.

Fifty years before Christopher Columbus, Chinese navigator Zheng He traveled the world at the head of what was for the time a huge armada. The largest of his ships had nine masts, were 130 metres (430 ft) long and had a beam of 55 metres (180 ft). His fleet carried 30,000 men aboard 70 vessels, with the goal of bringing glory to the Chinese emperor.

Parallel to the development of warships, ships in the service of marine fishery and trade also developed in the period between antiquity and the Renaissance. Still primarily a coastal endeavor, fishing was largely practiced by individuals of limited means using small boats.

Maritime trade was driven by the development of shipping companies with significant financial resources. Canal barges, towed by draft animals on an adjacent towpath, contended with the railway up to and past the early days of the industrial revolution. Flat-bottomed and flexible scow boats also became widely used for transporting small cargoes. Mercantile trade went hand-in-hand with exploration, which was largely financed by its own commercial benefits.

During the first half of the eighteenth century, the French Navy began to develop a new type of vessel, featuring seventy-four guns. This type of ship became the backbone of all European fighting fleets. These ships were 56 metres (180 ft) long and their construction required 2,800 oak trees and 40 kilometres (25 mi) of rope; they carried a crew of about 800 sailors and soldiers.
A small pleasure boat and a tugboat in Rotterdam

Ship designs stayed fairly unchanged until the late nineteenth century. The industrial revolution, new mechanical methods of propulsion, and the ability to construct ships from metal triggered an explosion in ship design. Factors including the quest for more efficient ships, the end of long running and wasteful maritime conflicts, and the increased financial capacity of industrial powers created an avalanche of more specialized boats and ships. Ships built for entirely new functions, such as firefighting, rescue, and research, also began to appear.

In light of this, classification of vessels by type or function can be difficult. Even very broad functional classifications such as fishery, trade, military, and exploration fail to classify most of the older ships. This difficulty is increased by the fact that terms such as sloop and frigate are used for old and new ships alike, and modern vessels often have little in common with their predecessors.

In 2007, the world's fleet included 34,882 commercial vessels with gross tonnage of more than 1,000 tons,[11] totaling 1.04 billion tons.[1] These ships carried 7.4 billion tons of cargo in 2006, a sum that grew by 8% over the previous year.[1] In terms of tonnage, 39% of these ships were tankers, 26% bulk carriers, 17% container ships, and 15% other types.[1]

In 2002, there were 1,240 warships operating in the world, not counting small vessels such as patrol boats. The United States accounted for 3 million tons worth of these vessels, Russia 1.35 million tons, the United Kingdom 504,660 tons and China 402,830 tons. The twentieth century saw many naval engagements during the two world wars, the Cold War, and the rise to power of naval forces of the two blocs. The world's major powers have recently used their naval power in cases such as the United Kingdom in the Falkland Islands and the United States in Iraq. Warships were also key in history's great explorations and scientific and technological development. Navigators such as Zheng He spread such inventions as the compass and gunpowder. On one hand, ships have been used for colonization and the slave trade. On the other, they also have served scientific, cultural, and humanitarian needs.
The harbor at Fuglafjørður, Faroe Islands shows seven typical Faroe boats used for fishing.

The size of the world's fishing fleet is more difficult to estimate. The largest of these are counted as commercial vessels, but the smallest are legion. Fishing vessels can be found in most seaside villages in the world. As of 2004, the United Nations Food and Agriculture Organization estimated 4 million fishing vessels were operating worldwide.[12] The same study estimated that the world's 29 million fishermen[13] caught 85.8 million metric tons of fish and shellfish that year.[14]

Satellite

The first fictional depiction of a satellite being launched into orbit is a short story by Edward Everett Hale, The Brick Moon. The story was serialized in The Atlantic Monthly, starting in 1869.[1][2] The idea surfaced again in Jules Verne's The Begum's Millions (1879).

In 1903, Konstantin Tsiolkovsky (1857–1935) published Исследование мировых пространств реактивными приборами (The Exploration of Cosmic Space by Means of Reaction Devices), which is the first academic treatise on the use of rocketry to launch spacecraft. He calculated that the orbital speed required for a minimal orbit around the Earth is 8 km/s and that a multi-stage rocket fueled by liquid propellants could be used to achieve this. He proposed the use of liquid hydrogen and liquid oxygen, though other combinations can be used.
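
Tsiolkovsky's figure of roughly 8 km/s can be checked from the circular-orbit condition, in which gravity supplies exactly the centripetal acceleration, giving v = sqrt(mu/r). The Python sketch below uses standard values for Earth's gravitational parameter and radius; the names are illustrative, not drawn from the text.

    # Rough check of the ~8 km/s minimal orbital speed: v = sqrt(mu / r).
    import math

    MU_EARTH = 3.986e14       # m^3/s^2, Earth's standard gravitational parameter
    EARTH_RADIUS = 6.371e6    # m, mean radius of the Earth

    def circular_orbital_speed(altitude_m: float) -> float:
        """Speed in m/s of a circular orbit at the given altitude."""
        return math.sqrt(MU_EARTH / (EARTH_RADIUS + altitude_m))

    # At the surface this gives about 7.9 km/s; at 200 km altitude, about 7.8 km/s.
    print(circular_orbital_speed(0.0), circular_orbital_speed(200_000.0))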

In 1928 Slovenian Herman Potočnik (1892–1929) published his sole book, Das Problem der Befahrung des Weltraums - der Raketen-Motor (The Problem of Space Travel — The Rocket Motor), a plan for a breakthrough into space and a permanent human presence there. He conceived of a space station in detail and calculated its geostationary orbit. He described the use of orbiting spacecraft for detailed peaceful and military observation of the ground and described how the special conditions of space could be useful for scientific experiments. The book described geostationary satellites (first put forward by Tsiolkovsky) and discussed communication between them and the ground using radio, but fell short of the idea of using satellites for mass broadcasting and as telecommunications relays.

In a 1945 Wireless World article the English science fiction writer Arthur C. Clarke (1917-2008) described in detail the possible use of communications satellites for mass communications.[3] Clarke examined the logistics of satellite launch, possible orbits and other aspects of the creation of a network of world-circling satellites, pointing to the benefits of high-speed global communications. He also suggested that three geostationary satellites would provide coverage over the entire planet.

The first artificial satellite was Sputnik 1, launched by the Soviet Union on 4 October 1957. It started the Soviet Sputnik program, with Sergei Korolev as chief designer and Kerim Kerimov as his assistant.[4] The launch triggered the Space Race between the Soviet Union and the United States.

Sputnik 1 helped to identify the density of high atmospheric layers through measurement of its orbital change and provided data on radio-signal distribution in the ionosphere. Because the satellite's body was filled with pressurized nitrogen, Sputnik 1 also provided the first opportunity for meteoroid detection, as a loss of internal pressure due to meteoroid penetration of the outer surface would have been evident in the temperature data sent back to Earth. The unanticipated announcement of Sputnik 1's success precipitated the Sputnik crisis in the United States and ignited the so-called Space Race within the Cold War.

Sputnik 2 was launched on November 3, 1957 and carried the first living passenger into orbit, a dog named Laika.[5]

In May 1946, Project RAND released the Preliminary Design of an Experimental World-Circling Spaceship, which stated, "A satellite vehicle with appropriate instrumentation can be expected to be one of the most potent scientific tools of the Twentieth Century."[6] The United States had been considering launching orbital satellites since 1945 under the Bureau of Aeronautics of the United States Navy. The United States Air Force's Project RAND eventually released the above report, but did not believe that the satellite was a potential military weapon; rather, they considered it to be a tool for science, politics, and propaganda. In 1954, the Secretary of Defense stated, "I know of no American satellite program."[7]

On July 29, 1955, the White House announced that the U.S. intended to launch satellites by the spring of 1958. This became known as Project Vanguard. On July 31, the Soviets announced that they intended to launch a satellite by the fall of 1957.

Following pressure by the American Rocket Society, the National Science Foundation, and the International Geophysical Year, military interest picked up and in early 1955 the Air Force and Navy were working on Project Orbiter, which involved using a Jupiter C rocket to launch a satellite. The project succeeded, and Explorer 1 became the United States' first satellite on January 31, 1958.[8]

In June 1961, three-and-a-half years after the launch of Sputnik 1, the Air Force used resources of the United States Space Surveillance Network to catalog 115 Earth-orbiting satellites.[9]

The largest artificial satellite currently orbiting the Earth is the International Space Station.

The United States Space Surveillance Network (SSN) has been tracking space objects since 1957 when the Soviets opened the space age with the launch of Sputnik I. Since then, the SSN has tracked more than 26,000 space objects orbiting Earth. The SSN currently tracks more than 8,000 man-made orbiting objects. The rest have re-entered Earth's turbulent atmosphere and disintegrated, or survived re-entry and impacted the Earth. The space objects now orbiting Earth range from satellites weighing several tons to pieces of spent rocket bodies weighing only 10 pounds. About seven percent of the space objects are operational satellites (i.e. ~560 satellites), the rest are space debris.[10] USSTRATCOM is primarily interested in the active satellites, but also tracks space debris which upon reentry might otherwise be mistaken for incoming missiles. The SSN tracks space objects that are 10 centimeters in diameter (baseball size) or larger.

Non-Military Satellite Services

The first satellite, Sputnik 1, was put into orbit around Earth and was therefore in geocentric orbit. By far this is the most common type of orbit, with approximately 2,456 artificial satellites orbiting the Earth. Geocentric orbits may be further classified by their altitude, inclination and eccentricity.

The commonly used altitude classifications are Low Earth Orbit (LEO), Medium Earth Orbit (MEO) and High Earth Orbit (HEO). Low Earth Orbit is any orbit below 2,000 km, and Medium Earth Orbit is any orbit higher than that but still below the altitude for geosynchronous orbit at 35,786 km. High Earth Orbit is any orbit higher than the altitude for geosynchronous orbit.
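
Those altitude bands amount to a simple classification rule. The sketch below just encodes the thresholds quoted above; the function and constant names are invented for illustration, and the boundary at exactly geosynchronous altitude is treated here as High Earth Orbit.

    # Classify a geocentric orbit by altitude using the thresholds given above.
    LEO_CEILING_KM = 2_000
    GEOSYNCHRONOUS_ALTITUDE_KM = 35_786

    def classify_orbit(altitude_km: float) -> str:
        """Return 'LEO', 'MEO', or 'HEO' for a geocentric orbit altitude."""
        if altitude_km < LEO_CEILING_KM:
            return "LEO"
        if altitude_km < GEOSYNCHRONOUS_ALTITUDE_KM:
            return "MEO"
        return "HEO"

    # Example: the International Space Station (~350 km) is in LEO,
    # while GPS satellites (~20,200 km) are in MEO.
    print(classify_orbit(350), classify_orbit(20_200))
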
This list includes countries with an independent capability to place satellites in orbit, including production of the necessary launch vehicle. Note: many more countries have the capability to design and build satellites — which, relatively speaking, does not require much economic, scientific and industrial capacity — but are unable to launch them, instead relying on foreign launch services. This list does not consider those numerous countries, but only lists those capable of launching satellites indigenously, and the date this capability was first demonstrated. It does not include consortium satellites or multi-national satellites.

Both North Korea (1998) and Iraq (1989) have claimed orbital launches (a satellite and a warhead, respectively), but these claims are unconfirmed.

In addition to the above, countries such as South Africa, Spain, Italy, Germany, Canada, Australia, Argentina, Egypt and private companies such as OTRAG, have developed their own launchers, but have not had a successful launch.

As of 2009, only eight countries from the list above (Russia and Ukraine in place of the USSR, plus the USA, Japan, China, India, Israel and Iran) and one regional organization (the European Space Agency, ESA) have independently launched satellites on their own indigenously developed launch vehicles. (The launch capabilities of the United Kingdom and France now fall under the ESA.)

Several other countries, including South Korea, Brazil, Pakistan, Romania, Kazakhstan, Australia, Malaysia[citation needed] and Turkey, are at various stages of development of their own small-scale launcher capabilities.

South Korea was scheduled to launch a KSLV rocket (created with the assistance of Russia) in early 2008.

Launch-capable private entities
On September 28, 2008, the private aerospace firm SpaceX successfully launched its Falcon 1 rocket into orbit. This marked the first time that a privately built liquid-fueled booster was able to reach orbit.[16] The rocket carried a prism-shaped 1.5 m (5 ft) long payload mass simulator that was set into orbit. The dummy satellite, known as Ratsat, will remain in orbit for between five and ten years before burning up in the atmosphere.[16]

In recent times satellites have been hacked by militant organisations to broadcast propaganda and to pilfer classified information from military communication networks.[22][23]

Satellites in low Earth orbit have been destroyed by ballistic missiles launched from Earth. Russia, the United States and China have demonstrated the ability to eliminate satellites.[24] In 2007 the Chinese military shot down an aging weather satellite,[24] followed by the US Navy shooting down a defunct spy satellite in February 2008.[25] Russia and the United States also shot down satellites during the Cold War.

Due to the low received signal strength of satellite transmissions, they are prone to jamming by land-based transmitters. Such jamming is limited to the geographical area within the transmitter's range. GPS satellites are potential targets for jamming,[26][27] but satellite phone and television signals have also been subjected to jamming.[28][29]

Helicopter

A helicopter is an aircraft that is lifted and propelled by one or more horizontal rotors, each rotor consisting of two or more rotor blades. Helicopters are classified as rotorcraft or rotary-wing aircraft to distinguish them from fixed-wing aircraft because the helicopter achieves lift with the rotor blades which rotate around a mast. The word 'helicopter' is adapted from the French hélicoptère, coined by Gustave de Ponton d'Amecourt in 1861, which originates from the Greek helix/helik- (ἕλικ-) = "spiral" or "turning" and pteron (πτερόν) = "wing".[1][2]

The primary advantage of a helicopter comes from the rotor, which provides lift without the aircraft needing to move forward, allowing the helicopter to take off and land vertically without a runway. For this reason, helicopters are often used in congested or isolated areas where fixed-wing aircraft cannot take off or land. The lift from the rotor also allows the helicopter to hover in one area, and to do so more efficiently than other forms of vertical takeoff and landing (VTOL) aircraft, allowing it to accomplish tasks that fixed-wing aircraft cannot perform.
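
One way to see why a large rotor makes hovering relatively efficient is classical momentum theory, in which the ideal power needed to hover is P = T^(3/2) / sqrt(2 * rho * A): for the same weight, a bigger rotor disk needs less power. The sketch below is only a textbook idealization with made-up example numbers, not data for any particular aircraft.

    # Ideal hover power from momentum theory: P = T^(3/2) / sqrt(2 * rho * A).
    import math

    AIR_DENSITY = 1.225   # kg/m^3, sea-level air
    G = 9.81              # m/s^2

    def ideal_hover_power_kw(mass_kg: float, rotor_radius_m: float) -> float:
        """Lower bound on hover power for a rotor of the given radius."""
        thrust = mass_kg * G                       # hover thrust equals weight
        disk_area = math.pi * rotor_radius_m ** 2
        return thrust ** 1.5 / math.sqrt(2.0 * AIR_DENSITY * disk_area) / 1000.0

    # A hypothetical 1,000 kg aircraft: a 5 m rotor needs far less ideal
    # power than a 2 m rotor producing the same lift.
    print(round(ideal_hover_power_kw(1000, 2)), round(ideal_hover_power_kw(1000, 5)))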

Although helicopters were developed and built during the first half-century of flight, some even reaching limited production, it was not until 1942 that a helicopter designed by Igor Sikorsky reached full-scale production,[3] with 131 aircraft built.[4] Even though most previous designs used more than one main rotor, it was the single main rotor with antitorque tail rotor configuration of this design that would come to be recognized worldwide as the helicopter.

Since 400 BC,[5] Chinese children have played with bamboo flying toys.[6][7] A book written in 4th-century China, referred to as Pao Phu Tau (also Pao Phu Tzu or Bao Pu Zi, 抱朴子), is reported to describe some of the ideas inherent to rotary wing aircraft.[8]

In the early 1480s, Leonardo da Vinci created a design for a machine that could be described as an "aerial screw". His notes suggested that he built small flying models, but there were no indications for any provision to stop the rotor from making the whole craft rotate.[10][11] As scientific knowledge increased and became more accepted, men continued to pursue the idea of vertical flight. Many of these later models and machines would more closely resemble the ancient bamboo flying top with spinning wings, rather than Da Vinci's screw.

In July 1754, Mikhail Lomonosov demonstrated a small coaxial rotor to the Russian Academy of Sciences. It was powered by a spring and suggested as a method to lift meteorological instruments. In 1783, Christian de Launoy, and his mechanic, Bienvenu, made a model with a pair of counter-rotating rotors, using turkey flight feathers as rotor blades, and in 1784, demonstrated it to the French Academy of Sciences. Sir George Cayley, influenced by a childhood fascination with the Chinese flying top, grew up to develop a model with feather rotors, similar to that of Launoy and Bienvenu, but powered by rubber bands. By the end of the century, he had progressed to using sheets of tin for rotor blades and springs for power. His writings on his experiments and models would become influential on future aviation pioneers.[10] Alphonse Pénaud would later develop coaxial rotor model helicopter toys in 1870, also powered by rubber bands. One of these toys, given as a gift by their father, would inspire the Wright brothers to pursue the dream of flight.[12]

In 1861, the word "helicopter" was coined by Gustave de Ponton d'Amécourt, a French inventor who demonstrated a small steam-powered model. While celebrated as an innovative use of a new metal, aluminum, the model never lifted off the ground. D'Amécourt's linguistic contribution would survive to eventually describe the vertical flight he had envisioned. Steam power was popular with other inventors as well. Enrico Forlanini's unmanned helicopter was also powered by a steam engine; it was the first of its type to rise to a height of 13 meters (43 ft), where it remained for some 20 seconds after a vertical take-off from a park in Milan in 1877. Emmanuel Dieuaide's steam-powered design featured counter-rotating rotors powered through a hose from a boiler on the ground. Dandrieux's design had counter-rotating rotors and a 7.7-pound (3.5-kilogram) steam engine; it rose more than 40 feet (12 m) and flew for 20 seconds circa 1878.[10]

In 1885, Thomas Edison was given US$1,000 by James Gordon Bennett, Jr., to conduct experiments towards developing flight. Edison built a helicopter and used the paper for a stock ticker to create guncotton, with which he attempted to power an internal combustion engine. The helicopter was damaged by explosions and he badly burned one of his workers. Based on his experiments, Edison reported that a successful design would require a motor with a ratio of three to four pounds per horsepower produced.[13] Ján Bahýľ, a Slovak inventor, adapted the internal combustion engine to power his helicopter model, which reached a height of 0.5 meters (1.6 ft) in 1901. On 5 May 1905, his helicopter reached four meters (13 ft) in altitude and flew for over 1,500 meters (4,900 ft).[14] In 1908, Edison patented his own design for a helicopter powered by a gasoline engine with box kites attached to a mast by cables for a rotor, but it never flew.[15]

In 1906, two French brothers, Jacques and Louis Breguet, began experimenting with airfoils for helicopters and in 1907, those experiments resulted in the Gyroplane No.1. Although there is some uncertainty about the dates, sometime between 14 August and 29 September 1907, the Gyroplane No. 1 lifted its pilot up into the air about two feet (0.6 m) for a minute.[3] However, the Gyroplane No. 1 proved to be extremely unsteady and required a man at each corner of the airframe to hold it steady. For this reason, the flights of the Gyroplane No. 1 are considered to be the first manned flight of a helicopter, but not a free or untethered flight.
Paul Cornu's helicopter in 1907

That same year, fellow French inventor Paul Cornu designed and built a Cornu helicopter that used two 20-foot (6 m) counter-rotating rotors driven by a 24-hp (18-kW) Antoinette engine. On 13 November 1907, it lifted its inventor to 1 foot (0.3 m) and remained aloft for 20 seconds. Even though this flight did not surpass the flight of the Gyroplane No. 1, it was reported to be the first truly free flight with a pilot.[n 1] Cornu's helicopter would complete a few more flights and achieve a height of nearly 6.5 feet (2 m), but it proved to be unstable and was abandoned.[3]

In the early 1920s, Argentine Raúl Pateras Pescara, while working in Europe, demonstrated one of the first successful applications of cyclic pitch.[3] His coaxial, contra-rotating, biplane rotors could be warped to cyclically increase and decrease the lift they produced, and the rotor hub could also be tilted, allowing the aircraft lateral movement without a separate propeller to push or pull it. Pescara also demonstrated the principle of autorotation, by which helicopters safely land after engine failure; by January 1924, Pescara's helicopter No. 3 could fly for up to ten minutes.

One of Pescara's contemporaries, Frenchman Etienne Oemichen, set the first helicopter world record recognized by the Fédération Aéronautique Internationale (FAI) on 14 April 1924, flying his helicopter 360 meters (1,181 ft). On 18 April 1924, Pescara beat Oemichen's record, flying for a distance of 736 meters (nearly a half mile) in 4 minutes and 11 seconds (about 8 mph, 13 km/h) maintaining a height of six feet (2 m).[16] Not to be outdone, Oemichen reclaimed the world record on 4 May when he flew his No. 2 machine again for a 14-minute flight covering 5,550 feet (1.05 mi, 1.692 km) while climbing to a height of 50 feet (15 m).[16] Oemichen also set the 1 km closed-circuit record at 7 minutes 40 seconds.[3]

Meanwhile, Juan de la Cierva was developing the first practical rotorcraft in Spain. In 1923, the aircraft that would become the basis for the modern helicopter rotor began to take shape in the form of an autogyro, Cierva's C.4.[17] Cierva had discovered aerodynamic and structural deficiencies in his early designs that could cause his autogyros to flip over after takeoff. The flapping hinges that Cierva designed for the C.4 allowed the rotor to develop lift equally on the left and right halves of the rotor disk. A crash in 1927 led to the development of a drag hinge to relieve further stress on the rotor from its flapping motion.[17] These two developments allowed for a stable rotor system, not only in a hover, but in forward flight.

Albert Gillis von Baumhauer, a Dutch aeronautical engineer, began studying rotorcraft design in 1923. His first prototype "flew" ("hopped" and hovered in reality) on 24 September 1925, with Dutch Army-Air arm Captain Floris Albert van Heijst at the controls. The controls that Captain van Heijst used were Von Baumhauer's inventions, the cyclic and collective. Patents were granted to von Baumhauer for his cyclic and collective controls by the British ministry of aviation on 31 January 1927, under patent number 265,272.

In 1930, the Italian engineer Corradino D'Ascanio built his D'AT3, a coaxial helicopter. His relatively large machine had two, two-bladed, counter-rotating rotors. Control was achieved by using auxiliary wings or servo-tabs on the trailing edges of the blades,[18] a concept that was later adopted by other helicopter designers, including Bleeker and Kaman. Three small propellers mounted to the airframe were used for additional pitch, roll, and yaw control. The D'AT3 held modest FAI speed and altitude records for the time, including altitude (18 m or 59 ft), duration (8 minutes 45 seconds) and distance flown (1,078 m or 3,540 ft).[18]

At this same time, in the Soviet Union, the aeronautical engineers Boris N. Yuriev and Alexei M. Cheremukhin, working at TsAGI, constructed and flew the TsAGI 1-EA single rotor helicopter, which used an open tubing framework, a four blade main rotor, and twin sets (one set of two each at the nose and tail) of 1.8 meters (6 ft) diameter anti-torque rotors. Powered by two M-2 powerplants, themselves up-rated Soviet copies of the Gnome Monosoupape rotary radial engine of World War I, the TsAGI 1-EA made several successful low altitude flights, and by 14 August 1932 Cheremukhin managed to get the 1-EA up to an unofficial altitude of 605 meters (1,985 ft), shattering d'Ascanio's earlier achievement. As the Soviet Union was not yet a member of the FAI, however, Cheremukhin's record remained unrecognized.[2][3]

Nicolas Florine, a Russian engineer, built the first twin tandem rotor machine to perform a free flight. It flew in Sint-Genesius-Rode, at the Laboratoire Aérotechnique de Belgique (now von Karman Institute) in April 1933 and attained an altitude of six meters (20 ft) and an endurance of eight minutes. Florine chose a co-rotating configuration because the gyroscopic stability of the rotors would not cancel. Therefore the rotors had to be tilted slightly in opposite directions to counter torque. Using hingeless rotors and co-rotation also minimised the stress on the hull. At the time, it was probably the most stable helicopter in existence.[19][20]

The Bréguet-Dorand Gyroplane Laboratoire was built in 1933. After many ground tests and an accident, it first took flight on 26 June 1935. Within a short time, the aircraft was setting records with pilot Maurice Claisse at the controls. On 14 December 1935, he set a record for closed-circuit flight with a 500-meter (1,600 ft) diameter. The next year, on 26 September 1936, Claisse set a height record of 158 meters (520 ft). And, finally, on 24 November 1936, he set a flight duration record of one hour, two minutes and 5 seconds over a 44 kilometer (27 mi) closed circuit at 44.7 km/h (27.8 mph). The aircraft was destroyed in 1943 by an Allied airstrike at Villacoublay airport.

Despite the success of the Gyroplane Laboratoire, the German Focke-Wulf Fw 61, first flown in 1936, would eclipse its accomplishments. The Fw 61 broke all of the helicopter world records in 1937, demonstrating a flight envelope that had only previously been achieved by the autogyro. In February 1938, Hanna Reitsch became the first female helicopter pilot, exhibiting the Fw 61 before crowds in the Deutschlandhalle.

Nazi Germany would use helicopters in small numbers during World War II for observation, transport, and medical evacuation. The Flettner Fl 282 Kolibri synchropter was used in the Mediterranean Sea, while the Focke Achgelis Fa 223 Drache was used in Europe. Extensive bombing by the Allied forces prevented Germany from producing any helicopters in large quantities during the war.

In the United States, Igor Sikorsky and W. Lawrence LePage were competing to produce the United States military's first helicopter. Prior to the war, LePage had received the patent rights to develop helicopters patterned after the Fw 61, and built the XR-1.[21] Meanwhile, Sikorsky had settled on a simpler, single-rotor design, the VS-300. After experimenting with configurations to counteract the torque produced by the single main rotor, he settled on a single, smaller rotor mounted vertically on the tailboom.

Developed from the VS-300, Sikorsky's R-4 became the first mass produced helicopter with a production order for 100 aircraft. The R-4 was the only Allied helicopter to see service in World War II, primarily being used for rescue in Burma, Alaska, and other areas with harsh terrain. Total production would reach 131 helicopters before the R-4 was replaced by other Sikorsky helicopters such as the R-5 and the R-6. In all, Sikorsky would produce over 400 helicopters before the end of World War II.[22]

As LePage and Sikorsky were building their helicopters for the military, Bell Aircraft hired Arthur Young to help build a helicopter using Young's semi-rigid, teetering-blade rotor design, which used a weighted stabilizing bar. The subsequent Model 30 helicopter demonstrated the simplicity and ease of the design. The Model 30 was developed into the Bell 47, which became the first helicopter certificated for civilian use in the United States. Produced in several countries, the Bell 47 would become the most popular helicopter model for nearly 30 years.

Rocket

A rocket or rocket vehicle is a missile, aircraft or other vehicle which obtains thrust by the reaction of the rocket to the ejection of fast moving fluid exhaust from a rocket engine. Chemical rockets create their exhaust by the combustion of rocket propellant. The action of the exhaust against the inside of combustion chambers and expansion nozzles accelerates the gas to extremely high speed and exerts a large reactive thrust on the rocket (since every action has an equal and opposite reaction).
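
That action-reaction description can be written down directly: in the simplest model, thrust equals the propellant mass flow rate times the effective exhaust velocity (the pressure-thrust term is ignored here). The function name and sample numbers in this sketch are hypothetical.

    # Idealized momentum thrust: F = mdot * v_e.
    def thrust_newtons(mass_flow_kg_s: float, exhaust_velocity_m_s: float) -> float:
        """Thrust from expelling propellant at the given rate and speed."""
        return mass_flow_kg_s * exhaust_velocity_m_s

    # Hypothetical engine: 100 kg/s of exhaust at 3,000 m/s gives 300,000 N (300 kN).
    print(thrust_newtons(100.0, 3000.0))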

The history of rockets goes back to at least the 13th century, and military and recreational display use dates from that time.[1] Widespread military, scientific, and industrial use did not occur until the 20th century, when rocketry was the enabling technology of the Space Age, with man visiting the moon.

Rockets are used for fireworks and weaponry, ejection seats and launch vehicles for artificial satellites, human spaceflight and exploration of other planets. While inefficient for low speed use, they are, compared to other propulsion systems, very lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency.

Chemical rockets store a large amount of energy in an easily-released form, and can be very dangerous. However, careful design, testing, construction, and use minimize risks.

The availability of black powder (gunpowder) to propel projectiles was a precursor to the development of the first solid rocket. Ninth Century Chinese Taoist alchemists discovered black powder while searching for the Elixir of life; this accidental discovery led to experiments in the form of weapons such as bombs, cannon, incendiary fire arrows and rocket-propelled fire arrows.

Exactly when the first flights of rockets occurred is contested. Some say that the first recorded use of a rocket in battle was by the Chinese in 1232 against the Mongol hordes. There were reports of fire arrows and 'iron pots' that could be heard for 5 leagues (25 km, or 15 miles) when they exploded upon impact, causing devastation for a radius of 600 meters (2,000 feet), apparently due to shrapnel.[2] The lowering of the iron pots may have been a way for a besieged army to blow up invaders. The fire arrows were either arrows with explosives attached, or arrows propelled by gunpowder, such as the Korean Hwacha.[3]

Less controversially, one of the earliest devices recorded that used internal-combustion rocket propulsion was the 'ground-rat,' a type of firework, recorded in 1264 as having frightened the Empress-Mother Kung Sheng at a feast held in her honor by her son the Emperor Lizong.[4]

Subsequently, one of the earliest texts to mention the use of rockets was the Huolongjing, written by the Chinese artillery officer Jiao Yu in the mid-14th century. This text also mentioned the use of the first known multistage rocket, the 'fire-dragon issuing from the water' (huo long chu shui), used mostly by the Chinese navy.[5]

Rocket technology first became known to Europeans following their use by the Mongols Genghis Khan and Ögedei Khan when they conquered parts of Russia, Eastern, and Central Europe. The Mongolians had acquired the Chinese technology by conquest of the northern part of China and also by the subsequent employment of Chinese rocketry experts as mercenaries for the Mongol military. Reports of the Battle of Sejo in the year 1241 describe the use of rocket-like weapons by the Mongols against the Magyars.[6] Rocket technology also spread to Korea, with the 15th century wheeled hwacha that would launch singijeon rockets.

Additionally, the spread of rockets into Europe was also influenced by the Ottomans at the siege of Constantinople in 1453, although it is very likely that the Ottomans themselves were influenced by the Mongol invasions of the previous few centuries. In their history of rockets published on the Internet, NASA says "Rockets appear in Arab literature in 1258 A.D., describing Mongol invaders' use of them on February 15 to capture the city of Baghdad."[6]

Between 1270 and 1280, Hasan al-Rammah wrote al-furusiyyah wa al-manasib al-harbiyya (The Book of Military Horsemanship and Ingenious War Devices), which included 107 gunpowder recipes, 22 of which are for rockets.[7] According to Ahmad Y Hassan, al-Rammah's recipes were more explosive than rockets used in China at the time.[8] He also invented a torpedo running on water with a rocket system filled with explosive materials.[citation needed]

The name Rocket comes from the Italian Rocchetta (i.e. little fuse), a name of a small firecracker created by the Italian artificer Muratori in 1379.[9]

Between 1529 and 1556 Conrad Haas wrote a book that described the concept of multi-stage rockets.

"Artis Magnae Artilleriae pars prima" ("Great Art of Artillery, the First Part", also known as "The Complete Art of Artillery"), first printed in Amsterdam in 1650, was translated to French in 1651, German in 1676, English and Dutch in 1729 and Polish in 1963. For over two centuries, this work of Polish-Lithuanian Commonwealth nobleman Kazimierz Siemienowicz[10] was used in Europe as a basic artillery manual. The book provided the standard designs for creating rockets, fireballs, and other pyrotechnic devices. It contained a large chapter on caliber, construction, production and properties of rockets (for both military and civil purposes), including multi-stage rockets, batteries of rockets, and rockets with delta wing stabilizers (instead of the common guiding rods).

In 1792, iron-cased rockets were successfully used militarily by Tipu Sultan, Ruler of the Kingdom of Mysore in India against the larger British East India Company forces during the Anglo-Mysore Wars. The British then took an active interest in the technology and developed it further during the 19th century.

The major figure in the field at this time became William Congreve, son of the Comptroller of the Royal Arsenal, Woolwich, London.[11] From 1801, Congreve set up a vigorous research and development programme at the Arsenal's laboratory. Congreve prepared a new propellant mixture, and developed a rocket motor with a strong iron tube with conical nose, weighing about 32 pounds (14.5 kilograms). The Royal Arsenal's first demonstration of solid fuel rockets was in 1805. The rockets were effectively used during the Napoleonic Wars and the War of 1812. Congreve published three books on rocketry.[12]

From there, the use of military rockets spread throughout Europe. At the Battle of Baltimore in 1814, the rockets fired on Fort McHenry by the rocket vessel HMS Erebus were the source of the rockets' red glare described by Francis Scott Key in The Star-Spangled Banner.[13] Rockets were also used in the Battle of Waterloo.[14]

Early rockets were very inaccurate. Without the use of spinning or any gimballing of the thrust, they had a strong tendency to veer sharply off course. The early British Congreve rockets[11] reduced this somewhat by attaching a long stick to the end of a rocket (similar to modern bottle rockets) to make it harder for the rocket to change course. The largest of the Congreve rockets was the 32-pound (14.5 kg) Carcass, which had a 15-foot (4.6 m) stick. Originally, sticks were mounted on the side, but this was later changed to mounting in the center of the rocket, reducing drag and enabling the rocket to be more accurately fired from a segment of pipe.

The accuracy problem was mostly solved in 1844 when William Hale[15] modified the rocket design so that thrust was slightly vectored, causing the rocket to spin along its axis of travel like a bullet. The Hale rocket removed the need for a rocket stick, travelled further due to reduced air resistance, and was far more accurate.

In 1903, high school mathematics teacher Konstantin Tsiolkovsky (1857–1935) published Исследование мировых пространств реактивными приборами[16] (The Exploration of Cosmic Space by Means of Reaction Devices), the first serious scientific work on space travel. The Tsiolkovsky rocket equation—the principle that governs rocket propulsion—is named in his honor (although it had been discovered previously[17]). He also advocated the use of liquid hydrogen and oxygen as fuel, calculating their maximum exhaust velocity. His work was essentially unknown outside the Soviet Union, but inside the country it inspired further research, experimentation and the formation of the Society for Studies of Interplanetary Travel in 1924.
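
The equation itself relates the velocity change a rocket can achieve to its exhaust velocity and the ratio of its initial to final mass, delta-v = v_e * ln(m0 / m1). Below is a minimal sketch with hypothetical numbers; the names are illustrative only.

    # Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1).
    import math

    def delta_v(exhaust_velocity_m_s: float, initial_mass_kg: float,
                final_mass_kg: float) -> float:
        """Ideal velocity change from burning propellant down to final_mass_kg."""
        return exhaust_velocity_m_s * math.log(initial_mass_kg / final_mass_kg)

    # Hypothetical stage: 4,500 m/s exhaust, 90% of liftoff mass is propellant.
    print(round(delta_v(4500.0, 100_000.0, 10_000.0)))  # about 10,360 m/s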

In 1912, Robert Esnault-Pelterie published a lecture on rocket theory and interplanetary travel. He independently derived Tsiolkovsky's rocket equation, did basic calculations about the energy required to make round trips to the Moon and planets, and he proposed the use of atomic power (i.e. Radium) to power a jet drive.
Robert Goddard

Robert Goddard began a serious analysis of rockets in 1912, concluding that conventional solid-fuel rockets needed to be improved in three ways. First, fuel should be burned in a small combustion chamber, instead of building the entire propellant container to withstand the high pressures. Second, rockets could be arranged in stages. And third, the exhaust speed (and thus the efficiency) could be greatly increased.

Rabies

Rabies (pronounced /ˈreɪbiz/; from Latin rabies; also known as “hydrophobia”) is a viral zoonotic neuroinvasive disease that causes acute encephalitis (inflammation of the brain) in mammals. It is most commonly caused by a bite from an infected animal, but occasionally by other forms of contact. If left untreated in humans, it is usually fatal. In some countries it is a significant killer of livestock.

The rabies virus makes its way to the brain by following the peripheral nerves. The incubation period of the disease depends on how far the virus must travel to reach the central nervous system, usually taking a few months.[1] Once the infection reaches the central nervous system and symptoms begin to show, the untreated infection is usually fatal within days.

In the beginning stages of rabies, the symptoms are malaise, headache, and fever, while later stages include acute pain, violent movements, uncontrolled excitement, depression, and the inability to swallow water (hence the name hydrophobia). In the final stages, the patient experiences periods of mania and lethargy, followed by coma. Death generally occurs due to respiratory insufficiency.[1]
The rabies virus is the type species of the Lyssavirus genus, which encompasses similar viruses such as Australian bat lyssavirus, Mokola virus, Lagos bat virus, and Duvenhage virus. Lyssaviruses have helical symmetry, with a length of about 180 nm and a cross-sectional diameter of about 75 nm. From the point of entry, the virus travels quickly along the neural pathways into the central nervous system (CNS), and then further into other organs. The salivary glands receive high concentrations of the virus, allowing it to be transmitted further.

Any mammal may become infected with the rabies virus and develop symptoms, including humans. Most animals can be infected by the virus and can transmit the disease to humans. Infected bats, monkeys, raccoons, foxes, skunks, cattle, wolves, dogs, mongooses (normally the yellow mongoose) or cats pose the greatest risk to humans. Rabies may also spread through exposure to infected domestic farm animals, groundhogs, weasels, bears and other wild carnivores. Rodents (mice, squirrels, etc.) are seldom infected.[verification needed]

The virus is usually present in the nerves and saliva of a symptomatic rabid animal.[2][3] The route of infection is usually, but not necessarily, by a bite. In many cases the infected animal is exceptionally aggressive, may attack without provocation, and exhibits otherwise uncharacteristic behavior.[4] Transmission may also occur via an aerosol through mucous membranes; transmission in this form may have occurred in people exploring caves populated by rabid bats.

Transmission between humans is extremely rare. A few cases have been recorded through transplant surgery,[5] or, even more rarely, through bites, kisses or sexual relations.[citation needed]

After a typical human infection by bite, the virus enters the peripheral nervous system. It then travels along the nerves towards the central nervous system. During this phase, the virus cannot be easily detected within the host, and vaccination may still confer cell-mediated immunity to prevent symptomatic rabies. Once the virus reaches the brain, it rapidly causes encephalitis; this is called the “prodromal” phase, at which point treatment is essentially useless and symptoms soon appear. Rabies may also inflame the spinal cord, producing myelitis.

The rabies virus survives in widespread, varied rural animal reservoirs. However, in Asia, parts of the Americas and large parts of Africa, dogs remain the principal host. Mandatory vaccination of animals is less effective in rural areas. Especially in developing countries, pets may not be privately kept and their destruction may be unacceptable. Oral vaccines can be safely distributed in baits, and this has successfully reduced rabies in rural areas of France, Ontario, Texas, Florida and elsewhere, such as in the city of Montréal, Québec, where baits are used successfully among raccoons in the Mont-Royal park area. Vaccination campaigns may be expensive, and a cost-benefit analysis can lead those responsible to opt for policies of containment rather than elimination of the disease.

There are an estimated 55,000 human deaths annually from rabies worldwide, with about 31,000 in Asia and 24,000 in Africa.[6] One source of the recent resurgence of rabies in East Asia is the pet boom. In November 2006, China introduced a “one-dog policy” in Beijing to control the problem.[7] India has been reported as having the highest rate of human rabies in the world, primarily because of stray dogs.[8]

Rabies was once rare in the United States outside the Southern states,[citation needed] but raccoons in the mid-Atlantic and northeast United States have been suffering from a rabies epidemic since the 1970s, which is now moving westwards into Ohio.[9] In the midwestern United States, skunks are the primary carriers of rabies, accounting for 134 of the 237 documented non-human cases in 1996. The most widely distributed reservoir of rabies in the United States, however, and the source of most human cases in the U.S., is bats.[citation needed]

Rabies is infectious to mammals. Three stages of rabies are recognized in dogs and other animals. The first stage is a one- to three-day period characterized by behavioral changes and is known as the prodromal stage. The second stage is the excitative stage, which lasts three to four days. It is this stage that is often known as furious rabies, due to the tendency of the affected dog to be hyperreactive to external stimuli and to bite at anything near. The third stage is the paralytic stage and is caused by damage to motor neurons. Incoordination is seen, owing to rear-limb paralysis, and drooling and difficulty swallowing are caused by paralysis of the facial and throat muscles. Death is usually caused by respiratory arrest.[10]

Recently,[provide time reference] new rabies symptoms have been observed in wild animals, notably foxes. Probably at the beginning of the prodromal stage, foxes, which are extremely cautious by nature, completely lose their wild instincts: the animals enter settlements, approach people, and behave as if tame. How long this "euphoria" lasts is not known, but even in this state the animal is extremely dangerous, as its saliva and excretions still contain the virus. In an August 2008 blog article,[unreliable source?] one author observed and photographed such an animal.
Almost every case of rabies infection resulted in death until a vaccine was developed by Louis Pasteur and Emile Roux in 1885. Their original vaccine was harvested from infected rabbits, in which the virus in nerve tissue was weakened by allowing it to dry for five to ten days.[11] Similar nerve-tissue-derived vaccines are still used in some countries, as they are much cheaper than modern cell-culture vaccines.[12] The human diploid cell rabies vaccine (HDCV) was introduced in 1967; newer, less expensive purified chicken embryo cell and purified Vero cell rabies vaccines are now available.[citation needed] A recombinant vaccine called V-RG has been used successfully in the field in Belgium, France, Germany and the United States to prevent outbreaks of rabies in wildlife.[13] Pre-exposure immunization is currently used in both human and non-human populations, and in many jurisdictions domesticated animals are required to be vaccinated.[citation needed]

The period between infection and the first flu-like symptoms is normally two to twelve weeks, but can be as long as two years. Soon after, the symptoms expand to slight or partial paralysis, cerebral dysfunction, anxiety, insomnia, confusion, agitation, abnormal behavior, paranoia, terror and hallucinations, progressing to delirium.[citation needed] The production of large quantities of saliva and tears, coupled with an inability to speak or swallow, is typical during the later stages of the disease; this can result in “hydrophobia”, in which the victim has difficulty swallowing because the throat and jaw become slowly paralyzed, shows panic when presented with liquids to drink, and cannot quench his or her thirst. The disease itself was once commonly known as hydrophobia because of this characteristic symptom. The characteristic “foaming at the mouth” results from the overproduction of saliva combined with the inability to swallow it.

Death almost invariably results two to ten days after the first symptoms; the few humans known to have survived the disease[citation needed] were all left with severe brain damage, with the exception of Jeanna Giese (see below). The rabies virus is neurotropic in nature.

The reference method for diagnosing rabies is by performing PCR or viral culture on brain samples taken after death. The diagnosis can also be reliably made from skin samples taken before death.[14] It is also possible to make the diagnosis from saliva, urine and cerebrospinal fluid samples, but this is not as sensitive. Inclusion bodies called Negri bodies are 100% diagnostic for rabies infection, but found only in 20% of cases.

The differential diagnosis in a case of suspected human rabies may initially include any cause of encephalitis, particularly infection with viruses such as herpesviruses, enteroviruses, and arboviruses (e.g., West Nile virus). The most important viruses to rule out are herpes simplex virus type 1, varicella-zoster virus, and (less commonly) enteroviruses, including coxsackieviruses, echoviruses, polioviruses, and human enteroviruses 68 to 71. In addition, consideration should be given to the local epidemiology of encephalitis caused by arboviruses belonging to several taxonomic groups, including eastern and western equine encephalitis viruses, St. Louis encephalitis virus, Powassan virus, the California encephalitis virus serogroup, and La Crosse virus.

New causes of viral encephalitis are also possible, as was evidenced by the recent outbreak in Malaysia of some 300 cases of encephalitis (mortality rate, 40%) caused by Nipah virus, a newly recognized paramyxovirus.[15] Similarly, well-known viruses may be introduced into new locations, as is illustrated by the recent outbreak of encephalitis due to West Nile virus in the eastern United States.[16] Epidemiologic factors (e.g., season, geographic location, and the patient’s age, travel history, and possible exposure to animal bites, rodents, and ticks) may help direct the diagnostic workup.

Cheaper rabies diagnosis will be possible for low-income settings according to research reported on the Science and Development Network website in 2008. Accurate rabies diagnosis can be done ten times more cheaply, according to researchers from the Farcha Veterinary and Livestock Research Laboratory and the Support International Health Centre in N'Djamena, Chad. The scientists evaluated a method using light microscopy, cheaper than the standard tests, and say this could provide better rabies control across Africa.[17]

SARS

Severe acute respiratory syndrome (SARS) is a respiratory disease in humans caused by the SARS coronavirus (SARS-CoV).[1] There has been one near-pandemic to date, between November 2002 and July 2003, with 8,096 known infected cases and 774 deaths (a case-fatality rate of 9.6%) worldwide listed in the World Health Organization's (WHO) 21 April 2004 concluding report.[2] Within a matter of weeks in early 2003, SARS spread from the Guangdong province of China to infect individuals in some 37 countries around the world.[3]
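That case-fatality figure follows directly from the counts above: 774 deaths / 8,096 cases ≈ 0.096, or roughly 9.6%.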

Mortality by age group as of 8 May 2003 is below 1 percent for people aged 24 or younger, 6 percent for those 25 to 44, 15 percent in those 45 to 64 and more than 50 percent for those over 65.[4] For comparison, the case fatality rate for influenza is usually around 0.6 percent (primarily among the elderly) but can rise as high as 33 percent in locally severe epidemics of new strains. The mortality rate of the primary viral pneumonia form is about 70 percent.

The SARS epidemic appears to have started in Guangdong Province, China, in November 2002. The first case reportedly originated in Shunde, Foshan, Guangdong, in November 2002; the patient, a farmer, was treated in the First People's Hospital of Foshan (Mckay Dennis). He died soon after, and no definite diagnosis was made as to his cause of death. ("Patient #0", with the first reported symptoms, has been attributed to Charles Bybelezar of Montreal, Quebec, Canada.) Despite taking some action to control the outbreak, Chinese government officials did not inform the World Health Organization until February 2003, restricting media coverage in order to preserve public confidence. This lack of openness caused delays in efforts to control the epidemic, resulting in criticism of the People’s Republic of China (PRC) from the international community. The PRC has since officially apologized for its early slowness in dealing with the SARS epidemic.[6]

The first clue of the outbreak appears to have come on 27 November 2002, when Canada's Global Public Health Intelligence Network (GPHIN), an electronic warning system that is part of the World Health Organization's (WHO) Global Outbreak Alert and Response Network (GOARN), picked up reports of a "flu outbreak" in China through internet media monitoring and analysis and sent them to the WHO. Importantly, while GPHIN's capability had recently been upgraded to enable Arabic, Chinese, English, French, Russian and Spanish translation, the system was limited to English or French in presenting this information. Thus, while the first reports of an unusual outbreak were in Chinese, an English report was not generated until 21 January 2003.[7][8] The WHO subsequently requested information from Chinese authorities on 5 and 11 December. Despite its successes in previous disease outbreaks, the network proved less effective here, receiving intelligence on the Chinese media reports only several months after the outbreak of SARS began. Along with its second alert, the WHO released the disease's name and case definition and activated a coordinated global outbreak response network that brought attention and containment procedures to bear (Heyman, 2003). By then, however, although the new definitions gave nations a guideline for containing SARS, over five hundred deaths and an additional two thousand cases had already occurred worldwide.[8]

In early April, there appeared to be a change in official policy when SARS began to receive a much greater prominence in the official media. Some have directly attributed this to the death of American James Earl Salisbury.[9] However, it was also in early April that accusations emerged regarding the undercounting of cases in Beijing military hospitals. After intense pressure, PRC officials allowed international officials to investigate the situation there. This revealed problems plaguing the aging mainland Chinese healthcare system, including increasing decentralization, red tape, and inadequate communication.

In late April, the PRC government admitted to under-reporting the number of SARS cases, owing to problems inherent in the healthcare system. Dr. Jiang Yanyong exposed the cover-up occurring in China, at great personal risk; he reported that there were more SARS patients in his hospital alone than were being reported in all of China. A number of PRC officials were fired from their posts, including the health minister and the mayor of Beijing, and systems were set up to improve reporting and control in the SARS crisis. Since then, the PRC has taken a much more active and transparent role in combating the SARS epidemic. The death toll of the epidemic was nonetheless severe, and the PRC government's initial denial was widely considered irresponsible and a risk to the rest of the world.

The epidemic reached the public spotlight in February 2003, when an American businessman traveling from China became afflicted with pneumonia-like symptoms while on a flight to Singapore. The plane stopped at Hanoi, Vietnam, where the victim died in the French Hospital of Hanoi. Several of the medical staff who treated him soon developed the same disease despite basic hospital procedures. Italian doctor Carlo Urbani identified the threat and communicated it to the WHO and the Vietnamese government; he later succumbed to the disease. The severity of the symptoms and the infection of hospital staff alarmed global health authorities, fearful of another emergent pneumonia epidemic. On 12 March 2003, the WHO issued a global alert, followed by a health alert from the United States Centers for Disease Control and Prevention (CDC). Local transmission of SARS took place in Toronto, Ottawa, San Francisco, Ulan Bator, Manila, Singapore, Taiwan, Hanoi and Hong Kong, while within mainland China it spread to Guangdong, Jilin, Hebei, Hubei, Shaanxi, Jiangsu, Shanxi, Tianjin and Inner Mongolia.

In Hong Kong the first cohort of affected people was discharged from hospital on 29 March 2003. The disease spread in Hong Kong from a mainland doctor who arrived in February and stayed on the 9th floor of the Metropole Hotel in Kowloon Peninsula, infecting 16 of the hotel's visitors. Those visitors traveled to Canada, Singapore, Taiwan and Vietnam, spreading SARS to those locations.[10] Another, larger cluster of cases in Hong Kong centred on the Amoy Gardens housing estate; its spread is suspected to have been facilitated by defects in the estate's sewage system. Concerned citizens in Hong Kong, worried that information was not reaching people quickly enough, created a website called sosick.org, which eventually forced the Hong Kong government to provide information related to SARS in a timely manner.
Initial symptoms are flu-like and may include fever, myalgia, lethargy, gastrointestinal symptoms, cough, sore throat and other non-specific symptoms. The only symptom common to all patients appears to be a fever above 38 °C (100.4 °F). Shortness of breath may occur later. Symptoms usually appear 2–10 days following exposure, but up to 13 days has been reported. In most cases symptoms appear within 2–3 days. About 10–20% of cases require mechanical ventilation.

The chest X-ray (CXR) appearance of SARS is variable. There is no pathognomonic appearance, but the CXR is commonly abnormal, with patchy infiltrates in any part of the lungs. The initial CXR may be clear.

White blood cell and platelet counts are often low. Early reports indicated a tendency to relative neutrophilia and a relative lymphopenia (relative because the total number of white blood cells tends to be low). Other laboratory tests suggest raised lactate dehydrogenase and slightly raised creatine kinase and C-reactive protein levels.

With the identification and sequencing of the RNA of the coronavirus responsible for SARS on 12 April 2003, several diagnostic test kits have been produced and are now being tested for their suitability for use.

Three possible diagnostic tests have emerged, each with drawbacks. The first, an ELISA (enzyme-linked immunosorbent assay) test, detects antibodies to SARS reliably, but only 21 days after the onset of symptoms. The second, an immunofluorescence assay, can detect antibodies 10 days after the onset of the disease, but it is a labour- and time-intensive test, requiring an immunofluorescence microscope and an experienced operator. The last is a polymerase chain reaction (PCR) test that can detect genetic material of the SARS virus in specimens including blood, sputum, tissue samples and stool. The PCR tests so far have proven to be very specific but not very sensitive. This means that while a positive PCR test result is strongly indicative that the patient is infected with SARS, a negative test result does not mean that the patient does not have SARS.
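A minimal sketch of what "specific but not sensitive" implies in practice, using Bayes' rule with purely illustrative numbers (the sensitivity, specificity and prevalence below are assumptions, not measured figures for any SARS assay):

def predictive_values(sensitivity, specificity, prevalence):
    # Probabilities of the four test/disease outcomes for a random patient.
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)   # P(disease | positive test)
    npv = true_neg / (true_neg + false_neg)   # P(no disease | negative test)
    return ppv, npv

# Assumed values: a highly specific but insensitive test, applied to suspected
# cases of whom 30% actually have SARS.
ppv, npv = predictive_values(sensitivity=0.60, specificity=0.99, prevalence=0.30)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # PPV ≈ 96.3%, NPV ≈ 85.2%

Under these assumed numbers a positive result almost certainly means infection, but roughly one suspected case in seven would still be missed by a negative result, which is why a negative PCR could not be used to rule SARS out.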

The WHO has issued guidelines for using these diagnostic tests.[5] There is currently no rapid screening test for SARS and research is ongoing.

Antibiotics are ineffective as SARS is a viral disease. Treatment of SARS so far has been largely supportive with antipyretics, supplemental oxygen and ventilatory support as needed.

Suspected cases of SARS must be isolated, preferably in negative pressure rooms, with complete barrier nursing precautions taken for any necessary contact with these patients.

There was initially anecdotal support for steroids and the antiviral drug ribavirin, but no published evidence has supported this therapy. Many clinicians now suspect that ribavirin is detrimental.[citation needed]

Researchers are currently testing all known antiviral treatments for other diseases including AIDS, hepatitis, influenza and others on the SARS-causing coronavirus.

There is some evidence that some of the more serious damage in SARS is due to the body's own immune system overreacting to the virus. There may be some benefit from using steroids and other immune modulating agents in the treatment of the more acute SARS patients. Research is continuing in this area.

In December 2004 it was reported that Chinese researchers had produced a SARS vaccine. It has been tested on a group of 36 volunteers, 24 of whom developed antibodies against the virus.[11]

A 2006 systematic review of all the studies done on the 2003 SARS epidemic found no evidence that antivirals, steroids or other therapies helped patients. A few suggested they caused harm.[12]

The clinical treatment of SARS has been relatively ineffective, with most high-risk patients requiring artificial ventilation. Currently, corticosteroids and ribavirin are the most common drugs used for treatment of SARS (Wu et al., 2004). In vitro studies of ribavirin have shown little effect at clinically achievable, nontoxic concentrations. Drug combinations that have yielded a more positive clinical outcome (when administered early) include Kaletra, ribavirin and corticosteroids. The administration of corticosteroids such as prednisone during viral infections has been controversial: lymphopenia can be a side effect of corticosteroids, further decreasing the immune response and allowing a spike in the viral load, so physicians must balance this against the need for anti-inflammatory treatment (Murphy 2008). Clinicians have also noticed positive results with human interferon and glycyrrhizin.

No compounds have yielded inhibitory results of any significance. The HIV protease inhibitors ritonavir and saquinavir did not show any inhibitory effect at nontoxic levels. Iminocyclitol 7 has been found to have an inhibitory effect on SARS-CoV by disrupting envelope glycoprotein processing. It specifically inhibits the production of human fucosidase, and in vitro trials yielded promising results in the treatment of SARS, but one problem exists: a deficiency of fucosidase can lead to a condition known as fucosidosis, in which there is a decrease in neurological function.

Avian Flu

Avian influenza, sometimes called avian flu and commonly bird flu, refers to "influenza caused by viruses adapted to birds."[1][2][3][4][5][6][7]

"Bird flu" is a phrase similar to "Swine flu", "Dog flu", "Horse flu", or "Human flu" in that it refers to an illness caused by any of many different strains of influenza viruses that have adapted to a specific host. All known viruses that cause influenza in birds belong to the species: Influenza A virus. All subtypes (but not all strains of all subtypes) of Influenza A virus are adapted to birds, which is why for many purposes avian flu virus is the Influenza A virus (note that the "A" does not stand for "avian").

Adaptation is non-exclusive: being adapted to one species does not preclude adaptations, or partial adaptations, towards infecting other species. In this way, strains of influenza virus can be adapted to multiple species, though they may be preferential towards a particular host. For example, viruses responsible for influenza pandemics are adapted to both humans and birds. Recent influenza research into the genes of the Spanish flu virus shows it to have genes adapted to both birds and humans, with a greater proportion of its genes derived from birds than in later, less deadly pandemic strains.

For more details on this topic, see Influenza pandemic.

Pandemic flu viruses have some avian flu virus genes and usually some human flu virus genes. Both the H2N2 and H3N2 pandemic strains contained genes from avian influenza viruses. The new subtypes arose in pigs coinfected with avian and human viruses and were soon transferred to humans. Swine were considered the original "intermediate host" for influenza, because they supported reassortment of divergent subtypes. However, other hosts appear capable of similar coinfection (e.g., many poultry species), and direct transmission of avian viruses to humans is possible.[8] The Spanish flu virus strain may have been transmitted directly from birds to humans.[9]

In spite of their pandemic connection, avian influenza viruses are noninfectious for most species. When they are infectious, the infection is usually asymptomatic, so the carrier does not suffer any disease from it. Thus, while infected with an avian flu virus, the animal doesn't have a "flu". Typically, when illness (called "flu") from an avian flu virus does occur, it is the result of a strain adapted to one species spreading to another species (usually from one bird species to another bird species). So far as is known, the most common result of this is an illness so minor as to be not worth noticing (and thus little studied). But with the domestication of chickens and turkeys, humans have created species subtypes (domesticated poultry) that can catch an avian flu virus adapted to waterfowl and allow it to mutate rapidly into a form that kills over 90% of an entire flock within days, spreads to other flocks with similar lethality, and can only be stopped by killing every domestic bird in the area. Until H5N1 infected humans in the 1990s, this was the only reason avian flu was considered important. Since then, avian flu viruses have been intensively studied, resulting in changes in what is believed about flu pandemics, as well as in poultry farming, flu vaccination research, and flu pandemic planning.

H5N1 has evolved into a flu virus strain that infects more species than any previously known strain, is deadlier than any previously known strain, and continues to evolve, becoming both more widespread and more deadly. This led Robert Webster, a leading expert on avian flu, to publish an article in American Scientist titled "The world is teetering on the edge of a pandemic that could kill a large fraction of the human population," in which he called for adequate resources to fight what he sees as a major threat to possibly billions of lives.[10] Since the article was written, the world community has spent billions of dollars fighting this threat, with limited success.

For more details on this topic, see H5N1 and Transmission and infection of H5N1.

The highly pathogenic Influenza A virus subtype H5N1 is an emerging avian influenza virus that has been causing global concern as a potential pandemic threat. It is often referred to simply as "bird flu" or "avian influenza", even though it is only one of many subtypes of avian influenza virus.

H5N1 has killed millions of poultry in a growing number of countries throughout Asia, Europe and Africa. Health experts are concerned that the co-existence of human flu viruses and avian flu viruses (especially H5N1) will provide an opportunity for genetic material to be exchanged between species-specific viruses, possibly creating a new virulent influenza strain that is easily transmissible and lethal to humans.[11]

Since the first H5N1 outbreak occurred in 1997, there has been an increasing number of HPAI H5N1 bird-to-human transmissions, leading to clinically severe and fatal human infections. However, because a significant species barrier exists between birds and humans, the virus does not easily cross over to humans, though some cases of infection are being researched to discern whether human-to-human transmission is occurring.[8] More research is necessary to understand the pathogenesis and epidemiology of the H5N1 virus in humans. Exposure routes and other disease transmission characteristics, such as genetic and immunological factors that may increase the likelihood of infection, are not clearly understood.[12]

On January 18, 2009, a 27-year-old woman from eastern China died of bird flu, Chinese authorities said, making her the second person to die from the virus that year. Two tests on the woman were positive for H5N1 avian influenza, said the ministry, which did not say how she might have contracted the virus.[13]

Although millions of birds have become infected with the virus since its discovery, 248 humans had died from H5N1 in twelve countries according to WHO data as of January 2009.[14]

Avian flu has claimed at least 200 human lives in Indonesia, Vietnam, Laos, Romania, China, Turkey and Russia. Epidemiologists fear that the next time such a virus mutates, it could pass from human to human. If this form of transmission occurs, another pandemic could result. Disease-control centers around the world are therefore making avian flu a top priority. These organizations encourage poultry-related operations to develop preemptive plans to prevent the spread of H5N1 and its potentially pandemic strains. The recommended plans center on providing protective clothing for workers and isolating flocks to prevent the spread of the virus.[15]

Chinese Food

In most dishes in Chinese cuisine, food is prepared in bite-sized pieces, ready for direct picking up and eating. The food selected is often eaten together with some rice either in one bite or in alternation.

Traditionally, chopsticks are used at the table in Chinese culture. However, many non-Chinese are uncomfortable with allowing a person's individual utensils (which might carry traces of saliva) to touch the communal food dishes. In areas with strong Western influences, such as Hong Kong, diners are therefore provided individually with a heavy metal spoon for this purpose.
Pork is generally preferred over beef in Chinese cuisine for economic, religious, and aesthetic reasons: swine are easy to feed and are not used for labour, and are so closely tied to the idea of domesticity that the character for "home" (家) depicts a pig under a roof. The colour of the meat and the fat of pork are regarded as more appetizing, while the taste and smell are described as sweeter and cleaner; it is also considered easier to digest. Buddhist cuisine restricts the use of meat, and Chinese Islamic cuisine excludes pork.[1]

Vegetarianism is not uncommon or unusual in China, though, as in the West, it is practiced by only a relatively small fraction of the population. Most Chinese vegetarians are Buddhists, following the Buddhist teachings about minimizing suffering. Chinese vegetarian dishes often contain a large variety of vegetables (e.g. bok choy, shiitake mushrooms, sprouts, corn) and some imitation meat. Such imitation meat is created mostly with soy protein and/or wheat gluten to imitate the texture, taste, and appearance of duck, chicken, or pork. Imitation seafood items, made from other vegetable substances such as konjac, are also available.

According to the United Nations Food and Agriculture Organization estimates for 2001–2003, 12% of the population of the People’s Republic of China was undernourished.[2] The number of undernourished people in the country has fallen from 386.6 million in 1969–1971 to 150.0 million in 2001–2003.[3]

Undernourishment is a problem mainly in the central and western part of the country, while "unbalanced nutrition" is a problem in developed coastal and urban areas. Decades of food shortages and rationing ended in the 1980s. A study in 2004 showed that fat intake among urban dwellers had grown to 38.4 percent, beyond the 30 percent limit set by the World Health Organization. Excessive consumption of fats and animal protein has made chronic diseases more prevalent. As of 2008, 22.8 percent of the population were overweight and 18.8 percent had high blood pressure. The number of diabetes cases in China is the highest in the world. In 1959, the incidence of high blood pressure was only 5.9 percent.[4][5]

A typical Chinese peasant before industrialization would have eaten meat rarely; most meals would have consisted of rice accompanied by green vegetables, with protein coming from foods like peanuts and soya products. Fats and sugar were luxuries not eaten on a regular basis by most of the population. With increasing wealth, Chinese diets have become richer over time, including more meat, fat, and sugar.

Health advocates put some of the blame on the increased popularity of Western foods, especially fast food, and other culinary products and habits. Many Western, especially American, fast food chains have appeared in China, and are highly successful economically. These include McDonald's, Pizza Hut, and Kentucky Fried Chicken (KFC).

An extensive epidemiological study called the China Project is being conducted to observe the relationship of disease patterns to diet, particularly the move from the traditional Chinese diet to one which incorporates more rich Western-style foods. Controversially, Professor T. Colin Campbell has implicated the increased consumption of animal protein in particular as having a strong correlation with cancer, diabetes, heart disease, and other diseases that, while common in Western countries, were considered rare in China. He suggests that even a small increase in the consumption of animal protein can dramatically raise the risk of the aforementioned diseases.[citation needed]