Monday 30 September 2013

Aeroelasticity

Aeroelasticity is the science which studies the interactions among inertial, elastic, and aerodynamic forces. It was defined by Arthur Roderick Collar in 1947 as "the study of the mutual interaction that takes place within the triangle of the inertial, elastic, and aerodynamic forces acting on structural members exposed to an airstream, and the influence of this study on design." In simpler terms, it is the same set of conditions which causes a flag to flutter in a breeze or a reed to tremble in fast-flowing water. Flutter may occur in any fluid medium.

Introduction

Airplane structures are not completely rigid, and aeroelastic phenomena arise when structural deformations induce changes in the aerodynamic forces. The additional aerodynamic forces cause further structural deformation, which in turn produces greater aerodynamic forces in a feedback loop. These interactions may diminish until a condition of equilibrium is reached, or may diverge catastrophically if resonance occurs.

Aeroelasticity can be divided into two fields of study: steady (static) and dynamic aeroelasticity.

Steady aeroelasticity

Steady aeroelasticity studies the interaction between aerodynamic and elastic forces on an elastic structure. Mass properties are not significant in the calculations of this type of phenomena.

Divergence

Divergence occurs when a lifting surface deflects under aerodynamic load so as to increase the applied load, or move the load so that the twisting effect on the structure is increased. The increased load deflects the structure further, which brings the structure to the limit loads and to failure.

One example is the divergent loading of a wing whose span-wise stiffness varies across the wing chord: the trailing edge of the wing is stiffer along the span than the leading edge. When the wing is loaded aerodynamically, under lift, the leading edge deflects more than the trailing edge, yielding an increased angle of attack. This, in turn, increases the lift coefficient, raising the lift load and further increasing the wing loading. Failure to arrest this divergence is likely to result in structural failure of the wing.
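For reference, the classic "typical section" model (a textbook idealization, not given in the text above) makes this runaway twist quantitative. With torsional stiffness K_θ, chord c, distance e of the aerodynamic centre ahead of the elastic axis, lift-curve slope C_{L_α}, and rigid angle of attack α₀, the elastic twist θ per unit span satisfies:

```latex
K_\theta\,\theta = q\,c\,e\,C_{L_\alpha}\left(\alpha_0 + \theta\right)
\qquad\Longrightarrow\qquad
q_D = \frac{K_\theta}{c\,e\,C_{L_\alpha}}
```

Solving for θ shows it growing without bound as the dynamic pressure q approaches the divergence pressure q_D, which is the structural-failure scenario described above.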

Control surface reversal

Control surface reversal is the loss (or reversal) of the expected response of a control surface, due to structural deformation of the main lifting surface.

Dynamic aeroelasticity

Dynamic aeroelasticity studies the interactions among aerodynamic, elastic, and inertial forces. Examples of dynamic aeroelastic phenomena are:

Flutter

Flutter is a self-feeding and potentially destructive vibration in which aerodynamic forces on an object couple with a structure's natural mode of vibration to produce rapid periodic motion. Flutter can occur in any object within a strong fluid flow, under the condition that a positive feedback arises between the structure's natural vibration and the aerodynamic forces: the vibrational movement of the object increases an aerodynamic load, which in turn drives the object to move further. If the energy input by the aerodynamic excitation in a cycle is larger than that dissipated by the damping in the system, the amplitude of vibration will increase, resulting in self-exciting oscillation. The amplitude builds up until the energy dissipated by aerodynamic and mechanical damping matches the energy input, which can mean large-amplitude vibration and rapid structural failure.
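The energy-balance argument above can be illustrated with a one-degree-of-freedom sketch (a deliberately simplified model, not a real flutter analysis): the aerodynamic energy input is represented as negative damping, and the amplitude grows whenever it exceeds the structural dissipation.

```python
# Illustrative sketch: a single-degree-of-freedom oscillator with net
# damping c = c_structural - c_aerodynamic.  When aerodynamic energy
# input per cycle exceeds structural dissipation (net c < 0), each
# cycle grows -- the flutter condition described in the text.

def peak_amplitude(c, k=1.0, m=1.0, x0=1.0, dt=1e-3, t_end=50.0):
    """Integrate m*x'' + c*x' + k*x = 0 (semi-implicit Euler) and
    return the largest |x| seen over the run."""
    x, v = x0, 0.0
    peak = abs(x0)
    for _ in range(int(t_end / dt)):
        a = -(c * v + k * x) / m
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Positive net damping: the oscillation decays, amplitude stays bounded.
# Negative net damping (aerodynamic input > dissipation): it grows.
print(peak_amplitude(c=+0.05))   # stays near the initial amplitude
print(peak_amplitude(c=-0.05))   # grows cycle after cycle
```

The sign of the net damping term is the whole story in this toy model; real flutter analyses track how that effective damping varies with airspeed.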

Because of this, structures exposed to aerodynamic forces — including wings and aerofoils, but also chimneys and bridges — are designed carefully within known parameters to avoid flutter. In complex structures where both the aerodynamics and the mechanical properties of the structure are not fully understood, flutter can only be discounted through detailed testing. Even changing the mass distribution of an aircraft or the stiffness of one component can induce flutter in an apparently unrelated aerodynamic component. At its mildest this can appear as a "buzz" in the aircraft structure, but at its most violent it can develop uncontrollably with great speed and cause serious damage to or lead to the destruction of the aircraft, as in Braniff Flight 542.

In some cases, automatic control systems have been demonstrated to help prevent or limit flutter-related structural vibration.

Flutter can also occur on structures other than aircraft. One famous example of flutter phenomena is the collapse of the original Tacoma Narrows Bridge.

Flutter as a controlled aerodynamic instability is used intentionally and beneficially in windmills for generating electricity, and in other applications such as making musical tones on ground-mounted devices or on musical kites. Flutter is not always a destructive force; recent progress has been made in small-scale (table-top) wind generators for underserved communities in developing countries, designed specifically to take advantage of this effect. Peter Allan Sharp (of Oakland, California) and Jonathan Hare (of the University of Sussex) demonstrated, in March 2007, a linear generator run by two flutter wings.[7] The wind energy industry distinguishes between flutter wings, flip wings, and oscillating tensionally-held sweeping membrane wings for windmilling.

Dynamic response

Dynamic response or forced response is the response of an object to changes in a fluid flow, such as an aircraft's response to gusts and other external atmospheric disturbances. Forced response is a concern in axial compressor and gas turbine design, where one set of aerofoils passes through the wakes of the aerofoils upstream.

Buffeting

Buffeting is a high-frequency instability caused by airflow separation or shock-wave oscillations from one object striking another. It takes the form of a sudden impulse of increased load and constitutes a random forced vibration. It generally affects the tail unit of the aircraft structure due to the airflow downstream of the wing.

Transonic Aeroelasticity

Flow is highly non-linear in the transonic regime, dominated by moving shock waves. Accounting for transonic aeroelasticity is mission-critical for aircraft that fly through transonic Mach numbers. The role of shock waves was first analyzed by Holt Ashley. A phenomenon that impacts the stability of aircraft, known as "transonic dip", in which the flutter speed can drop close to flight speed, was reported in May 1976 by Farmer and Hanson of the Langley Research Center.

Aerocapture

Aerocapture is a technique used to reduce the velocity of a spacecraft arriving at a celestial body on a hyperbolic trajectory, in order to bring it into an orbit with an eccentricity of less than 1. It uses the drag created by the atmosphere of the celestial body to decelerate. Only one pass through the atmosphere is required by this technique, in contrast with aerobraking. However, this approach requires significant thermal protection and precision closed-loop guidance during the maneuver. This level of control authority requires either the production of significant lift, or relatively large attitude control thrusters.
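The e < 1 condition can be sketched numerically (the target body, Mars, and the periapsis numbers below are assumptions for illustration): at periapsis, where velocity is perpendicular to the radius, the eccentricity is e = r·v²/μ − 1, so the single drag pass must cut the speed below the local escape speed for capture to succeed.

```python
# Sketch of the capture condition (illustrative numbers, not from the text).
import math

MU_MARS = 4.2828e13        # Mars gravitational parameter, m^3/s^2 (assumed body)

def eccentricity_at_periapsis(r, v, mu=MU_MARS):
    """Orbit eccentricity given periapsis radius r (m) and periapsis speed v (m/s).
    Valid at periapsis, where velocity is perpendicular to the radius."""
    return r * v**2 / mu - 1.0

r_p = 3_522_000.0                          # periapsis ~125 km above Mars, m
v_escape = math.sqrt(2 * MU_MARS / r_p)    # local escape speed, m/s
print(round(v_escape))                     # speed the drag pass must get under

print(eccentricity_at_periapsis(r_p, 6000.0))   # hyperbolic arrival: e > 1
print(eccentricity_at_periapsis(r_p, 4500.0))   # after the drag pass: e < 1
```

With the slower exit speed the trajectory is a bound ellipse, matching the "eccentricity of less than 1" requirement in the text.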

In practice

Aerocapture has not yet been tried on a planetary mission, but the re-entry skips performed by Zond 6 and Zond 7 upon lunar return were aerocapture maneuvers, since they turned a hyperbolic orbit into an elliptical orbit. On these missions, since there was no attempt to raise the perigee after the aerocapture, the resulting orbit still intersected the atmosphere, and re-entry occurred at the next perigee.

Aerocapture was originally planned for the Mars Odyssey orbiter, but was later changed to aerobraking for reasons of cost and commonality with other missions. Aerocapture has also been proposed and analyzed for arrival at Saturn's moon Titan.



In fiction

Aerocapture appears in Arthur C. Clarke's novel 2010: Odyssey Two, in which two spacecraft (one Russian, one Chinese) both use aerocapture in Jupiter's atmosphere to shed their excess velocity and position themselves for exploring Jupiter's satellites. It can be seen as a special effect in the movie version, in which only a Russian spacecraft undergoes aerocapture (in the film incorrectly called aerobraking).

Aerobraking

Aerobraking is a spaceflight maneuver that reduces the high point of an elliptical orbit (apoapsis) by flying the vehicle through the atmosphere at the low point of the orbit (periapsis). The resulting drag slows the spacecraft. Aerobraking is used when a spacecraft requires a low orbit after arriving at a body with an atmosphere, and it requires less fuel than does the direct use of a rocket engine.

Method

When an interplanetary vehicle arrives at its destination, it must change its velocity to remain in the vicinity of that body. When a low, near-circular orbit around a body with substantial gravity (as is required for many scientific studies) is needed, the total required velocity changes can be on the order of several kilometers per second. If done by direct propulsion, the rocket equation dictates that a large fraction of the spacecraft mass must be fuel. This in turn means the spacecraft is limited to a relatively small science payload and/or the use of a very large and expensive launcher. Provided the target body has an atmosphere, aerobraking can be used to reduce fuel requirements. The use of a relatively small burn allows the spacecraft to be captured into a very elongated elliptic orbit. 
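The rocket-equation argument above can be made concrete with a small sketch (the specific impulse and delta-v figures are illustrative assumptions, not values from the text):

```python
# A minimal sketch: the Tsiolkovsky rocket equation
#   delta_v = v_e * ln(m0 / m1)
# implies the propellant fraction needed for a burn is 1 - exp(-delta_v / v_e).
import math

def propellant_fraction(delta_v, isp=320.0, g0=9.80665):
    """Fraction of initial mass that must be propellant for a given delta-v (m/s).

    isp: specific impulse in seconds; 320 s is a typical bipropellant
    figure -- an assumed value, not from the text.
    """
    v_e = isp * g0                      # effective exhaust velocity, m/s
    return 1.0 - math.exp(-delta_v / v_e)

# Braking directly into a low orbit (several km/s) versus a small capture
# burn into an elongated ellipse followed by aerobraking:
print(f"{propellant_fraction(3000.0):.0%} of launch mass as fuel for 3 km/s")
print(f"{propellant_fraction(1000.0):.0%} of launch mass as fuel for 1 km/s")
```

The steep difference between the two fractions is exactly why a small capture burn plus aerobraking leaves so much more mass available for payload.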

Aerobraking is then used to circularize the orbit. If the atmosphere is thick enough, a single pass through it can be sufficient to slow a spacecraft as needed. However, aerobraking is typically done with many orbital passes through a higher altitude, and therefore thinner region of the atmosphere. This is done to reduce the effect of frictional heating, and because unpredictable turbulence effects, atmospheric composition, and temperature make it difficult to accurately predict the decrease in speed that will result from any single pass. When aerobraking is done in this way, there is sufficient time after each pass to measure the change in velocity and make any necessary corrections for the next pass.

 Achieving the final orbit using this method takes a long time (e.g., over six months when arriving at Mars), and may require several hundred passes through the atmosphere of the planet or moon. After the last aerobraking pass, the spacecraft must be given more kinetic energy via rocket engines in order to raise the periapsis above the atmosphere.

The kinetic energy dissipated by aerobraking is converted to heat, meaning that a spacecraft using the technique needs to be capable of dissipating this heat. The spacecraft must also have sufficient surface area and structural strength to produce and survive the required drag, but the temperatures and pressures associated with aerobraking are not as severe as those of atmospheric reentry or aerocapture. Simulations of Mars Reconnaissance Orbiter aerobraking used a force limit of 0.35 N per square meter with a spacecraft cross-section of about 37 m², and a maximum expected temperature of 340 °F (170 °C). The force density of roughly 0.2 N (0.04 lbf) per square meter exerted on the Mars Observer during aerobraking is comparable to the force of a 40 mph (60 km/h) wind on a human hand at sea level on Earth.

Related methods

Aerocapture is a related but more extreme method in which no initial orbit-injection burn is performed. Instead, the spacecraft plunges deeply into the atmosphere in a single pass and emerges from it with an apoapsis near that of the desired orbit. Several small correction burns are then used to raise the periapsis and perform final adjustments. This method was originally planned for the Mars Odyssey orbiter, but the significant design impacts proved too costly.

Another related technique is that of aerogravity assist, in which the spacecraft flies through the upper atmosphere and utilises aerodynamic lift instead of drag at the point of closest approach. If correctly oriented, this can increase the deflection angle above that of a pure gravity assist, resulting in a larger delta-v.

Spacecraft missions

Although the theory of aerobraking is well developed, utilising the technique is difficult because a very detailed knowledge of the character of the target planet's atmosphere is needed in order to plan the maneuver correctly. Currently, the deceleration is monitored during each maneuver and plans are modified accordingly. Since no spacecraft can yet aerobrake safely on its own, this requires constant attention from both human controllers and the Deep Space Network. This is particularly true near the end of the process, when the drag passes are relatively close together (only about 2 hours apart for Mars).[citation needed] NASA has used aerobraking four times to modify a spacecraft’s orbit to one with lower energy, reduced apoapsis altitude, and smaller orbit.

On 19 March 1991, aerobraking was demonstrated by the Hiten spacecraft. This was the first aerobraking maneuver by a deep space probe. Hiten (a.k.a. MUSES-A) was launched by the Institute of Space and Astronautical Science (ISAS) of Japan. Hiten flew by the Earth at an altitude of 125.5 km over the Pacific at 11.0 km/s. Atmospheric drag lowered the velocity by 1.712 m/s and the apogee altitude by 8665 km.[10] Another aerobraking maneuver was conducted on 30 March.

In May 1993, aerobraking was used during the extended Venus mission of the Magellan spacecraft. It was used to circularize the orbit of the spacecraft in order to increase the precision of the measurement of the gravity field. The entire gravity field was mapped from the circular orbit during a 243-day cycle of the extended mission. During the termination phase of the mission, a "windmill experiment" was performed: atmospheric molecular pressure exerted a torque on the solar panels, which were oriented like windmill sails, and the counter-torque required to keep the spacecraft from spinning was measured.

In 1997, the Mars Global Surveyor (MGS) orbiter was the first spacecraft to use aerobraking as the main planned technique of orbit adjustment. The MGS used the data gathered from the Magellan mission to Venus to plan its aerobraking technique. The spacecraft used its solar panels as "wings" to control its passage through the tenuous upper atmosphere of Mars and lower the apoapsis of its orbit over the course of many months. Unfortunately, a structural failure shortly after launch severely damaged one of the MGS's solar panels and necessitated a higher aerobraking altitude (and hence one third the force) than originally planned, significantly extending the time required to attain the desired orbit. More recently, aerobraking was used by the Mars Odyssey and Mars Reconnaissance Orbiter spacecraft, in both cases without incident.

Aerobraking in fiction

In Robert A. Heinlein's 1948 novel Space Cadet, aerobraking is used to save fuel while slowing the spacecraft Aes Triplex for an unplanned extended mission and landing on Venus, during a transit from the Asteroid Belt to Earth.

In the fourth episode of Stargate Universe, the Ancient ship Destiny suffers an almost complete loss of power and must use aerobraking to change course. The episode ends in a cliffhanger with Destiny headed directly toward a star.

The spacecraft Cosmonaut Alexey Leonov in Arthur C. Clarke's novel 2010: Odyssey Two uses aerobraking in the upper layers of Jupiter's atmosphere to establish itself at the L1 Lagrangian point of the Jupiter - Io system.

In Space Odyssey: Voyage to the Planets (2004) the crew of the international spacecraft Pegasus perform an aerobraking in Jupiter's upper atmosphere to slow them down enough to enter Jovian orbit.

In the space simulation sandbox game Kerbal Space Program, this is a very common method of slowing a craft's orbital speed.

Aerodynamic braking

Aerodynamic braking is a method used in landing aircraft to assist the wheel brakes in stopping the plane. It is often used for short runway landings or when conditions are wet, icy, or slippery. Aerodynamic braking is performed immediately after the rear wheels (main mounts) touch down, but before the nose wheel drops. The pilot begins to pull back on the stick, applying elevator pressure to hold the nose high. The nose-high attitude exposes more of the craft's surface area to the flow of air, which produces greater drag, helping to slow the plane. The raised elevators also cause air to push down on the rear of the craft, forcing the rear wheels harder against the ground, which aids the wheel brakes by helping to prevent skidding. The pilot will usually continue to hold back on the stick even after the elevators lose their authority and the nose wheel drops, to keep added pressure on the rear wheels.

Aerodynamic braking is a common braking technique during landing, which can also help to protect the wheel brakes from excess wear, or from locking up and sending the craft sliding out of control. It is often used by private pilots, commercial planes, fighter aircraft, and was used by the space shuttles during landings.

Advanced Space Vision System

The Advanced Space Vision System (also known as the Space Vision System or by its acronym SVS) is a computer vision system designed primarily for International Space Station (ISS) assembly.[1] The system uses regular 2D cameras in the Space Shuttle bay, on the Canadarm, or on the ISS, along with cooperative targets, to calculate the 3D position of an object.

Because of the small number of viewing ports on the station and on the shuttle, most of the assembly and maintenance is done using cameras, which do not give stereoscopic vision and thus do not allow a proper evaluation of depth. In addition, the peculiar conditions of illumination and darkness in space make it much more difficult to distinguish objects, even when the assembly work can be viewed directly without a camera. For instance, the harsh glare of direct sunlight can blind human vision. Also, the contrast between objects in black shadow and objects in sunlight is much greater than in Earth's atmosphere, even where no glare is involved.

Background

The Advanced Space Vision System images objects bearing cooperative targets and uses the known positions of the targets to triangulate their exact relative positions in real time. The targets are composed of thin films of silicon dioxide layered with inconel to form an inconel interference stack. A stack like this has nearly no reflectivity across the electromagnetic spectrum. The result is a black color that appears even blacker than the flattest black paint. In photos the disks look like small black dots, and a minimum of three are needed, so they are quite unobtrusive on most payloads.

Development

The basic elements of the system were devised at the National Research Council of Canada in the 1970s, to study car collisions. In 1990, development was transferred to Neptec Design Group, a small commercial enterprise located in Kanata, a suburb of Ottawa. The system runs on Neptec's Advanced Vision Unit (AVU) processing platform, which handles video routing, algorithm processing, video overlays, and the system interface. The operating system is the Unix-like and POSIX-compliant QNX real-time operating system, running the Photon windowing interface. The Photon implementation was optimized to be as worry-free a direct manipulation interface as possible for the particular needs and work habits of the astronauts.
The Canadian Space Agency was involved at several stages in the development and deployment of the space vision system. Training for the system takes place in the simulators located at the agency's headquarters at the John H. Chapman Space Centre near Montreal.

Implementation

The system was first tested in its early form on STS-52 in October 1992, and used in subsequent missions. The advanced version was first tested on STS-74 in November 1995.[3] The system has been used with success on shuttle flights since then, and with equal success for the assembly and maintenance of the station since 1997.

Admissible Heuristic

In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path.[1] An admissible heuristic is also known as an optimistic heuristic.

Search algorithms

An admissible heuristic is used to estimate the cost of reaching the goal state in an informed search algorithm. For a heuristic to be admissible for the search problem, the estimated cost must always be lower than or equal to the actual cost of reaching the goal state. The search algorithm uses the admissible heuristic to find an estimated optimal path to the goal state from the current node. For example, in A* search the evaluation function (where n is the current node) is:

f(n) = g(n) + h(n)

where
f(n) is the evaluation function,
g(n) is the cost from the start node to the current node, and
h(n) is the estimated cost from the current node to the goal.

h(n) is calculated using the heuristic function. With a non-admissible heuristic, the A* algorithm could overlook the optimal solution to a search problem due to an overestimation in f(n).
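As a sketch of the evaluation function f(n) = g(n) + h(n) above, here is a minimal A* implementation on an invented toy graph; the heuristic values are assumptions chosen so that h never exceeds the true remaining cost, i.e. h is admissible:

```python
# Minimal A* sketch using f(n) = g(n) + h(n); graph and heuristic invented.
import heapq

def a_star(start, goal, neighbors, h):
    """Return the cost of an optimal path, or None if the goal is unreachable."""
    open_heap = [(h(start), 0, start)]          # entries are (f, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g                            # admissible h => g is optimal
        if g > best_g.get(node, float("inf")):
            continue                            # stale heap entry, skip it
        for nxt, step_cost in neighbors(node):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt))
    return None

# Toy weighted graph; each heuristic value is <= the true remaining cost.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}.get
print(a_star("A", "D", lambda n: graph[n], h))  # prints 4 (A -> B -> C -> D)
```

Because h is admissible, the first time the goal is popped from the heap its g value is guaranteed to be the optimal path cost.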

Formulation

n is a node
h is a heuristic
h(n) is cost indicated by h to reach a goal from n
C(n) is the actual cost to reach a goal from n
h is admissible if
\forall n, h(n) \leq C(n)

Construction

An admissible heuristic can be derived from a relaxed version of the problem, or by information from pattern databases that store exact solutions to subproblems of the problem, or by using inductive learning methods.

Examples

Two different examples of admissible heuristics apply to the fifteen puzzle problem:


  • Hamming distance
  • Manhattan distance


The Hamming distance is the total number of misplaced tiles. It is clear that this heuristic is admissible since the total number of moves to order the tiles correctly is at least the number of misplaced tiles (each tile not in place must be moved at least once). The cost (number of moves) to the goal (an ordered puzzle) is at least the Hamming distance of the puzzle.

The Manhattan distance of a puzzle is defined as:

h(n) = Σ (over all tiles) distance(tile, correct position)

The Manhattan distance is an admissible heuristic because every tile will have to be moved at least the amount of spots in between itself and its correct position. Consider the puzzle below:

 4(3)  6(1)  3(0)  8(1)
 7(2) 12(3)  9(3) 14(4)
15(3) 13(2)  1(4)  5(4)
 2(4) 10(1) 11(1)

The numbers in parentheses give the Manhattan distance for each tile. The total Manhattan distance for the shown puzzle is:

h(n)=3+1+0+1+2+3+3+4+3+2+4+4+4+1+1=36
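Both heuristics can be computed directly; the sketch below reproduces the totals for the example puzzle (board encoding assumed: tiles 1-15 in reading order are the goal, with 0 marking the blank):

```python
# Sketch of the two fifteen-puzzle heuristics discussed above.

def hamming(board):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(board) if t != 0 and t != i + 1)

def manhattan(board, width=4):
    """Sum over tiles of row distance + column distance to the goal slot."""
    total = 0
    for i, t in enumerate(board):
        if t == 0:
            continue
        goal = t - 1                      # goal index of tile t (assumed layout)
        total += abs(i // width - goal // width) + abs(i % width - goal % width)
    return total

# The example puzzle from the text, row by row, blank in the last slot:
board = [ 4,  6,  3,  8,
          7, 12,  9, 14,
         15, 13,  1,  5,
          2, 10, 11,  0]
print(manhattan(board))   # prints 36, matching the total computed above
print(hamming(board))     # prints 14: only tile 3 is already in place
```

Note that Manhattan distance dominates Hamming distance here (36 vs. 14) while both remain admissible, which is why Manhattan distance usually guides A* to the goal faster on this puzzle.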

Notes

While all consistent heuristics are admissible, not all admissible heuristics are consistent. For tree search problems, if an admissible heuristic is used, the A* search algorithm will never return a suboptimal goal node.

References

Russell, S.J.; Norvig, P. (2002). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 0-13-790395-2.



ACROSS Project

ACROSS is a Singular Strategic R&D Project led by Treelogic[1] and funded by the Spanish Ministry of Industry, Tourism and Trade,[2] covering activities in the field of Robotics and Cognitive Computing over an execution time-frame from 2009 to 2011. The ACROSS project involves more than 100 researchers from 13 Spanish entities.

ACROSS project objectives

ACROSS aims to move the design of social robotics beyond providing predefined services, by means of intelligent systems. These systems are able to reconfigure themselves and modify their behavior autonomously through capacities for understanding, learning, and remote software access.
In order to provide an open framework for collaboration between universities, research centers and the Administration, ACROSS develops open-source services available to everybody.

Three application domains

ACROSS works in three application domains:


  1. Autonomous living: robots are used as technological tools to help disabled people with daily tasks.
  2. Psycho-Affective Disorders (autism): robots are used to mitigate cognitive disorders.
  3. Marketing: robots are used to interact with humans in a recreational approach.


Consortium


  1. Treelogic
  2. Alimerka
  3. Bizintek
  4. Universitat Politècnica de Catalunya
  5. University of Deusto
  6. European Centre for Soft Computing
  7. Fatronik - Tecnalia
  8. Fundació Hospital Comarcal Sant Antoni Abat
  9. Fundación Pública Andaluza para la Gestión de la Investigación en Salud de Sevilla, "Virgen del Rocío" University Hospitals
  10. m-BOT
  11. Omicron Electronic
  12. Universidad de Extremadura - RoboLab
  13. Verbio Technologies

Handwriting recognition

Handwriting recognition (or HWR[1]) is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touch-screens and other devices. The image of the written text may be sensed "off line" from a piece of paper by optical scanning (optical character recognition) or intelligent word recognition. Alternatively, the movements of the pen tip may be sensed "on line", for example by a pen-based computer screen surface.

Handwriting recognition principally entails optical character recognition. However, a complete handwriting recognition system also handles formatting, performs correct segmentation into characters and finds the most plausible words.

Off-line recognition

Off-line handwriting recognition involves the automatic conversion of text in an image into letter codes which are usable within computer and text-processing applications. The data obtained by this form is regarded as a static representation of handwriting. Off-line handwriting recognition is comparatively difficult, as different people have different handwriting styles. As of today, OCR engines are primarily focused on machine-printed text, and ICR on hand-"printed" (written in capital letters) text; no OCR/ICR engine yet supports general handwriting recognition.

Problem domain reduction techniques

Narrowing the problem domain often helps increase the accuracy of handwriting recognition systems. A form field for a U.S. ZIP code, for example, would contain only the characters 0-9. This fact would reduce the number of possible identifications.

Primary techniques:

Specifying specific character ranges
Utilization of specialized forms
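As an illustrative sketch of domain reduction (the classifier scores below are invented stand-ins, not the output of any real engine): restricting the recognizer's alphabet to the characters a field can legally contain resolves ambiguities for free.

```python
# Sketch of character-range restriction for a constrained form field.

def best_match(scores, allowed):
    """Pick the highest-scoring character among the allowed set only."""
    return max(allowed, key=lambda ch: scores.get(ch, 0.0))

# A handwritten glyph the classifier finds ambiguous between 'O' and '0'
# (invented confidence scores):
scores = {"O": 0.48, "0": 0.45, "D": 0.05}

print(best_match(scores, allowed=set(scores)))            # unrestricted: 'O'
print(best_match(scores, allowed=set("0123456789")))      # ZIP field: '0'
```

The unrestricted recognizer would output the letter "O", but knowing the field is a U.S. ZIP code eliminates every non-digit candidate before the decision is made.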

Character extraction

Off-line character recognition often involves scanning a form or document written sometime in the past. This means the individual characters contained in the scanned image will need to be extracted. Tools exist that are capable of performing this step. However, there are several common imperfections in this step. The most common is when characters that are connected are returned as a single sub-image containing both characters. This causes a major problem in the recognition stage. Yet many algorithms are available that reduce the risk of connected characters.

Character recognition

After the extraction of individual characters occurs, a recognition engine is used to identify the corresponding computer character. Several different recognition techniques are currently available.

Neural networks

Neural network recognizers learn from an initial image training set. The trained network then makes the character identifications. Each neural network uniquely learns the properties that differentiate training images. It then looks for similar properties in the target image to be identified. Neural networks are quick to set up; however, they can be inaccurate if they learn properties that are not important in the target data.

Feature extraction

Feature extraction works in a similar fashion to neural network recognizers; however, programmers must manually determine the properties they feel are important.

Some example properties might be:

Aspect Ratio
Percent of pixels above horizontal half point
Percent of pixels to right of vertical half point
Number of strokes
Average distance from image center
Reflection symmetry about the y-axis
Reflection symmetry about the x-axis

This approach gives the recognizer more control over the properties used in identification. Yet any system using this approach requires substantially more development time than a neural network.
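A few of the listed features can be sketched directly on a binary glyph image (the row-of-0/1-pixels representation and the example glyph are invented for illustration, not taken from any real system):

```python
# Sketch: compute a handful of the listed features from a binary image
# given as a list of rows of 0/1 pixels.

def extract_features(img):
    h, w = len(img), len(img[0])
    ink = [(r, c) for r in range(h) for c in range(w) if img[r][c]]
    n = len(ink)
    return {
        "aspect_ratio": w / h,
        # Fraction of ink pixels above the horizontal midline:
        "frac_above_half": sum(1 for r, c in ink if r < h / 2) / n,
        # Fraction of ink pixels right of the vertical midline:
        "frac_right_of_half": sum(1 for r, c in ink if c >= w / 2) / n,
        # Mean distance of ink pixels from the image centre:
        "mean_dist_from_centre": sum(((r - (h - 1) / 2) ** 2 +
                                      (c - (w - 1) / 2) ** 2) ** 0.5
                                     for r, c in ink) / n,
    }

# A crude 5x4 "L" shape:
glyph = [[1, 0, 0, 0],
         [1, 0, 0, 0],
         [1, 0, 0, 0],
         [1, 0, 0, 0],
         [1, 1, 1, 1]]
feats = extract_features(glyph)
print(feats["frac_right_of_half"])  # most ink sits left of centre in an "L"
```

A downstream classifier would compare such feature vectors against those of known characters; the point of the hand-chosen features is that each one is interpretable by the programmer.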

On-line recognition

On-line handwriting recognition involves the automatic conversion of text as it is written on a special digitizer or PDA, where a sensor picks up the pen-tip movements as well as pen-up/pen-down switching. This kind of data is known as digital ink and can be regarded as a digital representation of handwriting. The obtained signal is converted into letter codes which are usable within computer and text-processing applications.

The elements of an on-line handwriting recognition interface typically include:

  • a pen or stylus for the user to write with.
  • a touch sensitive surface, which may be integrated with, or adjacent to, an output display.
  • a software application which interprets the movements of the stylus across the writing surface, translating the resulting strokes into digital text.


General process

The process of online handwriting recognition can be broken down into a few general steps:

The purpose of preprocessing is to discard irrelevant information in the input data that could negatively affect recognition speed and accuracy. Preprocessing usually consists of binarization, normalization, sampling, smoothing and denoising. The second step is feature extraction: from the two- or more-dimensional vector field received from the preprocessing algorithms, higher-dimensional data is extracted, highlighting the information most important to the recognition model. This data may include pen pressure, velocity, or changes of writing direction. The last big step is classification, in which various models map the extracted features to different classes, thus identifying the characters or words the features represent.
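Two of the preprocessing steps named above, smoothing and sampling, can be sketched on a pen trajectory of (x, y) points (the stroke data below is invented for illustration):

```python
# Sketch of two common on-line preprocessing steps on a pen trajectory.

def smooth(points, window=3):
    """Moving-average smoothing to suppress digitizer jitter."""
    half = window // 2
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out

def resample(points, step):
    """Keep only points roughly `step` apart along the path, making the
    representation less dependent on writing speed."""
    out = [points[0]]
    for p in points[1:]:
        dx, dy = p[0] - out[-1][0], p[1] - out[-1][1]
        if (dx * dx + dy * dy) ** 0.5 >= step:
            out.append(p)
    return out

stroke = [(0, 0), (0.4, 0.1), (1.1, 0.0), (1.9, 0.2), (2.2, 0.1), (3.0, 0.0)]
print(len(resample(smooth(stroke), step=1.0)))  # fewer, more evenly spaced points
```

Real systems layer further steps (binarization for images, denoising, size normalization) on the same principle: strip variation that carries no information about which character was written.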

Research

Handwriting recognition has an active community of academics studying it. The biggest conferences for handwriting recognition are the International Conference on Frontiers in Handwriting Recognition (ICFHR), held in even-numbered years, and the International Conference on Document Analysis and Recognition (ICDAR), held in odd-numbered years. Both of these conferences are endorsed by the IEEE. Active areas of research include:

  • Online Recognition
  • Offline Recognition
  • Signature Verification
  • Postal-Address Interpretation
  • Bank-Check Processing
  • Writer Recognition

Sunday 29 September 2013

OLPC XO-1

The XO-1, previously known as the $100 Laptop,[2] Children's Machine,[3] and 2B1,[4] is an inexpensive subnotebook computer intended to be distributed to children in developing countries around the world,[5] to provide them with access to knowledge, and opportunities to "explore, experiment and express themselves" (constructionist learning).[6] The laptop is developed by the One Laptop per Child (OLPC) non-profit, 501(c)(3) organization and manufactured by Quanta Computer.

The subnotebooks are designed for sale to government-education systems, which then give each primary school child their own laptop. Pricing was set to start at $188 in 2006, with a stated goal to reach the $100 mark in 2008 and the $50 mark by 2010.[7] When offered for sale in the Give One, Get One campaigns of Q4 2006 and Q4 2007, the laptop was sold at $199.[8]

These rugged, low-power computers use flash memory instead of a hard drive, and come with a distribution of Linux derived from Red Hat's Fedora as their pre-installed operating system with the new Sugar GUI.[9] Mobile ad hoc networking via 802.11s WiFi mesh networking protocol is used to allow many machines to share Internet access as long as at least one of them can see and connect to a router or other access point.
The XO-1 is also nicknamed ceibalita in Uruguay after the Ceibal project.



Design

The XO-1 is designed to be low-cost, small, durable, and efficient. It is shipped with a slimmed-down version of Fedora Linux and a GUI named Sugar that is intended to help young children collaborate. The XO-1 includes a video camera, a microphone, long-range Wi-Fi, and a hybrid stylus/touch pad. In addition to a standard plug-in power supply, human-power and solar-power sources are available, allowing operation far from a commercial power grid.

Various use models had been explored by OLPC with the help of Design Continuum and Fuseproject, including laptop, e-book, theatre, simulation, tote, and tablet architectures. The current design, by Fuseproject, uses a transformer hinge to morph between laptop, e-book, and router modes.

Hardware

The latest version of the OLPC XO is the XO-4. Specifications for older builds are documented in the Major builds section, below. Data about the XO-1 comes from its hardware specification.

XO-1:

  • CPU: 433 MHz x86 AMD Geode LX-700 at 0.8 watts, with integrated graphics controller
  • 256 MB of dual (DDR266) 133 MHz DRAM (in 2006 the specification called for 128 MB of RAM)[29]
  • 1024 kB (1 MB) flash ROM with open-source Open Firmware
  • 1024 MB of SLC NAND flash memory (in 2006 the specification called for 512 MB)[30]
  • Average battery life: 3 hrs

XO 1.5:[31]

  • Release date: early 2010
  • CPU: 400–1000 MHz x86 VIA C7 at 0.8 watts, with integrated graphics controller
  • 512–1024 MB of dual (DDR266) 133 MHz DRAM
  • 1024 kB (1 MB) flash ROM with open-source Open Firmware
  • 4 GB of SLC NAND flash memory (upgradable via microSD)
  • Average battery life: 3–5 hrs (varies with active suspend)

XO 1.75:[32][33]

  • Release date: TBD, late 2011?
  • CPU: 400–1000 MHz ARM Marvell Armada 610 at 0.8 watts, with integrated graphics controller
  • 1024–2048 MB of DDR3 (TBD)
  • 1024 kB (1 MB) flash ROM (TBD) with open-source Open Firmware
  • 4–8 GB of SLC NAND flash memory (upgradable via microSD)
  • Accelerometer
  • Average battery life: 5–10 hrs

Screen:

  • 1200×900 7.5-inch (19 cm) diagonal LCD (200 dpi) that uses 0.1 to 1.0 W depending on mode. The two modes are:
  • Reflective (backlight off) monochrome mode for low-power use in sunlight; this mode provides very sharp images for high-quality text.
  • Backlit color mode, with an alternation of red, green and blue pixels. See below for details.
  • The XO 1.75 developmental version for the XO-3 has an optional touch screen.

Storage:

  • Internal SD card slot[34]

Wireless:

  • Wireless networking using an "Extended Range" 802.11b/g and 802.11s (mesh) Marvell 8388 wireless chip, chosen for its ability to autonomously forward packets in the mesh even if the CPU is powered off. When connected in a mesh, it runs at a low bitrate (2 Mbit/s) to minimize power consumption. Despite the wireless chip's minimalism, it supports WPA.[35] An ARM processor is included.
  • Dual adjustable antennas for diversity reception.

Inputs/Ports:

  • Water-resistant membrane keyboard, customized to the locale in which it will be distributed.[36] The multiplication and division symbols are included, and the keyboard is designed for the small hands of children.
  • Five-key cursor-control pad: four directional keys plus Enter
  • Four "Game Buttons" (functionally PgUp, PgDn, Home, and End) modeled after the PlayStation controller layout (Triangle, Circle, Cross, and Square)
  • Touchpad for mouse control and handwriting input
  • Built-in color camera, to the right of the display, VGA resolution (640×480)
  • Built-in stereo speakers
  • Built-in microphone
  • Audio based on the AC'97 codec, with jacks for external stereo speakers and microphones, Line-out, and Mic-in
  • 3 external USB 2.0 ports

Power sources:

  • DC input, ±11–18 V, maximum 15 W power draw
  • 5-cell rechargeable NiMH battery pack, 3000 mAh minimum, 3050 mAh typical, 80% usable; charge at 0 to 45 °C (deprecated in 2009)
  • 2-cell rechargeable LiFePO4 battery pack, 2800 mAh minimum, 2900 mAh typical, 100% usable; charge at 0 to 60 °C
  • 4-cell rechargeable LiFePO4 battery pack, 3100 mAh minimum, 3150 mAh typical, 100% usable; charge at −10 to 50 °C
  • External manual power options included a clamp-on crank generator similar to the original built-in one (see photo in the Gallery, below), but these generated a quarter of the power initially hoped for, and fewer than a thousand were produced. A pull-string generator was also designed by Potenco[37] but never mass-produced.
  • External power options include 110–240 V AC as well as input from an external solar panel.[38] Solar is the predominant alternate power source for schools using XOs.


Major builds

The XO-1 has major builds indicated by build numbers or fractional increments of the version number. The changes made in each build are documented in this section. Other versions (OLPC XO-3) are documented in other articles.

The hardware specifications that were different in older versions of the XO-1 are listed below.

XO-1:

  • XO prototype, displayed in 2005. Power option: built-in hand-crank generator.
  • XO-1 beta, released in early 2007. Power option: separate hand-crank generator.
  • XO-1, released in late 2007.[39] Power option: solar panel.
  • XO 1.5, released in early 2010. VIA x86 CPU (4.5 W); fewer physical parts; lower power consumption. Power option: solar panel.
  • XO 1.75, release date TBD (late 2011?). Slated to have a 2-watt ARM CPU, fewer physical parts and 40% lower power consumption. Power option: solar panel.[40]

XO 2:

  • XO 2, previously scheduled for release in 2010, was canceled in favor of the XO 3. Price target $75. Elegant, lighter, folding dual touch-screen design (see photo in the Gallery section, below). Hardware would have been open-source and sold by various manufacturers, with a choice of OS (Windows XP or Linux) outside of the US. The $150 price target in the USA included two computers, one donated.[41]

XO 3:

  • XO 3, canceled in favor of the XO-4, had been scheduled for release in late 2012, with a single solid multi-touch screen design. For details see OLPC XO-3. Power option: solar panel in cover or carrying case.

XO 4:

  • The XO 4 is a refresh of the XO 1 through 1.75 with a later ARM CPU and an optional touch screen. This model will not be available for consumer sales. A mini HDMI port allows connecting to a display.[42]

XO Tablet:

  • The XO Tablet was designed by the third party Vivitar, rather than OLPC, and is based on the Android platform,[citation needed] whereas all previous XO models were based on Sugar running on top of Fedora.

Display

The first-generation OLPC laptops have a novel low-cost LCD. Later generations of the OLPC laptop are expected to use low-cost, low-power and high-resolution color displays with an appearance similar to electronic paper.

The display is the most expensive component in most laptops. In April 2005, Negroponte hired Mary Lou Jepsen—who was interviewing to join the Media Arts and Sciences faculty at the MIT Media Lab in September 2008—as OLPC Chief Technology Officer. Jepsen developed a new display for the first-generation OLPC laptop, inspired by the design of small LCDs used in portable DVD players, which she estimated would cost about $35. In the OLPC XO-1, the screen is estimated to be the second most expensive component (after the CPU and chipset).

Jepsen has described the removal of the filters that color the RGB subpixels as the critical design innovation in the new LCD. Instead of using subtractive color filters, the display uses a plastic diffraction grating and lenses on the rear of the LCD to illuminate each pixel. This grating pattern is stamped using the same technology used to make DVDs. The grating splits the light from the white backlight into a spectrum. The red, green and blue components are diffracted into the correct positions to illuminate the corresponding pixel with R, G or B.

This innovation results in a much brighter display for a given amount of backlight illumination: while the color filters in a regular display typically absorb 85% of the light that hits them, this display absorbs little of that light. Most LCD screens use cold cathode fluorescent lamp backlights which are fragile, difficult or impossible to repair, require a high voltage power supply, are relatively power-hungry, and account for 50% of the screens' cost (sometimes 60%). The LED backlight in the XO-1 is easily replaceable, rugged, and inexpensive.

The remainder of the LCD uses existing display technology and can be made using existing manufacturing equipment. Even the masks can be made using combinations of existing materials and processes.

Ust-Ilimsk Hydroelectric Power Station

The Ust-Ilimsk Hydroelectric Power Station (Ust-Ilimsk HPS) is a concrete gravity dam on the Angara River and adjacent hydroelectric power station. It is located near Ust-Ilimsk, Irkutsk Oblast in Russia and is the third and last dam on the Angara cascades. Construction on the dam began in 1963, its reservoir began filling in 1974 and its power plant was commissioned in 1980.

Background

Between 1951 and 1955, construction of the Ust-Ilimsk HPS was designated as a priority and in September 1960, the State Commission determined the most suitable spot for the dam. It would be constructed on the Angara River, 20 km (12 mi) below the mouth of the Ilim River. Gidroproekt All-Union Design and Exploratory Institute produced the design of the HPS and on June 8, 1962, the Central Committee of the CPSU and Ministerial Council of the USSR determined the schedule of construction and the project's scope.

Construction

Construction on Stage I of the HPS began in 1963. This included preparing the dam's foundation, various construction facilities and 220 kV power lines. In addition, the village of Ust-Ilimsk was founded to house workers on the project. By 1966, the Bratsk–Ust-Ilimsk motorway was opened to traffic, and in March of that year construction began on the power plant itself. On April 22, 1968, the first concrete was poured into the dam's foundation, and on October 3, 1974, the dam began to impound the Angara River, creating its reservoir. The dam was accepted for industrial operation in 1980.

Generation

By 1981, the HPS had generated its first 100 billion kWh, a figure that had doubled by 1986 and doubled again, to 400 billion kWh, by 1995. By October 1, 2005, it had produced 600 billion kWh of electricity.[2] For comparison, in 2008 the U.S. residential and commercial sectors consumed about 517 billion kWh for lighting.[3] On average, the station produces 21.7 billion kWh annually and utilizes its installed capacity for 5,050 hours out of the 8,760 in a year.

Specifications

The main dam is 1,475 m (4,839 ft) long and 105 m (344 ft) high, with a spillway 242 m (794 ft) in length. It is flanked by two earth-fill auxiliary dams: the one on the left (west) bank is 1,710 m (5,610 ft) long and 28 m (92 ft) high; on the right (east) bank, the auxiliary dam is 538 m (1,765 ft) long and 47 m (154 ft) high. The power station, located on the right (east) bank, is 440 m (1,440 ft) long and houses 16 turbines with an installed capacity of 3,840 MW. The power station is designed to support another two turbines which, if installed, would bring its capacity up to 4,320 MW.
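The capacity figures above are internally consistent, which is easy to check: 3,840 MW across 16 turbines implies 240 MW per unit, and 18 units of that rating give exactly the quoted 4,320 MW.

```python
# Check the capacity figures quoted above: 16 turbines totalling
# 3,840 MW imply 240 MW per unit, and two more units of the same
# rating would raise the total to 4,320 MW.
installed_mw = 3840
turbines = 16
per_turbine_mw = installed_mw / turbines        # 240.0 MW per unit
expanded_mw = per_turbine_mw * (turbines + 2)   # 4320.0 MW with 18 units
print(per_turbine_mw, expanded_mw)
```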

OpenBSD

OpenBSD is a Unix-like computer operating system descended from Berkeley Software Distribution (BSD), a Unix derivative developed at the University of California, Berkeley. It was forked from NetBSD by project leader Theo de Raadt in late 1995. As well as the operating system, the OpenBSD Project has produced portable versions of numerous subsystems, most notably PF, OpenSSH and OpenNTPD, which are very widely available as packages in other operating systems.

The project is also widely known for the developers' insistence on open-source code and quality documentation, uncompromising position on software licensing, and focus on security and code correctness. The project is coordinated from de Raadt's home in Calgary, Alberta, Canada. Its logo and mascot is a pufferfish named Puffy.

OpenBSD includes a number of security features absent or optional in other operating systems, and has a tradition in which developers audit the source code for software bugs and security problems. The project maintains strict policies on licensing and prefers the open-source BSD licence and its variants—in the past this has led to a comprehensive license audit and moves to remove or replace code under licences found less acceptable.

As with most other BSD-based operating systems, the OpenBSD kernel and userland programs, such as the shell and common tools like cat and ps, are developed together in one source code repository. Third-party software is available as binary packages or may be built from source using the ports tree. Also like most modern BSD operating systems, it is capable of running binary code compiled for Linux in a compatible computer architecture at full speed in compatibility mode.

The OpenBSD project maintains ports for 20 different hardware platforms, including the DEC Alpha, Intel i386, Hewlett-Packard PA-RISC, x86-64 and Motorola 68000 processors, Apple's PowerPC machines, Sun SPARC and SPARC64-based computers, the VAX and the Sharp Zaurus.

Security

OpenBSD's security enhancements, built-in cryptography and the pf packet filter suit it for use in the security industry, for example on firewalls, intrusion-detection systems and VPN gateways.


Proprietary systems from several manufacturers are based on OpenBSD, including devices from Armorlogic (Profense web application firewall), Calyptix Security, GeNUA mbH, RTMX Inc, and .vantronix GmbH.
Later versions of Microsoft's Services for UNIX, an extension to the Windows operating system which provides some Unix-like functionality, incorporate much OpenBSD code through the Interix interoperability suite, developed by Softway Systems Inc., which Microsoft acquired in 1999. Core Force, a security product for Windows, is based on OpenBSD's pf firewall.

Desktop

OpenBSD ships with the X window system and is suitable for use on the desktop.[11] Packages for popular desktop tools are available, including the desktop environments GNOME, KDE, and Xfce; the web browsers Konqueror, Mozilla Firefox and Chromium; and the multimedia programs MPlayer, VLC media player and xine. The project also supports minimalist window-management philosophies by including the cwm stacking window manager in the main distribution.

Enterprise

The open-source software consultancy M:tier has deployed OpenBSD on servers, desktops and firewalls in the corporate environments of many large Fortune 500 companies.

Server

OpenBSD features a full server suite and is easily configured as a mail server, web server, ftp server, DNS server, router, firewall, or NFS file server. Software providing support for other server protocols such as SMB (Samba) is available as packages.

OpenBSD component projects

Despite the small team size and relatively low usage of OpenBSD, the project has successfully spun off widely available portable versions of numerous parts of the base system, including:


  • OpenBGPD, a free implementation of the Border Gateway Protocol 4 (BGP-4)
  • OpenOSPFD, a free implementation of the Open Shortest Path First (OSPF) routing protocol
  • OpenNTPD, a simple alternative to ntp.org's Network Time Protocol (NTP) daemon
  • OpenSMTPD, a free Simple Mail Transfer Protocol (SMTP) daemon with IPv4/IPv6, PAM, Maildir and virtual-domain support
  • OpenSSH, a highly regarded implementation of the Secure Shell (ssh) protocol
  • OpenIKED, a free implementation of the Internet Key Exchange (IKEv2) protocol
  • Common Address Redundancy Protocol (CARP), a free alternative to Cisco's patented HSRP/VRRP server redundancy protocols
  • PF, an IPv4/IPv6 stateful firewall with NAT, PAT, QoS and traffic normalization support
  • pfsync, a firewall-state synchronization protocol for the PF firewall with high-availability support using CARP
  • spamd, a spam filter with greylisting capability designed to inter-operate with the PF firewall
  • tmux, a free, secure and maintainable alternative to the GNU Screen terminal multiplexer
  • sndio, a compact audio and MIDI framework
  • Xenocara, a customized X.Org build infrastructure
  • cwm, a stacking window manager

Some of these subsystems have been integrated into the core systems of several other BSD projects, and all are widely available as packages for use in other Unix-like systems, and in some cases in Microsoft Windows.


Development and release process

Development is continuous, and team management is open and tiered. Anyone with appropriate skills may contribute, with commit rights being awarded on merit and de Raadt acting as coordinator.[14] Two official releases are made per year, with the version number incremented by 0.1,[15] and these are each supported for twelve months. Snapshot releases are also available at very frequent intervals. Maintenance patches for supported releases may be applied manually or by regularly updating the system against the patch branch of the CVS repository for that release.

Alternatively a system administrator may opt to upgrade using a snapshot release and then regularly update the system against the "current" branch of the CVS repository, in order to gain pre-release access to recently added features.

The standard GENERIC OpenBSD kernel, as maintained by the project, is strongly recommended for universal use, and customized kernels are not supported by the project, in line with the philosophy that 'attempts to customize or "optimize" the kernel causes more problems than they solve.'

Packages outside the main system build are maintained by CVS through a ports tree and are the responsibility of the individual maintainers (known as porters). As well as keeping the current branch up to date, the porter of a package is expected to apply appropriate bug-fixes and maintenance fixes to branches of the package for supported releases. Ports are not subject to the same continuous rigorous auditing as the main system because the project lacks the manpower to do this.

Binary packages are built centrally from the ports tree for each architecture. This process is applied for the current version, for each supported release, and for each snapshot. Administrators are recommended to use the package mechanism rather than build the package from the ports tree, unless they need to perform their own source changes.

Every new release is also accompanied by a song.

Licensing

A goal of the OpenBSD project is to "maintain the spirit of the original Berkeley Unix copyrights", which permitted a "relatively un-encumbered Unix source distribution". To this end, the Internet Systems Consortium (ISC) licence, a simplified version of the BSD licence with wording removed that is unnecessary under the Berne convention, is preferred for new code, but the MIT or BSD licences are accepted. The widely used GNU General Public License is considered overly restrictive in comparison with these.

In June 2001, triggered by concerns over Darren Reed's modification of IPFilter's licence wording, a systematic licence audit of the OpenBSD ports and source trees was undertaken. Code in more than a hundred files throughout the system was found to be unlicensed, ambiguously licensed or in use against the terms of the licence. To ensure that all licences were properly adhered to, an attempt was made to contact all the relevant copyright holders: some pieces of code were removed, many were replaced, and others, including the multicast routing tools mrinfo and map-mbone, which were licensed by Xerox for research only, were relicensed so that OpenBSD could continue to use them. Also removed during this audit was all software produced by Daniel J. Bernstein.

At the time, Bernstein requested that all modified versions of his code be approved by him prior to redistribution, a requirement to which OpenBSD developers were unwilling to devote time or effort. The removal led to a clash with Bernstein who felt the removal of his software to be uncalled for. He cited the Netscape web browser as much less freely licensed and accused the OpenBSD developers of hypocrisy for permitting Netscape to remain while removing his software. The OpenBSD project's stance was that Netscape, although not open source, had licence conditions that could be more easily met. They asserted that Bernstein's demand for control of derivatives would lead to a great deal of additional work and that removal was the most appropriate way to comply with his requirements.

The OpenBSD team has developed software from scratch, or adopted suitable existing software, because of licence concerns. Of particular note is the development, after licence restrictions were imposed on IPFilter, of the pf packet filter, which first appeared in OpenBSD 3.0 and is now available in DragonFly BSD, NetBSD and FreeBSD. OpenBSD developers have also replaced GPL licensed tools (such as diff, grep and pkg-config) with BSD licensed equivalents and founded new projects including the OpenBGPD routing daemon and OpenNTPD time service daemon. Also developed from scratch was the globally used software package OpenSSH.

Distribution and marketing

OpenBSD is available freely in various ways: the source can be retrieved by anonymous CVS, and binary releases and development snapshots can be downloaded by FTP, HTTP, rsync or AFS. Prepackaged CD-ROM sets can be ordered online for a small fee, complete with an assortment of stickers and a copy of the release's theme song. These, with their artwork and other bonuses, are one of the project's few sources of income, funding hardware, bandwidth and other expenses.

In common with other operating systems, OpenBSD provides a package management system for easy installation and management of programs which are not part of the base operating system. Packages are binary files which are extracted, managed and removed using the package tools. On OpenBSD, the source of packages is the ports system, a collection of Makefiles and other infrastructure required to create packages. In OpenBSD, the ports and base operating system are developed and released together for each version: this means that the ports or packages released with, for example, 4.6 are not suitable for use with 4.5 and vice versa.

OpenBSD at first used the BSD daemon mascot created by Phil Foglio, updated by John Lasseter and copyrighted by Marshall Kirk McKusick. Subsequent releases saw variations, eventually settling on Puffy, described as a pufferfish. Since then Puffy has appeared on OpenBSD promotional material and featured in release songs and artwork. The promotional material of early OpenBSD releases did not have a cohesive theme or design, but later the CD-ROMs, release songs, posters and tee-shirts for each release have been produced with a single style and theme, sometimes contributed to by Ty Semaka of the Plaid Tongued Devils. These have become a part of OpenBSD advocacy, with each release expounding a moral or political point important to the project, often through parody.

Past themes have included: in OpenBSD 3.8, the Hackers of the Lost RAID, a parody of Indiana Jones linked to the new RAID tools featured as part of the release; The Wizard of OS, making its debut in OpenBSD 3.7, based on the work of Pink Floyd and a parody of The Wizard of Oz related to the project's recent wireless work; and OpenBSD 3.3's Puff the Barbarian, including an 80s rock-style song and parody of Conan the Barbarian, alluding to open documentation.

Sources:

  • Absolute OpenBSD, 2nd Edition by Michael W. Lucas. ISBN 978-1-59327-476-4
  • The OpenBSD Command-Line Companion, 1st ed. by Jacek Artymiak. ISBN 83-916651-8-6.
  • Building Firewalls with OpenBSD and PF: Second Edition by Jacek Artymiak. ISBN 83-916651-1-9.
  • Mastering FreeBSD and OpenBSD Security by Yanek Korff, Paco Hope and Bruce Potter. ISBN 0-596-00626-8.
  • Absolute OpenBSD, Unix for the Practical Paranoid by Michael W. Lucas. ISBN 1-886411-99-9.
  • Secure Architectures with OpenBSD by Brandon Palmer and Jose Nazario. ISBN 0-321-19366-0.
  • The OpenBSD PF Packet Filter Book: PF for NetBSD, FreeBSD, DragonFly and OpenBSD published by Reed Media Services. ISBN 0-9790342-0-5.
  • Building Linux and OpenBSD Firewalls by Wes Sonnenreich and Tom Yates. ISBN 0-471-35366-3.
  • The OpenBSD 4.0 Crash Course by Jem Matzan. ISBN 0-596-51015-2.
  • The Book of PF: A No-Nonsense Guide to the OpenBSD Firewall, 2nd edition by Peter N.M. Hansteen. ISBN 978-1-59327-274-6.

Voyager 1

Voyager 1 is a 722-kilogram (1,592 lb) space probe launched by NASA on September 5, 1977 to study the outer Solar System. Operating for 36 years and 23 days as of September 28, 2013, the spacecraft communicates with the Deep Space Network to receive routine commands and return data. At a distance of about 125.75 AU (1.881×10¹⁰ km) from the Sun as of 28 September 2013,[3][4] it is the farthest man-made object from Earth.

The primary mission ended on November 20, 1980, after encounters with the Jovian system in 1979 and the Saturnian system in 1980. It was the first probe to provide detailed images of the two planets and their moons. As part of the Voyager program, like its sister craft Voyager 2, the spacecraft is in an extended mission to locate and study the regions and boundaries of the outer heliosphere, and finally to begin exploring the interstellar medium.

On September 12, 2013, NASA announced that Voyager 1 had crossed the heliopause and entered interstellar space on August 25, 2012, making it the first man-made object to do so.[6][7][8][9][10][11] As of 2013, the probe was moving at a velocity of about 17 km/s relative to the Sun.[12] The probe is expected to continue its mission until 2025, when its generators will no longer supply enough power to operate any of its instruments.

Spacecraft design

Voyager 1 was constructed by the Jet Propulsion Laboratory. It has 16 hydrazine thrusters, three-axis stabilization gyroscopes, and referencing instruments (a Sun sensor and the Canopus Star Tracker) to keep the probe's radio antenna pointed toward Earth. Collectively, these instruments are part of the Attitude and Articulation Control Subsystem (AACS), along with redundant units of most instruments and 8 backup thrusters. The spacecraft also carries 11 scientific instruments to study celestial objects such as planets as it travels through space.

Communication system

The radio communication system of Voyager 1 was designed to be used up to and beyond the limits of the Solar System. The communication system includes a 3.7-meter (12 ft) diameter parabolic high-gain dish antenna to send and receive radio waves via the three Deep Space Network stations on Earth.[19] Voyager 1 normally transmits data to Earth over Deep Space Network Channel 18, using a frequency of either 2296.481481 MHz or 8420.432097 MHz, while signals from Earth to Voyager are broadcast at 2114.676697 MHz.

When Voyager 1 is unable to communicate directly with the Earth, its digital tape recorder (DTR) can record up to 69.63 kilobytes of data for transmission at another time.[21] As of 2013, signals from Voyager 1 take over 17 hours to reach Earth.
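The 17-hour figure follows directly from the distance quoted earlier (about 125.75 AU as of late September 2013) and the speed of light; a quick check:

```python
# One-way light travel time from Voyager 1 to Earth at the distance
# quoted in the text (~125.75 AU as of 28 September 2013).
AU_KM = 149_597_870.7   # kilometres per astronomical unit
C_KM_S = 299_792.458    # speed of light in km/s

distance_km = 125.75 * AU_KM               # ~1.881e10 km, as stated above
light_time_h = distance_km / C_KM_S / 3600
print(round(light_time_h, 1))              # ~17.4 hours one way
```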

Power

Voyager 1 has three radioisotope thermoelectric generators (RTGs) mounted on a boom. Each MHW-RTG contains 24 pressed plutonium-238 oxide spheres. The RTGs generated about 470 watts of electric power at the time of launch, with the remainder dissipated as waste heat.[22] The power output of the RTGs declines over time (due to the 87.7-year half-life of the fuel and the degradation of the thermocouples), but the RTGs of Voyager 1 will continue to support some of its operations until around 2025.
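The decline from radioactive decay alone can be sketched from the 87.7-year half-life and the ~470 W launch figure quoted above; the real output falls faster, because the thermocouples that convert heat to electricity also degrade.

```python
# Power remaining from plutonium-238 decay alone, starting from the
# ~470 W at launch (1977) quoted in the text. Actual output declines
# faster because the thermocouples degrade as well.
import math  # not strictly needed for 0.5**x, kept for clarity

P0 = 470.0        # watts of electric power at launch, per the text
HALF_LIFE = 87.7  # years, half-life of Pu-238

def rtg_power(years_since_launch):
    """Decay-only estimate of RTG electrical output."""
    return P0 * 0.5 ** (years_since_launch / HALF_LIFE)

print(round(rtg_power(36)))  # decay-only estimate for 2013, ~354 W
```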

Computers

Unlike the other onboard instruments, the operation of the cameras for visible light is not autonomous, but rather it is controlled by an imaging parameter table contained in one of the on-board digital computers, the Flight Data Subsystem (FDS). More recent space probes, since about 1990, usually have completely autonomous cameras.

The computer command subsystem (CCS) controls the cameras. The CCS contains fixed computer programs such as command-decoding, fault-detection and correction routines, antenna-pointing routines, and spacecraft-sequencing routines. This computer is an improved version of the one used in the Viking orbiter.[24] The hardware in both custom-built CCS subsystems in the Voyagers is identical; the only difference is a minor software modification on the Voyager that carries a scientific subsystem the other lacks.

The Attitude and Articulation Control Subsystem (AACS) controls the spacecraft orientation (its attitude). It keeps the high-gain antenna pointing towards the Earth, controls attitude changes, and points the scan platform. The custom-built AACS systems on both Voyagers are identical.

Encounter with Jupiter

Voyager 1 began photographing Jupiter in January 1979. Its closest approach to Jupiter was on March 5, 1979, at a distance of about 349,000 kilometers (217,000 mi) from the planet's center.[33] Because of the greater photographic resolution allowed by a closer approach, most observations of the moons, rings, magnetic fields, and the radiation belt environment of the Jovian system were made during the 48-hour period that bracketed the closest approach. Voyager 1 finished photographing the Jovian system in April 1979.

The discovery of active volcanism on the satellite Io was probably the greatest surprise. It was the first time active volcanoes had been seen on another body in the Solar System. It appears that activity on Io affects the entire Jovian system: Io appears to be the primary source of the matter that pervades the Jovian magnetosphere, the region of space surrounding the planet that is influenced by its strong magnetic field. Sulfur, oxygen, and sodium, apparently erupted by Io's volcanoes and sputtered off the surface by the impact of high-energy particles, were detected at the outer edge of Jupiter's magnetosphere.

The two Voyager space probes made a number of important discoveries about Jupiter, its satellites, its radiation belts, and its never-before-seen planetary rings. The most surprising discovery in the Jovian system was the existence of volcanic activity on the moon Io, which had not been observed either from the ground, or by Pioneer 10 or Pioneer 11.

Encounter with Saturn

The gravitational assist trajectories at Jupiter were successfully carried out by both Voyagers, and the two spacecraft went on to visit Saturn and its system of moons and rings. Voyager 1's Saturnian flyby occurred in November 1980, with the closest approach on November 12, 1980, when the space probe came within 124,000 kilometers (77,000 mi) of Saturn's cloud-tops. The space probe's cameras detected complex structures in the rings of Saturn, and its remote sensing instruments studied the atmospheres of Saturn and its giant moon Titan.

Voyager 1 found that about 7 percent of the volume of Saturn's upper atmosphere is helium (compared with 11 percent of Jupiter's atmosphere), while almost all the rest is hydrogen. Since Saturn's internal helium abundance was expected to be the same as Jupiter's and the Sun's, the lower abundance of helium in the upper atmosphere may imply that the heavier helium is slowly sinking through Saturn's hydrogen; that might explain the excess heat that Saturn radiates over the energy it receives from the Sun. Winds blow at high speeds on Saturn: near the equator, the Voyagers measured winds of about 500 m/s (1,100 mph), blowing mostly in an easterly direction.

The Voyagers found aurora-like ultraviolet emissions of hydrogen at mid-latitudes in the atmosphere, and auroras at polar latitudes (above 65 degrees). The high-level auroral activity may lead to formation of complex hydrocarbon molecules that are carried toward the equator. The mid-latitude auroras, which occur only in sunlit regions, remain a puzzle, since bombardment by electrons and ions, known to cause auroras on Earth, occurs primarily at high latitudes.

Both Voyagers measured the rotation of Saturn (the length of a day) at 10 hours, 39 minutes, 24 seconds.

Because Pioneer 11 had one year earlier detected a thick, gaseous atmosphere over Titan, the Voyager space probes' controllers at the Jet Propulsion Laboratory elected for Voyager 1 to make a close approach to Titan, and of necessity end its Grand Tour there. (For the continuation of the Grand Tour, see the Uranus and Neptune sections of the article on Voyager 2.) The close fly-by of Titan caused an extra gravitational deflection that sent Voyager 1 out of the plane of the ecliptic, thus ending its planetary science mission. Voyager 1 could have been commanded onto a different trajectory, whereby the gravitational slingshot effect of Saturn's mass would have steered and boosted it out to a fly-by of Pluto. However, this Plutonian option was not exercised, because the trajectory with the close fly-by of Titan was judged to have more scientific value and less risk.

Exit from the heliosphere

[Figure: the "family portrait" of the Solar System taken by Voyager 1 — a row of grey frames, six of them labeled with a coloured square containing a small dot, marking Venus, Earth, Jupiter, Saturn, Uranus, and Neptune.]

Voyager 1, on February 14, 1990, took the first ever "family portrait" of the Solar System as seen from outside,[38] which includes the famous image of Earth known as the "Pale Blue Dot". Soon afterwards its cameras were deactivated to conserve power and computer resources for other equipment. The camera software has since been removed from the spacecraft, so it would now be complex to get the cameras working again; the Earth-side software and computers for reading the images are also no longer available.

On February 17, 1998, Voyager 1 reached a distance of 69 AU from the Sun and overtook Pioneer 10 as the most distant manmade object from Earth.

It is currently the most distant functioning space probe to receive commands and transmit information to Earth. Travelling at about 17 kilometers per second (11 mi/s), it has the fastest heliocentric recession speed of any manmade object.

As Voyager 1 headed for interstellar space, its instruments continued to study the Solar System. Jet Propulsion Laboratory scientists used the plasma wave experiments aboard Voyager 1 and 2 to look for the heliopause, the boundary at which the solar wind transitions into the interstellar medium.

Future of the probe

Voyager 1 will take about 30,000 years to pass through the Oort cloud. It is not heading towards any particular star, but in about 40,000 years it will pass within 1.6 light years of the star Gliese 445, which is at present in the constellation Camelopardalis. That star is generally moving towards the Solar System at about 119 km/s (430,000 km/h; 270,000 mph).

Provided Voyager 1 does not collide with anything and is not retrieved, the New Horizons space probe will never pass it, despite being launched from Earth at a faster speed than either Voyager spacecraft. New Horizons is traveling at about 15 km/s, 2 km/s slower than Voyager 1, and is still slowing down. When New Horizons reaches the same distance from the Sun as Voyager 1 is now, its speed will be about 13 km/s (8 mi/s).

Tuesday 24 September 2013

3D Computer Graphics

3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real-time.

3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and 3D may use 2D rendering techniques.

3D computer graphics are often referred to as 3D models, although the two are not quite the same thing: a 3D model is the mathematical representation of a three-dimensional object, contained within a graphical data file, and it is not technically a graphic until it is displayed. Due to 3D printing, 3D models are also not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.
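Rendering a 2D image from 3D geometric data ultimately reduces to projecting 3D points onto an image plane. A minimal sketch of a pinhole (perspective) projection, assuming a camera at the origin looking down the negative z-axis (the function name and focal-length parameter are illustrative, not from any particular library):

```python
def project_point(x, y, z, focal_length=1.0):
    """Perspective-project a 3D point onto the z = -focal_length image plane.

    Assumes a camera at the origin looking down the negative z-axis;
    points with z >= 0 are behind the camera and rejected.
    """
    if z >= 0:
        raise ValueError("point is behind the camera")
    # Similar triangles: screen coordinate = focal_length * coordinate / depth
    u = focal_length * x / -z
    v = focal_length * y / -z
    return (u, v)

# A point twice as far away projects to coordinates half as large,
# which is what produces perspective foreshortening:
print(project_point(1.0, 2.0, -2.0))  # (0.5, 1.0)
print(project_point(1.0, 2.0, -4.0))  # (0.25, 0.5)
```

Real renderers add a full camera transform, clipping, and rasterization on top of this step, but the depth division above is the core of perspective rendering.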

Overview

3D computer graphics creation falls into three basic phases:

3D modeling – the process of forming a computer model of an object's shape

Layout and animation – the motion and placement of objects within a scene

3D rendering – the computer calculations that, based on light placement, surface types, and other qualities, generate the image

Modeling

Modeling is the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation. A 3D model is formed from points called vertices (or vertexes) that define the shape and form polygons. A polygon is an area formed from at least three vertices (a triangle). A four-point polygon is a quad, and a polygon of more than four points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons.
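The vertex-and-polygon structure described above can be sketched as a minimal mesh data type (class and function names are illustrative, not the API of any particular modeling package):

```python
class Mesh:
    """A polygon mesh: a shared vertex list plus faces that index into it."""

    def __init__(self):
        self.vertices = []  # list of (x, y, z) tuples
        self.faces = []     # each face is a tuple of vertex indices

    def add_vertex(self, x, y, z):
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1  # index of the new vertex

    def add_face(self, *indices):
        if len(indices) < 3:
            raise ValueError("a polygon needs at least three vertices")
        self.faces.append(tuple(indices))


def face_kind(face):
    """Classify a face as a triangle, quad, or n-gon by vertex count."""
    n = len(face)
    return {3: "triangle", 4: "quad"}.get(n, f"{n}-gon")


# A unit square modeled as a single quad:
m = Mesh()
a = m.add_vertex(0, 0, 0)
b = m.add_vertex(1, 0, 0)
c = m.add_vertex(1, 1, 0)
d = m.add_vertex(0, 1, 0)
m.add_face(a, b, c, d)
print(face_kind(m.faces[0]))  # quad
```

Storing faces as indices into a shared vertex list, rather than duplicating coordinates per face, is the standard way meshes keep adjacent polygons stitched together when a vertex moves.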

Layout and animation

Before rendering into an image, objects must be placed (laid out) in a scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object, i.e., how it moves and deforms over time. Popular methods include keyframing, inverse kinematics, and motion capture. These techniques are often used in combination. As with modeling, physical simulation also specifies motion.
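Keyframing, the first of the animation methods mentioned above, stores an object's property at a few key times and fills in the frames between them by interpolation. A minimal linear-interpolation sketch (the function name is illustrative; production animation systems typically use spline curves rather than straight lines):

```python
def interpolate(keyframes, t):
    """Linearly interpolate a value at time t from (time, value) keyframes.

    keyframes must be sorted by time; times outside the keyed range
    clamp to the first or last key.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            blend = (t - t0) / (t1 - t0)  # 0.0 at t0, 1.0 at t1
            return v0 + blend * (v1 - v0)


# An object's x-position keyed at t=0 and t=2; halfway gives the midpoint:
keys = [(0.0, 0.0), (2.0, 10.0)]
print(interpolate(keys, 1.0))  # 5.0
```

The animator sets only the keys; the renderer evaluates this function once per frame, which is why a few keyframes can drive arbitrarily smooth motion.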

Distinction from photorealistic 2D graphics

Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photorealistic effects without the use of filters.