Monday, December 14, 2009

Calculator

By the 1900s, earlier mechanical calculators, cash registers, accounting machines, and so on were redesigned to use electric motors, with gear position as the representation for the state of a variable. The word "computer" was a job title assigned to people who used these calculators to perform mathematical calculations. By the 1920s Lewis Fry Richardson's interest in weather prediction led him to propose human computers and numerical analysis to model the weather; to this day, the most powerful computers on Earth are needed to adequately model its weather using the Navier-Stokes equations.
Companies like Friden, Marchant Calculator and Monroe made desktop mechanical calculators from the 1930s that could add, subtract, multiply and divide. During the Manhattan Project, future Nobel laureate Richard Feynman was the supervisor of a roomful of human computers, many of them female mathematicians, who understood the differential equations being solved for the war effort.
In 1948, the Curta was introduced. This was a small, portable, mechanical calculator that was about the size of a pepper grinder. During the 1950s and 1960s, a variety of different brands of mechanical calculators appeared on the market. The first all-electronic desktop calculator was the British ANITA Mk.VII, which used a Nixie tube display and 177 subminiature thyratron tubes. In June 1963, Friden introduced the four-function EC-130. It had an all-transistor design, 13-digit capacity on a 5-inch (130 mm) CRT, and introduced Reverse Polish notation (RPN) to the calculator market at a price of $2200. The EC-132 model added square root and reciprocal functions. In 1965, Wang Laboratories produced the LOCI-2, a 10-digit transistorized desktop calculator that used a Nixie tube display and could compute logarithms.
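To make the Reverse Polish notation mentioned above concrete, here is a minimal sketch in Python of how an RPN expression is evaluated with a stack. The token format and the four operators are illustrative assumptions for the example, not a description of the EC-130's actual internals.

def eval_rpn(tokens):
    # Operands are pushed onto a stack; each operator consumes the top two.
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()           # second operand entered
            a = stack.pop()           # first operand entered
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # operand: push onto the stack
    return stack.pop()

# "(3 + 4) * 2" written in RPN:
print(eval_rpn("3 4 + 2 *".split()))  # 14.0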

World Wide Web

The World Wide Web, abbreviated as WWW and W3 and commonly known as the Web, is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia and navigate between them using hyperlinks. Using concepts from earlier hypertext systems, English physicist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium, wrote a proposal in March 1989 for what would eventually become the World Wide Web. He was later joined by Belgian computer scientist Robert Cailliau while both were working at CERN in Geneva, Switzerland. In 1990, they proposed using "HyperText [...] to link and access information of various kinds as a web of nodes in which the user can browse at will", and released that web in December.
"The World-Wide Web (W3) was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project." If two projects are independently created, rather than have a central figure make the changes, the two bodies of information could form into one cohesive piece of work.

Wireless networks (WLAN, WWAN)

A wireless network is basically the same as a LAN or a WAN, except that there are no wires between hosts and servers; the data is instead transferred over sets of radio transceivers. These types of networks are beneficial when it is too costly or inconvenient to run the necessary cables. For more information, see Wireless LAN and Wireless wide area network. The media access protocols for LANs come from the IEEE.
The most common IEEE 802.11 WLANs cover, depending on antennas, ranges from hundreds of meters to a few kilometers. For larger areas, communications satellites of various types, cellular radio, and wireless local loop (IEEE 802.16) each have advantages and disadvantages. Depending on the type of mobility needed, the relevant standards may come from the IETF or the ITU.

Metropolitan area network (MAN)

A metropolitan area network is a network that is too large for even the largest of LANs but is not on the scale of a WAN. It integrates two or more LANs over a specific geographical area (usually a city) to extend the network and improve the flow of communications. The LANs in question would usually be connected via "backbone" lines.
For more information on WANs, see Frame Relay, ATM and SONET.

Wide area network (WAN)

A wide area network is a network in which a wide variety of resources are deployed across a large domestic area or internationally. An example of this is a multinational business that uses a WAN to interconnect its offices in different countries. The largest and best example of a WAN is the Internet, which is a network composed of many smaller networks. The Internet is considered the largest network in the world. The PSTN (Public Switched Telephone Network) also is an extremely large network that is converging to use Internet technologies, although not necessarily through the public Internet.
A Wide Area Network involves communication through the use of a wide range of different technologies. These technologies include Point-to-Point WANs such as Point-to-Point Protocol (PPP) and High-Level Data Link Control (HDLC), Frame Relay, ATM (Asynchronous Transfer Mode) and SONET (Synchronous Optical Network). The differences between these WAN technologies lie in the switching capabilities they provide and the speeds at which data is sent and received.

Local area network (LAN)

A local area network is a network that spans a relatively small space and provides services to a small number of people.
A peer-to-peer or client-server method of networking may be used. A peer-to-peer network is one in which each client shares its resources with the other workstations in the network. Examples of peer-to-peer networks are small office networks, where resource use is minimal, and home networks. A client-server network is one in which every client is connected to the server and to each other. Client-server networks use servers in different capacities. These can be classified into two types:
1. Single-service servers
2. Multipurpose servers
A single-service server performs one task, such as acting as a file server or print server, while a multipurpose server can not only fill the roles of file server and print server but can also perform calculations and use them to provide information to clients (a web/intranet server). Computers may be connected in many different ways, including Ethernet cables, wireless networks, or other types of wiring such as power lines or phone lines.
The ITU-T G.hn standard is an example of a technology that provides high-speed (up to 1 Gbit/s) local area networking over existing home wiring (power lines, phone lines and coaxial cables).
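To illustrate the client-server model described above, here is a minimal sketch in Python using TCP sockets: one process listens and provides a trivial echo service, and a client connects to use it. The loopback address, port number, and message are arbitrary choices for the example, not part of any particular LAN standard.

import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # loopback address and an arbitrary port

# The "server": it owns a resource (here, a trivial echo service) and
# waits for clients to connect and request it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def serve_once():
    conn, _addr = srv.accept()            # wait for a single client
    with conn:
        data = conn.recv(1024)            # read the client's request
        conn.sendall(b"echo: " + data)    # provide the service
    srv.close()

threading.Thread(target=serve_once).start()

# The "client": a workstation that connects to the server for the service.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from a workstation")
    print(cli.recv(1024).decode())        # -> echo: hello from a workstation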

Computer networking

Computer networking is the engineering discipline concerned with communication between computer systems or devices. Networking, routers, routing protocols, and networking over the public Internet have their specifications defined in documents called RFCs. Computer networking is sometimes considered a sub-discipline of telecommunications, computer science, information technology and/or computer engineering. Computer networks rely heavily upon the theoretical and practical application of these scientific and engineering disciplines. There are three types of networks: 1. Internet, 2. Intranet, and 3. Extranet. A computer network is any set of computers or devices connected to each other with the ability to exchange data. Examples of different networks are:
-Local area network (LAN), which is usually a small network constrained to a small geographic area. An example of a LAN would be a computer network within a building.
-Metropolitan area network (MAN), which covers a medium-sized area, such as a city or a state.
-Wide area network (WAN) that is usually a larger network that covers a large geographic area.
-Wireless LANs and WANs (WLAN & WWAN) are the wireless equivalent of the LAN and WAN.
All networks are interconnected to allow communication with a variety of different kinds of media, including twisted-pair copper wire cable, coaxial cable, optical fiber, power lines and various wireless technologies. The devices can be separated by a few meters (e.g. via Bluetooth) or nearly unlimited distances (e.g. via the interconnections of the Internet).

Computer programming

Computer programming is the iterative process of writing or editing source code. Editing source code involves testing, analyzing, and refining, and sometimes coordinating with other programmers on a jointly developed program. A person who practices this skill is referred to as a computer programmer or software developer. The sometimes lengthy process of computer programming is usually referred to as software development. The term software engineering is becoming popular as the process is seen as an engineering discipline.

Usage of Transistor

The bipolar junction transistor, or BJT, was the most commonly used transistor in the 1960s and 70s. Even after MOSFETs became widely available, the BJT remained the transistor of choice for many analog circuits such as simple amplifiers because of its greater linearity and ease of manufacture. Desirable properties of MOSFETs, such as their utility in low-power devices, usually in the CMOS configuration, allowed them to capture nearly all market share for digital circuits; more recently MOSFETs have captured most analog and power applications as well, including modern clocked analog circuits, voltage regulators, amplifiers, power transmitters, motor drivers, etc.

Importance of Transistor

The transistor is considered by many to be one of the greatest inventions of the twentieth century. The transistor is the key active component in practically all modern electronics. Its importance in today's society rests on its ability to be mass produced using a highly automated process (fabrication) that achieves astonishingly low per-transistor costs.
Although several companies each produce over a billion individually-packaged (known as discrete) transistors every year, the vast majority of transistors produced are in integrated circuits (often shortened to IC, microchips or simply chips) along with diodes, resistors, capacitors and other electronic components to produce complete electronic circuits. A logic gate consists of up to about twenty transistors whereas an advanced microprocessor, as of 2006, can use as many as 1.7 billion transistors (MOSFETs). "About 60 million transistors were built this year [2002] ... for [each] man, woman, and child on Earth."
The transistor's low cost, flexibility, and reliability have made it a ubiquitous device. Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery. It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical control function.

History of the transistor

Physicist Julius Edgar Lilienfeld filed the first patent for a transistor in Canada in 1925, describing a device similar to a Field Effect Transistor or "FET". However, Lilienfeld did not publish any research articles about his devices, and in 1934, German inventor Oskar Heil patented a similar device.
In 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in the United States observed that when electrical contacts were applied to a crystal of germanium, the output power was larger than the input. Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors, and thus could be described as the "father of the transistor". The term was coined by John R. Pierce. According to physicist/historian Robert Arns, legal papers from the Bell Labs patent show that William Shockley and Gerald Pearson had built operational versions from Lilienfeld's patents, yet they never referenced this work in any of their later research papers or historical articles.
The first silicon transistor was produced by Texas Instruments in 1954. This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs. The first MOS transistor was built by Kahng and Atalla at Bell Labs in 1960.

Usage of Vacuum Tube

Vacuum tubes were critical to the development of electronic technology, which drove the expansion and commercialization of radio broadcasting, television, radar, sound reproduction, large telephone networks, analog and digital computers, and industrial process control. Some of these applications pre-dated electronics, but it was the vacuum tube that made them widespread and practical.
For most purposes, the vacuum tube has been replaced by solid-state devices such as transistors and solid-state diodes. Solid-state devices last much longer, are smaller, more efficient, more reliable, and cheaper than equivalent vacuum tube devices. However, tubes are still used in specialized applications: for engineering reasons, as in high-power radio frequency transmitters; or for their aesthetic appeal and distinct sound signature, as in audio amplification. Cathode ray tubes are still used as display devices in television sets, video monitors, and oscilloscopes, although they are being replaced by LCDs and other flat-panel displays. A specialized form of the electron tube, the magnetron, is the source of microwave energy in microwave ovens and some radar systems. The klystron, a powerful but narrow-band radio-frequency amplifier, is commonly deployed by broadcasters as a high-power UHF television transmitter.

History of development of Vacuum Tube

The 19th century saw increasing research with evacuated tubes, such as the Geissler and Crookes tubes. Scientists who experimented with such tubes included Eugen Goldstein, Nikola Tesla, Johann Wilhelm Hittorf, Thomas Edison, and many others. These tubes were mostly for specialized scientific applications, or were novelties, with the exception of the light bulb. The groundwork laid by these scientists and inventors, however, was critical to the development of vacuum tube technology.
Though the thermionic emission effect was originally reported in 1873 by Frederick Guthrie, it is Thomas Edison's 1884 investigation of the Edison Effect that is more often mentioned. Edison patented what he found, but he did not understand the underlying physics, or the potential value of the discovery. It wasn't until the early 20th century that this effect was put to use, in applications such as John Ambrose Fleming's diode used as a radio detector, and Lee De Forest's "audion" (now known as a triode) used in the first telephone amplifiers. These developments led to great improvements in telecommunications technology, particularly the first coast to coast telephone line in the US, and the birth of broadcast radio.

Vacuum tube

A vacuum tube consists of electrodes in a vacuum in an insulating heat-resistant envelope which is usually tubular. Many tubes have glass envelopes, though some types such as power tubes may have ceramic or metal envelopes. The electrodes are attached to leads which pass through the envelope via an airtight seal. On most tubes, the leads are designed to plug into a tube socket for easy replacement.
The simplest vacuum tubes resemble incandescent light bulbs in that they have a filament sealed in a glass envelope which has been evacuated of all air. When hot, the filament releases electrons into the vacuum: a process called thermionic emission. The resulting negatively charged cloud of electrons is called a space charge. These electrons will be drawn to a metal plate inside the envelope, if the plate (also called the anode) is positively charged relative to the filament (or cathode). The result is a flow of electrons from filament to plate. This cannot work in the reverse direction because the plate is not heated and does not emit electrons. This very simple example described can thus be seen to operate as a diode: a device that conducts current only in one direction. The vacuum tube diode conducts conventional current from plate (anode) to the filament (cathode); this is the opposite direction to the flow of electrons (called electron current).
Vacuum tubes require a large temperature difference between the hot cathode and the cold anode. Because of this, vacuum tubes are inherently power-inefficient; enclosing the tube within a heat-retaining envelope of insulation would allow the entire tube to reach the same temperature, resulting in electron emission from the anode that would counter the normal one-way current. Because the tube requires a vacuum to operate, convection cooling of the anode is typically not possible. Instead anode cooling occurs primarily through black-body radiation and conduction of heat to the outer glass envelope via the anode mounting frame. Cold cathode tubes do not rely on thermionic emission at the cathode and usually have some form of gas discharge as the operating principle; such tubes are used for lighting (neon lamps) or as voltage regulators.
The vacuum tube is a voltage-controlled device, with the relationship between the input and output circuits determined by a transconductance function. The voltage between the control grid and the cathode controls the amount of current in the tube that goes from cathode to anode. Control grid current is practically negligible in most circuits. The solid-state device most closely analogous to the vacuum tube is the JFET, although the vacuum tube typically operates at far higher voltage (and power) levels than the JFET.
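For reference, the transconductance mentioned above has a standard textbook definition: it is the change in anode (plate) current produced by a small change in grid-to-cathode voltage, with the anode voltage held constant. In the usual notation (not specific to any particular tube type):

    g_m = \left. \frac{\partial I_a}{\partial V_g} \right|_{V_a\ \mathrm{const}}

where I_a is the anode current, V_g the grid-to-cathode voltage, and V_a the anode voltage.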

History of computer science

The early foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity. Wilhelm Schickard built the first mechanical calculator in 1623. Charles Babbage designed a difference engine in Victorian times helped by Ada Lovelace. Around 1900, punch-card machines were introduced. However, all of these machines were constrained to perform a single task, or at best some subset of all possible tasks.
During the 1940s, as newer and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s, with the creation of the first computer science departments and degree programs. Since practical computers became available, many applications of computing have become distinct areas of study in their own right.
Although many initially believed it impossible that computers themselves could actually be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population. It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704 and later the IBM 709 computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating...if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again". During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.
Time has seen significant improvements in the usability and effectiveness of computer science technology. Modern society has seen a significant shift from computers being used solely by experts or professionals to a more widespread user base.

Sunday, December 13, 2009

Computer science

Computer science or computing science is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems. It is frequently described as the systematic study of algorithmic processes that create, describe and transform information. According to Peter J. Denning, the fundamental question underlying computer science is, "What can be (efficiently) automated?" Computer science has many sub-fields; some, such as computer graphics, emphasize the computation of specific results, while others, such as computational complexity theory, study the properties of computational problems. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describing computations, while computer programming applies specific programming languages to solve specific computational problems, and human-computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to people.
The general public sometimes confuses computer science with vocational areas that deal with computers (such as information technology), or think that it relates to their own experience of computers, which typically involves activities such as gaming, web-browsing, and word-processing. However, the focus of computer science is more on understanding the properties of the programs used to implement software such as games and web-browsers, and using that understanding to create new programs or improve existing ones.

Analytical engine

Soon after the attempt at making the difference engine crumbled, Babbage started designing a different, more complex machine called the Analytical Engine. The engine is not a single physical machine but a succession of designs that he tinkered with until his death in 1871. The main difference between the two engines is that the Analytical Engine could be programmed using punch cards. He realized that programs could be put on these cards so the person had only to create the program initially, and then put the cards in the machine and let it run. The analytical engine would have used loops of Jacquard's punched cards to control a mechanical calculator, which could formulate results based on the results of preceding computations. This machine was also intended to employ several features subsequently used in modern computers, including sequential control, branching, and looping, and would have been the first mechanical device to be Turing-complete.
Ada Lovelace, an impressive mathematician, and one of the few people who fully understood Babbage's ideas, created a program for the Analytical Engine. Had the Analytical Engine ever actually been built, her program would have been able to calculate a sequence of Bernoulli numbers. Based on this work, Lovelace is now widely credited with being the first computer programmer. In 1979, a contemporary programming language was named Ada in her honour. Shortly afterward, in 1981, a satirical article by Tony Karp in the magazine Datamation described the Babbage programming language as the "language of the future".

Difference engine

In Babbage’s time, numerical tables were calculated by humans who were called ‘computers’, meaning "one who computes", much as a conductor is "one who conducts". At Cambridge, he saw the high error-rate of this human-driven process and started his life’s work of trying to calculate the tables mechanically. He began in 1822 with what he called the difference engine, made to compute values of polynomial functions. Unlike similar efforts of the time, Babbage's difference engine was created to calculate a series of values automatically. By using the method of finite differences, it was possible to avoid the need for multiplication and division.
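As a small sketch of the method of finite differences the engine mechanized, the following Python fragment tabulates the cubic p(x) = x^3 using only additions once the starting value and its first few differences have been set up by hand; the particular polynomial and starting point are arbitrary choices for illustration.

def difference_table(start, steps):
    # start holds the initial value and its forward differences, worked out
    # once in advance (historically, by the mathematician setting up the engine).
    d0, d1, d2, d3 = start
    out = []
    for _ in range(steps):
        out.append(d0)
        d0 += d1   # next function value
        d1 += d2   # next first difference
        d2 += d3   # next second difference (third difference stays constant)
    return out

# p(x) = x^3: p(0)=0, first difference 1, second difference 6, third difference 6
print(difference_table((0, 1, 6, 6), 6))  # [0, 1, 8, 27, 64, 125]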

The London Science Museum's Difference Engine #2, built from Babbage's design.
The first difference engine was composed of around 25,000 parts, weighed fifteen tons (13,600 kg), and stood 8 ft (2.4 m) high. Although he received ample funding for the project, it was never completed. He later designed an improved version, "Difference Engine No. 2", which was not constructed until 1989-1991, using Babbage's plans and 19th century manufacturing tolerances. It performed its first calculation at the London Science Museum, returning results to 31 digits, far more than the average modern pocket calculator.

Charles Babbage's Birth and Education

Birth

Babbage's birthplace is disputed, but he was most likely born at 44 Crosby Row, Walworth Road, London, England. A blue plaque on the junction of Larcom Street and Walworth Road commemorates the event.
His date of birth was given in his obituary in The Times as 25 December 1792. However after the obituary appeared, a nephew wrote to say that Charles Babbage was born one year earlier, in 1791. The parish register of St. Mary's Newington, London, shows that Babbage was baptized on 6 January 1792, supporting a birth year of 1791.
Babbage's father, Benjamin Babbage, was a banking partner of the Praeds who owned the Bitton Estate in Teignmouth. His mother was Betsy Plumleigh Teape. In 1808, the Babbage family moved into the old Rowdens house in East Teignmouth, and Benjamin Babbage became a warden of the nearby St. Michael’s Church.
Education
His father's money allowed Charles to receive instruction from several schools and tutors during the course of his elementary education. Around the age of eight he was sent to a country school in Alphington near Exeter to recover from a life-threatening fever. His parents ordered that his "brain was not to be taxed too much" and Babbage felt that "this great idleness may have led to some of my childish reasonings." For a short time he attended King Edward VI Grammar School in Totnes, South Devon, but his health forced him back to private tutors for a time. He then joined the 30-student Holmwood Academy, in Baker Street, Enfield, Middlesex, under Reverend Stephen Freeman. The academy had a well-stocked library that prompted Babbage's love of mathematics. He studied with two more private tutors after leaving the academy. Of the first, a clergyman near Cambridge, Babbage said, "I fear I did not derive from it all the advantages that I might have done." The second was an Oxford tutor from whom Babbage learned enough of the Classics to be accepted to Cambridge.
Babbage arrived at Trinity College, Cambridge in October 1810. He had read extensively in Leibniz, Joseph Louis Lagrange, Thomas Simpson, and Lacroix and was seriously disappointed in the mathematical instruction available at Cambridge. In response, he, John Herschel, George Peacock, and several other friends formed the Analytical Society in 1812. Babbage, Herschel and Peacock were also close friends with future judge and patron of science Edward Ryan. Babbage and Ryan married two sisters.
In 1812 Babbage transferred to Peterhouse, Cambridge. He was the top mathematician at Peterhouse, but did not graduate with honours. He instead received an honorary degree without examination in 1814.

Charles Babbage (Father of the computer)

Charles Babbage, FRS (26 December 1791 – 18 October 1871) was an English mathematician, philosopher, inventor and mechanical engineer who originated the concept of a programmable computer. Parts of his uncompleted mechanisms are on display in the London Science Museum. In 1991, a perfectly functioning difference engine was constructed from Babbage's original plans. Built to tolerances achievable in the 19th century, the success of the finished engine indicated that Babbage's machine would have worked. Nine years later, the Science Museum completed the printer Babbage had designed for the difference engine, an astonishingly complex device for the 19th century. Considered a "father of the computer", Babbage is credited with inventing the first mechanical computer that eventually led to more complex designs.

Supercomputer

A supercomputer is a computer that is at the front line of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".

Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. As of July 2009, the Cray Jaguar is the fastest supercomputer in the world.

The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard. Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massive parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and most modern supercomputers are now highly-tuned computer clusters using commodity processors combined with custom interconnects.

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion). A particular class of problems, known as Grand Challenge Problems, are problems whose full solution requires semi-infinite computing resources.

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.

Abacus

The abacus, also called a counting frame, is a calculating tool used primarily in parts of Asia for performing arithmetic processes. Today, abacuses are often constructed as a bamboo frame with beads sliding on wires, but originally they were beans or stones moved in grooves in sand or on tablets of wood, stone, or metal. The abacus was in use centuries before the adoption of the written modern numeral system and is still widely used by merchants, traders and clerks in Asia, Africa, and elsewhere.

Computer networking and Internet

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network it produced was called the ARPANET. The technologies that made the Arpanet possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, meant computer networking became almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Multiprocessing

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers and mainframe computers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have unique architectures that differ significantly from the basic stored-program architecture and from general purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

Computer multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking i.e. having the computer switch rapidly between running each program in turn.

One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
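As a toy illustration of the time-slice idea, the following Python sketch uses generators to stand in for interruptible programs and hands out one "slice" at a time in round-robin order; a real operating system does this with hardware interrupts, saved register state, and far smaller slices, so the program names and step counts here are invented purely for the example.

def program(name, steps):
    # Each yield is where the program is "interrupted" at the end of a slice;
    # its state is remembered so it can resume later.
    for i in range(steps):
        yield f"{name}: step {i}"

def scheduler(programs):
    queue = list(programs)
    while queue:
        prog = queue.pop(0)        # pick the next program in turn
        try:
            print(next(prog))      # let it run for one "time slice"
            queue.append(prog)     # not finished: back of the queue
        except StopIteration:
            pass                   # program finished; drop it

scheduler([program("editor", 3), program("mail", 2), program("clock", 4)])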

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly — in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run at the same time without unacceptable speed loss.

Input/output (I/O)

I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Computer data storage

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
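A short worked example of the byte conventions described above, in plain Python, shows how the same 8-bit pattern can be read either as an unsigned value or as a two's complement signed value; the helper names are made up for the illustration.

def to_twos_complement(n, bits=8):
    # Wrap a (possibly negative) integer into a 'bits'-wide bit pattern.
    return n & ((1 << bits) - 1)

def from_twos_complement(pattern, bits=8):
    # If the top bit is set, the two's complement reading is negative.
    if pattern & (1 << (bits - 1)):
        return pattern - (1 << bits)
    return pattern

pattern = to_twos_complement(-5)
print(format(pattern, "08b"))         # 11111011  (the stored bit pattern)
print(pattern)                        # 251       (read as an unsigned byte, 0..255)
print(from_twos_complement(pattern))  # -5        (read as two's complement, -128..+127)

# Larger numbers simply use more consecutive bytes, e.g. 4 bytes = 32 bits:
print(to_twos_complement(-5, 32))     # 4294967291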

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random-access memory or RAM and read only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.

In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

Thursday, December 10, 2009

Arithmetic logic unit (ALU)

The ALU is capable of performing two classes of operations: arithmetic and logic.
The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting, or might include multiplying or dividing, trigonometric functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers—albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
Superscalar computers may contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.
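The following toy Python model sketches the two classes of operation described above, plus the point that a missing operation (here, multiplication) can be built from simpler ones the ALU does support; the operation names and dispatch style are invented for the illustration, not taken from any real instruction set.

def alu(op, a, b=0):
    # Arithmetic ops, Boolean logic ops, and a comparison returning a truth value.
    ops = {
        "add": lambda: a + b,
        "sub": lambda: a - b,
        "and": lambda: a & b,
        "or":  lambda: a | b,
        "xor": lambda: a ^ b,
        "not": lambda: ~a,
        "gt":  lambda: a > b,   # comparison yields a Boolean truth value
    }
    return ops[op]()

print(alu("add", 64, 1))    # 65
print(alu("gt", 64, 65))    # False  ("is 64 greater than 65?")

# A CPU whose ALU lacks multiply can still multiply by breaking the job
# into the simple operations it does have, e.g. repeated addition:
def multiply_by_addition(a, b):
    total = 0
    for _ in range(b):
        total = alu("add", total, a)
    return total

print(multiply_by_addition(6, 7))  # 42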

CPU design and Control unit

The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into a series of control signals which activate other parts of the computer. Control systems in advanced computers may change the order of some instructions so as to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.
The control system's function is as follows—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program—and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.
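The following minimal Python sketch walks through the steps listed above on a made-up three-instruction machine; the opcodes, memory layout, and single accumulator register are invented purely for the illustration and do not correspond to any real CPU.

def run(memory):
    # Memory is a flat list of numbers; instructions are stored as two
    # consecutive cells: (opcode, operand). Invented opcodes: 1 = load cell
    # into the accumulator, 2 = add cell to the accumulator, 3 = store the
    # accumulator into a cell, 0 = halt.
    pc = 0   # program counter
    acc = 0  # a single register ("accumulator")
    while True:
        opcode = memory[pc]         # 1. fetch the instruction
        operand = memory[pc + 1]
        pc += 2                     # 3. increment the program counter
        if opcode == 1:             # 2./4./5. decode and gather the data
            acc = memory[operand]
        elif opcode == 2:           # 6. ask the "ALU" to add
            acc = acc + memory[operand]
        elif opcode == 3:           # 7. write the result back to memory
            memory[operand] = acc
        elif opcode == 0:
            break                   # halt
        # 8. loop back and fetch the next instruction

# Program: load cell 8, add cell 9, store into cell 10, halt.
mem = [1, 8,  2, 9,  3, 10,  0, 0,  123, 456, 0]
run(mem)
print(mem[10])  # 579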

Central processing unit and Microprocessor

A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by busses, often made of groups of wires.
Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

History of Computer

The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards though, the word began to take on its more familiar meaning, describing a machine that carries out computations.
The history of the modern computer begins with two separate technologies—automated calculation and programmability—but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.
The "castle clock", an astronomical cock invented by Al-Jazari in 1206, is considered to be the earliest progammable analog computer .It displayed the zodiac , the solar and lunarorbits , a crescent moon -shaped pointer travelling across a gateway causing automatic doors to open every hour and five ronotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel . The length of day and night could be re-programmed to compensate for the changing lengths of day and night throughout the year.
The Renaissance saw a re-invigoration of European mathematics and engineering. Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers, but none fit the modern definition of a computer, because they could not be programmed.
In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.
It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine. Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed.
In the late 1880s, Herman Hollerith invented the recording of data on a machine readable medium. Prior uses of machine readable media, above, had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..." To process these punched cards he invented the tabulator, and the keypunch machines. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
Alan Turing is widely regarded to be the father of modern computer science. In 1936 Turing provided an influential formalisation of the concept of the algorithm and computation with the Turing machine. Of his role in the modern computer, Time Magazine in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine."
The inventor of the program-controlled computer was Konrad Zuse, who built the first working computer in 1941 and later in 1955 the first computer based on magnetic storage.
George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication including complex arithmetic and programmability.
A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult. Notable achievements include:

Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, therefore being the world's first operational computer.
The non-programmable Atanasoff–Berry Computer (1941) which used vacuum tube based computation, binary numbers, and regenerative capacitor memory. The use of regenerative memory allowed it to be much more compact than its peers (being approximately the size of a large desk or workbench), since intermediate results could be stored and then fed back into the same set of computation elements.
The secret British Colossus computers (1943), which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. It was used for breaking German wartime codes.
The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
The U.S. Army's Ballistic Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.
Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or "Baby"), while the EDSAC, completed a year after SSEM, was the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper—EDVAC—was completed but did not see full-time use for an additional two years.
Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.
Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at the University of Manchester in 1953. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the late 1970s, many products such as video recorders contained dedicated computers called microcontrollers, and they started to appear as a replacement to mechanical controls in domestic appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.
Modern smartphones are fully-programmable computers in their own right, and as of 2009 may well be the most common form of such computers in existence.

Information about Computer

A computer is a machine that manipulates data according to a set of instructions.
Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Simple computers are small enough to fit into a wristwatch, and can be powered by a battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". The embedded computers found in many devices, from MP3 players to fighter aircraft and from toys to industrial robots, are however the most numerous.
The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church-Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers ranging from a mobile phone to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.

Wednesday, December 9, 2009

Master's Degree Programmes in Computer Science

Modern society increasingly relies on always-on, collaborative communication environments supported by the Internet, as well as intelligent IT applications. The Department of Computer Science of the Faculty of Science at the University of Helsinki offers two Master's Degree Programmes in English:
1. Master's Degree Programme on Algorithms and Machine Learning
You are educated in computationally efficient algorithmic techniques needed in processing large volumes of data and learning from it. The topics of study include
-machine learning
-data mining
-probabilistic models
-information-theoretic modelling
-unsupervised learning
-string algorithms
-approximation algorithms
The Faculty includes Aapo Hyvärinen, Jyrki Kivinen, Veli Mäkinen, Petri Myllymäki, Juho Rousu, Hannu Toivonen, and Esko Ukkonen, known for their seminal work on string algorithms, data mining, independent component analysis, and machine learning.
2. Master's Degree Programme on Networking and Services
You learn to design, assess and develop complex systems. The programme educates students in
-protocols
-communication, service, and information networking architectures
-trust, security and privacy
-mobile networking
-open interfaces
-collaborative and interoperable computing
-enterprise and business network interoperability.
The department is renowned for its research and development on Linux, mobile Linux, and wireless communications. It is highly visible in international standardisation forums and organizations such as IETF, W3C, WWRF, and IFIP. Globally known research partners include Nokia, Nokia Siemens Networks, Ericsson, and F-Secure. The Faculty boasts a group of international experts including Patrik Floréen, Jussi Kangasharju, Markku Kojo, Lea Kutvonen, and Sasu Tarkoma.
Both the programmes educate IT professionals for research and development in these areas, and they are both based on the unique research profile at the University of Helsinki.
These programmes are two-year, full-time degree programmes of 120 ECTS credits each, and teaching in both programmes is given fully in English. Some supplementary studies (up to 60 ECTS) can be required if necessary. Successful completion of the programme will give you a Master of Science (MSc) degree in Computer Science.