Tuesday 23 September 2008

What is a Motherboard?



Also known as the mainboard or logic board, this is the main circuit board of your computer. If you ever open your computer up, the biggest circuit board you see is the motherboard. This is where you'll find the CPU, the ROM, memory expansion slots, PCI slots, serial ports, USB ports, and all the controllers for things like the hard drive, DVD drive, keyboard, and mouse. Basically, the motherboard is what makes everything in your computer work together. Each motherboard has a collection of chips and controllers known as the "chipset". When new motherboards are developed, they often use new chipsets. The good news is that these boards are typically more efficient and faster than their predecessors. The bad news is that you may not be able to add certain memory and CPU upgrades to older motherboards. Of course, that's typical of the computer industry.

The motherboard of a typical desktop consists of a large printed circuit board. It holds electronic components and interconnects, as well as physical connectors (sockets, slots, and headers) into which other computer components may be inserted or attached.



Peripheral card slots
A typical motherboard of 2007 will have a different number of connections depending on its standard. A standard ATX motherboard will typically have 1x PCI-E 16x connection for a graphics card, 2x PCI slots for various expansion cards and 1x PCI-E 1x which will eventually supersede PCI.
A standard Super ATX motherboard will have 1x PCI-E 16x connection for a graphics card. It will also have a varying number of PCI and PCI-E 1x slots. It can sometimes also have a PCI-E 4x slot. This varies between brands and models.
Some motherboards have 2x PCI-E 16x slots to allow more than 2 monitors without special hardware or to allow use of a special graphics technology called SLI (for Nvidia) and Crossfire (for ATI). These allow 2 graphics cards to be linked together to allow better performance in intensive graphical computing tasks such as gaming and video editing.
As of 2007, virtually all motherboards come with at least four USB ports on the rear and at least two internal headers for wiring additional front ports built into the computer's case. An Ethernet port is also included now; this is the standard networking connection for attaching the computer to a network or a modem. A sound chip is always included on the motherboard to allow sound to be output without any extra components, which makes computers far more multimedia-capable than before. Cheaper machines now often have their graphics chip built into the motherboard rather than on a separate card.



Nvidia SLI and ATI Crossfire
Nvidia SLI and ATI Crossfire technology allows 2 or more of the same series graphics cards to be linked together to allow a faster graphics experience. Almost all medium to high end Nvidia cards and most high end ATI cards support the technology.
They both require compatible motherboards. There is an obvious need for two PCI-E 16x slots to allow two cards to be inserted into the computer. The same function can be achieved on NVIDIA's 650i motherboards with a pair of x8 slots. Originally, tri-Crossfire was achieved with two 16x slots and one 8x slot, albeit with the cards running at the slower 8x speed. ATI opened the technology up to Intel in 2006, and as such all new Intel chipsets support Crossfire.
SLI is a little more proprietary in its needs. It requires a motherboard with Nvidia's own NForce chipset series to allow it to run.
Although this might seem like a great idea, it is important to note that SLI and Crossfire only offer up to about 1.5x the performance of a single card in a dual setup. They also do not double the effective amount of VRAM.

Information From

Data http://en.wikipedia.org/wiki/Motherboard, http://www.iwebtool.com/what_is_motherboard.html


Double-Data-Rate Synchronous Dynamic Random Access Memory

Double-Data-Rate Synchronous Dynamic Random Access Memory, better known as DDR SDRAM or DDR RAM for short, is a type of very fast computer memory. DDR RAM is based on the same architecture as SDRAM, but utilizes the clock signal differently to transfer twice the data in the same amount of time.
In a computer system, the clock signal is an oscillating frequency used to coordinate interaction between digital circuits. Simply put, it synchronizes communication. Digital circuits designed to operate on the clock signal may respond at the rising or falling edge of the signal. SDRAM memory chips utilized only the rising edge of the signal to transfer data, while DDR RAM transfers data on both the rising and falling edges of the clock signal. Hence, DDR RAM is essentially twice as fast as SDRAM.
RAM speed works in conjunction with the front side bus (FSB) of a computer system. The FSB is the two-way data channel that sends information from the central processing unit (CPU) through the motherboard to the various components, including the RAM, BIOS chips, hard drives and PCI slots. Therefore, a computer system with an FSB of 133 MHz running DDR SDRAM will essentially perform like a 266 MHz machine.
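The doubling described above is simple arithmetic, and a short sketch makes it concrete (the 133 MHz figure comes from the text; the function name is mine):

```python
# Effective transfer rate of SDR vs. DDR memory on the same bus clock.
def effective_mhz(bus_clock_mhz, transfers_per_cycle):
    """Data transfers per second, expressed in MHz."""
    return bus_clock_mhz * transfers_per_cycle

sdr = effective_mhz(133, 1)  # SDRAM: transfers on the rising edge only
ddr = effective_mhz(133, 2)  # DDR: transfers on rising and falling edges

print(sdr)  # 133
print(ddr)  # 266 -- the "266 MHz machine" from the text
```

The same arithmetic is behind the PC1600/PC2100 naming mentioned below, where the number reflects peak bandwidth rather than clock speed.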
The 184-pin DDR RAM dual in-line memory modules (DIMMs) only work properly in a motherboard designed for their use. While DDR RAM comes in various speeds, installing a version faster than a motherboard can support is a waste of money, as the DDR RAM will only run as fast as the motherboard permits. DDR RAM is visually differentiated from SDRAM in that SDRAM is a 168-pin DIMM with two notches at the bottom along the pins -- one just off-center, the other near one side. The 184-pin DDR SDRAM has a single off-center notch.
DDR RAM is generally made for processors of 1 GHz and faster. Designations like PC1600 DDR SDRAM and PC2100 DDR SDRAM coincide with particular FSB and CPU speeds. AMD and Intel use different schemes to designate processor speed, and the various technicalities in RAM designations and standards can be confusing. Check your motherboard manual to see what RAM type is compatible with your system before purchasing memory.

Information From

http://www.wisegeek.com/what-is-sdram.htm

Synchronous Dynamic Random Access Memory

SDRAM is another of those powerful acronyms that describes a lot more than it sounds like it does. The letters stand for Synchronous Dynamic Random Access Memory, and it is a fast method of delivering computing capacity. SDRAM can run at 133 MHz, which is much faster than earlier RAM technologies.
SDRAM is very protective of its data bits, storing each of them in a separate capacitor. The benefit of this is the avoidance of corruption and the maintenance of "pristine" data. The drawback is that those same capacitors that are so useful at storing the SDRAM bits also happen to be very bad at keeping electrons in check; that is where the Dynamic part of the name comes in, as refreshes are required to maintain data integrity. Once all of that dynamic refreshing and storing is done, the result is a dense package of data, one of the densest in the business.
SDRAM adds the Synchronous part by lining itself up with the computer's system bus and processor, so that all operations take place in step. Specifically, the computer's internal clock drives the entire mechanism. Once the clock sends out a signal saying that another unit of time has passed, the SDRAM chips go to work. In addition to the dense data package of DRAM, SDRAM allows a more complex memory pattern, giving you an extremely powerful method of storing and accessing data.
Another benefit of SDRAM is what is called pipelining. Because the SDRAM chips are so dense and complex, they can accept more than one write command at a time. This means that a chip can be processing one command while it accepts another one, even if that new command has to wait its turn in the pipeline. Previous RAM chips required proprietary access, allowing only one command at a time throughout the chip. In this way, SDRAM chips are faster than their predecessors.
This mostly describes single-data SDRAM chips, or SDR SDRAM. An even newer kind of chip is double-data-rate SDRAM, or DDR SDRAM. This allows for even greater bandwidth by making pipeline data transfers twice for every unit of time put forth by the computer's internal clock. One transfer takes place at the beginning of the new unit of time; the other takes place at the end.
SDRAM chips first came to the computing forefront in 1997. In just three years, they had become the dominant force in memory chips across the computing spectrum.
SDRAM control signals
All commands are timed relative to the rising edge of a clock signal. In addition to the clock, there are six control signals, mostly active low, which are sampled on the rising edge of the clock:
CKE Clock Enable. When this signal is low, the chip behaves as if the clock has stopped. No commands are interpreted and command latency times do not elapse. The state of other control lines is not relevant. The effect of this signal is actually delayed by one clock cycle. That is, the current clock cycle proceeds as usual, but the following clock cycle is ignored, except for testing the CKE input again. Normal operations resume on the rising edge of the clock after the one where CKE is sampled high. Put another way, all other chip operations are timed relative to the rising edge of a masked clock. The masked clock is the logical AND of the input clock and the state of the CKE signal during the previous rising edge of the input clock.
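The CKE rule amounts to a clock gate with one cycle of delay; a small sketch of that logic (the function name and starting state are my own, not from any datasheet):

```python
def masked_clock_edges(clock_edges, cke_samples):
    """Model the CKE rule: a rising edge counts only if CKE was high
    on the *previous* rising edge of the input clock."""
    edges = []
    prev_cke = True  # assume the chip starts with the clock enabled
    for edge, cke in zip(clock_edges, cke_samples):
        if prev_cke:
            edges.append(edge)
        prev_cke = cke  # sampled now, takes effect next cycle
    return edges

# CKE goes low at edge 2, so edge 3 is swallowed; edges resume at 4.
print(masked_clock_edges([1, 2, 3, 4], [True, False, True, True]))
# [1, 2, 4]
```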
/CS Chip Select. When this signal is high, the chip ignores all other inputs (except for CKE), and acts as if a NOP command is received.
DQM Data Mask. (The letter Q appears because, following digital logic conventions, the data lines are known as "DQ" lines.) When high, these signals suppress data I/O. When accompanying write data, the data is not actually written to the DRAM. When asserted high two cycles before a read cycle, the read data is not output from the chip. There is one DQM line per 8 bits on a x16 memory chip or DIMM.
/RAS Row Address Strobe. Despite the name, this is not a strobe, but rather simply a command bit. Along with /CAS and /WE, this selects one of 8 commands.
/CAS Column Address Strobe. Despite the name, this is not a strobe, but rather simply a command bit. Along with /RAS and /WE, this selects one of 8 commands.
/WE Write enable. Along with /RAS and /CAS, this selects one of 8 commands. This generally distinguishes read-like commands from write-like commands.
SDRAM devices are internally divided into 2 or 4 independent internal data banks. One or two bank address inputs (BA0 and BA1) select which bank a command is directed toward.
Many commands also use an address presented on the address input pins. Some commands, which either do not use an address, or present a column address, also use A10 to select variants.
The commands themselves are selected by the combinations of /RAS, /CAS, and /WE described above.
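The original command table did not survive the copy; the standard encoding of (/RAS, /CAS, /WE) with /CS held low can be sketched as a lookup, reconstructed here from common SDRAM datasheets rather than from the source page:

```python
# SDRAM command decode, keyed by (/RAS, /CAS, /WE).
# Signals are active low: 1 = high (inactive), 0 = low (asserted).
# /CS is assumed low; /CS high always means "ignore" (NOP-like).
COMMANDS = {
    (1, 1, 1): "NOP",
    (0, 1, 1): "ACTIVE (open a row)",
    (1, 0, 1): "READ",
    (1, 0, 0): "WRITE",
    (1, 1, 0): "BURST TERMINATE",
    (0, 1, 0): "PRECHARGE (close row)",
    (0, 0, 1): "AUTO REFRESH",
    (0, 0, 0): "LOAD MODE REGISTER",
}

def decode(ras_n, cas_n, we_n):
    return COMMANDS[(ras_n, cas_n, we_n)]

print(decode(0, 1, 1))  # ACTIVE (open a row)
print(decode(1, 0, 0))  # WRITE
```

Note how /WE alone separates read-like from write-like variants, exactly as the /WE description above says.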


Information From

Data http://www.wisegeek.com/what-is-sdram.htm

Picture http://en.wikipedia.org/wiki/SDRAM

Random access memory


Random access memory, or RAM, most commonly refers to computer chips that temporarily store dynamic data to enhance computer performance. By storing frequently used or active files in random access memory, the computer can access the data faster than if it had to retrieve it from the far larger hard drive. Random access memory is also used in printers and other devices.
Random access memory is volatile memory, meaning it loses its contents once power is cut. This is different from non-volatile memory such as hard disks and flash memory, which do not require a power source to retain data. When a computer shuts down properly, all data located in random access memory is committed to permanent storage on the hard drive or flash drive. At the next boot-up, RAM begins to fill with programs automatically loaded at startup, and with files opened by the user.
There are several different types of random access memory chips which come several to a "stick." A stick of RAM is a small circuit board shaped like a large stick of gum. Sticks of RAM fit into "banks" on the motherboard. Adding one or more sticks increases RAM storage and performance.
Random access memory is categorized by architecture and speed. As technology progresses, RAM chips become faster and employ new standards, so RAM must be matched to a compatible motherboard. The motherboard will only support certain types of random access memory, and it will also have a limit on the amount of RAM it can support. For example, one motherboard may support dual-channel Synchronous Dynamic Random Access Memory (SDRAM), while an older motherboard might only support Single In-line Memory Modules (SIMMs) or Dual In-line Memory Modules (DIMMs).
Since random access memory can improve performance, the type and amount of RAM a motherboard will support becomes a major factor when considering a new computer. If there is a faster, better random access memory chip on the market, the buyer will want to consider purchasing a motherboard capable of using it. A year down the road, that 'new' RAM might be standard, while the buyer may be stuck with an old style motherboard. A new variety of non-volatile random access memory made with nanotubes or other technologies will likely be forthcoming in the near future. These RAM chips would retain data when powered down.
RAM varies in cost depending on type, capacity and other factors. Brand name random access memory often comes with a lifetime guarantee at a competitive price. That's one guarantee that can't be beat.

Information From
http://www.wisegeek.com/what-is-random-access-memory.htm

Designing for 64-bit Windows


The 64-bit editions of the Microsoft Windows operating system support both workstation and server computers. Implementing hardware and firmware support for a 64-bit system requires special considerations that differ from 32-bit platform design. This paper describes the special considerations for firmware, hard disk partitions, and device drivers. This paper does not address processor-related issues.


ACPI Support for 64-bit Windows
The Advanced Configuration and Power Interface (ACPI) Specification defines the system board, firmware, and operating system requirements for operating system control of device configuration and power management.
ACPI Revision 2.0 defines expanded interfaces to support 64-bit systems through extended table definitions and new ACPI Source Language (ASL) and ACPI Machine Language (AML) 64-bit functions.


Firmware for 64-bit Systems
Firmware provides boot support for initializing the hardware before the operating system is started. In x86-based (32-bit) systems, this capability is provided by the BIOS. Because traditional BIOS-based boot will not work with 64-bit Windows, other firmware boot solutions must be implemented.
Extensible Firmware Interface (EFI)
EFI is a new standard for the interface provided by the firmware that boots PCs, based on the Extensible Firmware Interface Specification, Version 1.02 (Intel Corporation). Microsoft supports EFI as the only firmware interface for booting 64-bit Windows operating systems.
Because 64-bit Windows will not boot with BIOS or with System Abstraction Layer alone, EFI is a requirement for all Intel Itanium-based systems.
In addition to protocols required in the EFI specification, Microsoft recommends that the firmware also support PXE_BC (remote/network boot), SERIAL_IO, and SIMPLE_NETWORK protocols as defined in the EFI specification. Support for these protocols is required by the "Designed for Windows" logo program for 64-bit systems.


The EFI System Partition (ESP) contains the OS Loader, EFI drivers, and other files necessary to boot the system. MBR disks can also have an ESP, identified by partition type 0xEF. Although EFI specifies booting from either the GPT or MBR, 64-bit Windows does not support booting EFI from MBR disks or 0xEF partitions.

Information From

http://www.microsoft.com/whdc/system/platform/64bit/IA64_ACPI.mspx

PCI Express FAQ for Graphics

PCI Express
PCI Express (PCIe) is a new I/O bus technology that, over time, will replace Peripheral Component Interconnect (PCI), PCI-X, and Accelerated Graphics Port (AGP). By providing advanced features and increased bandwidth, PCIe addresses many of the shortcomings of PCI, PCI-X, and AGP. PCIe retains full software compatibility with PCI Local Bus Specification 2.3, and it replaces the parallel multidrop bus architecture of PCI and PCI-X with a serial, point-to-point connection bus architecture.
Two PCIe devices are connected by a link, and each link is made up of one or more lanes. Each lane consists of two low-voltage, differential signal pairs carrying 2.5 Gbps of traffic in opposite directions. One pair is used for transmitting, and the other pair is used for receiving. To further increase the bandwidth of a link, multiple lanes can be placed in parallel (x1, x2, x4, x8, x12, x16, or x32 lanes) between two PCIe devices to aggregate the bandwidth of each individual lane. In the future, the signaling rate of the link can be increased to provide even more bandwidth.
PCIe hardware is backwards compatible with PCI software on the Microsoft Windows 2000 and Microsoft Windows XP operating systems. The PCI features supported by current Windows operating systems will continue to work with PCIe without any need for modifications in the applications, drivers, or operating system; however, the advanced PCIe features will be natively supported only in Windows Vista and later versions of Windows.

PCI Express Graphics
It is well known that graphics can always use more bandwidth than what is available. Graphics data transfers cause maximum traffic on the PCI bus. The continual increase in graphics demand and complexity eventually made the PCI bus insufficient, which led to the invention of AGP. Now we are pushing the limits of what AGP can deliver, and we need a better solution. PCIe surpasses AGP in bandwidth availability, with more room for expansion in the near future. By increasing the number of lanes in a link, graphics adapters can take advantage of increased bandwidth and faster data transfer. Graphics adapters will be using the x16 link, which will provide a bandwidth of 4 GB/s in each direction.
Given the higher bandwidth offered by PCIe, systems are already moving away from AGP to PCIe. There will not be many systems that provide both AGP and PCIe connectors. The first X16 graphics adapters and PCIe systems should be available in summer 2004.
PCI Express Graphics in Vista
The Windows Vista Display Driver Model (WDDM) will have specific requirements for PCIe graphics adapters. It will require that 64-bit addressing mode be supported by the GPU; however, a minimum of 40 physical address bits must be implemented, and the unimplemented bits should be forced to zero. These requirements are not applicable to the Windows XP display driver model.
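"Unimplemented bits forced to zero" is just a bit mask over the address; a hypothetical sketch (the 40-bit width comes from the text above, the helper name is mine):

```python
PHYS_ADDR_BITS = 40  # minimum width required by the WDDM rule above
ADDR_MASK = (1 << PHYS_ADDR_BITS) - 1

def to_device_address(addr):
    """Zero any bits above the implemented physical address width,
    as the requirement specifies for unimplemented bits."""
    return addr & ADDR_MASK

print(hex(to_device_address(0xFFFF_FF12_3456_789A)))
# 0x123456789a -- bits 40-63 forced to zero
```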
PCIe Graphics & AGP
In addition to the bandwidth considerations mentioned above, there are several other differences between AGP and PCIe.
By definition, AGP requires a chipset with a graphics address relocation table (GART), which provides a linear view of nonlinear system memory to the graphics device. PCIe, however, requires that the memory linearization hardware exist on the graphics device itself instead of on the chipset. Consequently, driver support for memory linearization in PCIe must exist in the video driver, instead of as an AGP-style separate GART miniport driver. Graphics hardware vendors who want to use nonlocal video memory in their Windows XP driver model (XPDM) drivers must implement both memory linearization hardware and the corresponding software. All PCIe graphics adapters that are compatible with the WDDM must support memory linearization in hardware and software.
AGP was dedicated to graphics adapters, and no other device class used it. PCIe is intended to be used by all device classes that previously used PCI. With AGP, a number of video drivers were directly programming the chipset, which gave rise to severe ill effects such as crashing and memory corruption in the graphics stack. Because PCIe will be used for all devices in the system, it is even more important that video drivers not program the chipset directly.

Information From
http://www.microsoft.com/whdc/system/platform/64bit/IA64_ACPI.mspx

Hard Disk Drive

A hard disk drive (HDD), commonly referred to as a hard drive, hard disk, or fixed disk drive, is a non-volatile storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces. Strictly speaking, "drive" refers to a device distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.
Originally, the term "hard" was temporary slang, substituting "hard" for "rigid", before these drives had an established and universally-agreed-upon name. A HDD is a rigid-disk drive although it is rarely referred to as such. By way of comparison, a floppy drive (more formally, a diskette drive) has a disc that is flexible. Some time ago, IBM's internal company term for a HDD was "file".
HDDs (introduced in 1956 as data storage for an IBM accounting computer) were originally developed for use with general-purpose computers; see the history of hard disk drives.
In the 21st century, applications for HDDs have expanded to include digital video recorders, digital audio players, personal digital assistants, and video game consoles. In 2005 the first mobile phones to include HDDs were introduced by Samsung and Nokia. The need for large-scale, reliable storage, independent of a particular device, led to the introduction of configurations such as RAID arrays, network attached storage (NAS) systems and storage area network (SAN) systems that provide efficient and reliable access to large volumes of data. Note that although not immediately recognizable as computers, all the aforementioned applications are actually embedded computing devices of some sort.
The earliest "form factor" hard disk drives inherited their dimensions from floppy disk drives, so that either could be mounted in the same chassis slots, and thus the HDD form factors became colloquially named after the corresponding FDD types. "Form factor" compatibility continued after the 3½ inch size, even though floppy disk drives with new smaller dimensions ceased to be offered.
8 inch: 9.5 in × 4.624 in × 14.25 in (241.3 mm × 117.5 mm × 362 mm). In 1979, the Shugart SA1000 was the first form-factor-compatible HDD, having the same dimensions and a compatible interface to the 8″ FDD. Both "full height" and "half height" (2.313 in) versions were available.
5.25 inch: 5.75 in × 1.63 in × 8 in (146.1 mm × 41.4 mm × 203 mm). This smaller form factor, first used in an HDD by Seagate in 1980, was the same size as a full-height 5¼-inch FDD, i.e., 3.25 inches high, twice the "half height" of 1.63 in (41.4 mm) commonly used today. Most desktop drives for optical 120 mm discs (DVD, CD) use the half-height 5¼″ dimension, but it fell out of fashion for HDDs. The Quantum Bigfoot HDD was the last to use it in the late 1990s, with "low-profile" (≈25 mm) and "ultra-low-profile" (≈20 mm) height versions.
3.5 inch: 4 in × 1 in × 5.75 in (101.6 mm × 25.4 mm × 146 mm). This smaller form factor, first used in an HDD by Rodime in 1984, was the same size as the "half height" 3½″ FDD, i.e., 1.63 inches high. Today it has been largely superseded by 1-inch-high "slimline" or "low-profile" versions of this form factor, which is used by most desktop HDDs.
2.5 inch: 2.75 in × 0.374–0.59 in × 3.945 in (69.85 mm × 9.5–15 mm × 100 mm). This smaller form factor was introduced by PrairieTek in 1988; there is no corresponding FDD. It is widely used today for hard disk drives in mobile devices (laptops, music players, etc.) and as of 2008 is replacing 3.5 inch enterprise-class drives. Today, the dominant height of this form factor is 9.5 mm for laptop drives, but high-capacity drives have a height of 12.5 mm. Enterprise-class drives can have a height up to 15 mm.
1.8 inch: 54 mm × 8 mm × 71 mm. This form factor, originally introduced by Integral Peripherals in 1993, has evolved into the ATA-7 LIF with the dimensions as stated. It is increasingly used in digital audio players and subnotebooks. An original variant exists for 2–5 GB sized HDDs that fit directly into a PC Card expansion slot. These became popular for their use in iPods and other HDD-based MP3 players.
1 inch: 42.8 mm × 5 mm × 36.4 mm. This form factor was introduced in 1999 as IBM's Microdrive, which fits inside a CF Type II slot. Samsung calls the same form factor a "1.3 inch" drive in its product literature.
0.85 inch: 24 mm × 5 mm × 32 mm. Toshiba announced this form factor in January 2004 for use in mobile phones and similar applications, including SD/MMC slot compatible HDDs optimized for video storage on 4G handsets. Toshiba currently sells 4 GB (MK4001MTD) and 8 GB (MK8003MTD) versions and holds the Guinness World Record for the smallest hard disk drive.
Major manufacturers discontinued the development of new products for the 1-inch (1.3-inch) and 0.85-inch form factors in 2007, due to falling prices of flash memory, although Samsung introduced another 1.3-inch drive, the SpinPoint A1, in 2008.
The inch-based nicknames of all these form factors usually do not indicate any actual product dimension (which are specified in millimeters for more recent form factors), but roughly indicate a size relative to disk diameters, in the interest of historic continuity.

Central Processing Unit (CPU)

A Central Processing Unit (CPU) is a description of a class of logic machines that can execute computer programs. This broad definition can easily be applied to many early computers that existed long before the term "CPU" ever came into widespread usage. The term itself and its initialism have been in use in the computer industry at least since the early 1960s. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation has remained much the same.
Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are suited for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones to children's toys.



History of CPUs

Prior to the advent of machines that resemble today's CPUs, computers such as the ENIAC had to be physically rewired in order to perform different tasks. These machines are often referred to as "fixed-program computers," since they had to be physically reconfigured in order to run a different program. Since the term "CPU" is generally defined as a software (computer program) execution device, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.
The idea of a stored-program computer was already present during ENIAC's design, but was initially omitted so the machine could be finished sooner. On June 30, 1945, before ENIAC was even completed, mathematician John von Neumann distributed the paper entitled "First Draft of a Report on the EDVAC." It outlined the design of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory, rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the large amount of time and effort it took to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the computer's memory.
While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him, such as Konrad Zuse, had suggested similar ideas. Additionally, the so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design, using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.
Being digital devices, all CPUs deal with discrete states and therefore require some kind of switching elements to differentiate between and change these states. Prior to commercial acceptance of the transistor, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for various reasons. For example, building direct current sequential logic circuits out of relays requires additional hardware to cope with the problem of contact bounce. While vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational and eventually stop functioning altogether. Usually, when a tube failed, the CPU would have to be diagnosed to locate the failing component so it could be replaced. Therefore, early electronic (vacuum tube based) computers were generally faster but less reliable than electromechanical (relay based) computers.
Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.

Information from

http://en.wikipedia.org/wiki/CPU
