
How Computers Work (EMMA) Orientation

EMMA HS3 Advanced Hardware Outline

Week #5

How Graphics Cards Work

Basics

Four Main Components

Motherboard Connection – Interface to CPU and Power

Graphics Processing Unit – Decide what to do with each Pixel on screen

Memory – Holds info about each pixel

Frame Buffer – holds completed images

Dual Ported – reads and writes at the same time

Memory Type

BIOS Chip

Monitor Connection

VGA (CRT/LCD)

DVI (Digital Visual Interface)

Performance

Frame rate – Frames per Second (FPS) – number of complete images per second

Triangles or Vertices per second – Speed of Wire Frame Image

Pixel Fill Rate – Number of Pixels processed in a second

Hardware Specifications

GPU Clock speed (MHz)

Size of memory bus (bits)

Amount of available memory (MB)

Memory clock rate (MHz)

Memory bandwidth (GB/s)

RAMDAC speed (MHz)

Newegg Graphics Card Specifications

Chipset Manufacturer – ATI & nVidia

Interface – PCI, AGP, PCI Express x1, PCI Express x16, PCI Express 2.1 x16

GPU - FireGL, Radeon, GeForce, FireMV

Memory Size – Up to 1.792 GB

Pixel Pipelines vs. Stream Processors

Memory Interface – Up to 512 bit bus size on graphics board

Memory Type – DDR, GDDR2, GDDR3, GDDR4, GDDR5

Direct X requirements

I/O Connections – D-Sub, DVI, TV-Out, HDMI, ViVo

Max Resolution – up to 3840 x 2400

Crossfire Support – ATI paired graphics card support

SLI Support – Scalable Link Interface – nVidia – link two or more cards to single output

Cooler – Fan or Fanless, water cooled

HDCP - high-bandwidth digital-content protection

Documentation – How Graphics Cards Work

Graphics Card Specification Definitions, Graphics Card GDDR Memory

Homework -

Graphics Card Quiz

Wish List Graphics Card

Graphics Card GDDR Memory

RAM is also used on video cards to build the video memory circuit. Until recently, video memory used the same technology as the system RAM installed on the motherboard. High-end video cards, however, needed memory chips faster than the ones used in the PC, so manufacturers moved to DDR2 and DDR3 technologies.

DDR2 and DDR3 memories used on video cards have different characteristics than the DDR2 and DDR3 memories used in the PC – especially the voltage. That's the reason they are called GDDR2 and GDDR3 (the "G" comes from "Graphics").

In our DDR2 Memory Tutorial we explained the differences between DDR and DDR2 memories. As we mentioned there, one of the main differences is the voltage: while DDR works at 2.5 V, DDR2 works at 1.8 V. This leads to a lower power consumption and less heat.

GDDR2 memories continue to work at 2.5 V. Since they run at higher clock rates than DDR memories, they generate more heat. This is why only a few video cards used GDDR2 memory – only the GeForce FX 5700 Ultra and GeForce FX 5800 Ultra used this kind of memory. Shortly after the GeForce FX 5700 Ultra was released, many video card manufacturers released versions of it using GDDR3 memory, probably to reduce heat and power consumption.

GDDR3 memories can work at 2.0 V (Samsung chips) or at 1.8 V (chips from other manufacturers), solving the heat problem. This is the reason why this kind of memory is used by high-end video cards.

DDR3 memory has not been released for PCs yet, but it will probably work at 1.5 V, making it different from GDDR3 memory.

GDDR3 graphics memory is the next evolution of the high-speed DDR SDRAM technologies that have played a major role in enabling GPUs to drive complex geometries and character animations and deliver visual effects on par with the hottest motion pictures. GDDR3 graphics memory enables higher memory clock frequencies at a lower power level, with fewer components and fewer constraints on system designers.

The main advantage DDR2 has over good old DDR memory is that it runs at a lower voltage, which lowers the power requirements and allows it to scale higher with a small latency penalty.

GDDR3 (Graphics Double Data Rate 3) takes this one step further, requiring less voltage than DDR2 and scaling even further (though with some latency penalty). While the motherboard industry is making the transition from DDR to DDR2 memory, right now GDDR3 is only used on graphics cards.

How Graphics Cards Work

by Tracy V. Wilson and Jeff Tyson

The images you see on your monitor are made of tiny dots called pixels. At most common resolution settings, a screen displays over a million pixels, and the computer has to decide what to do with every one in order to create an image. To do this, it needs a translator -- something to take binary data from the CPU and turn it into a picture you can see. Unless a computer has graphics capability built into the motherboard, that translation takes place on the graphics card.
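
To make the "over a million pixels" figure concrete, here is a quick back-of-the-envelope check in Python; the resolutions are just common examples, not values from the article:

```python
# Rough pixel counts at a few common display resolutions (illustrative values only).
resolutions = {
    "1024 x 768 (XGA)": (1024, 768),
    "1280 x 1024 (SXGA)": (1280, 1024),
    "1600 x 1200 (UXGA)": (1600, 1200),
}

for name, (width, height) in resolutions.items():
    pixels = width * height
    print(f"{name}: {pixels:,} pixels")

# 1024 x 768 (XGA): 786,432 pixels
# 1280 x 1024 (SXGA): 1,310,720 pixels
# 1600 x 1200 (UXGA): 1,920,000 pixels
```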

A graphics card's job is complex, but its principles and components are easy to understand. In this article, we will look at the basic parts of a video card and what they do. We'll also examine the factors that work together to make a fast, efficient graphics card.

Graphics Card Basics

Think of a computer as a company with its own art department. When people in the company want a piece of artwork, they send a request to the art department. The art department decides how to create the image and then puts it on paper. The end result is that someone's idea becomes an actual, viewable picture.

Photo courtesy of HowStuffWorks Shopper: The four main components of a graphics card are connections for the motherboard and monitor, a processor, and memory.

A graphics card works along the same principles. The CPU, working in conjunction with software applications, sends information about the image to the graphics card. The graphics card decides how to use the pixels on the screen to create the image. It then sends that information to the monitor through a cable.

The Evolution of Graphics Cards

Graphics cards have come a long way since IBM introduced the first one in 1981. Called a Monochrome Display Adapter (MDA), the card provided text-only displays of green or white text on a black screen. Now, the minimum standard for new video cards is Video Graphics Array (VGA), which allows 256 colors. With high-performance standards like Quad Extended Graphics Array (QXGA), video cards can display millions of colors at resolutions of up to 2048 x 1536 pixels.

Creating an image out of binary data is a demanding process. To make a 3-D image, the graphics card first creates a wire frame out of straight lines. Then, it rasterizes the image (fills in the remaining pixels). It also adds lighting, texture and color. For fast-paced games, the computer has to go through this process about sixty times per second. Without a graphics card to perform the necessary calculations, the workload would be too much for the computer to handle. The graphics card accomplishes this task using four main components:

· A motherboard connection for data and power

· A processor to decide what to do with each pixel on the screen

· Memory to hold information about each pixel and to temporarily store completed pictures

· A monitor connection so you can see the final result
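
As a rough illustration of the per-frame work just described (build a wire frame, rasterize, then add lighting, texture and color, roughly sixty times per second), here is a minimal Python sketch. The function names are hypothetical placeholders, not a real graphics API:

```python
import time

def build_wireframe(scene):   # placeholder: turn the scene's geometry into triangles/lines
    return scene

def rasterize(wireframe):     # placeholder: fill in the remaining pixels
    return wireframe

def shade(pixels):            # placeholder: apply lighting, texture and color
    return pixels

def render_loop(scene, target_fps=60, max_frames=3):
    frame_time = 1.0 / target_fps
    for frame_number in range(max_frames):
        start = time.monotonic()
        frame = shade(rasterize(build_wireframe(scene)))
        # ...here the completed frame would be handed to the monitor connection...
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, frame_time - elapsed))   # pace the loop to ~60 FPS
        print(f"frame {frame_number} ready ({frame!r})")

render_loop("demo scene")
```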

The GPU

Like a motherboard, a graphics card is a printed circuit board that houses a processor and RAM. It also has an input/output system (BIOS) chip, which stores the card's settings and performs diagnostics on the memory, input and output at startup. A graphics card's processor, called a graphics processing unit (GPU), is similar to a computer's CPU. A GPU, however, is designed specifically for performing the complex mathematical and geometric calculations that are necessary for graphics rendering. Some of the fastest GPUs have more transistors than the average CPU. A GPU produces a lot of heat, so it is usually located under a heat sink or a fan.

In addition to its processing power, a GPU uses special programming to help it analyze and use data. ATI and nVidia produce the vast majority of GPUs on the market, and both companies have developed their own enhancements for GPU performance. To improve image quality, the processors use:

· Full scene anti-aliasing (FSAA), which smooths the edges of 3-D objects

· Anisotropic filtering (AF), which makes images look crisper

Each company has also developed specific techniques to help the GPU apply colors, shading, textures and patterns.

Integrated Graphics

Many motherboards have integrated graphics capabilities and function without a separate graphics card. These motherboards handle 2-D images easily, so they are ideal for productivity and Internet applications. Plugging a separate graphics card into one of these motherboards overrides the onboard graphics functions.

As the GPU creates images, it needs somewhere to hold information and completed pictures. It uses the card's RAM for this purpose, storing data about each pixel, its color and its location on the screen. Part of the RAM can also act as a frame buffer, meaning that it holds completed images until it is time to display them. Typically, video RAM operates at very high speeds and is dual ported, meaning that the system can read from it and write to it at the same time.
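
A toy sketch of the frame-buffer idea: the card draws into one buffer while the completed image in the other is shown, then the two are swapped. This is a simplified software analogy, not a model of the dual-ported memory hardware itself:

```python
class FrameBuffer:
    """Two in-memory 'images': draw into the back buffer, display the front one."""

    def __init__(self, width, height):
        self.front = [[0] * width for _ in range(height)]  # image being displayed
        self.back = [[0] * width for _ in range(height)]   # image being drawn

    def draw_pixel(self, x, y, color):
        self.back[y][x] = color                            # writes go to the back buffer

    def swap(self):
        self.front, self.back = self.back, self.front      # completed image becomes visible

fb = FrameBuffer(640, 480)
fb.draw_pixel(10, 20, 0xFF0000)  # draw while the previous frame is still on screen
fb.swap()                        # present the completed image
```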

The RAM connects directly to the digital-to-analog converter, called the DAC. This converter, also called the RAMDAC, translates the image into an analog signal that the monitor can use. Some cards have multiple RAMDACs, which can improve performance and support more than one monitor. You can learn more about this process in How Analog and Digital Recording Works.

The RAMDAC sends the final picture to the monitor through a cable. We'll look at this connection and other interfaces in the next section.

PCI Connection

ADC Connectors

At one time, Apple made monitors that used the proprietary Apple Display Connector (ADC). Although these monitors are still in use, new Apple monitors use a DVI connection.

Graphics cards connect to the computer through the motherboard. The motherboard supplies power to the card and lets it communicate with the CPU. Newer graphics cards often require more power than the motherboard can provide, so they also have a direct connection to the computer's power supply.

Connections to the motherboard are usually through one of three interfaces:

· Peripheral component interconnect (PCI)

· Advanced graphics port (AGP)

· PCI Express (PCIe)

PCI Express is the newest of the three and provides the fastest transfer rates between the graphics card and the motherboard. PCIe also supports the use of two graphics cards in the same computer.

Most graphics cards have two monitor connections. Often, one is a DVI connector, which supports LCD screens, and the other is a VGA connector, which supports CRT screens. Some graphics cards have two DVI connectors instead. But that doesn't rule out using a CRT screen; CRT screens can connect to DVI ports through an adapter.

Most people use only one of their two monitor connections. People who need to use two monitors can purchase a graphics card with dual head capability, which splits the display between the two screens. A computer with two dual head, PCIe-enabled video cards could theoretically support four monitors.

Photo courtesy of HowStuffWorks Shopper: This Radeon X800XL graphics card has DVI, VGA and ViVo connections.

In addition to connections for the motherboard and monitor, some graphics cards have connections for:

· TV display: TV-out or S-video

· Analog video cameras: ViVo or video in/video out

· Digital cameras: FireWire or USB

Some cards also incorporate TV tuners.

DirectX and Open GL

DirectX and Open GL are application programming interfaces, or APIs. An API helps hardware and software communicate more efficiently by providing instructions for complex tasks, like 3-D rendering. Developers optimize graphics-intensive games for specific APIs. This is why the newest games often require updated versions of DirectX or Open GL to work correctly.

APIs are different from drivers, which are programs that allow hardware to communicate with a computer's operating system. But as with updated APIs, updated device drivers can help programs run correctly.

Choosing a Good Graphics Card

A top-of-the-line graphics card is easy to spot. It has lots of memory and a fast processor. Often, it's also more visually appealing than anything else that's intended to go inside a computer's case. Lots of high-performance video cards are illustrated or have decorative fans or heat sinks.

But a high-end card provides more power than most people really need. People who use their computers primarily for e-mail, word processing or Web surfing can find all the necessary graphics support on a motherboard with integrated graphics. A mid-range card is sufficient for most casual gamers. People who need the power of a high-end card include gaming enthusiasts and people who do lots of 3-D graphic work.

Photo courtesy of HowStuffWorks Shopper: Some cards, like the ATI All-in-Wonder, include connections for televisions and video as well as a TV tuner.

A good overall measurement of a card's performance is its frame rate, measured in frames per second (FPS). The frame rate describes how many complete images the card can display per second. The human eye can process about 25 frames every second, but fast-action games require a frame rate of at least 60 FPS to provide smooth animation and scrolling. Components of the frame rate are:

· Triangles or vertices per second: 3-D images are made of triangles, or polygons. This measurement describes how quickly the GPU can calculate the whole polygon or the vertices that define it. In general, it describes how quickly the card builds a wire frame image.

· Pixel fill rate: This measurement describes how many pixels the GPU can process in a second, which translates to how quickly it can rasterize the image.
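
To put numbers on these measurements, here is a small worked example (illustrative figures, not benchmark data) of the pixel fill rate a card must sustain just to repaint every pixel at a given resolution and frame rate:

```python
def required_fill_rate(width, height, fps, overdraw=1.0):
    """Pixels per second needed to repaint the screen `fps` times a second.
    `overdraw` > 1 approximates pixels that are drawn more than once per frame."""
    return width * height * fps * overdraw

# 1280 x 1024 at 60 FPS, assuming each pixel is touched about twice per frame
rate = required_fill_rate(1280, 1024, 60, overdraw=2.0)
print(f"{rate / 1e6:.0f} Mpixels/s")   # ~157 Mpixels/s
```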

The graphics card's hardware directly affects its speed. These are the hardware specifications that most affect the card's speed and the units in which they are measured:

· GPU clock speed (MHz)

· Size of the memory bus (bits)

· Amount of available memory (MB)

· Memory clock rate (MHz)

· Memory bandwidth (GB/s)

· RAMDAC speed (MHz)
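
Some of these figures are related: memory bandwidth is roughly the bus width in bytes multiplied by the effective memory clock rate. A hedged worked example, with made-up but plausible numbers:

```python
def memory_bandwidth_gbps(bus_width_bits, effective_clock_mhz):
    """Approximate peak bandwidth in GB/s: bytes per transfer x transfers per second."""
    bytes_per_transfer = bus_width_bits / 8
    transfers_per_second = effective_clock_mhz * 1e6
    return bytes_per_transfer * transfers_per_second / 1e9

# e.g. a 256-bit bus with a 1000 MHz effective (DDR) memory clock
print(f"{memory_bandwidth_gbps(256, 1000):.1f} GB/s")  # 32.0 GB/s
```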

The computer's CPU and motherboard also play a part, since a very fast graphics card can't compensate for a motherboard's inability to deliver data quickly. Similarly, the card's connection to the motherboard and the speed at which it can get instructions from the CPU affect its performance.

Overclocking

Some people choose to improve their graphics card's performance by manually setting its clock speeds to a higher rate, a practice known as overclocking. People usually overclock the memory, since overclocking the GPU can lead to overheating. While overclocking can lead to better performance, it also voids the manufacturer's warranty.

Graphics Card Specification Definitions

The pixel pipeline was a component within 3D accelerators, most prominently prior to DirectX 9. The term encompasses one of a number of parallel processing pipelines within a graphics processing unit (GPU). Each pipeline processes pixel, texture, and frequently geometric data. Various GPUs had differing numbers of pixel pipelines, and larger numbers of these pipelines increased the pixel/texel per clock performance of the accelerator. This performance was measured in pixel and texture fill-rate. Real-time 3D rendering performance scales well with additional parallelism because of the nature of 3D graphics functions.

Every image you see on a screen is made up of thousands of pixels. A pixel is a single point within an image; on a computer display it is normally built from red, green and blue sub-pixels (the four-colour cyan, magenta, yellow and black model belongs to print rather than to screens). Pixels are associated with the screen resolution of your display, so if you were to play a game at a common resolution such as 1280x1024, your display would show 1280 pixels across the screen and 1024 pixels from top to bottom.

The pixel pipeline processes the pixel, texture and geometric data received from the Vertex Shaders. Different GPUs (Graphics Processing Unit) have different numbers of pixel pipelines, but as a rule of thumb, the more pipelines a graphics card has, the faster the card can process the data for rendering the images on-screen.

Each pixel is made up of a series of fragments, which are processed by the pixel shader according to calculations made by the vertex shader. Once each fragment is processed it is held in a buffer where it is built into a complete pixel by the Raster Operator unit.

Pixel shading is usually the most intensive part of the graphics rendering process on a modern GPU and so usually takes the most time.
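
As a rule-of-thumb sketch of the relationship described above, theoretical peak fill rate is roughly the number of pixel pipelines times the core clock; real cards fall short of this peak, and the figures below are invented for illustration:

```python
def peak_fill_rate_mpixels(pixel_pipelines, core_clock_mhz, pixels_per_pipe_per_clock=1):
    """Theoretical peak fill rate in Mpixels/s, assuming one pixel per pipeline per clock."""
    return pixel_pipelines * core_clock_mhz * pixels_per_pipe_per_clock

# e.g. a hypothetical GPU with 16 pixel pipelines at a 500 MHz core clock
print(peak_fill_rate_mpixels(16, 500), "Mpixels/s")  # 8000 Mpixels/s, i.e. 8 Gpixels/s
```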

Microsoft DirectX is a collection of application programming interfaces for handling tasks related to multimedia, especially game programming and video, on Microsoft platforms. Originally, the names of these APIs all began with Direct, such as Direct3D, DirectDraw, DirectMusic, DirectPlay, DirectSound, and so forth. DirectX, then, was the generic term for all of these Direct-something APIs, and that term became the name of the collection.

The High-Definition Multimedia Interface (HDMI) is a licensable audio/video connector interface for transmitting uncompressed, encrypted digital streams. HDMI connects DRM-enforcing digital audio/video sources, such as a set-top box, a Blu-ray Disc player, a PC running Windows Vista, a video game console, or an AV receiver, to a compatible digital audio device and/or video monitor, such as a digital television (DTV). HDMI began to appear in 2006 on HDTV camcorders and high-end digital still cameras.

CrossFire is a brand name for ATI Technologies' multi-GPU solution, which competes with rival nVidia's Scalable Link Interface (SLI). The technology allows a pair of graphics cards to be used in a single computer to improve graphics performance. Although only recently announced for consumer-level hardware, similar technology known as AMR has been used for some time in professional-grade cards for flight simulators and similar applications available from Evans & Sutherland. ATI had also previously released a similar dual RAGE 128 consumer card called the Fury MAXX.

The system requires a CrossFire-compliant motherboard with a pair of PCI Express (PCIe) graphics cards, which can be enabled via either hardware or software. Radeon x800s, x850s, x1800s and x1900s come in a 'CrossFire Edition' that has 'master' capability built into the hardware. One must buy a Master card, and pair it with a normal card from the same series.

Scalable Link Interface (SLI) is a brand name for a multi-GPU solution developed by NVIDIA for linking two (or more) video cards together to produce a single output. SLI is an application of parallel processing for computer graphics, meant to increase the processing power available for graphics. With SLI, it is possible to theoretically double the power of your graphics solution just by adding a second video card with an identical GPU. The name SLI was first used by 3dfx under the full name Scan-Line Interleave, which was introduced in 1998 and used in the Voodoo2 line of graphics accelerators. When 3dfx collapsed financially, its intellectual property was purchased by NVIDIA. NVIDIA later reintroduced the SLI name in 2004 and intends for it to be used in modern computer systems based on the PCI Express (PCIe) bus.

The basic idea of SLI is to allow two (or more) graphics processing units (or GPUs) to share the work load when rendering a 3D scene. Ideally, two identical graphics cards are installed in a motherboard that contains two PCI-Express x16 slots, set up in a master-slave configuration. Both cards are given the same part of the game (scene) to render, but effectively half of the work load is sent to the slave card through a connector dubbed the SLI Bridge. For example, in some cases the slave card will work on the bottom half of the screen. The slave then sends its rendered output to the master card, where it is incorporated into the master card's own image (in the frame buffer) and sent to the screen.
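
A toy sketch of that split-frame idea: divide each frame between two GPUs, let each render its share in parallel, then recombine the halves on the master card. The names here are hypothetical stand-ins, not NVIDIA's actual SLI interface:

```python
from concurrent.futures import ThreadPoolExecutor

def render_rows(gpu_name, scene, row_range):
    """Stand-in for one GPU rendering a horizontal band of the frame."""
    return [(gpu_name, row) for row in row_range]  # fake 'rendered' rows

def render_frame_sli(scene, height=480, split=0.5):
    cut = int(height * split)  # in real SLI the split adapts to the per-half workload
    with ThreadPoolExecutor(max_workers=2) as pool:
        top = pool.submit(render_rows, "master", scene, range(0, cut))
        bottom = pool.submit(render_rows, "slave", scene, range(cut, height))
        # the master combines both halves in its frame buffer and sends them to the screen
        return top.result() + bottom.result()

frame = render_frame_sli(scene="demo scene")
print(len(frame), "rows rendered")  # 480
```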

Vertex Shader

Vertices are points on a 3D map that are used to create the outlines of the images that you see within 3D games. Images are typically made up of many vertices, which are used to determine every object's position within the scene to be rendered. Once each object's location has been established on the map, the map is passed to the vertex shader.

The vertex shader is responsible for adding special effects to objects in a 3D environment. It does this by performing mathematical calculations on the objects' vertex data using an array of variables, such as each object's co-ordinates, colour and position in space. The vertex shader calculates the 3D aspects of a scene, such as colouring and lighting, and converts the data into a 2D map, which is passed to the pixel shader for further processing and rendering.
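
A minimal sketch of the 3D-to-2D step a vertex shader performs, using a simple perspective divide; this is a textbook illustration, not the shader model of any particular GPU:

```python
def project_vertex(x, y, z, focal_length=1.0):
    """Project a 3D point onto a 2D image plane (simple pinhole/perspective divide)."""
    if z <= 0:
        raise ValueError("vertex must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

# a triangle's three vertices in camera space -> their 2D image-plane positions
triangle = [(1.0, 1.0, 2.0), (-1.0, 1.0, 2.0), (0.0, -1.0, 4.0)]
print([project_vertex(*v) for v in triangle])
# [(0.5, 0.5), (-0.5, 0.5), (0.0, -0.25)]
```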

ROP Unit

The Raster Operator (ROP) unit handles the final transition from the pixel pipeline to the display by building the pixel fragments generated by the pixel pipeline into complete pixels. Most modern graphics cards have multiple ROP units. The ROP unit also optimises the display image to save memory bandwidth, for example when dealing with depth compression and colour comparison.

Stream Processor

Stream processors are a relatively new technology to be introduced to graphics cards. Essentially, stream processors can be allocated different processes to perform depending on what graphical environment is to be generated. For example, in indoor scenes the stream processors can be set as shaders, while in outdoor scenes the stream processors can be used to map vertices.

Stream processors are commonly used in the newer generations of graphics cards, replacing dedicated vertex shaders and pixel pipelines.

The above are the main elements that make up the graphics processing power of your graphics card. There are, however, other aspects of the card that can be set and controlled in software, such as anti-aliasing and anisotropic filtering. Depending on the game you are playing, and the more powerful your graphics card, the higher these additional settings can be pushed.

What is HDCP?

Short for high-bandwidth digital-content protection, a specification developed by Intel for protecting digital entertainment content that uses the DVI interface. HDCP encrypts the transmission of digital content between the video source, or transmitter -- such as a computer, DVD player or set-top box -- and the digital display, or receiver -- such as a monitor, television or projector. HDCP is not designed to prevent copying or recording of digital content but to protect the integrity of content as it is being transmitted.

Implementation of HDCP requires a license obtainable from the Digital Content Protection, LLC, which then issues a set of unique secret device keys to all authorized devices. During authentication, the receiver will only accept content once it demonstrates knowledge of the keys. Furthermore, to prevent eavesdropping and stealing of the data, the transmitter and receiver will generate a shared secret value that is consistently checked throughout the transmission. Once authentication is established, the transmitter encrypts the data and sends it to the receiver for decryption.
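
The shape of that handshake can be sketched at a very high level. The following toy model uses an ordinary HMAC as a stand-in for HDCP's real key exchange and cipher, purely to show the flow of proving key knowledge, deriving a shared value, and re-checking it during transmission; it is not the HDCP algorithm:

```python
import hashlib
import hmac
import os

DEVICE_KEY = b"issued-by-licensing-authority"  # placeholder; real devices receive unique key sets

def derive_shared_secret(nonce):
    # toy stand-in for HDCP's key-based computation of a shared value
    return hmac.new(DEVICE_KEY, nonce, hashlib.sha256).digest()

def handshake():
    nonce = os.urandom(16)                    # transmitter's challenge
    tx_secret = derive_shared_secret(nonce)   # computed by the transmitter
    rx_secret = derive_shared_secret(nonce)   # receiver proves it holds valid keys
    return tx_secret, rx_secret

tx, rx = handshake()
assert hmac.compare_digest(tx, rx)            # authentication succeeds

# during transmission the link is periodically re-verified
for frame_number in range(3):
    check = hmac.new(tx, str(frame_number).encode(), hashlib.sha256).hexdigest()[:8]
    print(f"frame {frame_number}: link check {check}")
```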

Video card

From Wikipedia, the free encyclopedia

A video card, video adapter, graphics accelerator card, display adapter, or graphics card is an expansion card whose function is to generate output images to a display. Many video cards offer added functions, such as accelerated rendering of 3D scenes and 2D graphics, video capture, TV-tuner adapter, MPEG-2/MPEG-4 decoding, FireWire, light pen, TV output, or the ability to connect multiple monitors (multi-monitor). Other modern high performance video cards are used for more graphically demanding purposes, such as PC games.

Video hardware can be integrated on the motherboard, as was common with early machines. In this configuration it is sometimes referred to as a video controller or graphics controller. Modern low-end to mid-range motherboards often include a graphics chipset developed by the developer of the northbridge (i.e. an nForce chipset with nVidia graphics or an Intel chipset with Intel graphics) on the motherboard. This graphics chip usually has a small quantity of embedded memory and takes some of the system's main RAM, reducing the total RAM available. This is usually called integrated graphics or on-board graphics, and is low-performance and undesirable for those wishing to run 3D applications. A dedicated graphics card, on the other hand, has its own RAM and processor specifically for processing video images, and thus offloads this work from the CPU and system RAM. Almost all of these motherboards allow the disabling of the integrated graphics chip in the BIOS, and have an AGP, PCI, or PCI Express slot for adding a higher-performance graphics card in place of the integrated graphics. Despite the performance limitations, around 95% of new computers are sold with integrated graphics processors, leaving it to the individual user to decide whether to install a dedicated graphics card.

History

Standard        Year  Text mode (columns × lines)  Graphics mode (resolution / colors)  Memory
MDA             1981  80×25                        -                                    4 KB
CGA             1981  80×25                        640×200 / 4                          16 KB
HGC             1982  80×25                        720×348 / 2                          64 KB
PGA             1984  80×25                        640×480 / 256                        320 KB
EGA             1984  80×25                        640×350 / 16                         256 KB
8514            1987  80×25                        1024×768 / 256                       -
MCGA            1987  80×25                        320×200 / 256                        -
VGA             1987  80×25                        640×480 / 16                         256 KB
SVGA (VBE 1.x)  1989  80×25                        800×600 / 256                        512 KB
                                                   640×480+ / 256+                      512 KB+
XGA             1990  80×25                        1024×768 / 256                       1 MB
XGA-2           1992  80×25                        1024×768 / 65,536                    2 MB
SVGA (VBE 3.0)  1998  132×60                       1280×1024 / 16.7M                    -

The first IBM PC video card, which was released with the first IBM PC, was developed by IBM in 1981. The MDA (Monochrome Display Adapter) could only work in text mode, representing 80 columns and 25 lines (80×25) on the screen. It had 4 KB of video memory and just one color.[1]

Starting with the MDA in 1981, several video cards were released, which are summarized in the attached table.[2][3][4][5]

VGA was widely accepted, which led some corporations such as ATI, Cirrus Logic and S3 to work with that video card, improving its resolution and the number of colours it used. This developed into the SVGA (Super VGA) standard, which reached 2 MB of video memory and a resolution of 1024×768 in 256-color mode.

In 1995 the first consumer 2D/3D cards were released, developed by Matrox, Creative, S3, ATI and others. These video cards followed the SVGA standard but incorporated 3D functions. In 1997, 3dfx released the Voodoo graphics chip, which was more powerful than other consumer graphics cards, introducing 3D effects such as mip mapping, Z-buffering and anti-aliasing into the consumer market. After this card, a series of 3D video cards were released, such as the Voodoo2 from 3dfx and the TNT and TNT2 from NVIDIA. The bandwidth required by these cards was approaching the limits of the PCI bus capacity. Intel developed AGP (Accelerated Graphics Port), which solved the bottleneck between the microprocessor and the video card. From 1999 until 2002, NVIDIA controlled the video card market (taking over 3dfx) with the GeForce family. The improvements made during this time were focused on 3D algorithms and graphics processor clock rate. Video memory was also increased to improve the data rate; DDR technology was incorporated, improving the capacity of video memory from 32 MB with the GeForce to 128 MB with the GeForce 4.

Since 2003, ATI and Nvidia have dominated the high performance video card market with their Radeon and GeForce lines respectively, sharing around 90% of the independent graphics card market and forcing other manufacturers into smaller, niche markets. Most PCs include Intel integrated video, making Intel the leading manufacturer in total volume of video solutions, but its chipsets are not presently incorporated on discrete video cards because of their low performance.

Components

A modern video card consists of a printed circuit board on which the components are mounted. These include:

Graphics processing unit (GPU)

A GPU is a dedicated processor optimized for accelerating graphics. The processor is designed specifically to perform floating-point calculations, which are fundamental to 3D graphics rendering. The main attributes of the GPU are the core clock frequency, which typically ranges from 250 MHz to 4 GHz, and the number of pipelines (vertex and fragment shaders), which translate a 3D image characterized by vertices and lines into a 2D image formed by pixels.

Modern GPUs are massively parallel and fully programmable. Their computing power is orders of magnitude higher than that of CPUs. As a consequence, they challenge CPUs in high-performance computing, leading manufacturers like Intel and AMD to integrate video, or massive parallelism, into their processors.

Video BIOS

The video BIOS or firmware contains the basic program, which is usually hidden, that governs the video card's operations and provides the instructions that allow the computer and software to interact with the card. It may contain information on the memory timing, operating speeds and voltages of the graphics processor, RAM, and other information. It is sometimes possible to change the BIOS (e.g. to enable factory-locked settings for higher performance), although this is typically only done by video card overclockers and has the potential to irreversibly damage the card.

Video memory

Type   Memory clock rate (MHz)  Bandwidth (GB/s)
DDR    166 - 950                1.2 - 30.4
DDR2   533 - 1000               8.5 - 16
GDDR3  700 - 2400               5.6 - 156.6
GDDR4  2000 - 3600              128 - 200
GDDR5  3400 - 5600              130 - 230

The memory capacity of most modern video cards ranges from 128 MB to 4 GB. Since video memory needs to be accessed by the GPU and the display circuitry, it often uses special high-speed or multi-port memory, such as VRAM, WRAM, SGRAM, etc. Around 2003, video memory was typically based on DDR technology. During and after that year, manufacturers moved towards DDR2, GDDR3, GDDR4, and even GDDR5, utilized most notably by the ATI Radeon HD 4870. The effective memory clock rate in modern cards is generally between 400 MHz and 3.8 GHz.

Video memory may be used for storing other data as well as the screen image, such as the Z-buffer, which manages the depth coordinates in 3D graphics, textures, vertex buffers, and compiled shader programs.

RAMDAC

The RAMDAC, or Random Access Memory Digital-to-Analog Converter, converts digital signals to analog signals for use by a computer display that uses analog inputs, such as a CRT display. The RAMDAC is a kind of RAM chip that regulates the functioning of the graphics card. Depending on the number of bits used and the RAMDAC data-transfer rate, the converter will be able to support different computer-display refresh rates. With CRT displays, it is best to work over 75 Hz and never under 60 Hz, in order to minimize flicker. (With LCD displays, flicker is not a problem.) Due to the growing popularity of digital computer displays and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component. All current LCDs, plasma displays and TVs work in the digital domain and do not require a RAMDAC. There are a few remaining legacy LCD and plasma displays that feature analog inputs (VGA, component, SCART, etc.) only. These require a RAMDAC, but they reconvert the analog signal back to digital before they can display it, with the unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion.
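
A rough worked example of how RAMDAC speed relates to resolution and refresh rate; the ~1.3 blanking-overhead factor is a common rule of thumb and an assumption here, not a figure from the article:

```python
def required_ramdac_mhz(width, height, refresh_hz, blanking_overhead=1.32):
    """Approximate pixel clock the RAMDAC must sustain, in MHz.
    blanking_overhead accounts for the off-screen portion of each CRT scan."""
    return width * height * refresh_hz * blanking_overhead / 1e6

# e.g. driving a CRT at 1600 x 1200 and 85 Hz needs a RAMDAC of roughly:
print(f"{required_ramdac_mhz(1600, 1200, 85):.0f} MHz")  # ~215 MHz
```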

Outputs

9-pin VIVO for S-Video (TV-out), DVI for HDTV, and DE-15 for VGA outputs.

The most common connection systems between the video card and the computer display are:

Video Graphics Array (VGA) (DE-15)


Analog-based standard adopted in the late 1980s designed for CRT displays, also called VGA connector. Some problems of this standard are electrical noise, image distortion and sampling error evaluating pixels.

Digital Visual Interface (DVI)


Digital-based standard designed for displays such as flat-panel displays (LCDs, plasma screens, wide high-definition television displays) and video projectors. In some rare cases, high-end CRT monitors also use DVI. It avoids image distortion and electrical noise by mapping each pixel from the computer to a display pixel, using the display's native resolution. It is worth noting that most manufacturers include a DVI-I connector, allowing (via a simple adapter) standard RGB signal output to an old CRT or LCD monitor with VGA input.

Video In Video Out (VIVO) for S-Video, Composite video and Component video

9-pin mini-DIN connector, frequently used for VIVO connections.

Included to allow the connection with televisions, DVD players, video recorders and video game consoles. They often come in two 9-pin mini-DIN connector variations, and the VIVO splitter cable generally comes with either 4 connectors (S-Video in and out + composite video in and out) or 6 connectors (S-Video in and out + component PB out + component PR out + component Y out [also composite out] + composite in).

High-Definition Multimedia Interface (HDMI)


An advanced digital audio/video interconnect released in 2003; it is commonly used to connect game consoles and DVD players to a display. HDMI supports copy protection through HDCP.

DisplayPort


An advanced license- and royalty-free digital audio/video interconnect released in 2007. DisplayPort is intended to replace VGA and DVI for connecting a display to a computer.

Other types of connection systems

Composite video

Analog system with lower resolution; it uses the RCA connector.

Component video

It has three cables, each with an RCA connector (YCbCr for digital component, or YPbPr for analog component); it is used in projectors, DVD players and some televisions.

DB13W3

An analog standard once used by Sun Microsystems, SGI and IBM.

DMS-59

A connector that provides two DVI outputs on a single connector.

Motherboard interface

Main articles: Bus (computing) and Expansion card

Chronologically, connection systems between video card and motherboard were, mainly:

· S-100 bus: designed in 1974 as a part of the Altair 8800, it was the first industry-standard bus for the microcomputer industry.

· ISA: Introduced in 1981 by IBM, it became dominant in the marketplace in the 1980s. It was an 8 or 16-bit bus clocked at 8 MHz.

· NuBus: Used in Macintosh II, it was a 32-bit bus with an average bandwidth of 10 to 20 MB/s.

· MCA: Introduced in 1987 by IBM, it was a 32-bit bus clocked at 10 MHz.

· EISA: Released in 1988 to compete with IBM's MCA, it was compatible with the earlier ISA bus. It was a 32-bit bus clocked at 8.33 MHz.

· VLB (VESA Local Bus): An extension of ISA, it was a 32-bit bus clocked at 33 MHz.

· PCI: Replaced the EISA, ISA, MCA and VESA buses from 1993 onwards. PCI allowed dynamic connectivity between devices, avoiding the manual adjustments required with jumpers. It is a 32-bit bus clocked at 33 MHz.

· UPA: An interconnect bus architecture introduced by Sun Microsystems in 1995. It had a 64-bit bus clocked at 67 or 83 MHz.

· USB: Mostly used for other types of devices, but there are USB displays.

· AGP: First used in 1997, it is a dedicated-to-graphics bus. It is a 32-bit bus clocked at 66 MHz.

· PCI-X: An extension of the PCI bus, it was introduced in 1998. It improves upon PCI by extending the width of bus to 64-bit and the clock frequency to up to 133 MHz.

· PCI Express: Abbreviated PCIe, it is a point-to-point interface released in 2004. In 2006 it provided double the data-transfer rate of AGP. It should not be confused with PCI-X, an enhanced version of the original PCI specification.

In the attached table is a comparison between a selection of the features of some of those interfaces.

Bus           Width (bits)  Clock rate (MHz)  Bandwidth (MB/s)  Style
ISA XT        8             4.77              8                 Parallel
ISA AT        16            8.33              16                Parallel
MCA           32            10                20                Parallel
EISA          32            8.33              32                Parallel
VESA          32            40                160               Parallel
PCI           32 - 64       33 - 100          132 - 800         Parallel
AGP 1x        32            66                264               Parallel
AGP 2x        32            66                528               Parallel
AGP 4x        32            66                1000              Parallel
AGP 8x        32            66                2000              Parallel
PCIe x1       1             2500 / 5000       250 / 500         Serial
PCIe x4       1 × 4         2500 / 5000       1000 / 2000       Serial
PCIe x8       1 × 8         2500 / 5000       2000 / 4000       Serial
PCIe x16      1 × 16        2500 / 5000       4000 / 8000       Serial
PCIe x16 2.0  1 × 16        5000 / 10000      8000 / 16000      Serial
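
The PCIe rows in the table can be reproduced from the per-lane signalling rate: a first-generation lane runs at 2.5 GT/s with 8b/10b encoding (10 bits on the wire per data byte), and PCIe 2.0 doubles the rate. A short check of those figures:

```python
def pcie_bandwidth_mb_s(lanes, transfer_rate_gt_s, encoding_bits_per_byte=10):
    """Usable bandwidth in MB/s for a PCIe link using 8b/10b encoding."""
    bytes_per_second_per_lane = transfer_rate_gt_s * 1e9 / encoding_bits_per_byte
    return lanes * bytes_per_second_per_lane / 1e6

print(pcie_bandwidth_mb_s(1, 2.5))    # 250.0  -> PCIe x1
print(pcie_bandwidth_mb_s(16, 2.5))   # 4000.0 -> PCIe x16
print(pcie_bandwidth_mb_s(16, 5.0))   # 8000.0 -> PCIe x16 2.0
```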

Cooling devices

Video cards may use a lot of electricity, which is converted into heat. If the heat isn't dissipated, the video card could overheat and be damaged. Cooling devices are incorporated to transfer the heat elsewhere. Three types of cooling devices are commonly used on video cards:

· Heat sink: a heat sink is a passive-cooling device. It conducts heat away from the graphics card's core, or memory, by using a heat-conductive metal (most commonly aluminum or copper); sometimes in combination with heat pipes. It uses air (most common), or in extreme cooling situations, water (see water block), to remove the heat from the card. When air is used, a fan is often used to increase cooling effectiveness.

· Computer fan: an example of an active-cooling part. It is usually used with a heat sink. Due to the moving parts, a fan requires maintenance and possible replacement. The fan speed or the fan itself can be changed for more efficient or quieter cooling.

· Water block: a water block is a heat sink suited to using water instead of air. It is mounted on the graphics processor and is hollow inside. Water is pumped through the water block, transferring the heat into the water, which is then usually cooled in a radiator. This is the most effective cooling solution without extreme modification.

Power demand

As the processing power of video cards has increased, so has their demand for electrical power. Present fast video cards tend to consume a great deal of power. While CPU and power supply makers have recently moved toward higher efficiency, the power demands of GPUs have continued to rise, so the video card may be the biggest electricity user in a computer. Although power supplies are increasing their output too, the bottleneck is the PCI Express connection, which is limited to supplying 75 watts. Modern video cards with a power consumption over 75 watts usually include a combination of six-pin (75 W) or eight-pin (150 W) sockets that connect directly to the power supply via a Molex connector to supplement power.
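
A short calculation of the power budget implied by those figures (slot limit plus supplementary connectors); the connector combinations are just examples:

```python
SLOT_WATTS = 75                                # PCI Express slot limit mentioned above
CONNECTOR_WATTS = {"6-pin": 75, "8-pin": 150}  # supplementary power connectors

def available_power(*connectors):
    """Total power a card can draw from the slot plus its supplementary connectors."""
    return SLOT_WATTS + sum(CONNECTOR_WATTS[c] for c in connectors)

print(available_power())                  # 75 W  (slot only)
print(available_power("6-pin"))           # 150 W
print(available_power("6-pin", "8-pin"))  # 300 W
```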