Introduction to High-end Graphics Cards
The importance of the graphics card in a proper high-end (simulator/gaming) set-up cannot be overstated; it can make or break the system’s performance. The card not only generates the video signal, it also takes care of the specialized rendering computations that would otherwise fall to the main CPU. Its architecture is designed with these rendering tasks in mind, resulting in a component that is completely optimized for the purpose, so its rendering performance cannot be matched even by the fastest CPUs. Simulation and gaming software depends heavily on graphics card performance, so this is not the component on which to economize. Let’s take a look at the key elements that set the cards apart.
Graphics processing unit (GPU)
The heart of a graphics card is the Graphics Processing Unit (GPU), which specializes in floating point operations, the cornerstone of rendering calculations. Two issues are of concern here.
The first is the GPU's clock frequency: as with every other processing unit, the higher the better. The second is the architecture and number of pipelines. A pipeline translates a 3D scene, described by vertices and polygons, into a 2D image formed by pixels. Pipelines operate in parallel, so having many of them helps a great deal when it comes to achieving fast rendering times.
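As a rough back-of-the-envelope sketch (not a vendor formula), the interplay of clock frequency and pipeline count can be illustrated by estimating theoretical pixel throughput, assuming each pipeline outputs one pixel per clock; the figures used are hypothetical:

```python
def fill_rate_mpixels(pipelines: int, core_clock_mhz: float) -> float:
    """Rough theoretical fill rate: one pixel per pipeline per clock cycle."""
    return pipelines * core_clock_mhz  # megapixels per second

# A hypothetical card with 16 pipelines at 500 MHz:
print(fill_rate_mpixels(16, 500))  # 8000.0 megapixels/s, i.e. 8 gigapixels/s
```

Real cards fall short of this ideal figure, but the calculation shows why doubling the pipeline count matters as much as raising the clock.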
The performance of the GPU depends not only on its clock frequency; memory access time is also a critical parameter in the chain that defines overall performance. For this reason, graphics cards carry their own memory, which the GPU can access directly. The more memory available to the GPU, the better the overall performance will be.
It is also crucial that the GPU can transfer massive amounts of data at high speed. The two most important figures here are the width of the memory interface in bits and the memory clock frequency. The memory bandwidth, the product of these two, is expressed in GB per second.
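The relationship between these two figures can be made concrete. A minimal sketch, assuming the quoted clock is the effective (data-rate) clock and using an example 256-bit interface that is purely illustrative:

```python
def memory_bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth: bytes moved per transfer times transfers per second."""
    bytes_per_transfer = bus_width_bits / 8
    transfers_per_second = effective_clock_mhz * 1e6
    return bytes_per_transfer * transfers_per_second / 1e9  # GB/s

# A hypothetical 256-bit memory interface at an effective 2000 MHz:
print(memory_bandwidth_gbs(256, 2000))  # 64.0 GB/s
```

This is why a card with a wide memory bus can outperform one with a faster memory clock but a narrow bus.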
The bus interface connects the graphics card to the main CPU system bus. Overall performance increases with the available bandwidth of this interface. To date, the most important bus interfaces are PCI, AGP and PCI Express (PCI-E). In this section, we will focus only on cards with a PCI-E interface, as it has been widely adopted by PC manufacturers and is also the fastest. Of course, your PC must be equipped with a PCI-E expansion slot to use this type of graphics card.
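To give a feel for the numbers involved, the peak bandwidth of a PCI-E link can be estimated from its lane count and per-lane signalling rate. The sketch below uses the first-generation PCI-E figures (2.5 GT/s per lane with 8b/10b encoding), which are well established; later generations raise the per-lane rate:

```python
def pcie_bandwidth_gbs(lanes: int, gt_per_s: float,
                       encoding_efficiency: float = 8 / 10) -> float:
    """Peak PCI-E bandwidth per direction for a given link width.

    8b/10b line encoding means only 8 of every 10 transmitted bits
    carry payload data, hence the efficiency factor.
    """
    payload_bits_per_s = lanes * gt_per_s * 1e9 * encoding_efficiency
    return payload_bits_per_s / 8 / 1e9  # GB/s

# A first-generation PCI-E x16 link: 2.5 GT/s per lane
print(pcie_bandwidth_gbs(16, 2.5))  # 4.0 GB/s per direction
```

A graphics card in a x16 slot therefore has an order of magnitude more bus bandwidth available than older interfaces could offer.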
Multi monitor support
All of the mainstream graphics cards that are suitable for use with simulation software can drive two monitors independently. There are also graphics cards on the market that can drive up to six displays. The maximum supported resolutions vary between manufacturers.
Some graphics cards are also capable of driving a TV set. This might be appealing when it comes to modern HD televisions, but most perform poorly for simulation because of the image-enhancing algorithms performed by the set. These algorithms are adequate for television but not for PC video signals: they often introduce some time lag, which would be disastrous for most simulation purposes. What’s more, televisions aren’t designed for short viewing distances.
There are three important video interfaces that are used to connect the monitor to the graphics card: DVI-I (digital and analogue), VGA (analogue) and DisplayPort (digital). Most modern systems use the DVI-I connector. If there is a mismatch between the connector on the graphics card and the one on the display system, a connector adapter must be used to remedy the problem.
Two parallel graphics cards
Both AMD and NVidia have developed ways to enable two graphics cards to work together in parallel on the same image. In this configuration, performance can theoretically be doubled, although real-world gains are usually smaller. AMD calls its version CrossFireX, while NVidia’s is called SLI (Scalable Link Interface). With AMD, the cards don’t have to use the same GPU version; with NVidia, they must. The NVidia solution also requires an SLI-compliant motherboard.
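Why the theoretical two-fold gain is rarely reached can be sketched with a simple Amdahl's-law style model: only the share of frame time that actually parallelizes across the two cards scales, while setup, driver and synchronization overhead does not. The 90% figure below is purely illustrative:

```python
def multi_gpu_speedup(parallel_fraction: float, num_gpus: int) -> float:
    """Amdahl's-law estimate: only the parallelizable share of frame
    time is divided across the GPUs; the rest runs at single-card speed."""
    serial_fraction = 1 - parallel_fraction
    return 1 / (serial_fraction + parallel_fraction / num_gpus)

# If 90% of the per-frame work parallelizes across two cards:
print(round(multi_gpu_speedup(0.9, 2), 2))  # 1.82
```

So even a well-behaved title ends up closer to an 80% gain than a full doubling, which is worth weighing against the cost of a second card.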