Microprocessor Tutorial: Pentium 4

Transcript of a PowerPoint presentation (34 slides).

Page 1

1) 1 MB L2 Cache

2) Floating Point Unit

3) Schedulers

4) Allocator

5) Arithmetic & Logic Unit

6) 16 KB Data Cache

7) Microcode ROM

8) Trace Cache

9) Data Translation Buffer

10) Branch Prediction Unit

11) Instruction TLB

Microprocessor Tutorial

Pentium 4

Page 2

Knowledge Dimension (Bloom's Revised Taxonomy)

Factual Knowledge: The basic elements students must know to be acquainted with a discipline or solve problems in it.

- Technical vocabulary; knowledge of specific details and elements

Conceptual Knowledge: The interrelationships among the basic elements within a larger structure that enable them to function together.

- Knowledge of theories, models, and structures

Procedural Knowledge: How to do something, methods of inquiry, and criteria for using skills, algorithms, techniques, and methods.

- Knowledge of criteria for determining when to use appropriate procedures

Page 3

[Block diagram (from p. 6 of the text): CPU (Control Unit, Data Path), Memory, Input/Output (keyboard, mouse, disk drive), with binary data flowing between them]

Case Study: Microprocessors

Factual Knowledge

• Disk drives are approximately 1000 times slower than RAM or flash drives at transferring data to the CPU

•L3 Cache is new to Quad Core processors

Conceptual Knowledge

• L2 cache, on the CPU chip but outside the core itself, is used to stage data so that the CPU doesn't have to wait for data transfers

Procedural Knowledge

• L1 cache is also on the CPU, located right next to the hardware that needs it. Pre-staging data into sections of the CPU before it is needed speeds up processing dramatically

[Diagram label: L2/L3 cache, ~1000× faster]
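To make the slide's latency claims concrete, here is a minimal Python sketch (not from the original deck) that models average access time for a memory hierarchy; all latency numbers and hit rates are hypothetical, order-of-magnitude assumptions.

```python
# Illustrative model of a memory hierarchy: average access time
# falls dramatically as more requests hit the faster levels.
# All latencies are hypothetical, order-of-magnitude values.
LATENCY_NS = {
    "L1": 1,          # on-core, next to the hardware that needs it
    "L2": 10,         # on-chip, outside the core
    "RAM": 100,       # main memory
    "disk": 100_000,  # ~1000x slower than RAM, per the slide
}

def average_access_ns(hit_rates):
    """Expected access time given the fraction of accesses
    served by each level (fractions must sum to 1.0)."""
    return sum(LATENCY_NS[level] * frac for level, frac in hit_rates.items())

if __name__ == "__main__":
    with_cache = {"L1": 0.90, "L2": 0.08, "RAM": 0.019, "disk": 0.001}
    without_cache = {"RAM": 0.999, "disk": 0.001}
    print(f"with caches:    {average_access_ns(with_cache):8.1f} ns")
    print(f"without caches: {average_access_ns(without_cache):8.1f} ns")
```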

Page 4

Case Study: Microprocessors (continued)

These "dynamic branch prediction" algorithms usually involve the use of either one or both of two types of tables,

• Branch History Table (BHT) • Branch Target Buffer (BTB),

Factual Knowledge

• The Branch Prediction Unit is used to decide whether the code will take a branch in future CPU operations.

Conceptual Knowledge

• The BPU loads that code ahead of time, bringing future branches into cache to save time.

Procedural Knowledge

• Dynamic prediction uses the branch prediction table to load possible branch instructions into L2 cache, saving approximately 90% of the CPU time wasted by older units without the feature

Page 5

Case Study: Microprocessors (continued)

The P4's long pipeline means that bubbles take a long time to propagate off the CPU, so a single bubble results in a lot of wasted cycles.

Intel Longer Pipeline

AMD Shorter Pipeline

Factual Knowledge

• Pipelines are used to stage data.

Conceptual Knowledge

• When operations are staged in the pipeline, the steps to be performed are ready immediately when needed (i.e., all steps for a process are available when the CPU is ready)

Procedural Knowledge

• Philosophically, the debate over longer vs. shorter pipelines is ongoing. Longer pipelines, as in Intel chips, can stage longer operations.

Page 6

Factual Knowledge

Photolithography (also optical lithography) is a process used in microfabrication to selectively remove parts of a thin film (or the bulk of a substrate). It uses light to transfer a geometric pattern from a photomask to a light-sensitive chemical (photoresist, or simply "resist") on the substrate. A series of chemical treatments then engraves the exposure pattern into the material underneath the photoresist.

Light sources: Historically, photolithography has used ultraviolet light from gas-discharge lamps using mercury, sometimes in combination with noble gases such as xenon. These lamps produce light across a broad spectrum with several strong peaks in the ultraviolet range. This spectrum is filtered to select a single spectral line.

• The templates currently in use are 90 and 45 nanometers (a nanometer is one billionth of a meter)

Case Study: Microprocessors (continued)

Page 7

End of Tutorial

Page 8

Page 9

[Diagram: 64-bit data bus]

Page 10

Pentium Dual Core Block Diagram: The dual-core Pentium 4 is essentially two complete Prescott cores on a single die, sharing a connection to the front-side bus.

• 230 million transistors, built on 90 nm process technology

• The 840 will support Intel's EM64T 64-bit technology

Page 11

Pentium Dual Core (D) & Model 840 Extreme Edition

• Note that each of the two CPU cores shares a front-side bus to the memory controller hub (MCH) of the 945/955X chipset.

• The dual-core runs at 3.2 GHz

• Processor architects need to design for particular thermal envelopes.

• Two cores obviously generate twice as much heat as a single-core CPU of the same type at the same clock rate, so it's not surprising that the high-end dual-core CPU clocks more slowly than a high-end CPU with only one processor core.

Additionally, the 840 will support Intel's EM64T 64-bit technology.

Page 12

Hyper-Threading Technology (HTT) is Intel's trademark for their implementation of the simultaneous multithreading technology on the Pentium 4 micro-architecture. It is basically a more advanced form of Super-threading that first debuted on the Intel Xeon processors and was later added to Pentium 4 processors. The technology improves processor performance under certain workloads by providing useful work for execution units that would otherwise be idle, for example during a cache miss.

Security: In May 2005, Colin Percival presented a paper, Cache Missing for Fun and Profit, demonstrating that a malicious thread operating with limited privileges can, in theory, monitor the execution of another thread, allowing for the possible theft of cryptographic keys.

Hyper-Threading works by duplicating certain sections of the processor (those that store the architectural state) but not duplicating the main execution resources.

Except for its performance implications, this innovation is transparent to operating systems and programs. All that is required to take advantage of Hyper-Threading is symmetric multiprocessing (SMP) support in the operating system.
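To illustrate the "useful work for otherwise-idle execution units" idea, here is a toy Python model, not from the deck, and closer to switch-on-stall multithreading than true SMT: a second thread fills the cycles that a cache miss would otherwise waste. Instruction names and the miss penalty are hypothetical.

```python
# Toy model of the idea behind Hyper-Threading: when one thread
# stalls (e.g. on a cache miss), the issue slot is filled with work
# from another thread instead of sitting idle. Real SMT shares
# execution units within a single cycle; this sketch only switches
# between threads on stalls.
def run(threads, miss_penalty=5):
    """threads: lists of instruction names; "MISS" marks a cache miss."""
    done = [0] * len(threads)           # next instruction index per thread
    stalled_until = [0] * len(threads)  # cycle at which each thread wakes up
    cycle = 0
    while any(done[t] < len(threads[t]) for t in range(len(threads))):
        cycle += 1
        for t, stream in enumerate(threads):
            if done[t] < len(stream) and stalled_until[t] <= cycle:
                insn = stream[done[t]]
                done[t] += 1
                if insn == "MISS":      # the missing load stalls this thread
                    stalled_until[t] = cycle + miss_penalty
                print(f"cycle {cycle}: thread {t} issues {insn}")
                break                   # one issue slot per cycle in this toy
    print(f"all threads finished in {cycle} cycles")

# Hypothetical streams; without the second thread, the cache miss
# would leave the pipeline idle for miss_penalty cycles.
run([["a1", "MISS", "a2"], ["b1", "b2", "b3"]])
```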

Page 13

The future of Hyper-Threading: As of 2006, the future of Hyper-Threading does not look bright. With Intel shifting its priority in processor design to energy efficiency and multi-core rather than single-core performance, Hyper-Threading seems to be a dead end.

Hyper-Threading consumes about the same amount of power as an additional processor core, even though adding the technology only offers a fraction of the performance increase compared to adding another core to a processor.

This is demonstrated by the difference between the mainstream Pentium D, which does not support Hyper-Threading, and the top-of-the-line Pentium Extreme Edition, which does.

Page 14

Basic Instruction Flow: ( In General )

• When instructions are fetched from the cache or main memory, they must be decoded and dispatched for execution.

• This fetching, decoding and dispatching takes place in the processor's front end.

Basic 4-stage pipeline:

1) Cache (Fetch)

2) Front End (Decode)

3) Execution Engine (Execute): where the work of number crunching gets done

4) Write back to Main Memory
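As a rough illustration of the four-stage flow above, the following Python sketch (a toy model, not anything from the deck, with made-up instruction names) steps a few instructions through the fetch/decode/execute/write-back stages one clock at a time.

```python
# Minimal sketch of the basic 4-stage pipeline: one instruction
# advances one stage per clock, so several are in flight at once.
STAGES = ["Fetch", "Decode", "Execute", "Write-back"]

def simulate(program):
    pipeline = [None] * len(STAGES)                 # one slot per stage
    total_cycles = len(program) + len(STAGES) - 1   # fill time + drain time
    for clock in range(1, total_cycles + 1):
        pipeline = [None] + pipeline[:-1]           # everything moves one stage
        if clock <= len(program):
            pipeline[0] = program[clock - 1]        # fetch the next instruction
        state = ", ".join(f"{s}: {i}" for s, i in zip(STAGES, pipeline) if i)
        print(f"cycle {clock}: {state}")

# Hypothetical 3-instruction stream: finishes in 3 + 4 - 1 = 6 cycles,
# versus 12 cycles if each instruction had to finish before the next began.
simulate(["add", "load", "mul"])
```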

Page 15

Complicated instructions require multiple clock cycles to complete.

To accommodate these multi-cycle instructions, the different functional units have their own EXECUTE pipelines.

Multi-Cycle Pipeline Instructions ( In General )

Page 16

These "dynamic branch prediction" algorithms usually involve the use of either one or both of two types of tables,

• the Branch History Table (BHT) • and the Branch Target Buffer (BTB),

Fast ALU Scheduler - Schedules Arithmetic-Logic Unit operations (simple integer and logical ops) for the P4's two double-pumped ALUs, which run at twice the main core's clock speed.

Slow ALU/General FPU Scheduler - Schedules the rest of the ALU functions and most of the floating-point functions.
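One way to picture the split described above is two queues feeding different execution pipes. The sketch below is a hypothetical illustration, with made-up µop names, of routing decoded µops to a fast or slow scheduler queue.

```python
# Hypothetical sketch of the P4-style split between the fast ALU
# scheduler (simple integer/logic µops, feeding the double-pumped
# ALUs) and the slow ALU / general FPU scheduler (everything else).
from collections import deque

FAST_OPS = {"add", "sub", "and", "or", "xor", "cmp"}  # simple integer/logic

fast_queue = deque()  # feeds the two double-pumped ALUs (2x core clock)
slow_queue = deque()  # feeds the slow ALU and most floating-point units

def schedule(uop):
    """Route a decoded µop to the matching scheduler queue."""
    (fast_queue if uop in FAST_OPS else slow_queue).append(uop)

# Hypothetical µop stream mixing simple integer ops with FP ops.
for uop in ["add", "fmul", "cmp", "fdiv", "xor", "shift"]:
    schedule(uop)

print("fast ALU scheduler:", list(fast_queue))      # add, cmp, xor
print("slow ALU/FPU scheduler:", list(slow_queue))  # fmul, fdiv, shift
```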

Page 17

• While some processors still have the classic four-stage pipeline described above, most modern CPUs are more complicated.

• The P4, with 20 stages in its basic pipeline, takes this tactic to the extreme.

• The P4 takes a "narrow and deep" approach to moving through the instruction stream.

• The fact that each functional unit has a very deep pipeline means that each unit has a large number of available execution slots and can thus work on quite a few instructions at once.

[Figure 3.1]

The P4's long pipeline means that bubbles take a long time to propagate off the CPU, so a single bubble results in a lot of wasted cycles.
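Since a bubble wastes roughly as many cycles as the pipeline is deep, the penalty can be estimated with simple arithmetic. The figures in this sketch (pipeline depths, bubble rate, ideal CPI) are hypothetical, chosen only to show the trend.

```python
# Back-of-the-envelope cost of pipeline bubbles: a bubble wastes
# roughly as many cycles as the pipeline is deep.
def effective_cpi(base_cpi, pipeline_depth, bubbles_per_instruction):
    """Average cycles per instruction once bubble penalties are included."""
    return base_cpi + pipeline_depth * bubbles_per_instruction

# Hypothetical figures: ideal CPI of 1.0, one bubble per 50 instructions.
for name, depth in [("shorter pipeline (10 stages)", 10),
                    ("P4-style pipeline (20 stages)", 20)]:
    print(f"{name}: effective CPI = {effective_cpi(1.0, depth, 1 / 50):.2f}")
```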

Page 18

Stages 1 and 2 - Instruction Fetch:

• These two stages are both dedicated primarily to grabbing an instruction from the L1 cache. The processor can fetch four instructions per clock cycle from the L1 cache and send them on to the next stage. Hopefully, the needed instructions are in the L1 cache.

• If they aren't in the L1 cache, then the processor has to hit the much slower L2 cache to find them, which can add up to 9 cycles of delay to the instruction pipeline.

Stage 3 - Decode/Dispatch: Once an instruction has been fetched, it goes into a 12-entry instruction queue to be decoded.

• The decoding stage is where the processor determines what kind of instruction it's looking at and where the instruction should be sent for execution. Once the instruction is decoded, it is dispatched to the proper issue queue. The decoder can dispatch up to three instructions per clock cycle to the next stage.

Stage 4 - Issue: There are three issue queues for the three main types of instructions that the processor can execute.

• The first queue is the Floating-Point Issue Queue (FIQ), which holds floating-point (FP) instructions that are waiting to be executed.

• The second queue is the Vector Issue Queue (VIQ), which holds vector (or AltiVec) operations.

• The third queue is the General Instruction Queue (GIQ), which holds everything else.

Once an instruction leaves its issue queue, it goes to the execution engine to be executed.

Stage 5 - Execute: This stage is pretty straightforward. Here, instructions can pass out of order from their issue queues into their respective functional units and be executed.

Stages 6 and 7 - Complete and Write-Back: In these two stages, the instructions are put back into program order (the order in which they came into the processor), and their results are written back to memory.

Page 19

Branch processing and prediction

• If you look at the left side of the front end, you'll see a unit attached to the Instruction Fetch and Decode/Dispatch pipeline stages.

• This is the branch processing unit (BPU), an execution unit which acts as the rudder that steers the front end (and, behind it, the rest of the processor).

• In older processors, the entire processor would just sort of sit idle and wait for the branch condition to be evaluated, a wait that could be quite long if the evaluation involved a complex calculation of some sort.

• Modern processors use a technique called "speculative execution," which involves making an educated guess at which direction the branch is going to take and then beginning execution at the new branch target before the branch's conditional is actually evaluated.

There are two main types of branch prediction: static prediction and dynamic prediction.

• Static branch prediction is simple, and relies on the assumption that the majority of backward-pointing branches occur in the context of repetitive loops, where a branch instruction is used to determine whether or not to repeat the loop again.

• Most of the time, a loop's conditional will evaluate to "taken," thereby instructing the machine to repeat the loop's code one more time. This being the case, static branch prediction merely assumes that all backward branches are "taken."

• For a branch that points forward to a block of code that comes later in the program, the static predictor assumes that the branch is "not taken."

• Static prediction is very fast, as it doesn't involve any table lookups or calculations, but its success rate varies widely with the program's instruction mix.

• If the program is full of loops, static prediction works OK; if it's not, static branch prediction performs quite poorly.
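The "backward taken, forward not taken" rule described above fits in a couple of lines. This sketch is illustrative only; the branch and target addresses are hypothetical.

```python
# Static branch prediction: assume backward branches (loops) are
# taken and forward branches are not taken (the "BTFN" heuristic).
def predict_static(branch_pc, target_pc):
    """Return True if the branch is predicted taken."""
    return target_pc < branch_pc   # backward branch => likely a loop => taken

# Hypothetical branches: (address of the branch, address it jumps to)
print(predict_static(0x400, 0x3F0))  # True: backward, loop-closing branch
print(predict_static(0x400, 0x480))  # False: forward, e.g. skipping an if-block
```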

Page 20

To get around the problems associated with static prediction, computer architects use a variety of algorithms for predicting branches.

These "dynamic branch prediction" algorithms usually involve the use of either one or both of two types of tables,

•the Branch History Table (BHT) •and the Branch Target Buffer (BTB),

to record information about the outcomes of branches that've already been executed.

• The BHT stores each conditional branch that the BPU has encountered in its last few cycles, along with some bits that indicate the likelihood that the branch will be taken based on its past history.

• In a regular 2-bit branch history scheme, branches are marked as "strongly taken," "taken," "not taken," or "strongly not taken."

•When the front end encounters a branch instruction that has an entry in its BHT, the branch predictor uses branch history information to decide whether or not to speculatively execute the branch. 

• Should the branch predictor decide to speculatively execute the branch, it needs to know exactly where in the L1 cache the branch is pointing; in other words, it needs a branch target.

• The Branch Target Buffer (BTB) stores the branch targets of previously executed branches, so when a branch is taken, the BPU grabs the speculative branch target from the BTB and points the front end to begin fetching instructions from that address.

• The branch predictor has a success rate of around 91%.

Branch Prediction ( Continued )
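Here is a minimal sketch of the 2-bit scheme described above, with a BHT of saturating counters and a BTB of branch targets. The table size, indexing, and addresses are simplified assumptions, not details from the slides.

```python
# Sketch of 2-bit dynamic branch prediction: a Branch History Table
# (BHT) of saturating counters plus a Branch Target Buffer (BTB).
# Counter states: 0 = strongly not taken, 1 = not taken,
#                 2 = taken, 3 = strongly taken.
BHT_SIZE = 1024   # hypothetical table size
bht = {}          # BHT index -> 2-bit counter
btb = {}          # branch address -> last known branch target

def predict(pc):
    """Return (predicted taken?, predicted target) for the branch at pc."""
    counter = bht.get(pc % BHT_SIZE, 1)      # default: "not taken"
    return counter >= 2, btb.get(pc)

def update(pc, taken, target):
    """Train the tables with the branch's actual outcome."""
    idx = pc % BHT_SIZE
    counter = bht.get(idx, 1)
    bht[idx] = min(counter + 1, 3) if taken else max(counter - 1, 0)
    if taken:
        btb[pc] = target

# A hypothetical loop branch: taken 9 times, then falls through once.
hits = 0
for outcome in [True] * 9 + [False]:
    guess, _ = predict(0x400)
    hits += guess == outcome
    update(0x400, outcome, 0x3F0)
print(f"correct predictions: {hits}/10")  # the predictor warms up after one miss
```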

Page 21

Normal x86 processor's critical execution path

In a conventional x86 processor like the PIII or the Athlon, x86 instructions make their way from the instruction cache into the decoder, where they're broken down into multiple smaller, more uniform, more easily managed instructions called µops.

Pentium 4

• For a block of code that's executed thousands and thousands of times, the number of cycles spent repeatedly translating and decoding the same group of instructions can add up quickly.

• The P4 reclaims those lost cycles by removing the need to translate x86 instructions into µops each time they're executed.

• The P4's instruction cache takes translated, decoded µops that are primed and ready to be sent straight out to the OOO execution engine, and arranges them into little mini-programs called "traces."

Page 22

• As the front end executes the stored traces, the trace cache sends up to 3 µops per cycle directly to the OOO execution engine, without the need for them to pass through any translation or decoding logic.

Trace Cache ( cont)

• Only when there's an L1 cache miss does the top part of the front end kick in, in order to fetch and decode instructions from the L2 cache.

The trace cache operates in two modes. "Execute mode" is the mode pictured above, where the trace cache is feeding stored traces to the execution logic to be executed. The other mode is "trace segment build mode": here, the front end fetches x86 code from the L2 cache, translates it into µops, builds a "trace segment" with it, and loads that segment into the trace cache to be executed.

You'll notice in Figure 6.3 that the trace cache execution path knocks the BPU out of the picture; nevertheless, the trace cache actually uses branch prediction when it builds a trace.

Page 23

Pentium 4

1) 1 MB L2 Cache

2) Floating Point Unit

3) Schedulers

4) Allocator

5) Arithmetic & Logic Unit

6) 16 KB Data Cache

7) Microcode ROM

8) Trace Cache

9) Data Translation Buffer

10) Branch Prediction Unit

11) Instruction TLB

[Block diagram (from p. 6 of the text): CPU (Control Unit, Data Path), Memory, Input/Output (keyboard, mouse, disk drive), with binary data flowing between them]

Case Study: Microprocessors

Data Sheet:

• Disk drives are approximately 1000 times slower than RAM or flash drives at transferring data to the CPU

• L2 cache, on the CPU chip, is used to stage data so that the CPU doesn't have to wait for data transfers

•L2 Cache is approximately 1 MB per CPU

• L1 cache is also on the CPU, located right next to the hardware that needs it.

Page 24

Case Study: Microprocessors (continued)

These "dynamic branch prediction" algorithms usually involve the use of either one or both of two types of tables,

• the Branch History Table (BHT) • and the Branch Target Buffer (BTB),

• The Branch Prediction Unit is used to decide whether the code will take a branch in future CPU operations, and it loads that code ahead of time to save time

• Dynamic prediction uses the branch prediction table to load possible branch instructions into L2 cache, saving approximately 90% of the CPU time wasted by older units without the feature

Page 25

Case Study: Microprocessors (continued)

The P4's long pipeline means that bubbles take a long time to propagate off the CPU, so a single bubble results in a lot of wasted cycles.

[Diagram: Intel (longer pipeline) vs. AMD (shorter pipeline)]

Pipelines are used to stage data, so it's ready immediately when needed.

[Pages 26-34: no text content]