18-740/640 Computer Architecture
Lecture 16: Interconnection Networks
Prof. Onur Mutlu, Carnegie Mellon University
Fall 2015, 11/4/2015

Required Readings
Required Reading Assignment: Dubois, Annavaram, Stenstrom, Chapter 6.
Recommended References:
Moscibroda and Mutlu, "A Case for Bufferless Routing in On-Chip Networks," ISCA 2009.
Das et al., "Application-Aware Prioritization Mechanisms for On-Chip Networks," MICRO 2009.

Interconnection Network Basics

Where Is Interconnect Used?
To connect components.

Many examples: processors and processors; processors and memories (banks); processors and caches (banks); caches and caches; I/O devices. [Figure: components attached to an interconnection network.]

Why Is It Important?
Affects the scalability of the system: how large a system can you build? How easily can you add more processors? Affects performance and energy efficiency.

How fast can processors, caches, and memory communicate? How long are the latencies to memory? How much energy is spent on communication?

Interconnection Network Basics
Topology: specifies the way switches are wired; affects routing, reliability, throughput, latency, and ease of building.
Routing (algorithm): how a message gets from source to destination; static or adaptive.

Buffering and Flow Control: what do we store within the network (entire packets, parts of packets, etc.)? How do we throttle during oversubscription? Tightly coupled with the routing strategy.

Terminology
Network interface: module that connects endpoints (e.g., processors) to the network; decouples computation from communication.
Link: bundle of wires that carries a signal.
Switch/router: connects a fixed number of input channels to a fixed number of output channels.
Channel: a single logical connection between routers/switches.

More Terminology
Node: a router/switch within a network.
Message: unit of transfer for the network's clients (processors, memory).
Packet: unit of transfer for the network.
Flit: flow control digit; unit of flow control within the network.

Some More Terminology
Direct or Indirect Networks: endpoints sit inside (direct) or outside (indirect) the network. E.g., a mesh is direct: every node is both an endpoint and a switch.

Router (switch): radix is the number of inputs/outputs. Abbreviation: radix-ary; these routers are 2-ary (2 inputs, 2 outputs). [Figure: 16-node indirect network vs. 16-node direct network.]

Interconnection Network Topology

Properties of a Topology/Network
Regular or Irregular: regular if the topology is a regular graph (e.g., ring, mesh).
Routing Distance: number of links/hops along a route.
Diameter: maximum routing distance within the network.
Average Distance: average number of hops across all valid routes.

Properties of a Topology/Network
Bisection Bandwidth: often used to describe network performance. Cut the network in half and sum the bandwidth of the severed links: (min # channels spanning the two halves) * (BW of each channel). Meaningful only for recursive topologies. Can be misleading, because it does not account for switch and routing efficiency (and certainly not execution time).
Blocking vs. Non-Blocking: if connecting any permutation of sources and destinations is possible, the network is non-blocking; otherwise it is blocking.
Rearrangeable non-blocking: same as non-blocking, but may require rearranging existing connections when switching from one permutation to another.

Topology
Bus (simplest); point-to-point connections (ideal and most costly); crossbar (less costly); ring; tree; Omega; hypercube; mesh; torus; butterfly.
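The bisection-bandwidth definition above lends itself to a quick calculation. A minimal sketch, assuming a k-by-k 2-D mesh with uniform channels (the function and parameter names are illustrative, not from the lecture):

```python
# Bisection bandwidth of a k x k 2-D mesh (illustrative sketch).
# Cutting the mesh down the middle severs exactly k links, each of which
# may carry traffic in both directions.

def mesh_bisection_bw(k, channel_bw, bidirectional=True):
    """(min # channels spanning the two halves) * (BW of each channel)."""
    channels_cut = k * (2 if bidirectional else 1)
    return channels_cut * channel_bw

# An 8x8 mesh with 16 GB/s channels, counting both directions:
print(mesh_bisection_bw(8, 16.0))  # -> 256.0
```

Note that, per the slide's caveat, this number says nothing about switch or routing efficiency; it is purely a wiring property.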

Metrics to Evaluate Interconnect Topology
Cost; latency (in hops, in nanoseconds); contention; energy; bandwidth; overall system performance. Many others exist that you should think about.

Bus
All nodes connected to a single link.
+ Simple; cost-effective for a small number of nodes; easy to implement coherence (snooping and serialization)
- Not scalable to a large number of nodes (limited bandwidth; electrical loading reduces frequency); high contention, fast saturation
[Figure: 8 processors with caches sharing one bus to memory.]

Point-to-Point
Every node connected to every other with direct/isolated links.
+ Lowest contention; potentially lowest latency; ideal, if cost is no issue
- Highest cost: O(N) connections/ports per node, O(N^2) links; not scalable; hard to lay out on chip

Crossbar
Every node connected to every other with a shared link for each destination; enables concurrent transfers to non-conflicting destinations. Could be cost-effective for a small number of nodes.
+ Low latency and high throughput
- Expensive; not scalable (O(N^2) cost); difficult to arbitrate as N increases
Used in core-to-cache-bank networks in the IBM POWER5 and Sun Niagara I/II.

Another Crossbar Design
[Figure: 8x8 crossbar built from per-output multiplexers.]

Sun UltraSPARC T2 Core-to-Cache Crossbar
High-bandwidth interface between 8 cores and 8 L2 banks & NCU. 4-stage pipeline: request, arbitration, selection, transmission. 2-deep queue for each src/dest pair to hold data transfer requests.

Bufferless and Buffered Crossbars
Bufferless crossbar: + simpler arbitration/scheduling.
Buffered crossbar: + efficient support for variable-size packets; - requires N^2 buffers.
[Figure: 4-node bufferless vs. buffered crossbar with per-output arbiters and flow control.]

Can We Get Lower Cost than a Crossbar?
Yet still have low contention compared to a bus? Idea: multistage networks.

Multistage Logarithmic Networks
Idea: indirect networks with multiple layers of switches between terminals/nodes. Cost: O(N log N); latency: O(log N). Many variations (Omega, Butterfly, Benes, Banyan, ...).
Omega Network: [Figure: 8x8 Omega network; two routes conflict on a shared switch output.] Blocking or non-blocking?
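The Omega network's single-path property can be illustrated with destination-tag routing, a standard technique the slides do not spell out. This sketch assumes an 8-node network built from perfect-shuffle stages and 2x2 switches:

```python
# Destination-tag routing through an Omega network (illustrative sketch).
# Each stage applies a perfect shuffle (rotate-left of the port index),
# then a 2x2 switch set straight or crossed by one destination bit,
# most-significant bit first.

def omega_route(src, dst, n_bits=3):
    N = 1 << n_bits
    p, path = src, [src]
    for stage in range(n_bits):
        p = ((p << 1) | (p >> (n_bits - 1))) & (N - 1)  # perfect shuffle
        bit = (dst >> (n_bits - 1 - stage)) & 1          # destination tag bit
        p = (p & ~1) | bit                               # switch: straight/cross
        path.append(p)
    return path  # port index after each stage; always ends at dst

print(omega_route(0, 5))  # -> [0, 1, 2, 5]
```

Routing every (src, dst) pair this way also shows why the network is blocking: two routes can require the same switch output at the same stage.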

Multistage Networks (Circuit Switched)
[Figure: 8x8 multistage network built from 2-by-2 crossbars.] A multistage network has more restrictions on feasible concurrent Tx-Rx pairs than a crossbar, but is more scalable in cost, e.g., O(N log N) for a Butterfly.

Multistage Networks (Packet Switched)
[Figure: 8x8 multistage network built from 2-by-2 routers.] Packets hop from router to router, pending availability of the next-required switch and buffer.

Aside: Circuit vs. Packet Switching
Circuit switching sets up the full path before transmission: establish the route, then send data. No one else can use those links while the circuit is set.
+ Faster arbitration; no buffering
- Setting up and tearing down the path takes time
Packet switching routes per packet in each router: each packet is routed individually (possibly via different paths). If a link is free, any packet can use it.
- Potentially slower; must switch dynamically; needs buffering
+ No setup/teardown time

Switching vs. Topology
The circuit/packet switching choice is independent of topology; it is a higher-level protocol for how a message gets to its destination. However, some topologies are more amenable to circuit vs. packet switching.

Another Example: Delta Network

Single path from source to destination. Each stage has different routers. Proposed to replace costly crossbars as the processor-memory interconnect. Janak H. Patel, "Processor-Memory Interconnections for Multiprocessors," ISCA 1979. [Figure: 8x8 Delta network.]

Another Example: Omega Network
Single path from source to destination. All stages are the same. Used in the NYU Ultracomputer. Gottlieb et al., "The NYU Ultracomputer: Designing an MIMD Shared Memory Parallel Computer," IEEE Trans. on Computers, 1983.

Combining Operations in the Network
Idea: combine multiple operations on a shared memory location. Example: Omega network switches combine fetch-and-add operations in the NYU Ultracomputer. Fetch-and-add(M, I): return M, replace M with M+I. Common when parallel processors modify a shared variable, e.g., to obtain a chunk of an array. Combining reduces synchronization latency.

Butterfly
Equivalent to the Omega network. Indirect. Used in the BBN Butterfly. Conflicts can cause tree saturation; randomization of route selection helps.
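The fetch-and-add combining described above can be sketched in a few lines. This is a toy model of what an Ultracomputer-style switch does, not the hardware protocol (names are illustrative):

```python
# Fetch-and-add combining sketch: a switch that sees two F&A requests to the
# same location forwards a single combined request to memory and fabricates
# the second reply locally.

def fetch_and_add(mem, addr, inc):
    old = mem[addr]
    mem[addr] = old + inc
    return old

def combine_fa(mem, addr, inc1, inc2):
    """One memory access serves two requests; replies match serial execution."""
    v = fetch_and_add(mem, addr, inc1 + inc2)  # single combined access
    return v, v + inc1                          # replies to requester 1 and 2

mem = {"counter": 100}
r1, r2 = combine_fa(mem, "counter", 5, 7)
print(r1, r2, mem["counter"])  # -> 100 105 112 (same as two serial F&As)
```

The replies are exactly what two serialized fetch-and-adds would have returned, which is why combining cuts synchronization latency without changing semantics.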

[Figure: 16-node Butterfly network.]

Review: Topologies

  Topology         Crossbar       Multistage Log.   Mesh
  Direct/Indirect  Indirect       Indirect          Direct
  Blocking?        Non-blocking   Blocking          Blocking
  Cost             O(N^2)         O(N log N)        O(N)
  Latency          O(1)           O(log N)          O(sqrt(N))

Ring
Each node is connected to exactly two other nodes; nodes form a continuous pathway such that packets can reach any node.
+ Cheap: O(N) cost
- High latency: O(N); not easy to scale; bisection bandwidth remains constant
Used in Intel Haswell, Intel Larrabee, IBM Cell, and many commercial systems today. [Figure: ring of processor/memory nodes.]

Unidirectional Ring
[Figure: routers 0 .. N-1 (2x2 routers) connected in a single direction.]
Single directional pathway; simple topology and implementation. Reasonable performance if N and the performance needs (bandwidth & latency) are still moderately low. O(N) cost; N/2 average hops; latency depends on utilization.

Bidirectional Rings
Multi-directional pathways, or multiple rings.
+ Reduces latency; improves scalability
- Slightly more complex injection policy (need to select which ring to inject a packet into)

Hierarchical Rings
+ More scalable; lower latency
- More complex

More on Hierarchical Rings
Ausavarungnirun et al., "Design and Evaluation of Hierarchical Rings with Deflection Routing," SBAC-PAD 2014. http://users.ece.cmu.edu/~omutlu/pub/hierarchical-rings-with-deflection_sbacpad14.pdf
Discusses the design and implementation of a mostly-bufferless hierarchical ring.

Mesh
Each node is connected to its 4 neighbors (N, E, S, W). O(N) cost; average latency O(sqrt(N)). Easy to lay out on-chip: regular and equal-length links. Path diversity: many ways to get from one node to another. Used in the Tilera 100-core chip and many on-chip network prototypes.

Torus
A mesh is not symmetric at its edges: performance is very sensitive to placing a task on an edge vs. in the middle. A torus avoids this problem.
+ Higher path diversity (and bisection bandwidth) than a mesh
- Higher cost; harder to lay out on-chip; unequal link lengths

Torus, continued
Weave the nodes to make inter-node latencies ~constant. [Figure: folded torus layout.]

Trees
Planar, hierarchical topology. Latency: O(log N). Good for local traffic.
+ Cheap: O(N) cost; easy to lay out
- The root can become a bottleneck
Fat trees avoid this problem (CM-5). [Figure: H-Tree and Fat Tree.]

CM-5 Fat Tree
Fat tree based on 4x2 switches. Randomized routing on the way up. Combining, multicast, and reduction operators supported in hardware.

Thinking Machines Corp., "The Connection Machine CM-5 Technical Summary," Jan. 1992.

Hypercube
An N-dimensional cube ("N-cube"). Latency: O(log N); radix: O(log N); number of links: O(N log N).
+ Low latency
- Hard to lay out in 2D/3D
[Figure: 0-D through 4-D hypercubes with binary node labels.]

Caltech Cosmic Cube
64-node message passing machine. Seitz, "The Cosmic Cube," CACM 1985.

Routing

Routing Mechanism
Arithmetic: simple arithmetic determines the route in regular topologies, e.g., dimension-order routing in meshes/tori.
Source-based: the source specifies the output port for each switch on the route. + Simple switches, no control state (strip the output port off the header). - Large header.
Table-lookup-based: index into a table for the output port. + Small header. - More complex switches.

Routing Algorithm: Three Types

Deterministic: always chooses the same path for a communicating source-destination pair.
Oblivious: chooses different paths, without considering network state.
Adaptive: can choose different paths, adapting to the state of the network. How to adapt: local/global feedback; minimal or non-minimal paths.

Deterministic Routing
All packets between the same (source, dest) pair take the same path. Dimension-order routing: first traverse dimension X, then traverse dimension Y, e.g., XY routing (used in the Cray T3D and many on-chip networks).
+ Simple; deadlock-free (no cycles in resource allocation)
- Can lead to high contention; does not exploit path diversity

Deadlock
No forward progress. Caused by circular dependencies on resources: each packet waits for a buffer occupied by another packet downstream.

Handling Deadlock
Avoid cycles in routing: dimension-order routing

cannot build a circular dependency; restricting the turns each packet can take also works. Avoid deadlock by adding more buffering (escape paths). Detect and break deadlock: preemption of buffers.

Turn Model to Avoid Deadlock
Idea: analyze the directions in which packets can turn in the network; determine the cycles that such turns can form; prohibit just enough turns to break the possible cycles.
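The dimension-order (XY) routing described in the deterministic-routing slide above can be sketched as follows; coordinates and names are illustrative:

```python
# Dimension-order (XY) routing on a 2-D mesh: route fully in X, then in Y.
# Nodes are (x, y) grid coordinates.

def xy_route(src, dst):
    (x, y), (tx, ty) = src, dst
    hops = []
    while x != tx:                  # X dimension first
        x += 1 if tx > x else -1
        hops.append((x, y))
    while y != ty:                  # then Y dimension
        y += 1 if ty > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (2, 1)))  # -> [(1, 0), (2, 0), (2, 1)]
```

Because every packet finishes its X hops before its Y hops, a packet can turn only from X to Y, so no cycle of channel waits can form; this is the deadlock-freedom argument of the slides in miniature.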

Glass and Ni, "The Turn Model for Adaptive Routing," ISCA 1992.

Oblivious Routing: Valiant's Algorithm
Goal: balance the network load. Idea: randomly choose an intermediate destination, route to it first, then route from there to the real destination. Between source and intermediate, and between intermediate and destination, dimension-order routing can be used.
+ Randomizes/balances network load
- Non-minimal (packet latency can increase)
Optimizations: do this only under high load; restrict the intermediate node to be close (in the same quadrant).

Adaptive Routing

Minimal adaptive: the router uses network state (e.g., downstream buffer occupancy) to pick which productive output port to send a packet to (a productive output port is one that gets the packet closer to its destination).
+ Aware of local congestion
- Minimality restricts the achievable link utilization (load balance)
Non-minimal (fully) adaptive: misroute packets to non-productive output ports based on network state.
+ Can achieve better network utilization and load balance

More on Adaptive Routing
Can avoid faulty links/routers. Idea: route around faults.

+ Deterministic routing cannot handle faulty components
- Need to change the routing table to disable faulty routes; assumes the faulty link/router is detected
One recent example: Fattah et al., "A Low-Overhead, Fully-Distributed, Guaranteed-Delivery Routing Algorithm for Faulty Networks-on-Chip."

Buffering and Flow Control

Recall: Circuit vs. Packet Switching
Circuit switching sets up the full path before transmission: establish the route, then send data. No one else can use those links while the circuit is set.
+ Faster arbitration
- Setting up and tearing down the path takes time
Packet switching routes per packet in each router: each packet is routed individually (possibly via different paths). If a link is free, any packet can use it.
- Potentially slower; must switch dynamically
+ No setup/teardown time; more flexible; does not underutilize links

Packet Switched Networks: Packet Header Format
Header: routing and control information.
Payload: carries data (non-hardware-specific information); can be further divided (framing, protocol stacks).
Error code: generally at the tail of the packet so it can be generated on the way out.
[Packet layout: Header | Payload | Error Code]

Handling Contention
Two packets try to use the same link at the same time. What do you do? Buffer one, drop one, or misroute (deflect) one. Tradeoffs?
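The misroute-on-contention option above is the core of deflection routing. A minimal arbitration sketch, assuming age-based priority and illustrative data structures:

```python
# Deflection arbitration sketch: on a conflict, the older packet keeps its
# desired output port and the loser is misrouted to a free port instead of
# being buffered or dropped.

def arbitrate(packets, ports):
    assignment, free = {}, list(ports)
    for pkt in sorted(packets, key=lambda p: p["age"], reverse=True):
        port = pkt["want"] if pkt["want"] in free else free[0]  # deflect
        assignment[pkt["id"]] = port
        free.remove(port)
    return assignment

pkts = [{"id": "A", "age": 9, "want": "E"},
        {"id": "B", "age": 3, "want": "E"}]
print(arbitrate(pkts, ["N", "E", "S", "W"]))  # -> {'A': 'E', 'B': 'N'}
```

With as many output ports as input ports, every contender always gets some port, which is why deflection needs no buffers; the cost is that deflected packets take longer paths.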

Flow Control Methods
Circuit switching; bufferless (packet/flit based); store and forward (packet based); virtual cut-through (packet based); wormhole (flit based).

Circuit Switching Revisited
Resource allocation granularity is high. Idea: pre-allocate resources across multiple switches for a given flow; a probe is sent to set up the path for pre-allocation.
+ No need for buffering; no contention (a flow's performance is isolated); can handle arbitrary message sizes
- Lower link utilization (two flows cannot use the same link); handshake overhead to set up a circuit

Bufferless Deflection Routing
Key idea: packets are never buffered in the network. When two packets contend for the same link, one is deflected. New traffic can be injected whenever there is a free output link. Baran, "On Distributed Communication Networks," RAND Tech. Report, 1962 / IEEE Trans. on Communications, 1964.

Bufferless Deflection Routing
Input buffers are eliminated: packets are buffered in pipeline latches and on network links. [Figure: router with the North/South/East/West/Local input buffers removed; deflection routing logic steers flits directly to output ports.]

Store and Forward Flow Control
Packet-based flow control. Store and forward: the packet is copied entirely into a network router before moving to the next node; the flow control unit is the entire packet. Leads to high per-packet latency; requires buffering for the entire packet in each node. Can we do better?

Cut-Through Flow Control
Another form of packet-based flow control: start forwarding as soon as the header is received and resources (buffer, channel, etc.) are allocated. Dramatic reduction in latency. Still allocates buffers and channel bandwidth for full packets. What if packets are large?

Cut-Through Flow Control
What to do if the output port is blocked? Let the tail continue when the head is blocked, absorbing the whole message into a single switch: this requires a buffer large enough to hold the largest packet. Degenerates to store-and-forward under high contention. Can we do better?
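The latency gap between the two schemes above can be made concrete with a contention-free model; a sketch with illustrative parameters (H hops, L-bit packets, Lh-bit headers, channels of b bits/cycle):

```python
# Contention-free latency models for the two flow-control schemes above
# (an illustrative sketch, not a full network model).

def store_and_forward(H, L, b):
    return H * (L / b)            # whole packet serialized at every hop

def cut_through(H, Lh, L, b):
    return H * (Lh / b) + L / b   # only the header pays per-hop latency

# 4 hops, 512-bit packet, 32-bit header, 32 bits/cycle channels:
print(store_and_forward(4, 512, 32))  # -> 64.0 cycles
print(cut_through(4, 32, 512, 32))    # -> 20.0 cycles
```

The gap grows with hop count and packet size, which is why cut-through (and wormhole, its flit-granularity variant) dominates in on-chip networks.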

Wormhole Flow Control
Packets are broken into (potentially) smaller flits, the buffer/bandwidth allocation unit. Flits are sent across the fabric in a wormhole fashion: body follows head, tail follows body; pipelined. If the head is blocked, the rest of the packet stops. Routing (src/dest) information is only in the head: how does the body/tail know where to go? Latency is almost independent of distance for long messages.

Wormhole Flow Control
Advantages over store-and-forward flow control:
+ Lower latency; more efficient buffer utilization
Limitations:
- Suffers from head-of-line blocking: if a head flit cannot move due to contention, another worm cannot proceed even though links may be idle
[Figure: two input queues feed a switching fabric; one output sits idle because the second packet is stuck behind the first (HOL blocking).]

Head-of-Line Blocking
A worm can be ahead of another in the router input buffer. Due to the FIFO nature, the second worm cannot be scheduled even though it may need to access another output port. [Figure: red holds a channel, so the channel remains idle until red proceeds; the channel is idle but the red packet is blocked behind blue; buffer full, blue cannot proceed, blocked by other packets.]

Virtual Channel Flow Control
Idea: multiplex multiple channels over one physical channel. Divide the input buffer into multiple buffers sharing a single physical channel. Dally, "Virtual Channel Flow Control," ISCA 1990.

A Modern Virtual Channel Based Router
[Figure: virtual-channel router pipeline.]

Other Uses of Virtual Channels
Deadlock avoidance: enforcing a switch to a different set of virtual channels on some turns can break the cyclic dependency of resources (enforce an order on VCs). Escape VCs: have at least one VC that uses deadlock-free routing, and ensure each flit has fair access to that VC. Protocol-level deadlock: ensure address and data packets use different VCs to prevent cycles due to intermixing of different packet classes.
Prioritization of traffic classes: some virtual channels can have higher priority than others.

Review: Flow Control
From store and forward to cut-through/wormhole: shrink buffers, reduce latency. Any other issues? Head-of-line blocking: use virtual channels.

Communicating Buffer Availability
Credit-based flow control: upstream knows how many buffers are downstream; downstream passes credits back to upstream. Significant upstream signaling (especially for small flits).
On/Off (XON/XOFF) flow control: downstream has an on/off signal to upstream.
Ack/Nack flow control: upstream optimistically sends downstream; a buffer cannot be deallocated until an ACK/NACK is received. Inefficiently utilizes buffer space.

Credit-Based Flow Control

[Figure: timeline between Node 1 and Node 2: a flit departs the router at t1; the credit propagates back and is processed; the next flit can use the freed buffer only at t4-t5.]
Credit round-trip delay: the time between when a buffer empties and when the next flit can be processed from that buffer entry. Significant throughput degradation if there are few buffers.

On/Off (XON/XOFF) Flow Control
Downstream has an on/off signal to upstream. The F_off threshold prevents flits already in flight from overflowing the downstream buffer; the F_on threshold ensures the downstream node does not run out of flits while the "on" signal propagates back. [Figure: timeline: F_off threshold reached, "off" processed upstream, flits stop; F_on threshold reached, "on" processed, flits resume.]

Interconnection Network Performance

[Figure: latency vs. injection rate. Zero-load latency is set by topology, routing, and flow control; minimum latency is given by the topology and routing algorithm; saturation throughput is given by topology, routing, and flow control.]

Ideal Latency
Ideal latency is solely due to wire delay between source and destination:
T_ideal = D/v + L/b
where D = Manhattan distance, v = propagation velocity, L = packet size, b = channel bandwidth.

Actual Latency
Dedicated wiring is impractical: long wires are segmented with the insertion of routers.
T_actual = D/v + L/b + H * T_router + T_c
where D = Manhattan distance, v = propagation velocity, L = packet size, b = channel bandwidth, H = hops, T_router = router latency, T_c = latency due to contention.

Latency and Throughput Curve
[Figure: latency (cycles) vs. injected load (fraction of capacity) for an ideal network and an on-chip network; latency rises sharply near saturation.]

Network Performance Metrics
Packet latency; round-trip latency; saturation throughput; application-level performance (system performance),
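The two latency expressions above can be evaluated directly; the numbers below are illustrative, not from the lecture (distance in mm, velocity in mm/ns, packet size in bits, bandwidth in bits/ns, router latency and contention in ns):

```python
# Evaluating T_ideal = D/v + L/b and
# T_actual = D/v + L/b + H * T_router + T_c (illustrative units).

def t_ideal(D, v, L, b):
    return D / v + L / b

def t_actual(D, v, L, b, H, t_router, t_c):
    return t_ideal(D, v, L, b) + H * t_router + t_c

# D=10 mm, v=2 mm/ns, L=128 bits, b=32 bits/ns, H=5 hops,
# 1 ns per router, no contention:
print(t_ideal(10, 2, 128, 32))            # -> 9.0
print(t_actual(10, 2, 128, 32, 5, 1, 0))  # -> 14.0
```

Even with zero contention, the per-hop router term already dominates the wire delay here, which motivates the low-latency router designs discussed in the rest of the lecture.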

affected by interference among threads/applications.

Buffering and Routing in On-Chip Networks

On-Chip Networks
[Figure: 4x4 mesh of processing elements (cores, L2 banks, memory controllers, etc.), each attached to a router (R).]
Connect cores, caches, memory controllers, etc. Buses and crossbars are not scalable. Packet switched; the 2D mesh is the most commonly used topology. Primarily serve cache misses and memory requests.

On-Chip Networks

[Figure: 4x4 mesh of processing elements and routers; each router input port (from East/West/North/South/PE) has buffers per virtual channel (VC 0-2); control logic comprises a routing unit (RC), a VC allocator (VA), and a switch allocator (SA), feeding a 5x5 crossbar to the output ports.]

On-Chip vs. Off-Chip Interconnects
On-chip advantages: low latency between cores; no pin constraints; rich wiring resources; very high bandwidth; simpler coordination.
On-chip constraints/disadvantages: the 2D substrate limits implementable topologies; energy/power consumption is a key concern; complex algorithms are undesirable; logic area constrains the use of wiring resources.

On-Chip vs. Off-Chip Interconnects (II)
Cost: off-chip, the cost is channels, pins, connectors, and cables; on-chip, the cost is storage and switches (wires are plentiful). This leads to networks with many wide channels and few buffers.
Channel characteristics: on chip, short distances mean low latency, but on-chip RC lines need repeaters every 1-2 mm (logic can be put in the repeaters).
Workloads: multi-core cache traffic vs. supercomputer interconnect traffic.

Buffers in NoC Routers
Buffers are necessary for high network throughput: buffers increase the total available bandwidth in the network.

[Figure: average packet latency vs. injection rate for small, medium, and large buffers.]

Can we get rid of buffers?
Buffers consume significant energy/power: dynamic energy when read/written, and static energy even when not occupied. Buffers add complexity and latency: logic is needed for buffer management, virtual channel allocation, and credit-based flow control. Buffers require significant chip area: e.g., in the TRIPS prototype chip, input buffers occupy 75% of the total on-chip network area.

Going Bufferless?
How much throughput do we lose? How is latency affected? [Figure: latency vs. injection rate with and without buffers.] Up to what injection rates can we use bufferless routing? Are there realistic scenarios in which the NoC is operated at injection rates below that threshold? Can we achieve an energy reduction? If so, how much?

Can we reduce area, complexity, etc.? Answers in our paper!

Bufferless Routing in NoCs
Moscibroda and Mutlu, "A Case for Bufferless Routing in On-Chip Networks," ISCA 2009. https://users.ece.cmu.edu/~omutlu/pub/bless_isca09.pdf

Packet Scheduling
Which packet to choose for a given output port? Common strategies:

The router needs to prioritize between competing flits (which input port? which virtual channel? which application's packet?): round-robin across virtual channels; oldest packet first (or an approximation); prioritize some virtual channels over others. Better policies in a multi-core environment use application characteristics.

Application-Aware Packet Scheduling
Das et al., "Application-Aware Prioritization Mechanisms for On-Chip Networks," MICRO 2009.

The Problem: Packet Scheduling
The Network-on-Chip is a critical resource shared by multiple applications (processors, L2 banks, memory controllers, accelerators). [Figure: router input ports with per-VC buffers, a routing unit (RC), a VC allocator (VA), and a switch allocator (SA) feeding a 5x5 crossbar; conceptually, the scheduler must choose among packets from App1-App8 competing for the same output port. Which packet to choose?]

Existing scheduling policies: round-robin; age.
Problem 1: they are local to a router, which leads to contradictory decision-making between routers: packets from one application may be prioritized at one router, only to be delayed at the next.
Problem 2: they are application-oblivious: all applications' packets are treated equally, but applications are heterogeneous.
Solution: application-aware global scheduling policies.

STC Scheduling Example
Batching interval length = 3 cycles.

[Figure: packet injection order at the processors. Packets injected over cycles 1-8 from Core1-Core3 are grouped into batches (Batch 0 = oldest, then Batch 1, Batch 2), and a ranking order is assigned across cores; the router then serves packets batch by batch, and by rank within a batch.]

For the same arrivals, the three policies incur different per-core stall cycles:

  Policy         Stall cycles    Avg
  Round Robin    8, 6, 11        8.3
  Age            4, 6, 11        7.0
  STC            1, 3, 11        5.0

Application-Aware Packet Scheduling in NoCs
Das et al., "Application-Aware Prioritization Mechanisms for On-Chip Networks," MICRO 2009. https://users.ece.cmu.edu/~omutlu/pub/app-aware-noc_micro09.pdf
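The age-based and ranking-based policies compared above can be sketched as follows; this is only the ranking flavor of the STC idea (batching is omitted), with illustrative data structures:

```python
# Scheduler policy sketch: pick one packet for an output port.
# "rank" serves higher-ranked applications first (lower rank value = higher
# priority), oldest first within a rank; this is a simplification of STC,
# not the full mechanism from Das et al.

def pick(packets, policy, app_rank=None):
    if policy == "age":                      # oldest packet first
        return max(packets, key=lambda p: p["age"])
    if policy == "rank":                     # application-aware (STC-like)
        return min(packets, key=lambda p: (app_rank[p["app"]], -p["age"]))
    raise ValueError(policy)

pkts = [{"app": "B", "age": 7}, {"app": "A", "age": 2}]
print(pick(pkts, "age"))                     # -> {'app': 'B', 'age': 7}
print(pick(pkts, "rank", {"A": 0, "B": 1}))  # -> {'app': 'A', 'age': 2}
```

Because the rank is global (the same at every router), the contradictory per-router decisions described in Problem 1 above cannot arise.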

18-740/640 Computer Architecture Lecture 16: Interconnection Networks Prof. Onur Mutlu Carnegie Mellon University Fall 2015, 11/4/2015
