GPU-accelerated SDR Implementation of Multi-User Detector for Satellite Return Links


> Sino-German Workshop > Chen Tang > 03.2014 > DLR.de • Chart 1

Chen Tang

Institute of Communication and Navigation
German Aerospace Center (DLR)

Overview

• Introduction and Motivation

• MUD System Design

• GPU CUDA Architecture

• GPU-accelerated Implementation of MUD

• Simulation Result

• Summary


Introduction and Motivation


• Bidirectional satellite communication

• Multi-user access issue → MF-TDMA (e.g. DVB-RCS)

• Multiuser Detection (MUD)
  • Increases spectrum efficiency
  • Few practical MUD implementations for satellite systems
  • High complexity
  • Sensitive to synchronization and channel estimation errors


Introduction and Motivation

• NEXT project (Network Coding Satellite Experiment) paved the way to the GEO research communication satellite H2Sat
• H2Sat: explore and test new broadband (high-data-rate) satellite communication
• NEXT Exp 3: Multiuser detection (MUD) for satellite return links

[Figure: two packet streams (Packets A1–A3 and B1–B3) arrive superimposed, pass through the Multiuser Detection block, and are separated into the two individual streams]

• Two users transmit at the same frequency and time

• A transparent satellite return link

• Main objectives:
  • Develop a MUD receiver in SDR
  • Increase decoding throughput → real-time processing


MUD System Design

• Multiuser detection (MUD) complexity
  • Optimal MUD proposed by Verdú: complexity exponential in the number of users

• Suboptimal MUD algorithms: e.g. PIC, SIC

• We use Successive Interference Cancellation (SIC)
  • Complexity linear in the number of users
  • Straightforward extension to support more users
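The SIC principle can be sketched in a few lines of NumPy (an illustrative toy, not the DLR implementation): two synchronous BPSK users with a 3 dB power imbalance, and a hard decision standing in for the full per-user LDPC decode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synchronous BPSK users; U1 is received 3 dB stronger than U2.
n = 1000
b1 = rng.integers(0, 2, n) * 2 - 1           # U1 symbols (+/-1)
b2 = rng.integers(0, 2, n) * 2 - 1           # U2 symbols (+/-1)
a1, a2 = 1.0, 10 ** (-3 / 20)                # amplitudes: 3 dB power imbalance
r = a1 * b1 + a2 * b2 + 0.05 * rng.standard_normal(n)  # superimposed + noise

# SIC: decode the stronger user first, reconstruct and cancel its signal,
# then decode the weaker user from the cleaned-up observation.
b1_hat = np.sign(r)            # stand-in for "decode U1" (really an LDPC decode)
r2 = r - a1 * b1_hat           # cancel U1's estimated contribution
b2_hat = np.sign(r2)           # decode U2 from the residual
```

At this noise level both users decode essentially error-free; with a real decoder, re-encoding the decoded bits before cancellation is what limits error propagation.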


MUD System Design

• Successive Interference Cancellation (SIC)
  • Sequentially decode users & cancel interference
  • Multi-stage SIC improves PER
  • Error propagation
  • Sensitive to channel estimation errors
  • Phase noise

• Expectation Maximization Channel Estimation (EM-CE)

• LDPC channel decoding
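A minimal sketch of the estimator's M-step, assuming the standard signal model r = a·e^{jθ}·s + n and perfect soft symbols from the decoder (both simplifying assumptions; the actual EM-CE iterates E- and M-steps with real decoder soft outputs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Model: r = a * exp(j*theta) * s + noise, BPSK symbols s.
n = 500
s = rng.integers(0, 2, n) * 2 - 1
a_true, theta_true = 0.8, 0.3
noise = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
r = a_true * np.exp(1j * theta_true) * s + noise

# E-step stand-in: soft symbol estimates (normally from the decoder's LLRs;
# we use the true symbols here to keep the sketch short).
s_soft = s.astype(float)

# M-step: least-squares estimate of the complex gain a * exp(j*theta),
# g = sum(conj(s) * r) / sum(|s|^2).
g = np.vdot(s_soft, r) / np.vdot(s_soft, s_soft)
a_hat, theta_hat = np.abs(g), np.angle(g)
```

The closer the soft symbols are to the truth, the closer this converges to the true amplitude and phase, which is why channel estimation and decoding are iterated together.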


MUD System Design

• Real-time implementation of MUD is challenging

• Processing bottlenecks:
  • LDPC channel decoding
  • EM channel estimation
  • Resampling and interference cancellation

• Programmable hardware devices
  • DSP; FPGA (hard to develop, low flexibility)
  • Attractive alternative: GPGPU (high performance, high flexibility)


GPGPU

• GPUs are massively multithreaded many-core chips
  • Image and video rendering
  • General-purpose computations

• Ref: Nvidia CUDA C Programming Guide, 2013

• Nvidia Tesla C2070: 448 cores; 515 GFLOPS double-precision peak performance


GPGPU

• The GPU is specialized for compute-intensive, highly parallel computation (exactly what graphics rendering is about)
  • More transistors devoted to data processing rather than to data caching and flow control (ALU: Arithmetic Logic Unit)

• CPU: limited number of concurrent threads
  • A server with four hex-core processors → 24 concurrent active threads (or 48 with Hyper-Threading)

• GPU: far more concurrent threads
  • Hundreds of processor cores → more than a thousand concurrent active threads

CUDA Architecture


• In Nov. 2006, Nvidia released the first GPU built on its CUDA architecture

• CUDA: Compute Unified Device Architecture
  • Each ALU can be used for general-purpose computations
  • All execution units can arbitrarily read and write memory
  • Allows the use of high-level programming languages (C/C++, OpenCL, Fortran, Java & Python bindings)


CUDA Architecture

• Serial program with parallel kernels
  • Serial code executes in a host (CPU) thread
  • Parallel kernel code executes in many device (GPU) threads
  • Host (CPU) and device (GPU) maintain separate memory spaces
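The execution model can be mimicked in plain Python (a pedagogical sketch, not CUDA code; `launch` and `saxpy_kernel` are made-up names): the host "launches" a grid of blocks, and each thread derives a global index from its block and thread indices, just as `blockIdx.x * blockDim.x + threadIdx.x` does in a CUDA kernel.

```python
import numpy as np

def launch(kernel, grid_dim, block_dim, *args):
    # Mimic a CUDA kernel launch: run the kernel once per (block, thread)
    # pair. On a real GPU these executions are concurrent; here, sequential.
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(block_idx, thread_idx, block_dim, *args)

def saxpy_kernel(block_idx, thread_idx, block_dim, a, x, y, out):
    i = block_idx * block_dim + thread_idx   # global thread index, as in CUDA
    if i < len(x):                           # guard against overhang threads
        out[i] = a * x[i] + y[i]

x = np.arange(10.0)
y = np.ones(10)
out = np.empty(10)
launch(saxpy_kernel, 3, 4, 2.0, x, y, out)   # 3 blocks x 4 threads = 12 threads
```

The bounds check matters because the grid (12 threads) is rounded up past the data size (10 elements), exactly as in real CUDA launches.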



LDPC Decoder on GPU

• Assign one CUDA thread to work on each edge of each check node
  • Speedup: 10x
  • Throughput: 1.6 Mbps (code rate: 2/3)

[Figure: Tanner graph with check nodes C1 … Cn−k, variable nodes V1 … Vn, and check-to-variable messages C_j → V_i. Codes: U1: n = 4800, k = 3200; U2: n = 4800, k = 2400]
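Why one thread per edge works: in the min-sum check-node update, each outgoing message depends only on the *other* incoming messages of that node, so every edge can be computed independently. A NumPy sketch of one check node's update (sequential here; on the GPU each loop iteration is one CUDA thread; min-sum is a common simplification and an assumption about the decoder variant):

```python
import numpy as np

def check_node_update(llrs):
    """Min-sum update for one check node.

    llrs: incoming variable-to-check messages (LLRs) on this node's edges.
    Returns the outgoing check-to-variable message C_j -> V_i per edge.
    Each edge's output is independent of the others, so on a GPU one
    CUDA thread can compute each edge in parallel.
    """
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):               # on the GPU: one thread per edge
        others = np.delete(llrs, i)          # all incoming messages except edge i
        sign = np.prod(np.sign(others))      # parity of the other edges' signs
        out[i] = sign * np.min(np.abs(others))
    return out

msgs = check_node_update([2.0, -1.5, 0.5, 3.0])
```

With 4800-bit blocks and thousands of edges per iteration, this independence is what lets the decoder saturate the GPU's thread count.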


MUD receiver on GPU


• Processing bottlenecks:
  • LDPC channel decoding
  • EM channel estimation
  • Resampling and interference cancellation
  • Data transfer between host and device memory (144 GB/s device memory on the Nvidia Tesla vs. 8 GB/s over PCIe x16)

• All parts of each single-user receiver and the interference cancellation run on the GPU

• Minimizes the latency of intermediate data transfers between host and device memory
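A back-of-envelope comparison using the two bandwidth figures above shows why intermediate results should stay in device memory (the block size of 4800 complex float32 samples is an illustrative assumption):

```python
# Latency for moving one block of intermediate samples, using the slide's
# bandwidth figures: 144 GB/s GPU device memory vs. 8 GB/s PCIe x16.
block_bytes = 4800 * 8                 # complex64 = 8 bytes per sample
pcie_bw = 8e9                          # bytes/s, host <-> device over PCIe x16
device_bw = 144e9                      # bytes/s, within GPU memory

t_pcie = block_bytes / pcie_bw         # copying across the PCIe bus
t_device = block_bytes / device_bw     # keeping the data on the GPU
ratio = t_pcie / t_device              # PCIe is 18x slower per byte
```

Every intermediate hop over PCIe costs roughly 18x more than an on-device access of the same size, so chaining all receiver stages on the GPU avoids that tax between stages.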


Simulation Setup

• GPU: Nvidia Tesla C2070 (1.15 GHz)
• Comparison benchmark: Intel Xeon CPU E5620 (2.4 GHz)

• BPSK modulation
• Two user terminals (power imbalance: U1 3 dB higher than U2)
• Channel coding: LDPC
  • Irregular Repeat Accumulate
  • Blocklength: 4800 bits
  • U1 code rate: 2/3, U2 code rate: 1/2

• Baud rate: 62500 symbols/second → real-time threshold: ca. 85 ms (66 kbps)
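The threshold follows from the block length and baud rate: a block of 4800 BPSK symbols occupies 76.8 ms on air, and the stated ca. 85 ms presumably adds frame overhead such as preamble and guard symbols (an assumption on our part):

```python
# Real-time threshold: the receiver keeps up if it processes one code block
# in no more time than the block takes to arrive over the air.
baud = 62500                       # symbols/second (BPSK: 1 bit per symbol)
n = 4800                           # LDPC blocklength in bits

block_air_time_ms = n / baud * 1e3  # on-air duration of the payload symbols
```

Any per-block processing time below this bound means the SDR receiver never falls behind the incoming sample stream.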


Simulation Result

[Figure: measured processing times per block; a horizontal line marks the real-time threshold]


Summary

• SDR implementation of a MUD receiver
  • High flexibility and low cost
  • Extensible to support more users

• GPU acceleration
  • 1.8x – 3.8x faster than the real-time threshold
  • Still room for improvement; newer GPUs → better performance

• GPU CUDA is very promising for powerful parallel computing
  • Low learning curve
  • Heterogeneous: mixed serial-parallel programming
  • Scalable

• CUDA-powered MATLAB (MATLAB® with Parallel Computing Toolbox; Jacket™ from AccelerEyes)
  • Days/weeks of simulation → hours

• “GNU Radio is a free & open-source software development toolkit that provides signal processing blocks to implement software radios”

• Software architecture
  • The main processing of the blocks is implemented in C++ functions executed by the CPU on the PC


GNURadio

[Software stack: Python Script / GNU Radio Companion → Python Module → SWIG bindings → C++ Shared Library]


GNURadio + CUDA

• CPU LDPC Decoder
  • Throughput:

• GPU LDPC Decoder
  • Throughput:

• Irregular Repeat Accumulate (IRA) LDPC: n = 4800, k = 2400



Thank you! Q&A?


GPGPU

• Advantages of GPU:
  • High computational processing power
  • High memory bandwidth
  • High flexibility

• Drawbacks of GPU:
  • Not a stand-alone device
  • Bad at serial processing
  • Separate memory space
  • Additional hands-on effort

[Figure: Comparison of total processing time of MUD between CPU and GPU]
