
NVMe Overview

Murali Iyer, July 26, 2018


Agenda:

• NVMe Technology overview

• Comparisons – NVMe to SAS

• NVMe in IBM Power Systems


What is NVMe? NVMe - Non-Volatile Memory Express (PCIe)

NVM Express is a standardized high-performance interface protocol for PCI Express solid state drives.

• Industry-standard software and drives: the industry consortium specification defines a queuing interface, command set, and feature set to optimize performance.
• Available in three different form factors: PCIe add-in card, SFF 2.5” U.2, and M.2.
• Built for SSDs: architected from the ground up for SSDs to be efficient, scalable, and manageable.
• Direct- and network-attached, low-latency, high-bandwidth, cost-effective flash solutions.

http://www.nvmexpress.org/


NVMe Industry Consortium

Members:

Oracle, Samsung, Dell, Microsoft, Intel, Cisco, HGST, EMC, PMC, Seagate, Micron, NetApp, SanDisk, Allion Labs, Inc., Apeiron, Avago Technologies, Baidu, Beijing Memblaze Technology Co. Ltd., Cavium Inc., CNEX Labs Inc., Elastifile Ltd., Fujitsu, Google, Inc., Greenliant Systems, Hewlett-Packard Company, Hitachi, Ltd., Huawei Technologies Co. Ltd., Hyperstone GmbH, IP-Maker, JDSU - Storage Network Test, JMicron Technology Corp., Kazan Networks Corporation, Mangstor, Marvell Semiconductor, Mobiveil, Inc., NEC Corporation, NetBRIC Technology Co., Ltd., Phison Electronics Corp., OCZ Storage Solutions, Inc., Qlogic Corporation, Quanta Computer Inc., Realtek Semiconductor Corp., Silicon Motion, SK hynix memory solutions, Inc., SMART Modular Technologies, TDK Corporation, Teledyne LeCroy, Tidal Systems, Inc., Toshiba Corporation, ULINK Technology, Inc., Western Digital Technologies, Inc., X-IO Technologies, Xilinx, Echostreams Innovative Solutions LLC, eInfochips, Inc., Novachips Co. Ltd., OSR Open Systems Resources, Inc., Qnap Systems, Inc., SerialTek LLC, Super Micro Computer, Inc., IBM


NVMe* Development Timeline

NVMe* 1.0 – Mar ‘11
• Base spec for PCIe*
• Queuing Interface
• Command Set
• E2E Data Protection
• Security

NVMe 1.1 – Oct ‘12
• Multi-Path IO
• Namespace Sharing
• Reservations
• Autonomous Power Transition
• Scatter Gather Lists

NVMe 1.2 – Nov ‘14
• Namespace Management
• Controller Memory Buffer
• Temperature Thresholds
• Active/Idle Power & RTD3
• Host Memory Buffer
• Live Firmware Update

NVMe Management Interface 1.0 – Nov ‘15
• Out-of-band management
• Device discovery
• Health & temp monitoring
• Firmware Update
• And more…

NVMe over Fabrics 1.0 – June ‘16
• Enables NVMe SSDs to efficiently connect via fabrics like Ethernet, Fibre Channel, InfiniBand™, etc.

NVMe 1.3 – May ‘17
• Sanitize
• Virtualization
• Directives (Streams)
• Self-Test & Telemetry
• Boot Partitions
• And more…

NVMe 1.4 – 2019 (month TBD)
• IO Determinism
• Persistent Memory Region (PMR)
• Multi-pathing

NVMe is the place for innovation in SSDs.


Evolution of NVMe

2011 • NVM Express Specification 1.0 published by industry leaders on March 1

2012 • NVM Express Specification 1.1 released on October 11

2014 • NVM Express Specification 1.2 released on November 3
• The NVM Express Work Group was incorporated as NVM Express, Inc., the consortium responsible for the development of the NVM Express specification
• Work on the NVM Express over Fabrics (NVMe-oF) Specification kicked off

2015 • NVM Express Management Interface (NVMe-MI) Specification officially released. Provides out-of-band management for NVMe components and systems and a common baseline management feature set across all NVMe devices and systems.

2016 • NVM Express over Fabrics (NVMe-oF) Specification published, extending NVMe onto fabrics such as Ethernet, Fibre Channel and InfiniBand® and providing access to individual NVMe devices and storage systems.

2017 • NVM Express Specification 1.3 published. Addresses the needs of mobile devices, with their need for low power consumption and other technical features, making it the only storage interface available for all platforms from mobile devices through data center storage systems.


New Features for Revision 1.3

• Namespace Identifier List – Mandatory, TP 019
• Device Self-Test – Optional, TP 001a
• Sanitize – Optional, TP 004
• Directives (Streams) – Optional, TP 009
• Boot Partitions – Optional, TP 003
• Telemetry – Optional, TP 009
• Virtualization Enhancements – Optional, TP 010
• NVMe-MI Enhancements – Optional, TP 018
• Host Controlled Thermal Management – Optional, TP 012
• Timestamp – Optional, TP 002
• Emulated Controller Performance Enhancement – Optional, TP 024


NVMe PCIe Product Form Factors (2017-20)

Form factors shown: add-in card (HHHL / FHHL), 2.5” SFF 7mm / 15mm (U.2, U.3), M.2, NGSFF, and EDSFF.

2.5” SFF (U.2)
▪ 2x2 / 1x4 PCIe Gen3/4, dual-port support, 8639 connector
▪ Split into two power offerings: 15-25W (15mm), 9-12W (7mm)
▪ Expected to be direct attached to the CPU rather than through an HBA
▪ U.3 device is compatible with U.2 system design

M.2: x2 / x4 PCIe Gen3, sizes 22x30, 22x42, 22x60, 22x80, 22x110mm
▪ Power less than 9 watts, un-encapsulated, no hot swap
▪ Targeted as a replacement for SATA SSDs

NGSFF (M.3): x2 / x4 PCIe Gen3/4, size 31x110mm (proposal)
▪ Power: 16W (max), hot-plug and front loading
▪ High density: up to 64 devices in a 1U server

SFF-TA-1002 (EDSFF ruler): x4 / x8 PCIe Gen3/4/5 (ready)
▪ Power: up to 50W, hot-plug and front loading
▪ Size 1.5” x 12” x 9.5mm, up to 32 devices in a 1U server

Add-in card (HHHL / FHHL)
▪ x4 / x8 PCIe Gen3/4
▪ High performance and power (25-50 watts)
▪ Focus on high capacity
▪ Up to 6.4TB on Read Intensive (3-5 DWPD)
▪ Up to 15.x TB on Very Read Intensive (~1 DWPD)
▪ Best performance and lowest latency

Figure callouts: M.2 lengths 22-80mm and 22-110mm; NGSFF 31-110mm; ruler 38.6-325.35 x 9.5mm (z height).


NVMe End Device Requirements

• Controller Memory Buffer

• General Scatter Gather Lists (SGLs)

• Log indicating Fabrics capable

NVMe Fabrics Device Requirements

• Any Modern RDMA Capable Device

• Access (primitive verbs)

• Discovery

• Send Message

• Receive Message

• Read

• Write

• NVMe Over TCP - New
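The RDMA verbs listed above are what the Linux host-side fabrics transport exercises underneath nvme-cli. A minimal sketch of discovering and attaching a fabric-attached namespace over RDMA (InfiniBand/RoCE); the address, port, and subsystem NQN are illustrative placeholders, not values from this presentation:

# modprobe nvme-rdma – load the NVMe over Fabrics RDMA host transport
# nvme discover -t rdma -a 192.0.2.10 -s 4420 – ask the discovery controller which subsystems are exported
# nvme connect -t rdma -a 192.0.2.10 -s 4420 -n nqn.2016-06.io.example:subsys1 – attach; the remote namespace appears as a local /dev/nvmeXnY
# nvme list – verify the fabric-attached namespace is visible
# nvme disconnect -n nqn.2016-06.io.example:subsys1 – detach when done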


Storage Latency and Command / Data Throughput

Bus             | Media     | Read Lat. (us) | Write Lat. (us) | Read IOPs | Write IOPs | Read Tp (GB/s) | Write Tp (GB/s)
Memory (in CEC) | DRAM DIMM | <1             | <1              | (not a persistent storage)
Memory (in CEC) | SCM       | <1             | <1              | (persistent storage)
PCIe (NVMe)     | SCM       | <10            | <10             | 550K      | 500K       | 2.4            | 2.0
PCIe (NVMe)     | LL Flash  | <15            | <15             | 750K      | 180K       | 3.2            | 3.0
PCIe (NVMe)     | Flash     | <90            | <25             | 750K      | 180K       | 3.5            | 3.0
SAS             | Flash     | 150            | 60              | 420K      | 50K        | 2.2            | 1.6
SATA            | Flash     | 1.8ms          | 3.6ms           | 93K       | 25K        | 0.5            | 0.5
SAS / SATA      | HDD       | >ms            | >ms             | 200       | 200        | 0.15           | 0.15
Tape            |           | “secs”         | “secs”          | “slow”    | “slow”     | “slow”         | “slow”


SCM: 3DXP from Intel/Micron. Byte-addressable in DIMM form (Apache Pass) and block-addressable (M.2/U.2/AIC, etc.) behind the NVMe interface.

NVMe/SCM: Performance numbers are for Intel’s Optane PCIe Gen3 x4 add-in card. Endurance 30 DWPD.

NVMe/LL Flash: Performance numbers are Samsung’s zSSD projections.

NVMe: Intel, Samsung, WD, and Micron adapters are PCIe Gen3 x4. Performance is limited by the controller.

SAS SSD: Assumes 12G dual port active/active. Performance of single-port operation (typical) is expected to be lower.

IOPs and latencies: Normally measured with random 4K ops. * <1us for a 1K transfer utilizing the Persistent Log Buffer feature.

Data throughput: Normally measured with large sequential 256KB ops.
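For reference, figures like those above are typically gathered with a synthetic I/O generator; the presentation does not name a tool, so the fio invocations below are an assumption that simply mirrors the stated conventions (random 4K for IOPs and latency, large sequential 256KB for throughput), using an illustrative /dev/nvme0n1 and read-only jobs:

# fio --name=rand4k --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting – 4K random read IOPs and latency
# fio --name=seq256k --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=read --bs=256k --iodepth=8 --numjobs=1 --runtime=60 --time_based – 256KB sequential read throughput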


Comparisons – NVMe to SAS


RAS Features Comparison

Description                                               | SAS SSD            | NVMe      | Remarks
MTBF                                                      | ≥ 2 million hours  | ≥ 2 million hours | Media is a major contributor
End of Life Data Retention                                | ≥ 3 months         | ≥ 3 months |
Power Loss Protection                                     | Yes                | Yes       |
Meta Data Support                                         | Yes                | Yes       |
Data Integrity Field/Extension (DIF/DIX)                  | Yes / No           | Yes / Yes | NVMe RAID adapter might use DIF and SAS might not require DIX. BOLT adapter supports DIF & DIX.
RAID Protection (Internal)                                | Yes                | Yes       | Tolerates flash module (sub) failures
Hardware RAID (External)                                  | Yes (via SAS RAID) | No (NVMe RAID card) |
Software RAID & Erasure Coding                            | Yes                | Yes       |
SMART Attributes (wear, thermal, advanced warnings, etc.) | Partial            | Yes       | Available in NVMe through one consolidated interface
Live Firmware Update                                      | Yes                | Yes       | Earlier NVMe devices require a reset after activation
Endurance (DWPD)                                          | Varies             | Varies    | Various levels available on both SAS & NVMe
Dual Path Support                                         | Yes                | Yes       | Some NVMe form factors might not support it
Directives, Advanced Power Mgmt., Reservations, etc.      | Partial            | Yes       | Improves QoS, space mgmt., endurance, power, etc. (Linux adopting rapidly)
Multi Initiator                                           | Yes                | No        | Additional functions in the PCIe switch might be required for NVMe
Management Interface                                      | No                 | Yes       | Allows in-band and side-band (SMS) management
Surprise add/remove (Hot plug)                            | Yes                | Yes       | NVMe physical remove/replace varies by form factor
Device Error Logging                                      | Yes                | Yes       | NVMe supports persistent error logs as well


SATA vs SAS vs NVMe SSD Comparison

[Chart: relative performance multipliers ranging from roughly 1.3x to 4.3x; the individual metrics and axis labels are not preserved in this transcript.]


NVMe in IBM Power Systems


NVMe PCIe Product Form Factors (2017-20) – in IBM Power Systems

This slide repeats the earlier form-factor overview, annotated with the IBM Power Systems NVMe adapter code names: LEAF (Gen3 x4), BOLT (Gen3 x8), SPARK (P9 SO), and POSEIDON (P9 SU).


POWER8 vs POWER9 2U & 4U SAS Subsystem:

POWER8 SAS Subsystem configs

2U- https://www.ibm.com/support/knowledgecenter/8284-22A/p8eg2/p8eg2_8xx_kickoff.htm

• EJ0T = 2U Base (57D7 non-cache SAS RAID) – 12 SFF (P1-C14)

• EJ0V = 2U Split DASD backplane - 2 57D7 – 6+6 SFF (P1-C14/C15)

• EJ0U = 2U High Function (Dual 57D8 cache SAS RAID w/SuperCap card, rear SAS cable) (P1-C14-E1/C15-E1)

4U- https://www.ibm.com/support/knowledgecenter/8286-42A/p8eg2/p8eg2_8xx_kickoff.htm

• EJ0N = 4U Base (57D7 non-cache SAS RAID) – 12 SFF (P1-C14)

• EJ0S = 4U Split DASD backplane - 2 57D7 – 6+6 SFF (P1-C14/C15)

• EJ0P = 4U High Function (Dual 57D8 cache SAS RAID w/SuperCap card, rear SAS cable) – 8 SFF and 6 1.8” SSD (P1-C14/15)

POWER9 SAS configs

2U- https://www.ibm.com/support/knowledgecenter/9009-22A/p9hdx/9009_22a_landing.htm

• EJ1F = Base (57D7 non-cache SAS RAID) - 8 SFF (P1-C49)

• EJ1H = Split DASD – 2 57D7 – 4+4 SFF (P1-C49/C50)

• EJ1G = High Function – Single 57DC cache SAS RAID w/SuperCap card, rear SAS cable - 8 SFF (P1-C49/C49-E1)

4U- https://www.ibm.com/support/knowledgecenter/9009-42A/p9hdx/9009_42a_landing.htm

• EJ1C = Base (57D7 non-cache SAS RAID) - 12 SFF (P1-C49)

• EJ1E = Split 2 57D7 - 12 SFF (P1-C49/C50)

• EJ1D = High Function (Dual 57D8 cache SAS RAID w/SuperCap card, rear SAS cable) - 18 SFF (P1-C49/C50)

• EJ1M = High Function (Dual 57D8 cache SAS RAID w/SuperCap card, rear SAS cable) - 12 SFF (P1-C49/C50)


S924 / H924 Internal Storage Options

Supported Media Overview

o NVMe M.2 Flash devices

400GB 1.5 DWPD (ES14)

o SFF HDDs

600GB, 1200GB, 1800GB - 10K RPM

300GB, 600GB – 15K RPM

o SFF SSDs

387GB, 775GB, 1551GB – 10 DWPD

931GB, 1860GB, 3720GB – 1 DWPD

o RDX Disk Cartridge

1TB Disk Cartridge (EU01)

2TB Disk Cartridge (EU2T)

FC   | Description
EC59 | NVMe card with two M.2 connectors
EJ1C | Single RAID 0,10,5,6 – 12 SFF bays (Gen3-Carrier), 1 RDX bay
EJ1E | Split Backplane RAID 0,10,5,6 – 6+6 SFF bays (Gen3-Carrier), 1 RDX bay
EJ1M | Dual Write Cache RAID 0,10,5,6,5T2,6T2 – 12 SFF bays (Gen3-Carrier), 1 RDX bay
EJ1D | Dual Write Cache RAID 0,10,5,6,5T2,6T2 – 18 SFF bays (Gen3-Carrier)
EU00 | RDX Docking Station

[Diagram: internal storage options with either 12 SFF bays plus 1 RDX bay, or 18 SFF bays (D1-D18); the internal NVMe card attaches via PCIe Gen3 x8 and carries M.2 NVMe devices.]


S924 / H924 Scale Out Server


Annotated chassis callouts:
• 5 PCIe Gen4 slots and 6 PCIe Gen3 slots
• 16 DDR4 RDIMMs per processor, 32 DDR4 RDIMMs total
• 2 processor modules
• 2 internal storage slots
• 12 SFF bays & 1 RDX bay, or 18 SFF bays (D1-D18)
• 6 blowers
• LCD display, USB 3.0


POWER9 2U & 4U SAS Subsystem:

EJ1G = 2U High Function – Single 57DC cache SAS RAID w/SuperCap card, rear SAS cable - 8 SFF

- Same SAS adapters with different mechanical assemblies: 57DC (qty 1) = 2U, 57D8 (qty 2) = 4U; the 57D7 non-cache SAS RAID is the same assembly for 2U and 4U

• Same Disk backplane as base (no SAS expanders)

• Single Cache SAS RAID adapter with SuperCap card

• Same adapter that has been used in ESS solution

• ** Single copy of Cache – Linux now, AIX GA2, No IBMi support.

Internal SAS adapters installed in slots C49 and C50


Internal SAS Cabling

[Cabling diagrams: 2U/4U Base, 2U/4U Split, 2U High Function, 4U High Function]


POWER9 2U & 4U SAS Subsystem with NVMe:

POWER9 Scale Out SAS and/or NVMe configs

2U

• EJ1F + EC59 = Base (57D7 non-cache SAS RAID) - 8 SFF (P1-C49) + one NVMe carrier adapter (P1-C50)

• EC59 + EC59 = Two NVMe carrier adapters (P1-C49/C50)

4U

• EJ1C = Base (57D7 non-cache SAS RAID) - 12 SFF (P1-C49) + one NVMe carrier adapter (P1-C50)

• EC59 + EC59 = Two NVMe carrier adapters (P1-C49/C50)

Each (FC EC59) NVMe carrier adapter contains 1 or 2 NVMe M.2 Flash modules (FC ES14)

[Photos: EC59 carrier adapter, and 1 or 2 M.2 flash modules]


View of how the M.2 NVMe flash modules show up
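The slide’s screenshot is not reproduced in this transcript. As a rough illustration (my assumption; the original may show an AIX or management-console view), this is how the carrier’s M.2 modules can be listed from a Linux partition, where each module typically appears as its own NVMe controller:

# lspci | grep -i "non-volatile memory" – one PCIe NVMe controller per M.2 module behind the carrier
# nvme list – controllers and namespaces, e.g. /dev/nvme0n1 and /dev/nvme1n1
# lsblk -d -o NAME,SIZE,MODEL – the same namespaces as block devices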


S924 VIOS with dual NVMe vs VIOS with dual SAS adapters, split backplane and four SSDs


Storage Technology Performance

Device                   | Read Lat. (us) | Write Lat. (us) | Read IOPs | Write IOPs | Read Tp (GB/s) | Write Tp (GB/s)
DRAM DIMM (in CEC)       | <1             | <1              | (not a persistent storage)
SPARK NVMe               | 90             | 35              | 240K      | 50K        | 1.8            | 1.1
SAS SSD (6G dual port)   | 137            | 84              | 150K      | 23K        | 1.0            | 0.9
SAS SSD (6G single port) | 143            | 69              | 98K       | 64K        | 0.53           | 0.50
SAS HDD                  | >ms            | >ms             | 200       | 190        | 0.15           | 0.15


Notes:

• SPARK device is targeted to be a load source, with up to 4 in a P9 scale-out system. Mirroring is recommended and supported by the OS.
• SPARK NVMe M.2 device is 400GB with endurance of ~1.5 drive writes per day (DWPD) for 5 years.
• SPARK performance will be most throttled on 1S4U and least on 2S4U under sustained workload, due to thermal/acoustic limits.
• SAS SSD 6G dual path: 6G caching dual adapter (ZO6) with Asprey SSD FC #ES81, ES8K utilizing dual ports.
• SAS SSD 6G single path: 6G non-caching single adapter (Solstice/GTO) with Taurus-4, FC #ES10, ES11 utilizing a single port.
• SAS HDD: any enterprise-class HDD offered.

Command throughput (IOPs) and latencies: measured with random 4K ops.

Data throughput: measured with large sequential 256KB ops.


Burst Buffer & Log Tip Feature

▪ Over 5,000 compute nodes per system
▪ Each compute node (Witherspoon) uses one 1.6TB NVMe adapter for the burst buffer function
▪ R/W bandwidth ≥1.5GB/s with 1MB block size for the burst buffer application
▪ Burst buffer data is transferred between compute and I/O nodes using NVMe over Fabrics (InfiniBand)
▪ PCIe Peer-to-Peer (P2P) enabled between the NIC and NVMe adapters – a P9 feature
▪ NVMe offload enabled on the Mellanox CX-5 NIC; queues managed by the NIC to reduce jitter
▪ Each I/O node (Boston) uses one 4GB NV1604 NVMe adapter for the ESS Log Tip function
  – Read/Write BW: 5.1/5.4 GB/s
  – Read/Write IOPS: 1060K/929K
  – Read/Write latency: 17/20 us

[Architecture diagram: P9 compute nodes (GPUs, DIMMs, CX-5 NIC, 1.6TB BOLT NVMe AIC) connect over an RDMA fabric (NVMe over Fabrics – InfiniBand, <10us added overhead) to P9 I/O nodes (DIMMs, CX-5 NIC, HBA, NVMe-flash-backed DRAM for the “Log Tip”), which front the traditional storage.]

NVMe Over Fabrics - InfiniBand

Workload                    | NVMe Local | Fabrics w/o offload | Fabrics with offload
4K Rand. Read Latency (us)  | 84.5       | 93.9                | 90.0
4K Rand. Write Latency (us) | 23.1       | 30.7                | 26.2

Diagram callouts: largest GPFS file system is 250 petabytes; the Log Tip device is a 4GB NV1604.


Useful Linux NVMe commands

# man nvme – Provides the manual page for the nvme command
# nvme smart-log – Displays vital health information for the device
# nvme list – Lists the NVMe devices found in the system
# nvme id-ctrl /dev/nvme# – Displays the NVMe Identify Controller data structure
# nvme id-ns /dev/nvmeXnY – Displays the NVMe Identify Namespace data structure
# nvme fw-log /dev/nvme0 – Displays the firmware log; this adapter supports 3 firmware slots (3 redundant copies)
# nvme format – Provides various options, including selecting the block size and end-to-end protection
# nvme fw-download, fw-activate </dev/nvme#> – Firmware management commands
# nvme list-ns /dev/nvme# – Displays the namespaces in the NVM subsystem
# nvme create-ns, delete-ns, attach-ns – Namespace management commands

“sosreport” includes NVMe debug information
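To make the namespace management entries above concrete, here is a minimal sketch of creating and attaching a namespace on a controller that implements the optional Namespace Management capability; the device name, controller ID, and sizes are illustrative (--nsze/--ncap are in logical blocks, so 97,656,250 blocks of 512 bytes is roughly 50GB):

# nvme id-ctrl /dev/nvme0 | grep -E "oacs|cntlid" – confirm namespace management support and note the controller ID
# nvme create-ns /dev/nvme0 --nsze=97656250 --ncap=97656250 --flbas=0 --dps=0 – create the namespace; the command reports the new NSID
# nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0x1 – attach NSID 1 to controller 0x1
# nvme ns-rescan /dev/nvme0 – rescan so the OS sees the new /dev/nvme0n1
# nvme list – verify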


NVMe Flash Adapter – Smart Log / Fuel Gauge

A “fuel gauge” is available through the standard Linux distribution:

# nvme smart-log /dev/nvme0

Smart Log for NVME device:nvme0

critical_warning : 0 (See Notes below)

temperature : 44 C

available_spare : 100%

available_spare_threshold : 10%

percentage_used : 2%

data_units_read : 2,444,827,030 (counter in 512KB)

data_units_written : 1,372,534,735 (counter in 512KB)

host_read_commands : 21,113,180,566

host_write_commands : 9,637,700,954

controller_busy_time : 26,366

power_cycles : 10

power_on_hours : 1,714

unsafe_shutdowns : 3

media_errors : 10

num_err_log_entries : 12

Warning Temperature Time : 0

Critical Composite Temperature Time : 0

Temperature Sensor 1 : 44 C

Temperature Sensor 2 : 42 C

Temperature Sensor 3 : 41 C

Notes: Critical Warning is one byte: bit 0 = low spare, bit 1 = thermal, bit 2 = media error, bit 3 = read only, bit 4 = backup circuit
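A small sketch of how these fields might be watched from a script; the field names come from the output above, while the device name and the idea of scripting it are assumptions of mine:

# nvme smart-log /dev/nvme0 | grep -E "critical_warning|available_spare|percentage_used|media_errors"

A non-zero critical_warning (decoded per the note above) or a percentage_used value approaching 100% is the usual trigger for proactive replacement.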


Useful AIX NVMe commands

❖ Standard AIX diagnostics support NVMe; "errpt" entries specific to NVMe are available
❖ The command-line tool "nvmemgr" has been developed to support NVMe devices


Multiple Namespaces and Live Firmware Update

Multiple Namespaces

An NVM subsystem is comprised of some number of controllers, where each controller may access some number of namespaces, and each namespace is comprised of some number of logical blocks.

Namespace: a quantity of non-volatile memory that may be formatted into logical blocks. When formatted, a namespace of size n is a collection of logical blocks with logical block addresses from 0 to (n-1).

[Figure: an NVM Express controller with two namespaces]

• >16 namespaces supported
• # nvme id-ctrl /dev/nvme1
  nn : 1 – number of valid namespaces under this controller
• List namespaces:
  # nvme list-ns /dev/nvme1 – lists the namespace IDs

Live Firmware Update and Firmware Slots

# nvme id-ctrl /dev/nvme1 | grep frmw
frmw : 0x16 – expected value
(bit 0 == 1 if slot 1 is not updatable; bits 1-3 = number of slots; bit 4 == 1 if activating firmware without reset is supported)

# nvme fw-log /dev/nvme0 – displays the firmware loaded in each slot
# nvme fw-download /dev/nvme0 --fw=/path/to/nvme.fw – downloads firmware to the controller RAM
# nvme fw-activate /dev/nvme0 --slot=1 --action=3 – the image in the specified firmware slot is requested to be activated immediately, without a reset (action 3 is also called Live Firmware Update; you might want to use "--action=1" first to flash the firmware to the slot)
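Decoding the expected frmw value above: 0x16 is binary 10110, so bit 0 = 0 (slot 1 is updatable), bits 1-3 = 011 = 3 firmware slots, and bit 4 = 1 (activation without reset is supported), which matches the three redundant slots noted earlier. A minimal, conservative update sequence assembled from the commands on this slide; the slot number and image path are illustrative:

# nvme fw-log /dev/nvme0 – record what is currently in each slot
# nvme fw-download /dev/nvme0 --fw=/path/to/nvme.fw – stage the image in controller RAM
# nvme fw-activate /dev/nvme0 --slot=2 --action=1 – commit the staged image to slot 2 without activating it
# nvme fw-activate /dev/nvme0 --slot=2 --action=3 – activate the image in slot 2 immediately, without a reset
# nvme fw-log /dev/nvme0 – confirm the active firmware revision changed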