2019 Storage Developer Conference. © SPEC, iXsystems, NetApp. All Rights Reserved. 1
Introducing the AI/ML and Genomics Workloads from the SPEC® Storage Subcommittee
Nick Principe, [email protected]
Ken Cantrell, [email protected]
Agenda
Introduction to SPEC, SPEC Storage, and SPEC Storage 2020
Proposed AI/ML Workload
Proposed Genomics Workload
(Time permitting) Misc SPEC Storage 2020 updates
SPEC, SPEC Storage, Storage2020
What is SPEC?
The Standard Performance Evaluation Corporation (SPEC, www.spec.org) is a non-profit corporation formed to establish, maintain, and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops benchmark suites and also reviews and publishes submitted results from member organizations and other benchmark licensees.
SPEC, the SPEC logo and SPEC SFS are registered trademarks of the Standard Performance Evaluation Corporation, reprinted with permission
SPEC
- Open Systems Group (OSG): Storage, CPU, (many others)
- Graphics & Workstation Performance Group (GWPG)
- High-Performance Group (HPG)
- Research Group (RG)
Disclaimer and Why Do You Care?
The SPEC Storage 2020 release and the Genomics and AI/ML (Image Recognition) workloads, as represented in this presentation, are pre-release software. The benchmark framework, workload, and features are still under internal SPEC review and may change before final release of SPEC Storage 2020 and/or the individual workloads.
SPEC OSG Member Consortium (as of 9/14/2019)
Acer Incorporated; Action S.A.; AMD; Amazon Web Services, Inc.; Apple Inc.; ARM; ASUSTek Computer Inc.; Auristor Inc; Bull SAS; Charles University; Chengdu Haiguang IC Design Co., Ltd.; China Academy of Information and Communications Technology; Cisco Systems; Dell Inc.; Digital Ocean Inc.; Epsylon Sp. z.o.o. Sp. Komandytowa; Format sp. z o.o.; ForTISS -- An-Institut der Technischen Universitaet Muenchen; Fujitsu, Ltd.; Gartner, Inc.; Giga-Byte Technology Co., Ltd.; Google, Inc.; 'Grigore T. Popa' University of Medicine and Pharmacy; HPE; Hitachi Vantara; Hitachi, Ltd; IBM; Indiana University; Inspur; Institute of Information Science, Academia Sinica; Intel Corporation; iXsystems Inc.; Japan Advanced Institute of Science and Technology; Karlsruhe Institute of Technology (KIT); Leibniz-Rechenzentrum, Bavarian Academy of Science; Lenovo; Linaro Limited; Marvell Technology Group, Ltd.; Microsoft Corporation; National University of Singapore; NEC Corporation; NetApp, Inc.; Netweb Pte Ltd; New H3C Technologies Co., Ltd.; NVIDIA Corporation; Oracle Corporation; Principled Technologies, Inc.; Pure Storage; Qualcomm Technologies Inc.; Quanta Computer Inc; Red Hat, Inc; RWTH Aachen University; Samsung; Supermicro Computer, Inc.; SUSE; Taobao (China) Software Co.; Technische Universitat Darmstadt; Technische Universitat Dresden, ZIH; Telecommunications Technology Association; Tsinghua University; University of Aizu; University of California at Berkeley; University of Maryland; University of Miami; University of Texas at Austin; University of Tsukuba; University of Wuerzburg; VIA Technologies, Inc.; Virginia Polytechnic Institute and State University; VMware, Inc.; WekaIO
What is The SPEC Storage Benchmark?
Previously known as SPEC SFS … being renamed to SPEC Storage. An industry-standard storage solution benchmark used for:
- Marketing and competitive positioning
- Internal engineering (sizing, design, validating, stress testing, etc.)
Realistic, solution-based workloads:
- SFS2014: DATABASE, SWBUILD, VDA, VDI, EDA
- Storage2020: GENOMICS, Image Recognition (proposed)
Measures application-level, I/O-oriented performance. Ability to measure a broad range of products and configurations. Not just NAS! Ability to test any fully-featured file system.
SPEC SFS2014/Storage2020: Additional Background
SDC 2014: SPEC SFS 2014 – The Workloads and Metrics, an Under-the-Hood Review [Spencer Shepler, Nick Principe, Ken Cantrell]
http://www.snia.org/sites/default/files/SpencerShepler_SPEC_Under-the-Hood_Review_Final.pdf
http://spec.org/sfs2014/presentations/benchmarking.html
SDC 2015: Application-Level Benchmarking with SPEC SFS 2014 [Nick Principe, Vernon Miller]
http://www.snia.org/sites/default/files/SDC15_presentations/performance/Principe_MillerApplication_Level_Benchmarking_v1.6.pdf
https://www.youtube.com/watch?v=4wfeM1q0zHA
SDC 2016: Using SPEC SFS with the SNIA Emerald Program for EPA Energy Star Data Center Storage Program [Nick Principe, Vernon Miller]
https://www.youtube.com/watch?v=7gDgcDYatvM
https://www.snia.org/sites/default/files/SDC/2016/presentations/green_storage/Miller_Principe_Using_SPEC_SFS_with_SNIA_Emerald_Program-rev.pdf
SDC 2016: Introducing the EDA Workload for the SPEC SFS2014 Benchmark [Jig Bhadaliya, Nick Principe]
https://www.youtube.com/watch?v=LaxXsrOeux4
https://www.snia.org/sites/default/files/SDC/2016/presentations/performance/Principe_Bhadaliya_Introducing_EDA_Workload_SPEC_SFS_Benchmark_v2.pdf
SPEC Storage: Proposed AI/ML Workload
Focus area, chosen from the broader Neural Nets / Deep Learning space: Image Recognition
“Fewer than 5% of our customers are using custom models. Most use something like ResNet, VGG, Inception, SSD, or Yolo.”
[Lambda Labs]
https://lambdalabs.com/blog/best-gpu-tensorflow-2080-ti-vs-v100-vs-titan-v-vs-1080-ti-benchmark/
Focus Area: Image Recognition
Framework: Tensorflow
Model: VGG16, Resnet50, SSD
Dataset: CityScape, ImageNet, COCO
Data Pipeline Architecture
[Diagram: Reader 1 … Reader N pull training files from a pool; data preparation threads run DecodeDistort and Resize/Crop on each example; examples feed a ShuffleBuffer, are grouped one batch at a time into a Batch Buffer, and batches land in a Prefetch Buffer.]
Data Pipeline Architecture
[Same diagram, with a DGX-2 consuming the prefetched batches.]
What happens at the CPU/GPU level is interesting for those designing and sizing an overall architecture, but our interest is in the I/O-related operations.
What We Traced, and What We Modelled
Training and Validating: read in large TFRecords; occasional checkpointing (small writes)
Create TFRecord Files: read in smaller image files; write out larger TFRecords
Why Create TFRecords?
From https://www.tensorflow.org/guide/performance/overview: "Reading large numbers of small files significantly impacts I/O performance. One approach to get maximum I/O throughput is to preprocess input data into larger (~100MB) TFRecord files. For smaller data sets (200MB-1GB), the best approach is often to load the entire data set into memory." The document Downloading and converting to TFRecord format includes information and scripts for creating TFRecords, and this script converts the CIFAR-10 dataset into TFRecords.
Proposed workload uses 140MB modelled TFRecord files
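The packing step can be sketched in plain Python. This is a simplified stand-in for the real TFRecord framing (which length-prefixes each record and adds CRC32C checksums of both the length and the payload), not TensorFlow's actual writer:

```python
import io
import struct

# ~140 MiB target, matching the modelled TFRecord file size in the workload
RECORD_FILE_TARGET = 140 * 1024 * 1024

def pack_examples(examples, target_size=RECORD_FILE_TARGET):
    """Pack serialized examples into record files of roughly target_size bytes.

    Each record is written as an 8-byte little-endian length prefix followed
    by the payload -- a simplification of real TFRecord framing, which also
    carries CRC32C checksums of the length and the data.
    """
    files, buf = [], io.BytesIO()
    for example in examples:
        buf.write(struct.pack("<Q", len(example)))
        buf.write(example)
        if buf.tell() >= target_size:   # file is "full": start a new one
            files.append(buf.getvalue())
            buf = io.BytesIO()
    if buf.tell():                      # flush the final partial file
        files.append(buf.getvalue())
    return files
```

Turning many small image reads into a few large sequential writes is exactly the I/O shift the TFRecord-creation phase models.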
[Diagram: load-generation clients connected through 32-port switches (with an ISL link) to an AFF A800 storage array.]
"Normal" trace point, using network traces
[Same diagram.]
Trace point for the image recognition workload: strace on the client
What We Can See With strace
[Slide showed raw strace output, including the many futex calls.]
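A quick way to reduce raw strace output to the I/O story is to tally the calls. The sample lines below are hypothetical stand-ins for what the reader processes emit (real traces are far noisier, dominated by futex):

```python
import re

# Hypothetical strace lines of the kind seen while tracing the readers
SAMPLE = """\
pread64(41, "..."..., 262144, 0) = 262144
pread64(41, "..."..., 262144, 262144) = 262144
futex(0x7f1c2c000b60, FUTEX_WAKE_PRIVATE, 1) = 1
"""

# syscall name at line start, return value after the closing paren
SYSCALL_RE = re.compile(r"^(\w+)\(.*\)\s+=\s+(-?\d+)")

def count_syscalls(trace_text):
    """Tally syscall names and sum the bytes returned by pread64 calls."""
    counts, pread_bytes = {}, 0
    for line in trace_text.splitlines():
        m = SYSCALL_RE.match(line)
        if not m:
            continue
        name, ret = m.group(1), int(m.group(2))
        counts[name] = counts.get(name, 0) + 1
        if name == "pread64" and ret > 0:
            pread_bytes += ret
    return counts, pread_bytes
```

On the sample above this yields two 256 KiB preads per file descriptor and one futex wake, which is the shape of the read pattern described on the next slide.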
Data Reading Pattern for Training/Validating Phase
[Diagram: Reader 1, Reader 2, Reader 3, … Reader N, each working through one training file.]
- Bursty 256K preads, affected by the prefetch depth (modelled by 2M reads)
- Read through the whole file before switching to another one
- Bursty 64K sequential reads, controlled by the NFS mounting options
Workload Definition: High-Level Params
Business Units (unit of scaling): AI_JOBS. Each represents approx. the load of 1/10th of our measured GPUs. Each is composed of 4 independent, concurrent sub-workloads:
- AI_SF: Reads in the image files
- AI_TF: Writes out the TFRecord files
- AI_TR: Reads in the TFRecord files
- AI_CP: Occasional checkpointing
Thresholds for success: Proc oprate: 75%; Global oprate: 95%; Workload variance: 5%
Component Workload Composition

                     SF                    TF                TR               CP
# Instances          4                     2                 10               1
Oprate / latency     100 / 10ms            2 / 500ms         3 / 333ms        1 / 1s
File Size            1 MiB                 140 MiB           140 MiB          30 MiB
# Files/Dir          200                   10                10               1
Dir Count            3                     2                 2                1
Op Mix               37% Read, 56% Stat,   100% Write        95% Read,        100% Write
                     7% Access                               5% Stat
IO Sizes             5% 1K–64K,            Spread out        100% 2M          80% < 512B,
                     95% 256K              from 10K–2M                        20% 2M
Storage Efficiency   10% compressible, 0% dedupable (SF, TF, TR)              0% for both (CP)
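The instance counts and per-instance op rates above determine the aggregate op rate contributed by one AI_JOB; a quick sanity check:

```python
# (instances, oprate per instance in op/s) for each sub-workload,
# taken from the component composition table
SUBWORKLOADS = {
    "AI_SF": (4, 100),   # image file reads
    "AI_TF": (2, 2),     # TFRecord writes
    "AI_TR": (10, 3),    # TFRecord reads
    "AI_CP": (1, 1),     # checkpoint writes
}

def aggregate_oprate(subworkloads=SUBWORKLOADS):
    """Total op/s contributed by a single AI_JOB business unit."""
    return sum(n * rate for n, rate in subworkloads.values())

# 4*100 + 2*2 + 10*3 + 1*1 = 435 op/s per AI_JOB, which matches the
# per-business-unit aggregate op rate quoted for the workload
```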
Per Business Metric Characteristics
- Average file size: 9952 KiB
- Aggregate data set: 88,330 MiB
- Num of client procs: 17
- Aggregate read/write/metadata op rate: 435/s
- Aggregate read/write data rate: 91.3 MiB/s
Sample (Real) Results for Proposed SPEC SFS2020 AI/ML Workload
[Chart: latency (ms, 0–25) vs. aggregate achieved op/s (0–20,000; raw for the aggregate, normalized for sub-workloads). Series: avg latency (ms), AI_SF latency, AI_TF latency, AI_TR latency, AI_CP latency.]
Sample (Real) Results for Proposed SPEC SFS2020 AI/ML Workload
[Chart: latency (ms, 0–25) vs. aggregate and per-workload scaled achieved op/s (raw for the aggregate, normalized for sub-workloads); same five series.]
Sample (Real) Results for Proposed SPEC SFS2020 AI/ML Workload
[Same chart, annotated:] First invalid loadpoint. Note how the AI_SF sub-workload failed to meet its (normalized) op goal.
Sample (Real) Results for Proposed SPEC SFS2020 AI/ML Workload
[Same chart, annotated:] AI_TF, AI_TR, AI_CP all continue to meet their (normalized) op/s goals.
Effect of Client Side Caching on Op Mix Observed at Storage
Effect of Client Side Caching on Actual Data Rates to Storage: Sample (Real) Results for Proposed SPEC SFS2020 AI/ML Workload
[Chart: data transfer rates (MiB/s, 0–4,500) vs. Storage2020 increasing load (AI_JOBS). Series: Storage2020 reported aggregate read/write MiB/s; storage array reported NFSv3 aggregate read/write MiB/s.]
AI/ML Wrap-Up
- Current proposed workload focuses on the data set preparation and training phases of image recognition
- Workload consists of 4 concurrent but independent sub-workloads
- Like all/most application-level workloads, the workload can be affected by client-side OS/FS behavior
- YOU can affect what the final workload looks like: contact Ken, Nick, or other members of the SPEC Storage committee to get involved
SPEC Storage: Proposed Genomics Workload
Special Thanks to Workload Sponsor: Dell EMC
NGS Workload Background
- Next Generation Sequencing (NGS) is a significant workload in the HPC space
- SPEC Storage wants to characterize the storage demands of NGS workloads
- Shared network-attached storage is the predominant storage type used
Gene Sequencing Workflow
Sequencing: digitizes the physical sample; generation of BCL files by image analysis (> 6 TiB in ~24 hours)
Primary Analysis: sequencer-specific steps; production of sequence reads and quality scores; often results in a FASTQ file (~12 TiB in ~48 hours)
Secondary Analysis: quality filtering; alignment and assembly (~6 TiB in ~44 hours)
Genomics Workload Challenges
- Lots of variation in workflows used for research: different sequencers, applications, workflows
- FDA-approved diagnostic tools will result in more fixed workflows
- Taking a similar approach to SFS 2014 EDA:
  - Look at the aggregate workload at the storage
  - Many different jobs/workflows run at the same time; jobs are not synchronized
  - Many different analysis phases ongoing at once
  - Create an aggregate workload that matches this
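The EDA-style aggregation (many unsynchronized jobs and phases hitting storage at once, characterized as one combined op mix) can be sketched as below; the phase names and counts are illustrative, not measured values:

```python
# Hypothetical per-phase NFS op counts; the benchmark reproduces the
# aggregate mix, since a real pipeline runs many unsynchronized phases
# against the same storage at once.
PHASES = {
    "alignment": {"read": 700, "write": 80, "getattr": 100},
    "filtering": {"read": 300, "write": 250, "getattr": 60},
}

def aggregate_mix(phases=PHASES):
    """Combine per-phase op counts into one percentage mix."""
    totals = {}
    for ops in phases.values():
        for op, n in ops.items():
            totals[op] = totals.get(op, 0) + n
    grand = sum(totals.values())
    return {op: round(100 * n / grand, 1) for op, n in totals.items()}
```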
Characterizing the Genomics Workload
- Run a 9-phase analysis pipeline
- Take traces for each phase
- Characterize each phase
- Distill phases down to a minimal set of unique workloads
- Validate against multiple real-world environments
Current Status of Genomics Workload
- Collected traces for the 9-phase analysis pipeline
- Collected traces from a single real-world environment
- Completed initial analysis of op mix from both environments
- Seeking more real-world environments to validate against
- Further tracing and analysis ongoing
Genomics Workload: Observed Operation Mix

          read  write  lookup  access  getattr  statfs  readdirplus  setattr  create  remove
Site #1   70%   8%     2%      5%      10%      0%      1%           2%       1%      1%
Test Lab  50%   33%    1%      1%      8%       6%      0%           1%       0%      0%

Test Lab Run: skipped Data Prep Phase; ran phases separately.
Genomics Wrap-Up
- Current intention is to represent the entire genomics pipeline
- Seeking more real-world environments to validate against
- YOU can affect what the final workload looks like: contact Ken, Nick, or other members of the SPEC Storage committee to get involved
Misc SPEC Storage2020 Updates
SPEC Storage 2020
Anchor features for SPEC Storage 2020:
- The Genomics and Image Recognition workloads
- Fundamental changes to the client/server architecture to dramatically improve scalability
Most likely, this will not be a performance-neutral release; if so:
- We expect currently published SFS 2014 results will remain valid and stay available on the spec.org website, but at a TBD date no new SFS 2014 submissions will be accepted
- Storage 2020 results will not be comparable to SFS 2014 results, even for workloads with the same name
Appendix: Additional Image Recognition Workload Details
Dataset References
CityScape
https://www.cityscapes-dataset.com/dataset-overview/
https://arxiv.org/pdf/1604.01685.pdf
https://arxiv.org/abs/1604.01685
COCO
https://www.tensorflow.org/datasets/catalog/coco
https://www.tensorflow.org/datasets/catalog/coco2014
http://cocodataset.org/#home
https://arxiv.org/abs/1405.0312
https://arxiv.org/pdf/1405.0312.pdf
ImageNet
https://www.tensorflow.org/datasets/catalog/imagenet2012
http://image-net.org/
AI/ML Component Workload Details (1)
SF: Ops 37% Read, 56% Stat, 7% Access; Reads 5% 1K–64K, 95% 256K; Writes none; SE 10% compressible, 0% dedupable
TF: Ops 100% Write; Writes 20% 10240–24576, 20% 32K, 20% 64K, 5% 128K, 20% 196K, 5% 256K, 5% 1M, 5% 2–2.5M; SE 10% compressible, 0% dedupable; Other: Sharemode = 1, Uniform
AI/ML Component Workload Details (2)
TR: Ops 95% Read, 5% Stat; Reads 100% 2M; Writes 100% 2M; Files 100% 4K–8191; SE 10% compressible, 0% dedupable; Other: Sharemode = 1, Uniform distro
CP: Ops 100% Write; Writes 80% 1B–512B, 20% 2M; Files 100% 4K–8191; SE 0% compressible, 0% dedupable
Appendix: Storage2020 Usage Examples
SPEC Storage 2020 I/O Framework: Basic Model
Basic model: netmist speaks file and file system operations to the local OS, which in turn delivers them to a file system: setattr, getattr, open file, close file, lookup file, read from file, write to file, create file, delete file, etc.
Some benchmarks do everything they can to avoid OS involvement. Storage2020 relies on the OS to translate application commands to the appropriate FS commands. This means that OS-level caching, coalescing, prefetching, etc. will occur. Storage2020 is a system benchmark focusing on I/O performance, not a storage-array-only benchmark.
[Diagram: netmist (SPEC Storage2020 load generator) → OS (OS-specific file and file system semantics) → file system (file-system-specific semantics).]
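A minimal sketch of the basic model: issue the named file and file system operations through the local OS and let it cache, coalesce, and prefetch underneath. This is illustrative Python, not netmist itself:

```python
import os
import tempfile

def one_netmist_style_pass(directory, payload=b"x" * 4096):
    """One pass of file/file-system ops through the local OS, in the
    spirit of netmist: create, write, getattr (stat), read, delete.
    The OS is free to cache and coalesce everything underneath."""
    path = os.path.join(directory, "demo.dat")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # create + open
    os.write(fd, payload)                                # write
    os.close(fd)                                         # close
    size = os.stat(path).st_size                         # getattr
    with open(path, "rb") as f:                          # open + read
        data = f.read()
    os.unlink(path)                                      # delete
    return size, data == payload

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(one_netmist_style_pass(d))
```

Because every call goes through the OS, the same code exercises a local file system, an NFS/SMB mount, or a cloud file system unchanged, which is exactly the portability the framework relies on.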
SPEC Storage2020 I/O Framework: Historically, the most common configuration (simplified)
The most common historical config is to use a remote file system and access it via SMB or NFS. The patterns delivered to the filer will be affected by the OS.
[Diagram: Client (netmist, the SPEC Storage2020 load generator → OS) → NFS or SMB → Filer (file system).]
SPEC Storage2020 I/O Framework: Historically, the most common configuration (with multiple clients)
Of course, most configurations will use multiple load generators.
[Diagram: Client 1 … Client N (each running netmist on its OS) → NFS or SMB → Filer (file system).]
SPEC Storage2020 I/O Framework: Local file system
It is perfectly valid for the file system to be a local file system, so no NAS server is necessary.
[Diagram: Client: netmist → OS → local file system (NTFS or ext3, for example) → local HD.]
SPEC Storage2020 I/O Framework: Simple example of block testing
It is possible to do block testing with SPEC Storage2020, but only indirectly. The committee has discussed adding a native block interface, but this is not planned for formal support in Storage2020.
[Diagram: Client: netmist → OS → file system (NTFS or ext3, for example) → iSCSI / FC → LUN exported by an array.]
SPEC Storage2020 I/O Framework: Cloud Support
It is acceptable for the remote file system to be somewhere in the cloud.
[Diagram: Client: netmist → OS → any remote FS protocol → cloud-based file system; or Client: netmist → OS → local file system (NTFS or ext3, for example) → iSCSI or FC → cloud-based file system.]
SPEC Storage2020 I/O Framework: Virtualization is ok, and complicates configs
With virtualization, this can get considerably more complicated. As a very simple example, consider virtualizing just the single client/server example.
[Diagram: virtualized Client (netmist → OS → file system) backed by just about anything: local HD, LUN, or filer.]