www.vmem.com
All Silicon Data Warehouse:
Violin Memory Fast Track Data
Warehouse Reference Architecture
Installation and Configuration Guide
5U Design: Featuring the Violin 6212 Storage Array
October 2012
Document: VM-DW-1 ICG
Edit version: 1.3
Table of Contents

Introduction to Fast Track Data Warehouse
Fast Track Conceptual Overview
Paradigm Shift
Enabling New Scenarios
Violin Memory 6212 Performance Overview
Hardware Overview
Conclusion
Installation and Configuration Instructions
    Installing HBA Cards and Memory Modules
    Connecting Compute Node to Storage Array
Operating System Installation
    Pre Operating System Installation Tasks
    Operating System Installation
    Service Packs and Special Instructions
    Post Operating System Installation Configuration
Storage Array Installation
    HBA Drivers
    Configuring MPIO
Storage Array Configuration
Presenting LUNs from the Storage Array
    Creating Initiator Groups
    Creating LUNs
    Exporting LUNs
Adding Windows Volumes
SQL Server Installation
    Pre SQL Server Installation Tasks
    Installing SQL Server
    Post SQL Server Installation Configuration
Checklists
    Operating System
    Storage
    SQL Server
High Availability Scenario
    Local HA
    Remote HA
Bill of Materials
Target audience: The target audience for this document consists of IT planners, architects, DBAs, CIOs,
CTOs, and business intelligence (BI) users with an interest in options for their BI applications and in the
factors that affect those options.
Introduction to Fast Track Data Warehouse

The Microsoft Fast Track Data Warehouse is a combination of Microsoft SQL Server software running on prescribed hardware configurations that have been specifically tested and approved for data warehouse workloads by Microsoft to meet various levels of sustainable throughput. Fast Track is intended to provide an out-of-the-box experience that optimizes the utilization of hardware implemented in a data warehouse solution. The goal is to provide predictable hardware performance and to remove the guesswork from choosing a hardware solution for a data warehouse implementation. Each configuration has been thoroughly tested and rated for performance using both throughput-certified and capacity-certified ratings.

In a traditional system, it is growth or a change in usage patterns that causes the most challenges over the life of the system. As databases grow in size, so do administration time and complexity. As the number of users increases, so does the number of concurrently open threads to storage. Each open query could interact with multiple tables, partitions, or data files, further multiplying the total number of concurrent access points to storage. As users come and go, the locality of those access points migrates, causing hot spots and unpredictable performance in production environments. This has led decision makers to look for alternatives to their current infrastructure in order to ensure consistent, sustainable performance.
Microsoft Fast Track Data Warehouse for Violin Memory is a robust solution to this problem, delivering easy-to-follow setup instructions and predictable performance as measured by IOPS and throughput. Violin Memory storage arrays enhance the predictability of the Fast Track system by delivering the same performance (throughput) regardless of the number of threads, users, tables, files, or LUNs. Throughput does not degrade with usage patterns or data locality, which allows administrators to avoid chasing periodic or systemic degradation issues. SSDs and other implementations of flash are still bound by their hard-drive-like architecture: like the hard disks they are modeled on, they can be affected by data locality, RAID performance degradation, and LUN striping issues, so ongoing maintenance and troubleshooting are still required. Violin avoids this by presenting the array as one uniform block of flash storage. All data is securely pre-RAIDed inside the array, and all data is equally accessible, anywhere in the array, at any time.
Additionally, setup, configuration, and management of the storage tier in the Violin architecture are significantly faster and simpler than with disk- or SSD-based solutions. There is no LUN striping to architect, no separation of data, log, and temp space, and no tiering software to administer, and all LUNs perform the same regardless of locality. Such consistency allows CIOs and technical staff to plan for and maintain performance levels stipulated by an SLA with ease. Further, combining multiple Violin Memory storage arrays results in linear scalability of the storage component of the system and the data warehouse.
Fast Track Conceptual Overview

The goal of a Fast Track data warehouse system is to achieve the required performance while remaining balanced at each layer. Right-sizing seeks to deploy sufficient hardware to achieve performance goals without deploying more hardware than is required to get the job done. Fast Track is a set of hardware configurations, known as a reference architecture, that have been tested and approved by Microsoft to deliver consistent and predictable performance. This takes the guesswork out of selecting hardware, as the approved configurations have already been rated for data warehouses of particular sizes. While the focus of the Fast Track system is on the report-delivery side, the reference architecture also addresses staging and loading procedures to allow for predictable results. It is intended to help avoid the common pitfalls of improperly designed and configured systems and to bring predictability to your data warehouse performance. This is accomplished through specific, defined OS configurations, SQL Server settings, loading procedures, and storage configurations, which are outlined later in this paper.
A core requirement of Fast Track reference architectures is to align CPU bandwidth with that of the storage system. The Fast Track system then provides a series of load steps and procedures to achieve continued success. Violin Memory flash arrays are built with no moving parts, so each piece of data is equally accessible at any moment. This makes the time-consuming load tasks largely irrelevant, as the array delivers the same high performance for both sequential and random I/O requests. The goal is no longer to produce physically sequential data on spinning media, regardless of how long that takes; the goal now is to import as quickly and simply as possible. Rotationally bound systems are also more difficult to maintain over time, as moving or rearranging large sections of data causes time-consuming tasks or downtime for the system. Changing one piece of data could force an entire day to be reloaded, and many users touching data from the same day at the same time could degrade the performance of one set of spindles. Violin's unique all-flash array is specifically designed for high-speed random access to any data at any time, which eliminates bottlenecks, hot spots, and data segregation requirements.
With this patented design, the array delivers the same performance whether it is hosting one huge LUN or hundreds of smaller LUNs. The total aggregate IOPS for 4K blocks will be the same for the life of the array. This eliminates the need to plan for different LUN allocations or to leave large portions of purchased space unused, while preserving the option of using LUNs as a logical organizational unit for ease of administration.
This is the first all-silicon data warehouse reference architecture. There are no physical moving parts in the compute or storage layer, which drastically reduces the chance of failure from moving parts wearing out over time. The whole solution is a 5U design delivering performance normally measured in racks. This guide focuses on the 11TB certified configuration. Violin Memory also offers a larger 25TB certified configuration based on the 6232 model storage array; the 11TB (VM 6212) configuration can be upgraded to a 6232 by adding 40 VIMMs to the same enclosure.
Paradigm Shift

The original Fast Track reference architecture requires specific, time-consuming, and complex steps to sequentially load data into a data warehouse in order to achieve a truly physically sequential layout of data. These steps include single-threaded load steps, ordering data between steps, and utilizing multiple staging tables. The approach trades overall loading speed for a physically sequential data layout that minimizes fragmentation and allows for physically sequential reads. It was created to satisfy the requirement for predictable throughput during range scans (sequential reads), the typical workload of a data warehouse system, by minimizing movement of the read/write heads on the disks. This is optimal when designing for spinning media. An all-flash array removes this requirement and frees administrators to take the quickest path, with parallel loads and the ability to reload data at will.
Another issue arises when many users need access to the same data at the same time, or when usage drifts from the previously defined requirements. Best practices for spinning media provide the best likelihood of success but do not guarantee it, especially over the multi-year life span of a data warehousing system. Only an all-flash system can deliver both the quickest, easiest load times and guaranteed bandwidth regardless of the number of users, queries, or data sources in use.
With Violin storage arrays, sequential loading of data is not necessary. The array handles random patterns and sequential patterns equally well, and since there are no moving parts in the underlying storage, there is no need to minimize movement of read/write heads. All LUNs are spread evenly over all internal storage components, providing maximum speed at all times. The result is that the system delivers the same high level of performance for both sequential and random I/O, reads and writes, regardless of the usage pattern.
The Violin Memory array achieves this by writing each I/O block across the VIMMs (memory modules) that comprise the array. It is internally designed to scale out I/O operations to maximize concurrent usage of the internal flash memory through parallelization down to the bit level. The Violin 6212 has four built-in RAID groups, and each RAID group is made up of five VIMMs: one for parity and four for data. Each 4K block is written across the VIMMs within its RAID group.
To achieve optimal storage performance on spinning media, data is loaded into a primary staging table using all cores (to fully utilize the cores and LUNs), then ordered, then loaded again into a second staging table using just one core (to achieve truly sequential writes), and finally partition-switched into the final table, which is a metadata-only operation. All data is written twice, once via a single core, so no matter how large the system is, it is throttled to the speed of one core. Many DW administrators will recognize that this can take hours or days, in some cases filling their weekend with work.
Violin storage arrays eliminate the ordering and the second (single-core) load step. As a modern storage system should, the array allows parallel inserts: the data is loaded once and it is done. Getting the data into the database should be simple, fast, and the last step; now it is. With a Violin Memory based DW, data can be loaded directly, in parallel, into the data warehouse in real time without compromising the performance of read operations. This dramatically reduces the complexity of your ETL processes while increasing the speed of loading data by a significant factor.
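To make the contrast concrete, below is a minimal PowerShell sketch of a parallel load. The database, table, and file names are hypothetical, and it assumes the target is a heap so that multiple bcp sessions holding TABLOCK can bulk load concurrently.

    # Stream four pre-split flat files into the DW at the same time.
    # FTDW, dbo.FactSales, and D:\Load\fact_part*.dat are placeholders.
    $jobs = 1..4 | ForEach-Object {
        Start-Job -ArgumentList $_ -ScriptBlock {
            param($n)
            bcp FTDW.dbo.FactSales in "D:\Load\fact_part$n.dat" -S localhost -T -c -b 100000 -h "TABLOCK"
        }
    }
    $jobs | Wait-Job | Receive-Job   # wait for all four parallel loads to finish

There is no ordering pass and no single-core reload; the load runs as wide as the source files allow.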
With spinning media, migrations can also be a challenge, as the administrator has to sequentially unload the data from the old system and perform a full reload into the new Fast Track system to minimize fragmentation. Fragmentation is not a concern with an all-silicon, random-I/O storage device.

In addition to eliminating the long and complex loading process, the Violin Memory storage array accommodates easy maintenance and growth. There are no hot spots to migrate, no tiering tools to configure and maintain, and no costly add-on software to purchase.
Enabling New Scenarios

SQL Server natively allows real-time data warehouse updates while the DW is in full use, but this is typically avoided because it causes significant performance degradation due to logical fragmentation. With a Violin Memory array, the logical fragmentation level is irrelevant to throughput, and the administrator is free to use the full set of tools in front of them. The Violin architecture uses patented technology to remove the performance degradation normally associated with writes to hard disks, SSDs, or flash-based PCIe cards. This degradation is called the "write cliff" and is present on all SSDs and PCIe flash cards; Violin Memory does not have this problem. The Violin Memory storage array allows any administration to occur at any time while still delivering stable, reliable, and predictable performance.
Violin Memory storage arrays also deliver "five nines" reliability and high availability inside the 3U unit. Every component except the flash itself is duplicated and hot-swappable while the system is running and delivering full data-rate speeds. This hot-swap functionality extends to rolling firmware updates. With a Violin array, 99.999% uptime is a reality with no requirement to double up storage.
Violin Memory 6212 Performance Overview

The testing and certification process requires running a benchmarking system that includes a full set of tests simulating real-world queries and metrics. Microsoft requires this benchmark to establish the MCR (Maximum Consumption Rate) and BCR (Benchmark Consumption Rate). The MCR is the maximum throughput of a single physical core on the server; multiply the MCR by the number of physical cores to determine the total CPU throughput capability.
MCR = 350 MB/sec x 8 cores = 2,800 MB/sec

The Benchmark Consumption Rate is the maximum throughput of the system under a predefined test load, taking into account the CPU, memory, and storage all working together.

BCR = 1,707 MB/sec
Metric                        Rating      Comment
Rated Database Capacity       11 TB       Using SQL compression
Maximum User Database         13 TB       Using SQL compression
Total Raw Space               12 TB       Array raw space without formatting
Total User Usable Space       6.6 TB      After formatting, active spares, full HA
Fast Track Average I/O (CSI)  4.11 GB/s   Average throughput with SQL Server 2012 Column Store Index enabled
Peak I/O                      2.46 GB/s
This table shows a rated capacity of 11 TB with compression enabled. Since administration does not require any staging space and data is loaded directly into the array, most of the available physical capacity is available for storing user data, at a rated combined Fast Track I/O rating of 1,707 MB/sec. This is an outstanding level of performance for a mid-sized system. The maximum storage (disk) I/O throughput measured using SQLIO was 1,418 MB/sec. While the actual DW test was running, this hardware solution was able to sustain throughput of 1,340 MB/sec under heavy load directly against the array, with no data written to or read from RAM.
The storage array maintained 98% of the maximum possible physical I/O under heavy load, compared to the maximum baseline I/O rates measured with SQLIO while the system was otherwise quiet. What this means to end users is that they can expect the storage array to provide efficiency under heavy random I/O load roughly equivalent to the array's maximum I/O capability. This is a critical advantage as data is loaded into the system and used in day-to-day operations. For comparison, it is common for spinning-media-based solutions to drop to 50% or less of their maximum I/O throughput; in general, the more concurrent queries and users placed on commodity spinning disk, the lower the performance.
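For reference, a representative SQLIO invocation of the kind used for this sort of baselining is shown below. The flags and parameter file are illustrative, not the certified test's exact settings, and the path assumes SQLIO's default install location.

    # 8 KB random reads for 300 seconds with 8 outstanding I/Os per thread;
    # -LS records latency, and param.txt lists the target files and thread counts.
    & "C:\Program Files (x86)\SQLIO\sqlio.exe" -kR -s300 -frandom -o8 -b8 -LS -Fparam.txt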
Column Store Index (CSI) is a new feature introduced with SQL Server 2012 that provides significant performance gains as tested on the Violin storage array. The maximum sustained I/O with CSI enabled was 6,343 MB/sec (6.3 GB/s), achieved in a 5-session test. Averaging the 20-session CSI test, with column store indexes present, the array's average tested throughput was 4,131 MB/sec. The reason is that a column store index does not select all data but subsets from each SQL page, so the I/O becomes somewhat random; commodity spinning disks will not benefit from this feature as dramatically. Furthermore, since CSI can only be applied to read-only tables, Violin's performance ensures that the index can be dropped and rebuilt rapidly compared to disk-based solutions.
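Because a columnstore index makes its table read-only in SQL Server 2012, a typical pattern is to drop the index, load, and recreate it. A minimal sketch follows; the database, table, and column names are hypothetical.

    # Drop the columnstore index before a load, then rebuild it afterwards.
    sqlcmd -S localhost -E -d FTDW -Q "DROP INDEX csi_FactSales ON dbo.FactSales;"
    # ...run the (parallel) load here...
    sqlcmd -S localhost -E -d FTDW -Q "CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactSales ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount);"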
The storage array contains a management and administration portal known as vSHARE, which provides additional insight into the performance of the flash storage above and beyond what you can capture with the traditional Windows Performance Monitor. The screenshot below illustrates the array's performance when running independent SQLIO tests. In this case the array was able to push close to 340,000 IOPS [1] and sustain that level for the duration of the test. This is significant in the scenario where a DW workload needs only a few hundred thousand IOPS, as the array will then provide a comfortable margin of additional headroom that can be used to run additional applications, or simply ample headroom for running mission-critical applications during production hours.
Hardware Overview

The hardware chosen for this Fast Track reference architecture solution is highlighted below, with more detailed information in the Bill of Materials section. The goal of this section is to provide a high-level layout of the hardware environment.
Component             Details
Server                Violin Memory Compute VM-6027R-73DARF
CPU                   Intel Xeon E5-2643 3.3 GHz with Hyper-Threading
Total Sockets         2
Cores per Socket      4
Total Physical Cores  8
Total Logical Cores   16
Memory                128 GB DDR3 (8 x 16 GB DIMMs)
Internal Storage      2x Intel 520 Series SSD (240 GB, configured in RAID 1)
HBA                   4x QLogic QLE2562 (dual port, 8 Gb/s)

Component             Details
Storage Array         Violin 6212 Storage Array; 6.8 TB usable formatted; hot-swappable hot spares (4); vRAID
Operating System      Windows Server 2008 R2 Enterprise SP1
SQL Server            SQL Server 2012 Enterprise RTM (Build 11.0.2100)
[1] Actual observed read-only performance. These numbers differ from Violin Memory's official marketing information because the official numbers are conservative and always use a 70% read / 30% write composition.
Conclusion

The Violin Fast Track reference architecture provides the best combination of performance and manageability of all the Fast Track configurations tested and certified by Microsoft. The system represents exceptionally reliable and predictable infrastructure. With a total footprint of 5U, the system saves space and power compared to many other solutions with the same performance parameters. It provides a solid infrastructure for data warehousing needs, reducing complexity and increasing data loading speed and efficiency by a significant factor. The storage design, using direct connect instead of fabric switches, provides a paradigm-changing solution compared to other reference architectures that require hundreds of disks and 12+ racks of space. The evolution of enterprise storage has placed flash memory in a strategic market position to provide the most usable storage per dollar spent compared with spinning disks. The Violin 6212 storage array comes out of the box ready to use, with a web-based administration UI, an easy-to-use setup program, and a minimal learning curve for system administrators.
• For more information visit www.vmem.com
• To find out how to buy the solution or get a POC contact sales at [email protected]
Installation and Configuration Instructions

The server is a 2U configuration and the storage array is a 3U configuration, so minimizing total rack space was taken into account when architecting this hardware solution. With only 5U needed to house this hardware, finding room in a data center should be an easy task.
The storage array and server are connected by 8 OM3 Fibre Channel cables, and with this solution there is no need for a Fibre Channel switch. Direct connect was the method chosen for connecting the storage array to the server, although nothing prevents using a Fibre Channel switch between the two if preferred. A switch can be used to attach additional compute nodes, for example for compute failover or to introduce another application or DW to the same storage array. Storage failover is already accounted for inside the 3U array.
Violin Memory 6212 storage arrays can also be configured with InfiniBand adapters, PCIe adapters (built in), and 10GbE adapters. The PCIe architecture is designed to accept most standard networking components, provided drivers are available for them.
One of the major benefits of Violin storage arrays is their ease of use and simplicity of administration. The array comes pre-configured with internal RAID groups and hot-spare VIMM cards. Much of the Fast Track reference guide focuses on how to lay out storage and specify RAID levels for different LUNs. When purchasing storage, one thing to take into account is the price per usable GB. Compared to the RAID 10 configurations (and attendant storage enclosures) of commodity hard drives needed to satisfy IOPS requirements, Violin is a very desirable solution for achieving the most IOPS at the lowest price per usable GB. The introduction of flash memory arrays is starting to change the way people look at storage as a major bottleneck in their infrastructure.
Installing HBA Cards and Memory Modules

If the system does not come preconfigured, please validate that the memory modules and HBA cards are installed in the following manner. This configuration will deliver the highest possible throughput while eliminating I/O contention on the motherboard.
Connecting Compute Node to Storage Array

Below is a schematic of the compute node and storage array clearly showing the direct fiber connections. The server should have all 4 HBA cards installed per the previous diagram; the storage array comes with 4 HBA cards installed out of the box.
Operating System Installation

This section discusses the installation of the operating system. There are some prerequisite tasks, as well as some post-installation configuration tasks, required to configure the server for optimal performance in accordance with Fast Track guidelines.
Pre Operating System Installation Tasks
Prior to starting the OS install, please download the latest drivers from Super Micro's web site, as you will need them during the OS install when choosing which disk to install the OS onto. You will also need to set up the internal storage in a RAID 1 configuration using the LSI configuration tool, which can be started when prompted during boot-up by pressing the Ctrl+C key combination. Below is a screenshot of the LSI configuration utility where RAID 1 is set up for the internal storage.
When the <<<Press Ctrl-C to start LSI Logic Configuration Utility>>> prompt is displayed, press Ctrl+C.

1. Choose Adapter SAS2308 and press ENTER
2. Select "RAID Properties" and press ENTER
3. Select "Create RAID 1 Volume"
4. Choose "YES" under RAID Disk for the 2 internal SSD drives
   a. Disk 1 should be Primary and Disk 2 Secondary under "Drive Status"
5. Press "C" to create the volume
6. Choose "Save changes then exit this menu"
Operating System Installation
The operating system to be installed is Windows Server 2008 R2 Enterprise. Please follow normal
installation steps for installing the operating system and apply SP1 and all current patches via Windows
Update. When prompted, indicate that you want to install the operating system onto the C:\ drive
created in the previous step.
Service Packs and Special Instructions
Please ensure SP1 is installed for Windows Server 2008 R2. If SP1 was not included in your initial install, it is available for download at http://www.microsoft.com/en-us/download/details.aspx?id=5842.

Below is a screenshot showing how to tell whether SP1 is installed; it is obtained by looking at the "Properties" of "My Computer".
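The same check can be scripted; a one-liner using the standard WMI operating system class:

    # Returns the OS name and service pack level; expect ServicePackMajorVersion = 1.
    Get-WmiObject Win32_OperatingSystem | Select-Object Caption, ServicePackMajorVersion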
Post Operating System Installation Configuration
After the OS has been installed, you will need to enable the following features in Server Manager:

1. Multipath I/O
2. .NET Framework 3.5.1 Features

Below is a screenshot of how the Features section of Server Manager should look.
Next, change the OS power plan from "Balanced" to "High Performance".

Then check the BIOS and make sure the power options are set to "Performance" and are not being controlled or overridden by the BIOS.

Finally, disable all three Windows Firewall profiles in the OS. A scripted equivalent of these OS-level steps is sketched below.
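A minimal PowerShell sketch of the feature, power plan, and firewall steps, assuming Windows Server 2008 R2 feature names:

    # Scripted equivalent of the GUI steps above.
    Import-Module ServerManager
    Add-WindowsFeature Multipath-IO, NET-Framework-Core   # Multipath I/O + .NET 3.5.1
    powercfg -setactive SCHEME_MIN                        # SCHEME_MIN = "High performance" plan
    netsh advfirewall set allprofiles state off           # domain, private, and public profiles
    # A reboot may be required before the MPIO feature is usable.

The BIOS power setting still has to be checked by hand at boot time.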
Storage Array Installation
HBA Drivers
Prior to connecting the storage array, please ensure there are 4 HBA cards installed in both the server and the storage array. These will be dual-port QLogic QLE2562s, for 8 total I/O paths. Make sure they are present in the Storage Controllers section of Device Manager.
The current driver as of this writing is 9.1.9.49, dated 3/14/2012. Please make sure your HBA driver is at least this version.
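The installed driver version can be checked from PowerShell using the standard signed-driver WMI class:

    # Lists QLogic HBA driver versions as reported by Windows.
    Get-WmiObject Win32_PnPSignedDriver |
        Where-Object { $_.DeviceName -like "*QLogic*" } |
        Select-Object DeviceName, DriverVersion, DriverDate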
Configuring MPIO
When configuring MPIO, click "Discover Multi-Paths", select "VIOLIN SAN ARRAY", and click "Add". Reboot when asked.

After rebooting, check your MPIO settings; the storage array should be present as shown below.
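The claim can also be made from the command line with mpclaim, which ships with the Multipath I/O feature. The device string below is an assumption based on the "VIOLIN SAN ARRAY" name shown in the GUI; the hardware ID format is an 8-character vendor field plus a 16-character product field, space-padded.

    # -r reboots when done, -i claims the device with the given hardware ID.
    mpclaim -r -i -d "VIOLIN  SAN ARRAY       "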
Storage Array Configuration

Connect the storage array to the server using OM3 Fibre Channel cables. You will need 8 of them to connect all the ports on the server to the storage array, accounting for the 4 dual-port cards.

1. Connect the serial port
2. Power on the array
3. Configure networking
Presenting LUNs from the Storage Array

You will need to create LUNs using the admin interface for the Violin storage array. In our testing we created 4 LUNs with the following names and sizes. Note that, unlike with spinning disks, the number of LUNs on flash memory is not relevant to performance, but we added some logical separation; these sizes may differ in your environment.
1. SQLData01 2TB
2. SQLData02 2TB
3. SQLLog01 750GB
4. SQLStage01 2TB
The process of creating LUNs is done from the admin web interface and comprises the following steps:

1. Creating initiator groups
2. Creating LUNs
3. Exporting LUNs
Creating Initiator Groups

From the home screen of the admin interface select "LUN Management" → "Manage Initiators". On this screen, select "Add igroup" from the top right of the initiator groups section, then choose a name for the new initiator group.

After the group is created, associate all 8 WWNs with the new group. This is done by selecting the newly created group under the initiator groups section, selecting all 8 WWNs under the manage initiators section, and then clicking "Save" followed by "Commit Changes" at the top center of the UI.

It should look like the following once completed.
Next, click "Manage Targets" under "LUN Management" and make sure each target is in a good state.

At this point you have set up connectivity of the HBAs between the server and the storage array. In the next step we will create LUNs using the admin web tool.
Creating LUNs

To create LUNs, select "Manage LUNs" under the "LUN Management" section of the admin interface. At this point you should see nothing allocated on the storage array. This process walks through creating, exporting, and presenting one LUN; the same process can be followed for however many LUNs you decide to present. As mentioned earlier, one of the major benefits of Violin storage is that you should achieve predictable performance irrespective of the number of LUNs present.
Select "Create LUN"; from here, give it a name and size, and select 4096 bytes for the "Block Size". Click "Yes" when you get the warning box that says "Not all client systems support 4096 block size, Continue to create LUN(s) using 4096?"

At this point you have a LUN present with a status of "not exported".
Exporting LUNs
From the right side, based on the screenshot above, select "Add Export" to bring up the "Add export for LUN" screen. Make sure "All Initiators" and "All Ports" are selected and click OK.

After this, click the "Commit Changes" button in the top middle of the screen.

You have successfully exported the LUN. If you rescan disks on the server using "Disk Management", you will see the new LUN present.
Adding Windows Volumes

Once the LUNs have been presented to the OS, volumes need to be created and formatted with the NTFS file system in Windows. Under Disk Management, format the new volumes with a 4096 block size. The LUNs should be presented as mount points.
Once you see the newly presented disk available in Disk Management, the first step is to initialize the disk by right-clicking it and selecting "Initialize Disk"; choose GPT (GUID Partition Table) for the partition style.

At this point the new disk is online but unallocated. Next, right-click the disk and select "New Simple Volume".
This brings up a wizard to walk you through the process. Make sure to allocate the maximum possible size for the volume. In this example we are using mount points, but drive letters are fine as well if that is easier to manage in your organization. Also make sure to use the NTFS file system with an "Allocation unit size" of 4096.
At this point the drive is online and available for use. Remember to follow these same steps if creating multiple LUNs.

The last step is to remove content indexing for these drives. Go to the "Properties" of the drive mount point or drive letter and clear the check box "Allow files on this drive to have contents indexed in addition to file properties". The initialize/format/mount sequence can also be scripted, as sketched below.
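A minimal PowerShell sketch driving diskpart for one LUN; the disk number and mount-point path are examples and will differ on your system.

    # Create the mount-point directory first, then run diskpart from a script file.
    New-Item -ItemType Directory -Path "C:\Mounts\SQLData01" -Force | Out-Null
    Set-Content -Path dp.txt -Value @(
        "select disk 2",
        "online disk noerr",
        "attributes disk clear readonly",
        "convert gpt",
        "create partition primary",
        "format fs=ntfs unit=4096 quick label=SQLData01",
        "assign mount=C:\Mounts\SQLData01"
    )
    diskpart /s dp.txt

Repeat per LUN; the content-indexing checkbox still needs to be cleared per volume as described above.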
SQL Server Installation

The SQL Server installation used is SQL Server 2012 Enterprise Edition, installed onto the C:\ drive. The user databases, tempdb, and transaction logs will be pointed to mount points on the storage array.
Pre SQL Server Installation Tasks
Prior to installing SQL Server, please create a domain Service Account to run the SQL Server Services, in
particular the Database Engine and the SQL Server Agent.
Once this service account is created, please assign it the following rights in the User Rights Assignment section of the Local Security Policy:

1. Lock Pages in Memory
2. Perform Volume Maintenance Tasks

Below is a screenshot of these settings.

"Start" → "Administrative Tools" → "Local Security Policy"

Once this opens, select "Local Policies" → "User Rights Assignment".
Installing SQL Server
The version of SQL Server for this reference architecture is SQL Server 2012 Enterprise. Please perform a normal install and choose to install all components except the Reporting Services (SharePoint) components. You can choose all the defaults for now, as we will change the data and TEMPDB locations after the installation process. We chose the RTM build of SQL Server 2012 at the time of writing.

Below is the discovery report of all the features installed in the current installation.
Post SQL Server Installation Configuration
There are a number of tasks to perform after the installation is complete to configure SQL Server.
1. Change the default database file locations to one of the mount points for the storage array. Ensure that the DW database you create has the same number of files, of the same size, on each LUN, with autogrow enabled. It is assumed your backups will go to an external file share on non-Tier 1 storage.
2. Set the MIN/MAX Memory Settings for SQL Server
a. MIN = 100GB
b. MAX = 118GB
3. Move the TEMPDB data file to storage array data LUN SQLData01 and create an additional data file on SQLData02. Move the TEMPDB log file to LUN SQLLog01. Size each data file to ~150GB.
4. Set the following startup parameters for the SQL Server service using SQL Server Configuration Manager:
   a. -E
   b. -T1117
5. Resource Governor settings
   Set the "Memory Grant %" to 16% (from the default of 25%) by opening "Management", right-clicking "Resource Governor", and selecting "Settings".
6. Restart the SQL Server services
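For repeatable builds, a minimal script sketch of steps 1-5 is shown below. The tempdb logical file names are the SQL Server defaults, the mount-point paths are examples, and the -E and -T1117 startup parameters still have to be set in SQL Server Configuration Manager.

    # Memory settings (values from the checklist; assumes a default instance).
    sqlcmd -S localhost -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
    sqlcmd -S localhost -E -Q "EXEC sp_configure 'min server memory (MB)', 102400; EXEC sp_configure 'max server memory (MB)', 120586; RECONFIGURE;"
    # Relocate and expand tempdb (takes effect after the next service restart).
    sqlcmd -S localhost -E -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'C:\Mounts\SQLData01\tempdb.mdf', SIZE = 150GB);"
    sqlcmd -S localhost -E -Q "ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'C:\Mounts\SQLData02\tempdev2.ndf', SIZE = 150GB);"
    sqlcmd -S localhost -E -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'C:\Mounts\SQLLog01\templog.ldf');"
    # Resource Governor memory grant cap for the default workload group.
    sqlcmd -S localhost -E -Q "ALTER WORKLOAD GROUP [default] WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 16); ALTER RESOURCE GOVERNOR RECONFIGURE;"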
At this point SQL should be configured and ready for use. Please see below for checklists related to all
settings and configurations.
Checklists
Operating System
Area              Description
OS Version        Windows Server 2008 R2
SP Version        Service Pack 1
Drivers           Latest downloaded from Super Micro before installation
Features Enabled  Multipath I/O, .NET Framework 3.5.1 Features
Power Settings    High Performance (OS and BIOS)
Firewall          Disable all Windows firewalls
Storage
Area        Description
Array LUNs  4096 block size
MPIO        VIOLIN SAN ARRAY present
OS Volumes  4096 allocation unit size
OS Volumes  Content indexing turned off
SQL Server
Area                   Description
SQL Version            SQL Server 2012 Enterprise Edition RTM
Startup Parameters     -E, -T1117
Memory Settings        MIN - 102400 MB; MAX - 120586 MB
MAXDOP                 0 (unlimited)
Local Security Policy  Perform Volume Maintenance Tasks; Lock Pages in Memory
Resource Governor      Memory Grant % = 16%
High Availability Scenario
Local HA
It is possible to use the Violin 6212 storage array in a high-availability scenario combining Windows Server Failover Clustering (WSFC) and SQL Server failover cluster instances (FCI). Since SQL Server clustering requires shared storage, the LUNs created on the 6212 must be presented to both compute nodes in the cluster. The Violin 6212 fully supports SQL Server clustering, and this section gives an overview of how that configuration needs to be architected.
In this scenario, 2 HBA cards from the 6212 are connected to 2 cards on each node of the WSFC. Two dual-port cards at 8 Gb/s per port can deliver 4 GB/s in total, which is more than sufficient to avoid becoming the bottleneck in the storage path. Instead of one server with 4 HBA cards, the configuration has 2 servers with 2 HBA cards each. When creating the initiator groups and exporting LUNs, there will be 2 initiator groups, one for each server, and each LUN is exported to both initiator groups.
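Once both nodes see the shared LUNs, cluster validation and creation can be scripted with the 2008 R2 FailoverClusters module. The node names and cluster IP below are placeholders.

    # Run from one of the prospective cluster nodes.
    Import-Module FailoverClusters
    Test-Cluster -Node "FTNODE1", "FTNODE2"                 # produces a validation report
    New-Cluster -Name "FTCLUSTER" -Node "FTNODE1", "FTNODE2" -StaticAddress "10.0.0.50"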
Once storage has been presented, you would perform a normal SQL Server cluster install. Below are
some links to setting up a SQL Server Failover Cluster:
SQL Server Clustering Prerequisites
SQL Server Failover Cluster Installation
Create a New SQL Server Failover Cluster
Add or Remove Nodes in a SQL Server Failover Cluster
Remote HA
Another HA/DR scenario possible with the Violin storage array is cross-data-center disaster recovery. For this scenario you need two arrays, combined with SQL Server 2012 database mirroring or SQL Server 2012 AlwaysOn Availability Groups for database-level protection, and/or a multi-subnet cluster for instance-level protection. SQL Server 2012 has some great new features for multi-subnet clustering that eliminate the need for a stretch VLAN, along with a flexible failover policy that gives the administrator control over the conditions that initiate a failover. Depending on your HA/DR objectives, you can combine these instance-level and database-level availability features, utilizing multiple storage arrays and servers, for the highest level of protection. Below are some links to point you in the right direction for getting started with these new SQL Server 2012 features:
Overview of SQL Server 2012 High Availability Solutions
SQL Server Multi-Subnet Clustering
Overview of AlwaysOn Availability Groups
AlwaysOn Failover Cluster Instances
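On the client side, multi-subnet failover is driven by a connection-string keyword. A minimal sketch follows; the listener name and database are hypothetical, and the MultiSubnetFailover keyword assumes a SQL Server 2012-era client stack (for example, the .NET Framework 4.5 SqlClient).

    # Hypothetical availability group listener name and database.
    $cs = "Server=tcp:AGLISTENER,1433;Database=FTDW;Integrated Security=SSPI;MultiSubnetFailover=True"
    $conn = New-Object System.Data.SqlClient.SqlConnection($cs)
    $conn.Open()    # the client tries all listener IPs in parallel across subnets
    $conn.Close()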
Bill of Materials

To order a complete solution, please use solution ID VM-DW-1. The individual components are:
Qty  Part Number               Description
1    VM-SYS-6027R-73DARF       Violin Memory Compute Node 6027R-73DARF
8    VM-MEM-DR316L-SL01-ER16   16GB DDR3-1600 2Rx4 ECC REG DIMM
2    VM-P4X-DPE52643-SR0L7     Sandy B. R 4C E5-2643 3.3G 10M 8GT 130W
2    VM-HDS-2TM-SSDSC2BW240A   [NR]Intel 520 series, 240GB, SATA 6Gb/s, MLC
4    V-6000-NI-FCx2            4 HBA Cards for compute node

Qty  Part Number               Description
1    VM-6212-HA24-8xFC         Violin Memory 6212 Storage Array
© Copyright 2012 Violin Memory, Inc. The information contained herein is subject to change without notice. The only warranties for Violin
Memory products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein
should be construed as constituting an additional warranty. Violin Memory shall not be liable for technical or editorial errors or omissions
contained herein.
Microsoft, Windows and SQL Server are U.S. registered trademarks of Microsoft Corporation. Intel and Xeon are trademarks of Intel
Corporation in the U.S. and other countries. All other trademarks and copyrights are property of their respective owners. All rights reserved.
VM-DW-1 ICG; Created October 2012