This post covers some of the results I discovered while testing Virtual Machines (VMs) in Azure using their Infrastructure as a Service (IaaS). A second post (link placeholder) will go over in more detail how to configure the disks for testing, primarily the disk striping, as more of a How-To post.
This was the beginning of my process for setting up a SharePoint farm on Azure. As a result, the performance testing, results and conclusions have a slant towards SharePoint and SQL. However, the results could be used to analyze the possibility of using the Azure IaaS for other applications as well. I’ll touch on some of my thoughts around this in the results section.
These tests exclusively covered Azure VM disks and disk configuration options. For SQL and SharePoint, disk performance is important. In the case of Azure, you can always spin up a larger VM (up to 32 cores and 448 GB of RAM! Yes, Gigabytes!), but they all use the same disks for data (at least until premium storage becomes available). The only difference is the number of data disks you can attach to the various machines.
In my tests, I tested 4 different VMs: the A4 Standard, the D3, the G2 and the G5 (the one with 32 cores and 448 GB of RAM). Servers I will test and post results on once I have access to premium storage are the DS3 and the DS14.
To test the disks I used DiskSpd 2.0.12 (https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223). The command I used when testing disks was diskspd.exe -b64k -d30 -t8 -o8 #[DiskNumber]. This command can be run in a command prompt. If you wish to run the command from PowerShell, you must run diskspd.exe -b64k -d30 -t8 -o8 `#[DiskNumber]. Notice the ` before the #, which escapes what is normally a comment character in PowerShell. To get the disk numbers you can run Get-Disk in PowerShell. I also ran a test with -b8k. For both the 64K and the 8K tests, the drives were formatted to match the block size of the test. Since it is recommended you format your drives with a 64K block size for SQL Server when using it for SharePoint, that was my primary focus. Since 8K is the standard block size for disks in Windows (and quite frankly what most people actually use), I wanted to run a test with that block size as well.
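To put the invocations above in one place (nothing new here, just the commands already described; [DiskNumber] is the placeholder for the number Get-Disk reports, and the flags are block size, duration in seconds, threads, and outstanding I/Os per thread):

```powershell
# List the disks and their numbers first:
Get-Disk

# From a classic command prompt (64K block, 30 second run, 8 threads, 8 outstanding I/Os):
diskspd.exe -b64k -d30 -t8 -o8 #[DiskNumber]

# From PowerShell, escape the # with a backtick so it isn't treated as a comment:
diskspd.exe -b64k -d30 -t8 -o8 `#[DiskNumber]

# The 8K variant of the same test:
diskspd.exe -b8k -d30 -t8 -o8 `#[DiskNumber]
```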
Server Specs (not including the DS series machines, as I haven't tested them yet)
A4 Standard ($0.72/hr): 8 cores, 14 GB of RAM, 605 GB of temp storage, attach up to 16 data drives (500 IOPS limit/drive). This is about the minimum spec (cores and memory) I would use for SQL.
D3 ($0.684/hr): 4 cores, 14 GB of RAM, 200 GB of temp storage, attach up to 8 data drives (500 IOPS limit/drive). This is the closest in price to the A4 and has fewer cores and less temp storage, but the temp storage is SSD.
G2 ($0.67/hr): 2 cores, 28 GB of RAM, 768 GB of temp storage, attach up to 4 data drives (500 IOPS limit/drive). This is the closest in price to the A4 when looking at the G Series.
G5 (Arm & Leg/hr – $9.65/hr): 32 cores, 448 GB of RAM, 6 TB of temp storage, attach up to 32 data drives (500 IOPS limit/drive). This is the largest VM you can currently get in Azure.
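Since every data disk is capped at 500 IOPS, the aggregate data-disk ceiling for each size is simply the max disk count times 500. A quick sketch of that arithmetic (disk counts taken from the specs above):

```python
# Theoretical aggregate data-disk IOPS ceiling per VM size,
# assuming the 500 IOPS/disk cap from the specs above.
IOPS_PER_DISK = 500

max_data_disks = {"A4": 16, "D3": 8, "G2": 4, "G5": 32}

for vm, disks in max_data_disks.items():
    print(f"{vm}: {disks} disks x {IOPS_PER_DISK} IOPS = {disks * IOPS_PER_DISK} IOPS")
# A fully striped A4, for example, tops out at 8,000 IOPS.
```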
When testing the servers, I tested all the disks on each server: Data, OS and Temporary. You really can't do anything with the OS disk and the temporary disk. Data disks you can stripe to increase performance. When testing the disks, I had some results that were expected and a few that actually surprised me. I'll sum up all of the results in the conclusion for those of you who want to skip past the numbers.
Since data disks are the same speed across all servers, I only tested data disks on my A4 Standard server. However, I tested striping the disks starting with a single disk all the way up to 16 disks, the max for this server. Whichever server you choose, the data disks are all the same speed; only the max number you can attach changes.
So, here are the numbers I got from the tests. For those of you who are visual, the graph of these numbers is below the table.
| Disks | 64K IOPS | 64K MB/s | 8K IOPS | 8K MB/s |
| --- | --- | --- | --- | --- |
With the OS disks, I obviously couldn't format them or dictate the block size, so these tests were run against disks formatted with the default 8K block size. As with the data disks, for those of you who like pictures, the graph is below the table.
| Server | 64K IOPS | 64K MB/s | 8K IOPS | 8K MB/s |
| --- | --- | --- | --- | --- |
| A4 – Standard | 868.85 | 54.30 | 4965.03 | 38.79 |
Temp disks are the same as the OS disks in that I can't format them or dictate the block size. These disks are also exactly what they sound like: temporary. They will be wiped clean every time you shut down your machine or reboot, or even if the VM fails over to another host in Azure (something you have no control over). So, before you get too excited about the crazy disk speed, jump down to the conclusions where I discuss what these might actually be used for.
| Server | 64K IOPS | 64K MB/s | 8K IOPS | 8K MB/s |
| --- | --- | --- | --- | --- |
| A4 – Standard | 75,615.56 | 4,663.47 | 99,844.83 | 780.04 |
So, just to wrap up my findings and thoughts on these findings:
- Without using premium disks (and even then I'm not sure), you'll never reach the disk speeds in Azure that you can currently reach on-premises or with a local VM running on SSDs.
- I'm surprised to see the IOPS and MB/s both drop off when using a 64K block size on the disks. It would appear that some sort of limit is being hit. Since I know IOPS will go up to 8,000 with an 8K block size, maybe there is some sort of MB/s ceiling that I'm encountering at ~300 MB/s?
- While the temp drives are really fast and have an abundance of storage, I'm not really sure what to do with them. At least in a SharePoint/SQL environment, there isn't a whole lot you can store on them. Microsoft states they are for the page file. I've also read two conflicting statements from Microsoft about using them for the TempDB file for SQL. This article says it's fine to do it (about halfway down): https://msdn.microsoft.com/library/azure/dn133149.aspx; while this white paper says no (in the TempDB section): https://msdn.microsoft.com/library/azure/dn248436.aspx. My conclusion? In a Dev/Test environment, go for it! In a production environment, be cautious or just avoid it, at least until Microsoft can agree internally on whether you should or shouldn't.
- What can you use the temp drive for? I've chatted with a few people and thrown around a few ideas. Crunching a massive amount of data: store it on the temp drive, and once you have the data crunching results, the data can be disposed of. Video rendering: put the files on the temp drive while you process the video; again, once you have the final result, everything on the temp drive can be disposed of. Some of the NoSQL products: based on what I've been told by Andrew Connell, some of these would allow you to store the data on the temp drive as long as you have multiple servers running and your data stays in sync between them. That way, if you lose a temp drive, the other servers will maintain the data. However, the problem then becomes keeping all that data in sync if you start looking at massive amounts of data (like the 6 TB you have available on the G5).
- For whatever reason, the disk speed of the OS drive (C:) was significantly faster than on either the G2 or the G5 servers. I actually would have expected them all to be about the same. This could have just been an anomaly that day. The G2 and the G5 (available only in US West at the time of testing) were in different data centers than the A4 and D3 (US East). It does go to show that drive speeds may fluctuate quite a bit. I would be curious to take some time and run further tests on different days of the week, at different times, etc., and do further comparisons.
- For a SharePoint/SQL environment, there is no reason to fork out the extra money for a G series Azure VM. I'll probably use the D/DS series for future SharePoint environments in Azure. I don't need the extra 400 GB of temp drive storage the A4 offers, and, while I may miss the extra 4 cores, premium storage will definitely be an added perk. Initially I may miss having 16 data disks, as the D3 maxes out at 8. However, since my environments are Dev/Test, I'll try using the fast temp drive to store my SQL TempDB, which frees up all 8 data drives for database files and log files.
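Since I mention trying TempDB on the temp drive: the move itself is a standard ALTER DATABASE statement. A hedged sketch (the D: drive letter and folder are my assumptions, and the logical file names tempdev/templog are the SQL Server defaults; remember the temp drive is wiped on restart, so something needs to recreate the folder before the SQL Server service starts, or the service will fail to come up):

```sql
-- Point TempDB's files at the (assumed) temporary drive D:.
-- Takes effect on the next SQL Server restart.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDB\templog.ldf');
```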
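On the ~300 MB/s ceiling suspicion above: DiskSpd's MB/s column is just IOPS multiplied by the block size, so the two can be checked against each other. The A4 OS-disk row in the table works out exactly both ways, and a 300 MB/s cap at a 64K block size corresponds to roughly 4,800 IOPS:

```python
def mbps(iops, block_kib):
    """Throughput in MB/s implied by an IOPS figure at a given block size (KiB)."""
    return iops * block_kib / 1024

# A4 OS-disk results from the table above:
print(round(mbps(868.85, 64), 2))    # 64K test -> 54.3 MB/s
print(round(mbps(4965.03, 8), 2))    # 8K test  -> 38.79 MB/s

# If there is a ~300 MB/s ceiling, the most 64K IOPS it would allow:
print(300 * 1024 / 64)               # -> 4800.0 IOPS
```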
I also got done with all of this and realized I forgot to include disk latency in the various scenarios. Once my Azure credit renews next month, I'll repeat these tests and add disk latency to the results as well. If you're interested in the numbers for a particular server before I update this post, let me know and I can run them for you.
Sorry for the length, but I wanted to make sure this covered everything I found and thought about while doing these tests, particularly from the SharePoint/SQL angle. Stay tuned for some upcoming posts on actually building out a SharePoint environment in Azure.