Synology SSD Cache Setup and Testing
Enabling Synology SSD Cache
Due to budget constraints my Synology DS1815+ is only populated with 6 WD Red drives, and the last two bays have been sitting empty for a while now. After a recent unrelated upgrade I found myself in possession of two Samsung SSD 830 Series 256GB drives. I’m not really in need of any additional storage drives in the DS1815+, so I decided to try out the SSD drives configured as Synology SSD Cache.
Video Walk-Through
If you prefer video format over written documentation I cover how to configure and test the Synology SSD cache in the following Techthoughts video:
Setting up Synology SSD Cache
Before configuring SSD Cache on your Synology you should first familiarize yourself with the Synology SSD Technology Whitepaper.
Setup is relatively simple:
- Backup your Synology critical data (always a good idea before you make any major change)
- Backup your Synology configuration (Control Panel – Update & Restore – Configuration Backup)
- Power down your Synology unit
- Install SSDs (two required for both read and write cache | one required for just read cache)
- Power on Synology unit
- Verify SSDs are showing as healthy and available:
- Create SSD cache
- Choose Synology SSD cache mode
- Associate SSD cache with desired volume
- Select drives that will become members of the cache
- Choose SSD cache RAID type
- Finalize SSD Cache Configuration
- Confirm Synology SSD cache creation
- Allow SSD cache to finish mounting
- Synology SSD Cache configuration complete
Synology SSD Cache Test Results
Testing Setup
- DS1815+
- DSM 6.1.4-15217 Update 1
- WD Red 2.7TB (6x) – RAID 6 – Btrfs
- Samsung 830 256GB (2x) – RAID 1 – SSD Cache
- IOMeter Settings:
- The number of outstanding I/Os: 32
- Workers per target: 1 (per share)
- Running time: 3 minutes
- Ramp up time: 30 seconds
- Data size: 20GB
- SSD cache size: 238.47GB
- Workload: Random 4KB IOPS
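The test parameters above can be captured in a few lines of Python, along with one derived figure. This is just a sketch of my setup; the cache-to-test-file ratio is my own arithmetic, not from the Synology whitepaper.

```python
# The IOMeter test parameters from the list above, expressed as a dict,
# plus one derived figure (my arithmetic, not from the whitepaper).

params = {
    "outstanding_ios": 32,      # queue depth
    "workers_per_target": 1,    # one worker per share
    "runtime_s": 180,           # 3 minutes
    "ramp_up_s": 30,
    "data_size_gb": 20,         # IOMeter test file size
    "ssd_cache_gb": 238.47,     # usable size of the RAID 1 SSD cache
    "block_size_kb": 4,         # random 4KB IOPS workload
}

# The 20GB test file fits in the cache many times over, so once it is
# fully cached every I/O in the test should be served from SSD.
headroom = params["ssd_cache_gb"] / params["data_size_gb"]
print(f"Cache/test-file ratio: {headroom:.1f}x")  # ~11.9x
```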
The IOMeter settings are essentially the same ones Synology used in its whitepaper testing for SSD caching. The SSD cache was set up against a normal RAID 6 Btrfs volume containing several shared folders.
Testing results
The initial no-cache results were not surprising, with the DS1815+ putting up only around 100 IOPS of random 4KB access across all three tests.
What was really surprising was the dip in performance after I enabled the cache. The same test results with SSD cache enabled dropped down to around 70 IOPS – a 30% decrease in performance.
Once I allowed the IOMeter test file to become fully cached though, performance became nothing short of amazing.
What’s really concerning though is that 30% initial drop. It took four test runs of IOMeter before the test file became fully cached.
You can see the results in the graph below:
This behavior makes a lot of sense to me, with performance dramatically increasing as more data is cached.
What bothers me is that initial dip. Based on these results it would appear that enabling the cache against a normal Btrfs volume actually incurs a penalty for the majority of the data on the volume. Sure, once data is cached it really starts to sing, but is a 30% drop in performance for the non-cached data worth it?
I was able to replicate this effect several times by removing and re-adding the SSD cache on the DS1815+. In each testing cycle, IOPS performance suffered after the SSD cache was added.
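The percentage drop quoted above is easy to sanity-check with quick arithmetic, using the approximate IOPS figures from the tests:

```python
# Sanity-check the quoted figures: ~100 IOPS uncached baseline vs
# ~70 IOPS immediately after enabling the SSD cache.

baseline_iops = 100
cached_cold_iops = 70

drop_pct = (baseline_iops - cached_cold_iops) / baseline_iops * 100
print(f"Performance drop for uncached data: {drop_pct:.0f}%")  # 30%
```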
Conclusion
Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should. – Ian Malcolm, Jurassic Park
Having a few SSDs lying around and a few spare bays in your Synology NAS doesn’t mean you’re suddenly going to get an amazing increase in performance.
In fact, these test results indicate that if you’re engaging a normal volume share, you may actually suffer a performance decrease!
You really need to understand your workloads, and what you hope to accomplish with the NAS.
A larger SSD cache (1TB+), for example, mounted to an iSCSI volume could really make for some great VM performance on the DS1815+.
For now, despite the results, I’m going to leave the SSD cache in place and continue to report back on this article as I engage the device day-to-day.
If you’ve experienced different results, or have some recommendations on these findings, feel free to comment below.
All in all a nice piece, but I do have one problem with it.
How did you decide that all uncached data access would get that kind of performance penalty?
Maybe the penalty was caused by the initial cache-building process?
You need to rethink your testing and observation techniques to be more “scientific” in the logical way.
boaz, thanks for the comment, but I’m not sure how to be more black and white about it. Random read/writes without cache across all data on the device are in the low 100’s. Turn caching on, and anything not in the cache drops 30%. I’ve replicated this over and over. The testing is using the same white-paper testing that Synology uses. I’ve had others confirm the same. Turn caching on, and things that get cached will perform very well. Everything not in the cache takes a hit.
Testing method makes sense to me. Great article, but what about write performance? Surely the SSD should help there?
What happens if both SSD cache drives reach 100% of their lifespan? Will the Synology volume or LUN crash?
Got IOMeter working.
Have not installed my SSD cache.
How do you connect to the Synology unit from IOMeter?
Best
Fred
The last 2 bays on the DS1815+ apparently have a known issue with SSDs, and it’s recommended to use the first 2 bays for SSD cache.