Metalogix StoragePoint is a software tool that enables organizations to externalize Binary Large Objects (BLOBs) from SharePoint 2010 content databases to a variety of external storage systems. The savings in database size and disk cost, as well as the performance improvements in backups and user access speed, can be significant. Slalom Consulting’s Derek Martin, a Cloud Solutions Architect in Dallas, helped a large government contractor deploy StoragePoint into a highly visible, heavily used SharePoint 2010 environment in just under three days. The following are the results of that deployment:
- The SharePoint farm for this environment consisted of an internally facing communications site containing a large volume of rapidly changing large objects (images, videos, etc.).
- The un-externalized content database was 2.93 GB. Though small by SharePoint standards, the average object size (>500 KB) made it perfect for externalization.
- Once the BLOBs were externalized with StoragePoint, the final content database size was reduced to under 100 MB. We did a basic ‘Shrink Files’ procedure in SQL Server after the externalization.
- Because the content now resided on cheaper file share storage, the expensive database storage was reclaimed and the contractor realized a measurable savings in monthly operating costs.
- Before externalizing the content, the average page load time for most pages was around 4 seconds. After externalizing the content, the page load time dropped to less than 1 second.
- The file shares that host the externalized BLOBs sit on a storage array with built-in de-duplication, which provides further storage savings because a number of objects within the farm are repeated.
- The file shares use a storage array-based backup facility, negating the need to back up 90% of the SharePoint farm using traditional backups (SQL or native SharePoint based backups).
- Backups from SQL and SharePoint, which took around 40 minutes previously, now only take about 3 minutes to complete.
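The before/after database sizes cited above are easy to check for yourself. A minimal T-SQL sketch, assuming a hypothetical content database named WSS_Content_Comms (the real name will vary per farm):

```sql
-- Hypothetical database name; substitute your farm's content database.
USE WSS_Content_Comms;

-- Overall allocated and used space for the database.
EXEC sp_spaceused;

-- Per-file breakdown; size is reported in 8 KB pages.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;
```

Running this before and after externalization (and after the shrink) is a quick way to quantify the reclaimed space.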
For this client and others I’ve worked with, there are a variety of exceptional ‘softer’ gains from using StoragePoint that complement the impressive hard numbers above.
Setting up StoragePoint is very straightforward. Run the provided installation routine (a simple WSP) on one server and the entire farm gets the necessary adjustments. Because StoragePoint is a completely native SharePoint solution, no additional servers or overhead are required. Likely the hardest part of the procedure was activating the software: the servers at this particular client don’t have internet access, which required an ‘offline’ activation procedure – annoying, but not at all troublesome.
Once activated, there are a few steps to get up and running. First, the prerequisites had to be taken care of: provisioning storage and granting the service accounts access to it. At this client, we went directly against a NAS share provided by the storage array nearest the VMs running this highly available environment. Because we had a variety of app pools and web applications, it was easiest to create a single storage profile and grant all of the service accounts access to it. In this particular environment, the entire farm was dedicated to a single, secure purpose, so that consolidation wasn’t an issue.
After getting the storage, we then needed to enable the EBS provider – click, done. We used EBS rather than RBS, even though EBS is deprecated, because we wanted more fine-grained control over what gets externalized. Next was setting up the General Settings and the System Cache (which was a little confusing because of the way we are set up, not their fault), then an endpoint and the profile. The only part of configuration I dislike is that iisresets are required when you set up an endpoint; however, it makes perfect sense why. When we set up the profile, we had to poke around a bit because a variety of settings seem redundant. It turns out they aren’t – things like asynchronous vs. synchronous mode, where and how to cache, etc. exist because you can control the externalization pipeline at a variety of stages. After working with it a bit, it became quite clear and intuitive.
This was the most important, yet easiest, part of the entire process. Once the cache, endpoint, and profile were set up, I went in and found the Bulk Externalization job in the StoragePoint settings. I simply hit the ‘externalize now’ button, sat back, and smiled. Five minutes later, it was all done, and it was quite impressive that the environment was never unavailable (except for the iisresets I had to do during initial configuration). The dashboard reports showed that I was now saving >90% of my storage on the expensive SQL drives, and a quick DBCC SHRINKFILE command later we were off and running!
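The shrink step can be sketched as follows. The database and logical file names here are hypothetical (look the real ones up in sys.database_files first), and note that shrinking a data file fragments indexes, so treat this as a one-off reclamation after externalization, not routine maintenance:

```sql
-- Hypothetical names; find the real logical file name first with:
--   SELECT name FROM WSS_Content_Comms.sys.database_files;
USE WSS_Content_Comms;

-- Reclaim the space freed by externalization; target size is in MB.
DBCC SHRINKFILE (N'WSS_Content_Comms', 100);
```

After a shrink of this magnitude, rebuilding the heavily used indexes is generally advisable before taking new performance measurements.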
During the evaluation of the software, we load tested the exact same content in two phases, both using Visual Studio Test. First, a web application with a single site collection containing the entire system was created in ‘non-externalized’ mode – nothing StoragePoint-related was done to it. Then we set up another web application and did a backup/restore of the site collection to it, this time with externalization enabled. With two exact duplicates, one external and one internal, we pointed the test agent at each web application (one at a time) to get a performance reading. Going from an average of 4 seconds to less than 1 second is quite an impressive result!
The configuration is simple. The solution is entirely native SharePoint – no external servers or services required. The maintenance of the farm was greatly simplified. Those three sentences spell out a very compelling case for its use within the environment; so much so, that it was purchased as the RBS solution for the entire enterprise environment at this client.