Testing the performance of an enterprise-class storage system can be complex and time-consuming, with much depending on the type and size of the system. Even so, performance testing is one of the most important steps in ensuring that storage systems operate at peak efficiency and that applications meet their goals.
Storage performance testing is not a one-size-fits-all operation. The testing process must fit the circumstances. The more specific it is, the greater the efficiency and the benefits, such as lower costs, reduced risk and better application performance.
Performance testing can help organizations better understand their storage systems and decide how to upgrade or modify their configuration. It can also help to compare multiple storage products.
Despite the different possibilities, there are five basic steps that should be part of any storage performance testing process.
Performance testing a storage system can be a significant undertaking. As such, it requires proper preparation to avoid wasting time and resources and to minimize disruptions.
Define your testing objectives and evaluation criteria. What determines that your tests have been successful and are complete?
Identify the key players and stakeholders. Who will carry out which tasks? Who needs to be informed of the testing process? What impact could there be on end users?
Identify, acquire and learn about the tools. You'll need these to perform your tests and report on their results.
Establish a system of documentation. Track all relevant information about the test environment, testing process, assumptions made when conducting the tests and any other important details.
Set up a test schedule. Specify when the tests should begin and how long they'll take. Build some flexibility into the schedule in case a test takes longer than expected, additional tests are warranted, or other complications arise.
Mimic your production environment. When applicable and possible, set up a testing environment and testing resources that approximate the production environment as closely as possible, including the test data and workloads. You might need to purge or pre-condition storage devices before running your tests.
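Pre-conditioning can be as simple as filling the target with data before you start measuring, so that fresh-out-of-box or thin-provisioned behavior doesn't skew results. The sketch below illustrates the idea in Python; the file path and sizes are placeholder assumptions, and real pre-conditioning runs write far more data, often several times the device's capacity.

```python
import os

# Hypothetical scratch file on the device under test; adjust path and sizes.
TARGET = "/tmp/precondition.bin"
CHUNK = 1024 * 1024           # write in 1 MiB chunks
TOTAL = 64 * 1024 * 1024      # 64 MiB for illustration only

def precondition(path: str, total: int, chunk: int) -> int:
    """Fill the target with random data so later reads hit real, non-zero blocks."""
    written = 0
    with open(path, "wb") as f:
        while written < total:
            f.write(os.urandom(chunk))
            written += chunk
        f.flush()
        os.fsync(f.fileno())  # push data to the device, not just the page cache
    return written

if __name__ == "__main__":
    print(precondition(TARGET, TOTAL, CHUNK))
```

The `os.fsync` call matters: without it, the operating system's page cache can absorb the writes and the device never sees the workload.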
A storage system is made up of more than just SSDs or HDDs. The system typically includes multiple layers that each have their own complexities and characteristics. Bottlenecks can occur in anything along the way: routers, host bus adapters, storage controllers, application hosts, replication servers or any of the other components.
Whether you're dealing with a SAN, NAS, hyperconverged infrastructure appliance, or other storage configuration, there are many points of possible failure, which can make performance testing complex and difficult. On the other hand, you might be concerned with only one or two components. For example, you may need to add a disk to an array and want to gauge its performance impact. In that case, the scope of your project is more limited. Whatever the scenario, determine exactly which components you'll test.
In most cases, you'll want to capture three metrics where appropriate.
Latency. The average time it takes for a component to complete a single data request. It is the measure of time between issuing a request and receiving a response. The lower the latency, the better the performance.
Throughput. The amount of data that passes through or originates from a component during a specific period. Throughput is typically expressed in bits per second (Kbps, Mbps or Gbps) or bytes per second (MBps or GBps). The higher the throughput, the better the performance.
IOPS. The number of I/O operations that a storage system can process each second. IOPS is concerned only with the number of read and write operations. It does not specify the amount of data included in each operation. Higher IOPS is typically better than lower IOPS, but also consider latency and throughput when you evaluate the data.
Evaluate other storage-specific metrics, such as cache usage, queue depth, I/O splitting, or capacity.
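To make the relationship between the three core metrics concrete, here is a minimal sketch that times synchronous 4 KiB reads against a local file and derives average latency, throughput and IOPS from the same samples. It is illustrative only, not a substitute for a real testing tool: the file path, block size and operation count are assumptions, and reads from a freshly written file will largely hit the page cache.

```python
import os
import time

BLOCK = 4096                       # 4 KiB per I/O request
PATH = "/tmp/storage_metrics.bin"  # placeholder test file

def measure(path: str, block: int, ops: int) -> dict:
    """Time `ops` sequential reads and derive latency, throughput and IOPS."""
    with open(path, "wb") as f:    # create a file large enough to read back
        f.write(os.urandom(block * ops))
    latencies = []
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(ops):
            t0 = time.perf_counter()
            f.read(block)
            latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "avg_latency_ms": 1000 * sum(latencies) / ops,   # lower is better
        "throughput_mbps": block * ops / elapsed / 1e6,  # higher is better
        "iops": ops / elapsed,                           # higher is better
    }

if __name__ == "__main__":
    print(measure(PATH, BLOCK, 1000))
```

Note how the three numbers are linked: for a fixed block size, throughput is roughly IOPS times block size, which is why you should evaluate them together rather than in isolation.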
Do a trial run to ensure your tools work as expected, you've configured them correctly, and you can capture what you need. If you run the tests for a client, have them review the initial results to verify you tested the correct information. You'll probably need to tweak your process, but after you do, you should be ready to run your actual tests. Keep in mind a few important points:
Continuously verify that you collected the correct data and you can easily access that information for your final reporting.
Performance testing is not complete until you've collected and aggregated the results and provided them to the right people. The results should be in a format that is readable and that users can quickly understand and act upon. A report that contains meaningful visuals, such as tables, charts and other graphics, can go a long way in providing key players with quick insight into what your performance tests revealed. Your reports should also clearly point out any bottlenecks, anomalies or other issues observed.
Your reports should include basic information about the testing process itself, such as the components you tested, the types of tests you performed, the tools you used to perform the tests, and who conducted them and when.
After the key stakeholders review the results, they can then address the situation. They might purchase one system over another, upgrade the storage network, add an application server or replace storage devices.
The tools for performance tests will depend on the type of storage system, the components within that system, the vendors that built the components and the team's level of experience. Tools should meet specific requirements. To this end, there are several questions to ask about any storage performance testing tool:
Verify whether the equipment vendor offers storage performance testing or monitoring tools that meet requirements. Some platforms, such as Windows Server, include built-in tools for tracking performance. Check out open-source tools available for testing performance. For example, Flexible I/O is a popular and highly tunable I/O tester that's available on GitHub.