
Using QLC and All-Flash Arrays in Data Center Modernization

By Ricky Martin – Director of Market Strategy, NetApp

With the release of NetApp® C-Series capacity flash arrays, there’s been a lot of discussion about which workloads belong on QLC and which ones should stay on the high-performance A-Series all-flash arrays. One way of looking at this question is the old “tier 1 versus tier 2” split. However, there are plenty of mission-critical tier 1 workloads that don’t do a lot of random reads and writes and therefore don’t need the 200 microsecond response times that NetApp’s premier flash offerings provide.

When to use C-Series QLC capacity flash

One breakdown of the ideal workloads for NetApp C-Series QLC capacity flash looks like this:

  • Large-capacity, high-throughput tier 1 workloads where 2ms to 4ms latency for small block I/O is more than enough:
    • Media and rendering
    • Data lakes for unstructured data analytics
  • Consolidation of tier 2 and tier 3 workloads, where high capacity and data center efficiency are the primary drivers:
    • General-purpose file storage
    • Home directories
    • Non-mission-critical databases
    • General-purpose VMware
    • DevTest sandbox copies for databases or virtual machines (NAS or SAN)
    • Fast recovery for streaming backups
    • Target system for data tiering

The A-Series high-performance all-flash arrays should be reserved for things like:

  • Nonvirtualized enterprise resource planning (ERP) applications and databases used for online transaction processing (OLTP)
  • Time-series databases
  • High-performance applications virtualized under VMware
  • Artificial intelligence, machine learning, and deep learning

In short, QLC is good enough for any workload except for those that demand the highest level of performance for small random I/O.
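
To make that rule of thumb concrete, here is a minimal Python sketch of the placement logic described above. The workload names, fields, and the suggest_tier() helper are illustrative assumptions, not NetApp tooling or official sizing guidance.

```python
# Hypothetical triage helper that encodes the rule of thumb above.
# Workload names, fields, and thresholds are illustrative assumptions,
# not NetApp tooling or official sizing guidance.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_sensitive: bool  # needs ~200-microsecond small-block response times?
    io_pattern: str          # "small_random" or "large_sequential"

def suggest_tier(w: Workload) -> str:
    """Return a first-pass placement suggestion."""
    if w.latency_sensitive and w.io_pattern == "small_random":
        return "A-Series (high-performance flash)"
    # Everything else tolerates 2ms to 4ms small-block latency well enough.
    return "C-Series (QLC capacity flash)"

workloads = [
    Workload("OLTP ERP database", latency_sensitive=True, io_pattern="small_random"),
    Workload("Media rendering scratch space", latency_sensitive=False, io_pattern="large_sequential"),
    Workload("Home directories", latency_sensitive=False, io_pattern="small_random"),
]

for w in workloads:
    print(f"{w.name}: {suggest_tier(w)}")
```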

When to use A-Series systems

The applications and workloads that demand that kind of small-block random I/O are typically tier 1 databases like Oracle or SQL Server. In addition, there are newer tier 1 apps and workloads that run high-speed analytics over millions of small files for tasks such as training artificial intelligence models or running chip simulations with electronic design automation (EDA) software. That’s not an exhaustive list; each industry has its own workloads where the 200-microsecond response times that A-Series offers can make a significant difference to the speed of business. NetApp, for example, uses the speed of A-Series systems to accelerate its software build processes, and our healthcare customers use that performance to provide better patient care.

Outside of those tier 1 performance requirements, the biggest drawback to QLC is that compared to the kinds of media used in A-Series systems, it wears out more quickly. That isn’t a worry for our customers because we replace any media that wears out for as long as it is under service. The flip side is that to make sure that our customers never see a service interruption due to drive wear, the data needs to be spread across a reasonably large pool of QLC flash media. For NetApp, the minimum size of that capacity pool is 122TB RAW. That’s much lower than other vendors, who typically start at around 250TB for their block arrays and 1PB for their scale-out unstructured-only data silos. So, although that starting point is the best in the industry, it’s still higher than many AFF installed capacities. By comparison, the minimum starting point with the A-Series AFF A150 is 7.6TB RAW. For customers with more modest capacity requirements, the most cost-effective options are still some form of hybrid flash array technology. Those options combine SSD capacity with cloud storage, such as an AFF A150 or AFF A250 with cloud tiering or a more traditional hybrid flash array like the new FAS2820.
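
As a rough illustration of how those minimum starting points play out, here is a back-of-the-envelope sketch. The per-site capacity figures are made up; only the 122TB and 7.6TB minimums come from the discussion above, and any real sizing should be done with NetApp or a partner.

```python
# Back-of-the-envelope check against the minimum raw capacities quoted
# above: 122TB raw for a C-Series QLC pool and 7.6TB raw for an AFF A150.
# The per-site capacity figures below are made up for illustration.

C_SERIES_MIN_RAW_TB = 122.0
AFF_A150_MIN_RAW_TB = 7.6

def options_for(required_raw_tb: float) -> list[str]:
    """List the platforms whose minimum starting capacity the site clears."""
    options = []
    if required_raw_tb >= C_SERIES_MIN_RAW_TB:
        options.append("C-Series QLC capacity flash")
    if required_raw_tb >= AFF_A150_MIN_RAW_TB:
        options.append("A-Series high-performance flash")
    if required_raw_tb < C_SERIES_MIN_RAW_TB:
        # Below the QLC pool minimum, hybrid flash or AFF with cloud tiering
        # is usually the more cost-effective route, per the discussion above.
        options.append("hybrid flash array or AFF with cloud tiering")
    return options

for site_tb in (15, 80, 300):  # hypothetical sites, raw TB required
    print(f"{site_tb} TB raw -> {', '.join(options_for(site_tb))}")
```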

Is QLC right for you?

That 100+ TB starting point means that many customers should begin by looking at QLC as the ideal technology for modernizing and consolidating a broad range of workloads currently running on aging arrays. NetApp helps customers take advantage of the price-performance of QLC with technology like our industry-leading multitenancy, advanced quality of service, encryption, privacy, and cyber resilience. These features provide a safe default “landing zone” for all of our customers’ data storage modernization efforts. This is where NetApp’s long-term leadership in unified storage helps customers: NetApp ONTAP® has a proven track record of providing a safe technology foundation for migrating, consolidating, and managing data at any scale, easily and efficiently.
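
For illustration only, here is one way such a consolidation “landing zone” might be planned on paper: workloads grouped into tenants, each with a QoS ceiling. The tenant names and limits are invented, and actual configuration would be done with ONTAP and BlueXP tooling rather than a script like this.

```python
# Purely illustrative "landing zone" planning sketch: consolidated workloads
# grouped into tenants, each with a QoS ceiling. Tenant names and limits
# are invented; real configuration is done with ONTAP and BlueXP tooling.

from collections import defaultdict

consolidation_plan = [
    # (workload, tenant, max throughput in MB/s)
    ("home-directories", "svm_corp_files", 500),
    ("devtest-db-clones", "svm_devtest", 300),
    ("general-vmware", "svm_virt", 800),
]

by_tenant = defaultdict(list)
for workload, tenant, limit_mbps in consolidation_plan:
    by_tenant[tenant].append((workload, limit_mbps))

for tenant, items in by_tenant.items():
    total = sum(limit for _, limit in items)
    print(f"{tenant}: {len(items)} workload(s), aggregate QoS ceiling {total} MB/s")
    for workload, limit in items:
        print(f"  - {workload}: max {limit} MB/s")
```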

Are you still wondering whether QLC might be right for you? Are you looking for a quick way to figure out which workloads are good candidates for staying on high-performance flash? Then look at how you connect your workloads to your host environment. If you plan to deploy low-latency connections like NVMe over Fibre Channel (NVMe-FC) or NFS over RDMA with GPUDirect Storage to get the fastest possible results with the lowest impact on your application hosts, then you should stick with A-Series. But if you’re currently using iSCSI, older 8Gb Fibre Channel infrastructure, or a hybrid flash-and-disk array without performance issues, then the QLC-based NetApp C-Series capacity flash will almost certainly be the right choice.
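
If it helps, here is a small sketch of that connectivity rule of thumb. The protocol labels and the first_pass() helper are assumptions for illustration; treat the result as a first-pass screen, not a sizing decision.

```python
# Hypothetical sketch of the connectivity rule of thumb above. Protocol
# labels and the first_pass() helper are assumptions for illustration.

LOW_LATENCY = {"nvme-fc", "nfs-over-rdma"}           # fastest host connectivity
CONSOLIDATION = {"iscsi", "8gb-fc", "hybrid-array"}  # typical modernization candidates

def first_pass(connectivity: str, has_performance_issues: bool = False) -> str:
    c = connectivity.lower()
    if c in LOW_LATENCY:
        return "Stay on A-Series high-performance flash"
    if c in CONSOLIDATION and not has_performance_issues:
        return "Strong candidate for C-Series QLC capacity flash"
    return "Needs closer analysis (for example, a performance assessment)"

print(first_pass("nvme-fc"))        # -> stay on A-Series
print(first_pass("iscsi"))          # -> C-Series candidate
print(first_pass("8gb-fc", True))   # -> needs closer analysis
```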

Of course, you will want to do more due diligence than just “throwing all the old hybrid and iSCSI onto QLC.” NetApp can help you identify which workloads belong where in ways that no other vendor can. For example, our AIOps features, accessible via NetApp BlueXP™, can help existing customers refresh, upgrade, and consolidate their hybrid systems with confidence. And our observability platform, NetApp Cloud Insights, can help customers characterize workloads across their entire storage estate. If you prefer an expert to guide you through that process, our highly trained channel partners can help, or you can take advantage of our NetApp Professional Services offerings, such as the NetApp Performance Assessment.

NetApp makes it easy to safely modernize and save money and power by consolidating your existing storage using a combination of QLC capacity flash, high-performance flash, and cloud. In a future blog post, I will discuss more advanced features, like nondisruptive dynamic workload portability between high performance and capacity flash, long-distance caching, and fully automated transparent tiering for cool and cold data. Until then, I hope that this post has helped you understand where QLC capacity flash fits into your data center modernization plans.

Next step

For more insights into improving operations for your entire storage environment, on premises and in the cloud, check out the NetApp Performance Assessment.
