
What is the future development of storage technology?

  • September 21, 2020

Keep one thing in mind: the ultimate goal is to get closer to the CPU and reduce access time (i.e., latency).

       Storage Class Memory


   For many years, vendors have been developing technology that puts non-volatile memory into traditional DIMM slots (the slots used by volatile DRAM). Storage Class Memory (SCM) is a newer hybrid storage tier: neither pure memory nor pure storage, it sits closer to the CPU and comes in two forms:


  1) Traditional DRAM backed by large capacitors, which flushes data to on-module NAND chips (for example, NVDIMM-N)


   2) A module populated entirely with NAND (NVDIMM-F).


  In the first case, the speed of DRAM is preserved, but capacity is limited: DRAM-based NVDIMMs generally hold less than the latest conventional DRAM modules. Vendors such as Viking Technology and Netlist are the main manufacturers of DRAM-based NVDIMM products.


   The second type offers larger capacity but is not as fast as DRAM. Here, the NAND mounted on the traditional DIMM module is exactly the same NAND found in modern SSDs.


  This type of memory does not register with the CPU as ordinary memory. Under the DDR4 standard, modern motherboards and processors need no special firmware to use this technology. When the operating system boots on a system containing this type of memory, it sees the region marked as reserved in the firmware memory map (the e820 map) and does not use it like standard volatile DRAM. Instead, it accesses the memory only through a driver interface (the non-volatile memory, or pmem, driver). Using this driver, the memory regions of these SCM devices can be mapped to block devices that are accessible from user space.
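
  As a rough illustration of that last step: on Linux, such persistent-memory block devices typically appear as nodes like /dev/pmem0 and can be memory-mapped directly from user space. The sketch below uses an ordinary file as a stand-in for the device node, since real SCM hardware would be needed to open /dev/pmem0 itself:

```python
import mmap
import os

# Stand-in for a persistent-memory block device such as /dev/pmem0.
# On real SCM hardware you would open the device node directly
# (often through a DAX-enabled filesystem).
path = "pmem_standin.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # one page of backing storage

fd = os.open(path, os.O_RDWR)
try:
    # Map the "device" into the process address space, the way a
    # pmem-aware application would access SCM: loads and stores go
    # straight through the mapping, with no read()/write() syscalls.
    with mmap.mmap(fd, 4096) as m:
        m[0:5] = b"hello"      # store directly through the mapping
        data = bytes(m[0:5])   # load it back
finally:
    os.close(fd)
    os.remove(path)

print(data)  # b'hello'
```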


   Current applications use SCM for in-memory databases, high-performance computing (HPC) and artificial intelligence (AI) workloads, and also as a persistent cache. As NVMe over Fabrics (NVMe-oF) continues to mature, it will allow SCM devices to be exported across storage networks.


   Intel’s Optane, Samsung’s Z-SSD, etc.


Between DRAM and traditional SSDs sit some emerging technologies, such as Intel's Optane (originally developed in cooperation with Micron under the name 3D XPoint) and Samsung's Z-SSD. These technologies are very new, and little is known about them beyond the fact that they are neither DRAM nor NAND. Intel's Optane is a new non-volatile storage technology based on phase-change memory (PCM). Optane's performance is better than NAND's, but not as good as DRAM's. Another advantage is endurance: it can sustain more drive writes per day (DWPD) than a standard NAND SSD.
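
DWPD is easy to compute from a drive's rated total bytes written (TBW) over its warranty period. A quick sketch, using made-up ratings for illustration rather than actual vendor specifications:

```python
def dwpd(tbw_terabytes: float, capacity_terabytes: float, warranty_years: float) -> float:
    """Drive writes per day = rated total writes / (capacity * warranty days)."""
    days = warranty_years * 365
    return tbw_terabytes / (capacity_terabytes * days)

# Hypothetical ratings: a 1.6 TB NAND SSD rated for 2,920 TBW over 5 years,
# versus a 1.5 TB Optane-class drive rated for 27,375 TBW over 5 years.
nand = dwpd(2920, 1.6, 5)      # 1.0 full drive write per day
optane = dwpd(27375, 1.5, 5)   # 10.0 full drive writes per day
print(nand, optane)  # 1.0 10.0
```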


  Computational storage


   Often, the latency between an application and the data it needs to access is too long, or the CPU cycles required to host the application consume too many host resources, adding further latency on the drive side. How can we avoid these negative effects? One answer is to move the application onto the physical drive itself. This recent trend is called computational storage.


  Pioneers of this technology include NGD Systems, ScaleFlux, and Samsung. So what is computational storage, and how is it achieved?


  The idea is to move data processing to the data storage layer and avoid shuttling data to the computer's main memory, where it would otherwise be processed by the host CPU. On traditional systems, resources are spent moving data from its storage location, processing it, and moving it back to the same destination. The whole round trip takes time and introduces access latency; if the host system is busy with other tasks, the situation is even worse. And the larger the data set, the longer it takes to move in and out.


   To solve this problem, some vendors have begun to integrate embedded microprocessors into their NVMe SSD controllers. The processor runs a standard operating system (such as Ubuntu Linux) and allows software to run locally on the SSD, performing computation in place.
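
   The effect can be sketched in a few lines: instead of shipping every record to the host and filtering there, the host sends the predicate to the drive and receives only the matches. The class below is a toy model of that division of labor; the API is invented for illustration, since real computational-storage drives expose their own vendor-specific interfaces:

```python
from typing import Callable, List


class ComputationalDrive:
    """Toy model of an SSD with an embedded processor (illustrative only)."""

    def __init__(self, records: List[dict]):
        self._records = records  # data resident on the drive

    def read_all(self) -> List[dict]:
        # Traditional path: every record crosses the bus to the host.
        return list(self._records)

    def execute(self, predicate: Callable[[dict], bool]) -> List[dict]:
        # Computational path: the filter runs on the drive's own CPU,
        # so only matching records cross the bus.
        return [r for r in self._records if predicate(r)]


drive = ComputationalDrive([{"id": i, "hot": i % 100 == 0} for i in range(10_000)])

# Host-side filtering moves all 10,000 records; on-drive filtering moves 100.
host_side = [r for r in drive.read_all() if r["hot"]]
on_drive = drive.execute(lambda r: r["hot"])
assert host_side == on_drive
print(len(on_drive))  # 100
```

   The results are identical; the difference is how much data crossed the host interface, which is exactly the latency and bandwidth cost computational storage tries to eliminate.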


   Challenges


   One area where HDDs continue to outperform SSD technology is capacity within standard form factors. A standard server can only hold so much storage, and hard disks are still available in far larger capacities than SSDs. As memory technology develops, this may change in the next few years.


   Another difficulty for SSDs lies in software. Many software applications do not follow the optimal access patterns for NAND memory; such applications increase drive access latency while shortening NAND cell life.


  To sum up


   With the memory technologies involved, the future looks promising and exciting at the same time. Will SSDs completely replace traditional HDDs? I doubt it. Look at tape technology: it still exists and continues to find its place in archival storage. HDDs are likely to meet a similar fate, though until then they will continue to compete with SSDs on price and capacity.
