Fusion-io NVMFS

SanDisk/Fusion-io's NVMFS file system, formerly known as Direct File System (DFS),[1][2] accesses flash memory through a virtualized flash storage layer instead of the traditional block layer API. The file system has two main novel features. First, it lays out files directly in a very large virtual storage address space. Second, it delegates block allocation and atomic updates to the virtualized flash storage layer. As a result, NVMFS performs better than a traditional Unix file system with similar functionality while being much simpler, and it avoids the log-on-log performance problems that arise when a log-structured file system runs on top of flash storage that is itself log-structured.[3]

Microbenchmark results show that NVMFS delivers 94,000 I/O operations per second (IOPS) for direct reads and 71,000 IOPS for direct writes with the virtualized flash storage layer on top of a first-generation Fusion-io ioDrive. For direct access, NVMFS is consistently faster than ext3 on the same platform, sometimes by 20%; for buffered access it is likewise consistently faster, sometimes by over 149%. Application benchmarks show that NVMFS outperforms ext3 by 7% to 250% while consuming less CPU.[1] I/O latency is also lower with NVMFS than with ext3.[4]
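
The first feature can be pictured with a short sketch. The following C fragment is illustrative only, not NVMFS source: it assumes a 64-bit virtual block address space carved into fixed-size per-file extents, so that a file offset maps to a virtual address by arithmetic alone, and every name in it is invented for the example.

    #include <stdint.h>

    /* Illustrative sketch, not NVMFS source: each file owns one large,
     * contiguous extent of the sparse virtual flash address space, so a
     * (file, offset) pair maps to a virtual address by arithmetic alone
     * and the file system needs no per-block allocation maps of its own. */
    #define EXTENT_BITS 32            /* assumed: each file may grow to 2^32 bytes */

    typedef uint64_t vaddr_t;         /* address in the sparse virtual space */

    static vaddr_t file_offset_to_vaddr(uint64_t file_id, uint64_t offset)
    {
        /* The virtual space is orders of magnitude larger than the
         * physical flash, so sparse extents are cheap: the storage
         * layer backs only blocks that have actually been written. */
        return (file_id << EXTENT_BITS) | offset;
    }

Under such a scheme the file system keeps no free-block bitmap of its own; allocation happens implicitly in the flash layer when a virtual block is first written.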

Flash Memory API

The API used by NVMFS to access flash memory consists of the following primitives (a hypothetical sketch in C follows the list):[5]

  • An address space that is several orders of magnitude larger than the storage capacity of the flash memory.
  • Read, append and trim/deallocate/discard primitives.
  • Atomic writes.[6]
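
A rough C rendering of how these primitives could look to a client is sketched below. None of these declarations are published Fusion-io interfaces; every name and signature is an assumption made for illustration.

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>

    typedef uint64_t vaddr_t;         /* sparse virtual flash address */

    /* Read bytes starting at a virtual address. */
    ssize_t vflash_read(vaddr_t addr, void *buf, size_t len);

    /* Append new data; internally the layer writes to the head of its
     * flash log and repoints the address translation map. */
    ssize_t vflash_append(vaddr_t addr, const void *buf, size_t len);

    /* Trim/deallocate/discard: declare a range dead so that garbage
     * collection can reclaim the physical blocks behind it. */
    int vflash_trim(vaddr_t addr, size_t len);

    /* Atomic write: either every range in the batch becomes durable or
     * none does, which lets a file system avoid its own write-ahead
     * logging for multi-block updates (cf. the atomic-write work [6]). */
    struct vflash_iovec { vaddr_t addr; const void *buf; size_t len; };
    int vflash_atomic_write(const struct vflash_iovec *iov, int iovcnt);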

The layer that provides this API is called the virtualized flash storage layer in the DFS paper.[1] It is responsible for block allocation, wear leveling, garbage collection, crash recovery, and address translation, including keeping the address translation data structures persistent.
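
The interplay of address translation and garbage collection can be pictured with a toy model. The C fragment below is an illustrative assumption, not the actual layer: it posits a flat virtual-to-physical block map and an append-only flash log, and all names are invented for the sketch.

    #include <stdbool.h>
    #include <stdint.h>

    #define VBLOCKS (1u << 20)        /* toy capacity: 2^20 blocks, virtual
                                       * and physical alike, for simplicity */

    static uint64_t v2p[VBLOCKS];     /* virtual -> physical block map */
    static bool     stale[VBLOCKS];   /* physical blocks awaiting GC   */
    static uint64_t log_head = 1;     /* next free physical block;
                                       * 0 means "unmapped" in the map */

    /* Called after data for vblock has been appended at the log head:
     * repoint the map and mark the superseded physical block as garbage.
     * (The toy ignores log wrap-around, GC itself, and concurrency.)  */
    static void remap_on_write(uint64_t vblock)
    {
        uint64_t old = v2p[vblock];
        if (old != 0)
            stale[old] = true;        /* reclaimed later by garbage collection */
        v2p[vblock] = log_head++;
        /* A real layer must also make this map update durable so the
         * translation survives a crash (crash recovery + persistence). */
    }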

References

  1. ^ a b c Josephson, William K.; Bongo, Lars A.; Flynn, David; Li, Kai (September 2010). "DFS: A file system for virtualized flash storage" (PDF). ACM Transactions on Storage. 6 (3). doi:10.1145/1837915.1837922. S2CID 1715382.
  2. ^ Talagala, Nisha (24 August 2012). "Native Flash Support For Applications" (PDF). Flash Memory Summit.
  3. ^ Yang, Jingpei; Plasson, Ned; Gillis, Greg; Talagala, Nisha; Sundararaman, Swaminathan (5 October 2014). "Don't stack your Log on my Log" (PDF). 2nd Workshop on Interactions of NVM/Flash with Operating Systems and Workloads (INFLOW 14).
  4. ^ Rochner, Thomas (19 September 2013). "Running NoSQL natively on flash" (PDF). NoSQL Search Roadshow Zurich.
  5. ^ Das, Dhananjoy (14 November 2014). "In a Battle of Hardware, Software Innovation Comes Out On Top". SanDisk. Archived from the original on 2014-11-29.
  6. ^ Ouyang, Xiangyong; Nellans, David; Wipfel, Robert; Flynn, David; Panda, Dhabaleswar K. (February 2011). "Beyond block I/O: Rethinking traditional storage primitives". 2011 IEEE 17th International Symposium on High Performance Computer Architecture. pp. 301–311. CiteSeerX 10.1.1.300.4140. doi:10.1109/HPCA.2011.5749738. ISBN 978-1-4244-9432-3. S2CID 6214993.