Jason is passionate about hyperscale and scale-out storage systems and storage architectures. His experience ranges from high-performance flash systems to archival systems, and everything in between.
Microsoft and Facebook have merged their SSD requirements into a single document. This presentation will discuss the benefits of this merger and how it helps both system makers and SSD providers.
Lee has over 25 years of storage industry experience, having worked on many types of storage devices ranging from Magneto-Optical to CD/DVD/Blu-ray to spinning rust to Flash. Lee started at Microsoft in 2008 working in the Windows and Devices Group, where he was responsible for many of...
Ross Stenfort is a Hardware System Engineer at Facebook working on storage. He has been involved in development of SSDs, ROCs, HBAs and HDDs. He has over 40 granted patents. He has had extensive storage experience in both large and small companies including CNEX, Seagate, LSI, SandForce...
HDD IO priority has been explored on Facebook's Bryce Canyon storage server. The feature can help enable QoS at the IO level and provides a methodology to better manage latency for our workload. However, a full-stack implementation requires standardization efforts to create an innovative communication channel between the host and the device for defining target metrics and policies. The communication syntax needs to be consistent across vendors, capacities, and the SAS and SATA protocols. In this talk, we would like to present potential latency management benefits based on the preliminary experiments that have been done on our Bryce Canyon platform. We will also discuss certain challenges we are facing, such as the number of priority levels, SAS and SATA differences, and tail latency management. These challenges cannot be easily resolved without collaboration across the industry.
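The scheduling idea behind per-IO priority can be illustrated with a small sketch. This is a hypothetical host-side simulation, not a vendor API or the Bryce Canyon implementation: a command queue that dequeues by priority level first and arrival order second, so a latency-sensitive request jumps ahead of queued background work.

```python
import heapq

def drain_queue(requests):
    """Drain a priority-aware command queue.

    requests: list of (priority, arrival, name) tuples, where a lower
    priority value is more urgent. Ties break on arrival order.
    Returns the names in service order.
    """
    heap = list(requests)
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

# A high-priority user read arriving last is still served first,
# ahead of two earlier background writes.
print(drain_queue([(2, 0, "bg-write-A"), (2, 1, "bg-write-B"), (0, 2, "user-read")]))
# → ['user-read', 'bg-write-A', 'bg-write-B']
```

The open questions named in the abstract (how many priority levels, and how consistently vendors interpret them) are exactly the parameters this toy model takes for granted.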
When hard disk drives are used in cloud environments, the effects of each drive's queuing methods cease to be limited to the interactions between that one drive and the host. Instead, individual drives become participants in a larger management algorithm for data accesses.
Ralph Weber edited the SCSI Primary Commands standard from its inception in 1994 until 2016, coined the acronym CAP (Commands, Architecture, and Protocols) as the moniker for the most active SCSI Working Group, chaired T10 Plenaries for two years ending in 2016, and has been honored...
Bill Boyle has been in the hard disk drive industry for 30 years (25 at Western Digital). He has a breadth of experience in systems architecture – HDD controller firmware, servo firmware, systems storage interfaces – and has accumulated a considerable aptitude for designing widely...
The SAS and SATA interfaces have dominated the storage market for nearly two decades, but are these the right interfaces for the future? For SSDs, the market is already shifting to NVMe for its higher bandwidth, lower latency, and overall light-weight interface. For the HDD market, SAS and SATA are plenty fast for many more years. Why would we consider NVMe? Join us as we explore some of the untapped opportunities with NVMe HDDs.
Jason is passionate about hyperscale and scale-out storage systems and storage architectures. His experience ranges from high-performance flash systems to archival systems, and everything in between.
As the NVMe-oF ecosystem continues to mature, storage systems now have design choices for the type of external and internal fabrics to use, as well as the various attach points. Several Ethernet-based protocols (RoCE, iWARP, TCP) have emerged as design choices for storage fabrics and interfaces. There are unique properties and trade-offs associated with those choices, as well as different implementation and acceleration paths. An industry discussion has ensued on how close to the storage the Ethernet I/F should be carried: data center, rack, or even device. This presentation will provide a broad view of Ethernet-attached NVMe-oF disaggregated storage systems and what it would take for successful wide deployment of those systems.
Ihab Hamadi is Sr. Director of Engineering at Western Digital, where he focuses on the innovation and growth of the company's platforms business, delivering data and application-centric solutions for the enterprise and cloud. He has a solid 21-year track record of building leading-edge...
In the last ~40 years, some of us have seen many evolutions of block storage protocols come and go: IDE/ATA, SCSI, PATA, P-SCSI, SATA, USB, FCP, SAS … NVMe. Each protocol optimized specific Enterprise or Consumer storage device solutions and features. Regardless of the various storage media types and characteristics, is it finally possible for the storage industry to unify and consolidate around NVMe as the optimal block storage protocol? … Why? … When?
Mohamad El-Batal is the Seagate Enterprise Data Solutions (EDS) Chief Technologist, and part of the overall Seagate Office of the CTO team. He is given the opportunity to shape the Seagate EDS strategy and future storage product technology roadmap. In his career, Mohamad led a team...
Today, all the EDSFF form factors share the same protocol (NVMe), the same interface (PCIe), the same edge connector (SFF-TA-1002), and the same pinout and functions (SFF-TA-1009). There is a vast diversity of Enterprise and Datacenter applications. Data center infrastructure can be optimized in various ways: for maximum capacity per rack unit, maximum performance, balanced performance to capacity, networking bandwidth, or for higher-performance and higher-TDP CPUs. Having a flexible and scalable family of form factors allows for optimization for different use cases, different media types on SSDs (e.g. TLC, QLC, SCM), scalable performance on PCIe 4.0 & 5.0, and improved data center TCO through optimized power and thermal management, while maintaining key commonalities for compatibility and faster development. SFF-TA-1012 has been published to SNIA to show the breadth of the SFF-TA-1002 ecosystem and provide pinout definitions for EDSFF SSDs (SFF-TA-1009), OCP NIC, Gen Z, PECFF, SNIA NVMe-oF ethernet drives, and future high speed devices.
EDSFF represents a new family of form factors. This talk will discuss the intersection of flash form factors, industry trends and hyperscale needs. In this talk Facebook seeks to share its experiences and insights into the challenges and solutions for the next generation of flash.
Ross Stenfort is a Hardware System Engineer at Facebook working on storage. He has been involved in development of SSDs, ROCs, HBAs and HDDs. He has over 40 granted patents. He has had extensive storage experience in both large and small companies including CNEX, Seagate, LSI, SandForce...
SSDs have gone from expensive storage devices used sparingly to a mainstream storage solution. In this transition, SSDs have left behind the legacy HDD-defined form factors and are finding more optimal design opportunities, including E1.S/L and E3.S/L. With E1.L SSDs, we can enable designs that fit 1PB of flash in a front-serviceable system in the near future! On the other hand, E1.S SSDs can enable very high performance devices at a smaller capacity point that can be used across the storage and compute fleet. Together with the OCP community, we are working to ensure that an E1.S form factor is able to meet the needs of hyperscale consumers like us as well as the rest of the industry. We believe that these EDSFF form factors are the future of flash, and will optimize and unlock new opportunities in our system designs.
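The 1PB claim is easy to sanity-check with back-of-envelope arithmetic. The drive capacity and slot count below are illustrative assumptions, not figures from the talk:

```python
def drives_for_capacity(target_tb, drive_tb):
    """Minimum number of drives needed to reach target_tb (ceiling division)."""
    return -(-target_tb // drive_tb)

# Assuming hypothetical 32 TB E1.L drives, a 1 PB (1000 TB) chassis needs:
print(drives_for_capacity(1000, 32))  # → 32

# With smaller 16 TB drives the slot count doubles, which is why the long
# E1.L form factor matters for front-serviceable density:
print(drives_for_capacity(1000, 16))  # → 63
```

A single-row, front-serviceable 1U chassis holding roughly 32 E1.L devices is the kind of layout that makes the math work.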
Jason is passionate about hyperscale and scale-out storage systems and storage architectures. His experience ranges from high-performance flash systems to archival systems, and everything in between.
With NVMe now becoming mainstream and new protocols such as CXL on the way, system designers are challenged with supporting multiple device types in the front and rear of a server. This presentation discusses a new family of interoperable device form factors that support a wide range of device types through features such as a common high-speed connector, multiple link widths, multiple PCB sizes, and a range of power profiles.
The goal of establishing a standardized thermal analysis approach for SSD devices is to align both producers and consumers in the areas of form factor selection, thermal analysis, device comparison, and spatial boundaries. It is a known problem that thermal designers and technicians use a variety of methodologies to qualify their devices' designs. The outcome of this effort will combine the best known methodologies for SSD characterization into one industry specification and approach that establishes confidence in comparing key engineering metrics.
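One of the simplest metrics such a methodology would standardize is a steady-state temperature estimate from device power and a case-to-ambient thermal resistance. The formula is standard thermal engineering; the specific numbers below are illustrative assumptions, not values from any specification:

```python
def case_temp_c(ambient_c, power_w, theta_ca_c_per_w):
    """Steady-state case temperature: T_case = T_ambient + P * theta_ca.

    theta_ca_c_per_w is the case-to-ambient thermal resistance in °C/W,
    which depends on the form factor, heatsink, and airflow.
    """
    return ambient_c + power_w * theta_ca_c_per_w

# A hypothetical 12 W device with 2.5 °C/W resistance at a 35 °C inlet:
print(case_temp_c(35.0, 12.0, 2.5))  # → 65.0
```

Agreeing on how theta is measured (boundary conditions, airflow, sensor placement) is precisely what lets consumers compare this number across vendors.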
The objective of this presentation is to share the efforts the Facebook Storage team has made to quantitatively gauge the impact of system acoustic vibration on future HDDs and to establish a reference acoustic vibration guideline for the next-generation HDD storage platform. Since the number of drives is fixed once the chassis design is done, the main driving force for the HDD storage platform at Facebook is to increase drive density and achieve a large TCO win. However, higher-capacity drives are more sensitive to vibration, with most of the impact coming from acoustic vibration. The current approach relies on actual drives being available from suppliers to evaluate acoustic vibration impact; it is limited to current-generation drives and lacks insight into the platform's compatibility with future drives. Although emulators are available from some suppliers for this purpose, today's methods are sometimes proprietary and vendor-specific. Sound Pressure Level (SPL) around the hard drives is used in this study as the indicator of acoustic vibration impact. SPL is a drive-agnostic characteristic that depends only on the system fan and chassis design. It would be very beneficial to the industry if all HDD vendors published SPL guidelines to stay under (preferably a mask of sorts that is constant across multiple HDD generations) for the purpose of acoustic vibration mitigation in a chassis from the early product design phase.
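SPL has a standard definition that makes it drive-agnostic: decibels of measured sound pressure relative to the 20 µPa reference for airborne sound. A minimal sketch (the input pressure is an illustrative value, not a Bryce Canyon measurement):

```python
import math

P_REF_PA = 20e-6  # standard reference pressure for sound in air, 20 µPa

def spl_db(pressure_pa):
    """Sound Pressure Level in dB re 20 µPa."""
    return 20.0 * math.log10(pressure_pa / P_REF_PA)

# A 0.2 Pa RMS pressure near the drives corresponds to:
print(round(spl_db(0.2), 1))  # → 80.0 dB
```

A vendor-published SPL mask would be a ceiling on this quantity, measured at the drive, across the fan-noise spectrum, that a chassis design must stay under.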
Madhavan is a Hardware engineer in the Storage Hardware team that designs all the Storage gear used inside Facebook. His interests include hardware architecture, storage system efficiency and performance, storage media, and file systems.
Jun is a thermal engineer in the Infra HW group. His work at Facebook is primarily in the storage platform thermal management. He earned his PhD degree in Mechanical Engineering from the University of Maryland, College Park in 2004.