The buses that are exposed through the M.2 connector are PCI Express 3.0, Serial ATA (SATA) 3.0 and USB 3.0, which is backward compatible with USB 2.0. As a result, M.2 modules can integrate multiple functions, including the following device classes: Wi-Fi, Bluetooth, satellite navigation, near field communication (NFC), digital radio, Wireless Gigabit Alliance (WiGig), wireless WAN (WWAN), and solid-state drives (SSDs). The SATA revision 3.2 specification, in its gold revision as of August 2013, standardizes the M.2 as a new format for storage devices and specifies its hardware layout.
The M.2 specification provides up to four PCI Express lanes and one logical SATA 3.0 (6 Gbit/s) port, and exposes them through the same connector, so both PCI Express and SATA storage devices may exist in the form of M.2 modules. Exposed PCI Express lanes provide a pure PCI Express connection between the host and the storage device, with no additional layers of bus abstraction. The PCI-SIG M.2 specification, in its revision 1.0 as of December 2013, provides the detailed specification of the M.2 interface.
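Because the same connector can carry either bus, it is not always obvious which one a given M.2 SSD enumerated on. On Linux, the kernel's block-device naming gives a quick heuristic: NVMe devices appear as nvme*, while SATA devices behind the AHCI driver appear as sd*. The following Python sketch illustrates this; it assumes a Linux sysfs at /sys/block and is a naming heuristic, not a full bus probe.

```python
from pathlib import Path


def classify_block_devices(sysfs: Path = Path("/sys/block")) -> dict:
    """Map each block device name to a rough guess at its bus.

    Heuristic sketch: nvme* devices sit on PCI Express via the NVMe
    driver; sd* devices are SCSI-layer disks, typically SATA via AHCI.
    """
    devices = {}
    entries = sorted(sysfs.iterdir()) if sysfs.is_dir() else []
    for dev in entries:
        name = dev.name
        if name.startswith("nvme"):
            devices[name] = "PCI Express (NVMe driver)"
        elif name.startswith("sd"):
            devices[name] = "SATA/SCSI (AHCI or other HBA)"
        else:
            devices[name] = "other"
    return devices


if __name__ == "__main__":
    for name, bus in classify_block_devices().items():
        print(f"{name}: {bus}")
```

Passing a directory path explicitly makes the heuristic easy to exercise against a mock sysfs tree; on a real system the default `/sys/block` is scanned.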
There are three options available for the logical device interfaces and command sets used for interfacing with M.2 storage devices, which may be used depending on the type of M.2 storage device and available operating system support:
SATA
Used for SATA SSDs, and interfaced through the AHCI driver and the legacy SATA 3.0 (6 Gbit/s) port exposed through the M.2 connector.
PCI Express using AHCI
Used for PCI Express SSDs, and interfaced through the AHCI driver and provided PCI Express lanes. This option offers backward compatibility with the widespread SATA support in operating systems, at the cost of suboptimal performance, because AHCI was not designed for PCI Express SSDs. AHCI was developed when the purpose of a host bus adapter (HBA) was to connect the CPU/memory subsystem with a much slower storage subsystem based on rotating magnetic media; as a result, AHCI has inherent inefficiencies when applied to SSDs, which behave much more like DRAM than like spinning media.
PCI Express using NVMe
Used for PCI Express SSDs and interfaced through the NVMe driver and provided PCI Express lanes, as a high-performance and scalable host controller interface designed and optimized especially for interfacing with PCI Express SSDs. NVMe has been designed from the ground up, capitalizing on the low latency and parallelism of PCI Express SSDs, and complementing the parallelism of contemporary CPUs, platforms and applications. At a high level, primary advantages of NVMe over AHCI relate to NVMe’s ability to exploit parallelism in host hardware and software, based on its design advantages that include data transfers with fewer stages, greater depth of command queues, and more efficient interrupt processing.
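The queue-depth difference mentioned above can be made concrete with a small arithmetic sketch. Per the respective specifications, AHCI exposes a single command queue with up to 32 outstanding commands, while NVMe allows up to 65,535 I/O queues with up to 65,536 commands each; the Python below simply computes the resulting upper bounds on commands in flight.

```python
# Queue-model limits taken from the AHCI and NVMe specifications:
# AHCI: one command queue, up to 32 outstanding commands.
# NVMe: up to 65,535 I/O queues, up to 65,536 commands per queue.
AHCI_QUEUES, AHCI_QUEUE_DEPTH = 1, 32
NVME_QUEUES, NVME_QUEUE_DEPTH = 65_535, 65_536


def max_outstanding(queues: int, depth: int) -> int:
    """Theoretical upper bound on commands in flight for a queue model."""
    return queues * depth


if __name__ == "__main__":
    ahci = max_outstanding(AHCI_QUEUES, AHCI_QUEUE_DEPTH)
    nvme = max_outstanding(NVME_QUEUES, NVME_QUEUE_DEPTH)
    print(f"AHCI: {ahci} outstanding commands")      # 32
    print(f"NVMe: {nvme:,} outstanding commands")    # 4,294,901,760
```

These are specification ceilings, not what real controllers implement, but they show why NVMe's deep, parallel queues map so much better onto multi-core hosts than AHCI's single 32-entry queue.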