17.07.2022



A hard drive for the server: how to choose

The hard drive is the most valuable component in any computer: it stores the information the computer and its user work with. A person sitting down at a personal computer expects the operating system to boot and the hard drive to deliver his data from its depths. If we are talking about a hard drive, or an array of them, inside a server, then there are tens, hundreds or thousands of such users expecting access to personal or work data, and all of their quiet work, recreation and entertainment depends on the devices that store that data. From this comparison alone it is clear that the demands placed on home-class and enterprise-class hard drives are not equivalent: in the first case one user works with the drive, in the second, thousands do. The enterprise drive must therefore be many times more reliable, faster and more stable, because many users depend on it. This article discusses the types of hard drives used in the corporate sector and the design features that give them the highest reliability and performance.

SAS and SATA drives - so similar and so different

Until recently, the standards for enterprise and consumer hard drives differed significantly and were incompatible: SCSI and IDE. Now the situation has changed, and the vast majority of hard drives on the market are SATA and SAS (Serial Attached SCSI). The SAS connector is versatile and form-factor compatible with SATA. This allows both high-speed but relatively small SAS drives (up to 300 GB at the time of writing) and slower but far more capacious SATA drives (up to 2 TB at the time of writing) to be connected directly to the same SAS system. Thus, one disk subsystem can combine vital applications that require high performance and rapid data access with more economical applications that have a lower cost per gigabyte.

This interoperability benefits both backplane manufacturers and end users by reducing hardware and engineering costs.

That is, both SAS and SATA devices can be connected to SAS connectors, while only SATA devices can be connected to SATA connectors.

SAS and SATA - high speed and large capacity. What to choose?

SAS disks, which replaced SCSI disks, fully inherited their defining characteristics: spindle speed (up to 15,000 rpm) and standard capacities (36, 74, 147 and 300 GB). The SAS technology itself, however, differs significantly from SCSI. Let us briefly review the main differences and features. The SAS interface uses a point-to-point connection: each device is linked to the controller by a dedicated channel, whereas SCSI devices share a common bus.

SAS supports up to 16,384 devices in a domain, while the SCSI interface supports 8, 16 or 32 devices on a bus.

The SAS interface gives each device a dedicated link running at 1.5, 3 or 6 Gb/s, while the bandwidth of a SCSI bus is not allocated per device but divided among all devices on it.

SAS supports the connection of slower SATA devices.

SAS configurations are much easier to assemble and install. Such a system is easier to scale. In addition, SAS hard drives inherited the reliability of SCSI hard drives.

When choosing between a SAS and a SATA disk subsystem, you should be guided by the functions the server or workstation will perform. To do this, answer the following questions:

1. How many simultaneous, diverse requests will the disks handle? If many, your clear choice is SAS. Likewise, if the system will serve a large number of users, choose SAS.

2. How much information will be stored on the disk subsystem of your server or workstation? If more than 1-1.5 TB, you should pay attention to a system based on SATA hard drives.

3. What is the budget allocated for the purchase of a server or workstation? It should be remembered that in addition to SAS disks, you will need a SAS controller, which also needs to be taken into account.

4. Do you plan to eventually grow the data volume, increase performance or improve the fault tolerance of the system? If so, you need a SAS-based disk subsystem: it is easier to scale and more reliable.

5. Will your server host mission-critical data and applications? Then your choice is heavy-duty SAS drives.
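For illustration, the five questions above can be condensed into a toy decision helper. This is a sketch in Python; the user-count threshold is an arbitrary assumption for the example, not vendor guidance:

```python
def recommend_interface(concurrent_users, capacity_tb, needs_scaling, mission_critical):
    """Toy helper mirroring questions 1-5 above.

    The user-count threshold of 50 is an illustrative assumption.
    """
    # Questions 1, 4 and 5: heavy concurrency, growth plans or
    # mission-critical data all point to SAS.
    if concurrent_users > 50 or needs_scaling or mission_critical:
        return "SAS"
    # Question 2: bulk storage beyond ~1-1.5 TB favours cheaper SATA capacity.
    return "SATA"

# A lightly loaded 2 TB file server vs. a busy database server:
print(recommend_interface(5, 2.0, False, False))   # SATA
print(recommend_interface(200, 0.3, True, True))   # SAS
```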

A reliable disk subsystem is built not only from high-quality hard disks from a reputable manufacturer but also from a good external disk controller; controllers will be covered in a future article. For now, let us consider SATA drives: what types exist and which should be used when building server systems.

SATA drives: consumer and industrial sector

SATA drives are used everywhere, from consumer electronics and home computers to high-performance workstations and servers, and they come in several subclasses: drives for consumer appliances, with low heat dissipation and power consumption and consequently low performance; mid-range drives for home computers; and drives for high-performance systems. In this article we consider the class of hard drives intended for productive systems and servers.

Performance characteristics              | Server-class HDD               | Desktop-class HDD
-----------------------------------------+--------------------------------+-------------------------------
Rotational speed                         | 7,200 rpm (nominal)            | 7,200 rpm (nominal)
Cache size                               |                                |
Average latency                          | 4.20 ms (nominal)              | 6.35 ms (nominal)
Transfer rate, read from cache (SATA)    | 3 Gb/s maximum                 | 3 Gb/s maximum

Physical characteristics                 |                                |
Capacity after formatting                | 1,000,204 MB                   | 1,000,204 MB
Interface                                | SATA 3 Gb/s                    | SATA 3 Gb/s
Sectors available to the user            | 1,953,525,168                  | 1,953,525,168
Height                                   | 25.4 mm                        | 25.4 mm
Length                                   | 147 mm                         | 147 mm
Width                                    | 101.6 mm                       | 101.6 mm
Weight                                   | 0.69 kg                        | 0.69 kg

Shock resistance                         |                                |
Operating (2 ms)                         | 65 G                           | 30 G
Non-operating (2 ms)                     | 250 G                          | 250 G

Temperature                              |                                |
Operating                                | 0 °C to 60 °C                  | 0 °C to 50 °C
Non-operating                            | -40 °C to 70 °C                | -40 °C to 70 °C

Humidity                                 |                                |
Operating                                | 5-95% relative humidity        | 5-95% relative humidity
Non-operating                            | 5-95% relative humidity        | 5-95% relative humidity

Vibration                                |                                |
Operating, linear                        | 20-300 Hz, 0.75 g (0 to peak)  | 22-330 Hz, 0.75 g (0 to peak)
Operating, random                        | 0.004 g²/Hz (10-300 Hz)        | 0.005 g²/Hz (10-300 Hz)
Non-operating, low frequency             | 0.05 g²/Hz (10-300 Hz)         | 0.05 g²/Hz (10-300 Hz)
Non-operating, high frequency            | 20-500 Hz, 4.0 G (0 to peak)   |
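As a sanity check, the sector-count and formatted-capacity rows agree with each other under the standard 512-byte logical sector (a quick Python check):

```python
SECTOR_BYTES = 512              # standard logical sector size of these drives
sectors = 1_953_525_168         # "sectors available to the user" row

capacity_bytes = sectors * SECTOR_BYTES
capacity_mb = capacity_bytes // 1_000_000   # decimal MB, as drive vendors count

print(capacity_mb)              # 1000204 -> matches the "1,000,204 MB" row
```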

The table shows the characteristics of hard drives from one of the leading manufacturers: one column gives data for a server-class SATA hard drive, the other for a conventional desktop SATA hard drive.

The table shows that the disks differ not only in performance but also in operational characteristics, which directly affect the drive's service life and trouble-free operation. Note that outwardly these hard drives differ only insignificantly. Let us consider the technologies and features that make this possible:

  • A reinforced spindle (some manufacturers fix it at both ends), which reduces the influence of external vibration and helps position the head assembly precisely during read and write operations.

  • Intelligent technologies that compensate for both linear and angular vibration, reducing head positioning time and increasing performance by up to 60%.

  • A RAID-aware error-recovery feature that prevents drives from dropping out of RAID arrays, a problem characteristic of conventional drives.

  • Head height adjustment combined with technology that prevents contact with the platter surface, which significantly extends the life of the disk.

  • A wide range of self-diagnostic functions that predict when the drive is likely to fail and warn the user in time to copy information to a backup drive.

  • Features that reduce the rate of unrecoverable read errors, increasing the reliability of a server drive compared to conventional drives.

On the practical side, we can confidently say that specialized hard drives "behave" much better in servers: technical support receives far fewer calls about unstable RAID arrays and failed drives. Manufacturers also support the server segment of hard drives much faster than the consumer one, because the enterprise sector is a priority for any storage vendor; it is there that the most advanced technologies guarding your information appear first.

An alternative to SAS disks:

Western Digital VelociRaptor hard drives. These 10,000 rpm drives have a SATA 6 Gb/s interface and 64 MB of cache; their MTBF is 1.4 million hours.
More details on the manufacturer's website www.wd.com

You can order a server build based on SAS drives or their analogues from our company "Status" in St. Petersburg; you can also buy or order the SAS hard drives themselves in St. Petersburg:

  • call +7-812-385-55-66 in St. Petersburg
  • write to the address
  • Leave an application on our website on the page "Online application"

In modern computer systems, the SATA and SAS interfaces are used to connect the main hard drives. As a rule, the first suits home workstations and the second servers, so the technologies do not compete with each other: they meet different requirements. The significant difference in cost and capacity nevertheless makes users wonder how SAS differs from SATA and look for compromises. Let's see whether that makes sense.

SAS (Serial Attached SCSI) is a serial interface for connecting storage devices, developed on the basis of parallel SCSI to execute the same command set. It is used primarily in server systems.

SATA (Serial ATA) is a serial data exchange interface based on parallel PATA (IDE). It is used in home, office and multimedia PCs and laptops.

As far as HDDs are concerned, despite the different technical characteristics and connectors, there are no cardinal differences between the devices. One-way backward compatibility makes it possible to connect drives with either interface to a server board's SAS controller.

It is worth noting that both connection options also exist for SSDs, but here the significant difference between SAS and SATA lies in the cost of the drive: a SAS SSD can be dozens of times more expensive at comparable capacity. Such a solution, while no longer rare, is therefore intended for fast enterprise-class data centers.

Comparison

As we already know, SAS is used in servers, SATA in home systems. In practice, this means that many users access the former simultaneously with many different tasks, while the latter is used by one person. Accordingly, the server load is much higher, so its disks must be sufficiently fault-tolerant and fast. The SCSI protocols implemented in SAS (SSP, SMP, STP) allow more I/O operations to be processed at the same time.

For an HDD itself, access speed is determined primarily by the spindle rotation speed. For desktop systems and laptops, 5400-7200 rpm is necessary and sufficient; accordingly, it is almost impossible to find a 10,000 rpm SATA drive (apart from the WD VelociRaptor series, again aimed at workstations), and anything faster is simply unattainable. SAS HDDs spin at no less than 7200 rpm, 10,000 rpm can be considered the standard, and 15,000 rpm is the practical maximum.
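The latency figures quoted for such drives follow directly from the spindle speed: on average, the needed sector is half a revolution away from the head. A quick Python check:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency = time for half a revolution."""
    return 0.5 * 60_000 / rpm   # 60,000 ms per minute

for rpm in (5400, 7200, 10_000, 15_000):
    print(f"{rpm:>6} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms")
# 5400 -> 5.56 ms, 7200 -> 4.17 ms (close to the 4.20 ms nominal figure
# quoted earlier), 10000 -> 3.00 ms, 15000 -> 2.00 ms
```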

Serial SCSI drives are considered more reliable and have a higher MTBF. In practice, their stability comes largely from checksum verification of the data being written. SATA drives, by contrast, suffer from "silent errors", when data is written partially or corrupted, which leads to bad sectors.

The main advantage of SAS also works in favor of system fault tolerance: two duplex ports, which allow one device to be connected over two channels. Information is then exchanged simultaneously in both directions, and reliability is ensured by multipath I/O (two controllers back each other up and share the load). The tagged command queue can be up to 256 deep. Most SATA drives, by contrast, have one half-duplex port, and the NCQ queue depth is no more than 32.

The SAS interface assumes the use of cables up to 10 m long. Up to 255 devices can be connected to one port through expanders. SATA is limited to 1m (2m for eSATA), and only supports point-to-point connection of one device.

The difference between SAS and SATA is also felt quite sharply in their development prospects. SAS bandwidth has reached 12 Gb/s, and manufacturers have announced support for 24 Gb/s data rates. The latest revision of SATA stopped at 6 Gb/s and will not evolve further in this respect.

In terms of cost per gigabyte, SATA drives have a very attractive price. In systems where data access speed is not critical and the amount of stored information is large, it makes sense to use them.
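The interface rates above are line rates. With the 8b/10b encoding used by SATA and by SAS generations up to 12 Gb/s, ten bits travel on the wire for every data byte, so usable throughput is roughly a tenth of the line rate (the 24 Gb/s SAS generation changes the encoding, so this simple rule stops there). A quick sketch:

```python
def usable_mb_per_s(line_rate_gbps):
    """8b/10b encoding: 10 bits on the wire per data byte."""
    return line_rate_gbps * 1e9 / 10 / 1e6

for rate in (1.5, 3, 6, 12):
    print(f"{rate} Gb/s link -> ~{usable_mb_per_s(rate):.0f} MB/s payload")
# 1.5 -> 150, 3 -> 300, 6 -> 600, 12 -> 1200 MB/s
```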

Table

SAS                                              | SATA
-------------------------------------------------+---------------------------------------------
For server systems                               | Primarily for desktop and mobile systems
Uses the SCSI command set                        | Uses the ATA command set
HDD spindle speed 7,200 rpm min, 15,000 rpm max  | 5,400 rpm minimum, 7,200 rpm maximum
Checksum verification when writing data          | Higher rate of errors and bad sectors
Two duplex ports                                 | One half-duplex port
Multipath I/O supported                          | Point-to-point connection only
Command queue up to 256                          | Command queue up to 32
Cables up to 10 m                                | Cable length no more than 1 m
Bandwidth up to 12 Gb/s (24 Gb/s in the future)  | Bandwidth 6 Gb/s (SATA III)
Drives cost more, sometimes significantly        | Cheaper per gigabyte

Little has changed over the past two years:

  • Supermicro is abandoning the proprietary "flipped" UIO form factor for controllers; details below.
  • LSI 2108 (SAS2 RAID with 512MB cache) and LSI 2008 (SAS2 HBA with optional RAID support) are still in service. Products based on these chips, both from LSI and from OEM partners, are well debugged and are still relevant.
  • The LSI 2208 has appeared (the same SAS2 RAID with the LSI MegaRAID stack, but with a dual-core processor and 1024MB of cache), as well as the LSI 2308 (an improved version of the LSI 2008 with a faster processor and PCI-E 3.0 support).

Transition from UIO to WIO

As you may remember, UIO boards are ordinary PCI-E x8 boards with the entire component side on the reverse, i.e. facing up when installed in the left riser. This form factor was needed to install a board in the lowest slot of the server, which allowed four boards to be placed in the left riser. UIO is not only an expansion-board form factor; it also covers chassis designed for the risers, the risers themselves, and motherboards of a special form factor with a cutout for the bottom expansion slot and slots for the risers.
This solution had two problems. First, the non-standard form factor limited the customer's choice, since only a few SAS, InfiniBand and Ethernet controllers exist in the UIO form factor. Second, there are not enough PCI-E lanes in the riser slots: only 36, of which just 24 go to the left riser, which is clearly not enough for four PCI-E x8 boards.
What is WIO? First, it turned out to be possible to place four boards in the left riser without having to flip them "butter side up", and risers for regular boards appeared (RSC-R2UU-A4E8+). Then the shortage of lanes (there are now 80) was solved by using slots with a higher pin density.
UIO riser RSC-R2UU-UA3E8+
WIO riser RSC-R2UW-4E8

Results:
  • WIO risers cannot be installed in UIO motherboards (eg X8DTU-F).
  • UIO risers cannot be installed in new WIO boards.
  • There are risers for WIO (on the motherboard) that have a UIO slot for cards. In case you still have UIO controllers. They are used in platforms under Socket B2 (6027B-URF, 1027B-URF, 6017B-URF).
  • New controllers in the UIO form factor will not appear. For example, the USAS2LP-H8iR controller on the LSI 2108 chip will be the last one, there will be no LSI 2208 for UIO - only a regular MD2 with PCI-E x8.

PCI-E controllers

At the moment, three varieties are relevant: RAID controllers based on the LSI 2108/2208 and HBAs based on the LSI 2308. There is also an exotic SAS2 HBA, the AOC-SAS2LP-MV8 on a Marvell 9480 chip, but it is too obscure to cover here. Most use cases for internal SAS HBAs involve storage with ZFS under FreeBSD or various flavors of Solaris, and since these operating systems support it without problems, the choice falls on the LSI 2008/2308 in 100% of cases.
LSI 2108
In addition to the UIO AOC-USAS2LP-H8iR mentioned above, two more controllers have been added:

AOC-SAS2LP-H8iR
LSI 2108, SAS2 RAID 0/1/5/6/10/50/60, 512MB cache, 8 internal ports (2x SFF-8087). An analogue of the LSI 9260-8i controller, but manufactured by Supermicro; there are minor differences in board layout, and the price is $40-50 lower than LSI's. All additional LSI options are supported: activation keys for FastPath and CacheCade 2.0, and battery-backed cache protection with the LSIiBBU07 and LSIiBBU08 (the BBU08 is now preferable: it has an extended temperature range and ships with a cable for remote mounting).
Despite the arrival of more powerful controllers based on the LSI 2208, the LSI 2108 remains relevant thanks to its lower price. Its performance with conventional HDDs is sufficient in any scenario, and its 150,000 IOPS ceiling with SSDs is more than enough for most budget solutions.
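Whether a controller's IOPS ceiling matters depends on what hangs off its eight ports. A back-of-envelope check (the per-device IOPS figures below are illustrative assumptions, not measurements):

```python
def controller_headroom(ports, iops_per_device, controller_limit):
    """Return aggregate IOPS demand and whether it exceeds the controller ceiling."""
    demand = ports * iops_per_device
    return demand, demand > controller_limit

# 8 HDDs at an assumed ~200 random IOPS each: nowhere near any limit.
print(controller_headroom(8, 200, 150_000))      # (1600, False)
# 8 SSDs at an assumed ~50,000 IOPS each exceed the LSI 2108's 150k ceiling...
print(controller_headroom(8, 50_000, 150_000))   # (400000, True)
# ...but still fit under the LSI 2208's 465k ceiling.
print(controller_headroom(8, 50_000, 465_000))   # (400000, False)
```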

AOC-SAS2LP-H4iR
LSI 2108, SAS2 RAID 0/1/5/6/10/50/60, 512MB cache, 4 internal + 4 external ports. An analogue of the LSI 9280-4i4e controller. Convenient in expander chassis, since the expander output does not have to be routed outside to attach additional JBODs, and in 1U four-disk chassis when you need the option of adding disks later. Supports the same BBUs and activation keys.
LSI 2208

AOC-S2208L-H8iR
LSI 2208, SAS2 RAID 0/1/5/6/10/50/60, 1024MB cache, 8 internal ports (2 SFF-8087 connectors). An analogue of the LSI 9271-8i controller. The LSI 2208 is a further development of the LSI 2108: the processor became dual-core, raising the IOPS ceiling to 465,000; PCI-E 3.0 support was added; and the cache grew to 1GB.
The controller supports BBU09 battery cache protection and CacheVault flash protection. Supermicro supplies them under part numbers BTR-0022L-LSI00279 and BTR-0024L-LSI00297 (the second half of each part number is the native LSI part number), but it is easier to purchase them from us through the LSI sales channel. MegaRAID Advanced Software Options activation keys are also supported: AOC-SAS2-FSPT-ESW (FastPath) and AOCCHCD-PRO2-KEY (CacheCade Pro 2.0).
LSI 2308 (HBA)

AOC-S2308L-L8i and AOC-S2308L-L8e
LSI 2308, SAS2 HBA (with IR firmware: RAID 0/1/1E), 8 internal ports (2 SFF-8087 connectors). This is the same controller shipped with different firmware: the AOC-S2308L-L8e carries IT firmware (pure HBA), while the AOC-S2308L-L8i carries IR firmware (RAID 0/1/1E support). The difference is that the L8i can run either IR or IT firmware, whereas the L8e is locked to IT. It is an analogue of the LSI 9207-8i controller. Differences from the LSI 2008: a faster chip (800 MHz, so the IOPS ceiling has risen to 650 thousand) and PCI-E 3.0 support. Applications: software RAID (ZFS, for example) and budget servers.
There will be no cheap RAID-5-capable controllers based on this chip (the iMR stack; among off-the-shelf controllers, the LSI 9240).

Onboard controllers

In its latest products (X9 boards and the platforms based on them), Supermicro denotes an onboard LSI SAS2 controller with the digit "7" in the part number, while the digit "3" indicates chipset SAS (Intel C600). The part number does not, however, distinguish between the LSI 2208 and 2308, so be careful when choosing a board.
  • The LSI 2208-based controller soldered onto motherboards has a limit of 16 disks. If you add a 17th, it simply will not be detected, and the message "PD is not supported" appears in the MSM log. This is offset by a significantly lower price: for example, the bundle "X9DRHi-F + external LSI 9271-8i controller" costs about $500 more than an X9DRH-7F with the LSI 2208 onboard. Working around the limit by reflashing to an LSI 9271 will not succeed: flashing a different SBR block, which worked with the LSI 2108, does not help here.
  • Another feature is the lack of support for CacheVault modules: there is simply not enough room on the boards for the special connector, so only the BBU09 is supported. Whether the BBU09 can be installed depends on the chassis used. For example, the LSI 2208 is used in 7127R-S6 blade servers, which have a BBU connector, but mounting the module itself requires an additional MCP-640-00068-0N battery holder bracket.
  • Flashing the SAS HBA (LSI 2308) firmware now has to be done from UEFI, because under DOS on any of the boards with the LSI 2308, sas2flash.exe fails to start with the error "Failed to initialize PAL".

Controllers in Twin and FatTwin platforms

Some 2U Twin 2 platforms come in several versions with different types of controllers. For example:
  • 2027TR-HTRF+ - Chipset SATA
  • 2027TR-H70RF+ - LSI 2008
  • 2027TR-H71RF+ - LSI 2108
  • 2027TR-H72RF+ - LSI 2208
This variety is possible because the controllers are placed on a special mezzanine board that plugs into a dedicated slot on the motherboard and connects to the disk backplane.
BPN-ADP-SAS2-H6IR (LSI 2108)


BPN-ADP-S2208L-H6iR (LSI 2208)

BPN-ADP-SAS2-L6i (LSI 2008)

Supermicro xxxBE16/xxxBE26 Enclosures

Another topic directly related to controllers is the updating of chassis with SAS2 expander backplanes. Variants have appeared with an additional cage for two 2.5" disks on the rear panel of the chassis, intended for a dedicated boot disk (or boot mirror). Of course, the system can also be booted from a small volume carved out of another disk group, or from additional disks mounted inside the chassis (in 846 chassis you can install extra brackets for one 3.5" or two 2.5" drives), but the updated modifications are much more convenient:




Moreover, these additional disks do not have to be connected to the chipset SATA controller: using an SFF-8087 -> 4x SATA cable, they can be connected to the main SAS controller through the expander's SAS output.
P.S. Hope the information was helpful. Remember that the most complete information and technical support for products from Supermicro, LSI, Adaptec by PMC, and other vendors is available from True System.

This article focuses on what connects a hard drive to a computer: the hard drive interface. More precisely, hard drive interfaces, because over the entire history of these devices a great many connection technologies have been invented, and the abundance of standards in this area can confuse an inexperienced user. However, first things first.

Hard drive interfaces (strictly speaking, external drive interfaces, since not only hard drives but also other drives, such as optical drives, use them) are designed to exchange information between these external storage devices and the motherboard. No less than the physical parameters of the drives, interfaces affect many of a drive's operating characteristics: the data transfer rate between drive and motherboard, the number of devices that can be connected to the computer, the ability to create disk arrays, hot-plug capability, support for NCQ and AHCI technologies, and so on. The interface also determines which cable, cord or adapter is needed to connect the drive to the motherboard.

SCSI - Small Computer System Interface

The SCSI interface is one of the oldest interfaces developed for connecting drives in personal computers. This standard appeared in the early 1980s. One of its developers was Alan Shugart, also known as the inventor of floppy disk drives.

The appearance of the SCSI interface on the board and the cable connecting to it

The SCSI standard (the abbreviation is traditionally pronounced "scuzzy") was originally intended for use in personal computers, as even the format's name indicates: Small Computer System Interface, a system interface for small computers. It so happened, however, that drives of this type were used mainly in top-class personal computers and later in servers, because, despite its successful architecture and wide command set, the technical implementation of the interface was rather complicated and too expensive for mass-market PCs.

However, this standard had a number of features unavailable to other interface types. For example, a Small Computer System Interface cable can be up to 12 m long, and the data transfer rate reaches 640 MB/s.

Like the IDE interface that appeared a little later, the SCSI interface is parallel. This means that the interface uses buses that transmit information over several conductors. This feature was one of the limiting factors for the development of the standard, and therefore, a more advanced, serial SAS standard (from Serial Attached SCSI) was developed as its replacement.

SAS - Serial Attached SCSI

This is how the SAS interface of the server disk looks like

Serial Attached SCSI was developed as an improvement on the rather old Small Computers System Interface hard drive interface. Despite the fact that Serial Attached SCSI uses the main advantages of its predecessor, nevertheless, it has many advantages. Among them it is worth noting the following:

  • A point-to-point connection instead of a bus shared by all devices.
  • The serial communication protocol used by SAS allows fewer signal lines to be used.
  • There is no need for bus termination.
  • Virtually unlimited number of connected devices.
  • Higher bandwidth (up to 12 Gbps). Future implementations of the SAS protocol are expected to support data rates up to 24 Gbps.
  • Ability to connect drives with Serial ATA interface to the SAS controller.

Typically, Serial Attached SCSI systems are built from several components. The main components include:

  • Target devices: the actual drives or disk arrays.
  • Initiators: chips designed to generate requests to target devices.
  • The data delivery system: cables connecting target devices and initiators.

Serial Attached SCSI connectors come in a variety of shapes and sizes, depending on the type (external or internal) and SAS versions. Below are the internal SFF-8482 connector and the external SFF-8644 connector designed for SAS-3:

Left - internal connector SAS SFF-8482; On the right is an external SAS SFF-8644 connector with a cable.

A few examples of the appearance of SAS cords and adapters: HD-Mini SAS cord and SAS-Serial ATA adapter cord.

Left - HD Mini SAS cord; Right - adapter cable from SAS to Serial ATA

Firewire - IEEE 1394

Today it is quite common to find hard drives with a Firewire interface. Although any type of peripheral can be connected to a computer via Firewire, so it cannot be called a specialized interface for hard drives alone, Firewire nevertheless has a number of features that make it very convenient for this purpose.

FireWire - IEEE 1394 - laptop view

The Firewire interface was developed in the mid-1990s. The beginning of the development was laid by the well-known company Apple, which needed its own, different from USB, bus for connecting peripheral equipment, primarily multimedia. The specification describing the operation of the Firewire bus is called IEEE 1394.

Firewire is one of the most commonly used high-speed serial front-end bus formats today. The main features of the standard include:

  • Ability to hot connect devices.
  • Open bus architecture.
  • Flexible topology for connecting devices.
  • Widely varying data transfer rate - from 100 to 3200 Mbps.
  • The ability to transfer data between devices without the participation of a computer.
  • The possibility of organizing local networks using the bus.
  • Bus power transmission.
  • A large number of connected devices (up to 63).

To connect hard drives (usually through external hard drive cases) via the Firewire bus, as a rule, a special SBP-2 standard is used, which uses the Small Computers System Interface protocol command set. It is possible to connect Firewire devices to a regular USB connector, but this requires a special adapter.

IDE - Integrated Drive Electronics

The abbreviation IDE is undoubtedly familiar to most personal computer users. The IDE hard drive interface standard was developed by a well-known hard drive manufacturer, Western Digital. The advantage of IDE over other interfaces that existed at that time, in particular, the Small Computers System Interface, as well as the ST-506 standard, was that there was no need to install a hard disk controller on the motherboard. The IDE standard meant installing the drive controller on the case of the drive itself, and only the host interface adapter for connecting IDE drives remained on the motherboard.

IDE interface on motherboard

This innovation has improved the performance of the IDE drive due to the fact that the distance between the controller and the drive itself has been reduced. In addition, the installation of an IDE controller inside the hard drive enclosure made it possible to somewhat simplify both motherboards and the production of hard drives themselves, since the technology gave manufacturers freedom in terms of optimal organization of the drive's operation logic.

The new technology was originally called Integrated Drive Electronics. Subsequently, a standard describing it, called ATA, was developed. This name comes from the last part of the name of the PC/AT computer family by adding the word Attachment.

A dedicated IDE cable is used to connect a hard drive or other device, such as an optical drive that supports Integrated Drive Electronics technology, to the motherboard. Since ATA refers to parallel interfaces (which is why it is also called Parallel ATA or PATA), that is, interfaces that provide simultaneous data transfer over several lines, its data cable has a large number of conductors (usually 40, and in recent versions of the protocol it was possible to use 80-core cable). A common data cable for this standard is flat and wide, but round cables are also found. The power cable for Parallel ATA drives has a 4-pin connector and is connected to the computer's power supply.

The following are examples of an IDE cable and a round PATA data cable:

The appearance of the interface cable: on the left - flat, on the right in a round sheath - PATA or IDE.

Due to the relative cheapness of Parallel ATA drives, the ease of implementing the interface on a motherboard, and the simplicity of installation and configuration for the user, Integrated Drive Electronics drives long dominated the low-end personal computer market, ousting drives with other interfaces.

However, the PATA standard also has a number of disadvantages. First of all, the length limit: a Parallel ATA data cable can be no more than 0.5 m long. In addition, the parallel organization of the interface restricts the maximum data transfer rate. Nor does PATA support many of the advanced features of other interface types, such as hot plugging.

SATA - Serial ATA

View of the SATA interface on the motherboard

The SATA (Serial ATA) interface, as the name suggests, is an evolution of ATA. The improvement consists, first of all, in converting the traditional parallel ATA (Parallel ATA) into a serial interface. However, the differences between Serial ATA and the traditional standard do not end there: along with the change from parallel to serial data transfer, the data and power connectors have changed as well.

Below is the SATA data cord:

Data cable for SATA interface

This made it possible to use a much longer cable and to increase the data transfer rate. The downside was that PATA devices, which were on the market in huge quantities before the advent of SATA, could no longer be plugged directly into the new connectors. Most new motherboards of the time, it is true, still carried the old connectors and supported legacy devices. The reverse operation, connecting a new type of drive to an old motherboard, usually causes far more trouble and typically requires a Serial ATA to PATA adapter. The power cable adapter usually has a relatively simple design.

Serial ATA to PATA power adapter:

Left: general view of the adapter cable; right: close-up of the PATA and Serial ATA connectors.

The situation is more complicated with an adapter for connecting a serial interface device to a parallel interface connector. This type of adapter is typically built as a small board around a bridge chip.

Appearance of a universal bidirectional adapter between SATA - IDE interfaces

At present, the Serial ATA interface has practically supplanted Parallel ATA, and PATA drives are now found mainly in fairly old computers. Another feature of the new standard that helped ensure its wide popularity is support for NCQ (Native Command Queuing).

Type of adapter from IDE to SATA

NCQ technology deserves a few more words. Its main advantage is that it brings to SATA ideas long since implemented in the SCSI protocol: in particular, NCQ lets a drive reorder the read/write commands queued to it so that they are serviced in the most efficient order. NCQ can therefore significantly improve drive performance, especially in hard drive arrays.

Type of adapter from SATA to IDE

To use NCQ, the technology must be supported both by the hard drive and by the motherboard's host adapter. Almost all adapters that support AHCI also support NCQ, as do some older proprietary adapters. Finally, NCQ requires support from the operating system.
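The reordering idea behind NCQ can be illustrated with a short sketch. This is a simplified elevator-style scan over queued logical block addresses, not the actual drive firmware; the function names and numbers are ours:

```python
def reorder_ncq(queue, head_pos):
    """Simplified illustration of NCQ-style reordering: service queued
    LBAs in one sweep from the current head position upward, then the
    remainder downward (elevator scan), instead of FIFO order."""
    ahead = sorted(lba for lba in queue if lba >= head_pos)
    behind = sorted((lba for lba in queue if lba < head_pos), reverse=True)
    return ahead + behind

def seek_distance(order, head_pos):
    """Total head travel (in LBAs) needed to service requests in order."""
    total, pos = 0, head_pos
    for lba in order:
        total += abs(lba - pos)
        pos = lba
    return total

# FIFO order vs. the reordered queue for the same five requests
queue = [7200, 100, 6500, 300, 9000]
fifo = seek_distance(queue, head_pos=5000)
ncq = seek_distance(reorder_ncq(queue, head_pos=5000), head_pos=5000)
print(fifo, ncq)  # prints 30600 12900: reordering cuts head travel by more than half
```

The deeper the queue, the more freedom the drive has to pick a good order, which is why NCQ pays off most under the multi-client loads typical of servers.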

eSATA - External SATA

Separately, it is worth mentioning the eSATA (External SATA) format, which seemed promising in its time but never saw wide adoption. As the name suggests, eSATA is a variant of Serial ATA intended exclusively for connecting external drives. It offers most of the features of internal Serial ATA, in particular the same signaling and command set and the same high speed.

eSATA connector on a laptop

However, eSATA also differs in some respects from the internal bus standard that spawned it. In particular, it supports a longer data cable (up to 2 m) and imposes stricter electrical signaling requirements, and eSATA connectors differ somewhat from standard Serial ATA connectors.

Compared with other external buses such as USB and FireWire, however, eSATA has one significant drawback: while those buses can power a device through the bus cable itself, an eSATA drive needs a separate power connection. For this reason, despite its relatively high data transfer rate, eSATA never became very popular as an interface for external drives.

Conclusion

Information stored on a hard disk is of no use to the user or to application programs until the computer's central processing unit can access it, and hard drive interfaces provide the means of communication between drives and the motherboard. Today there are many types of hard drive interfaces, each with its own advantages, disadvantages, and characteristic features. We hope the information in this article proves useful to the reader, because the choice of a modern hard drive is determined not only by internal characteristics such as capacity, cache size, access time, and rotation speed, but also by the interface for which it was designed.

RAID 6, 5, 1, and 0 array tests with Hitachi SAS-2 drives

The days when a decent professional 8-port RAID controller cost serious money are apparently gone. Today there are Serial Attached SCSI (SAS) solutions that are very attractive in price, functionality, and performance alike. This review covers one of them.

Controller LSI MegaRAID SAS 9260-8i

Earlier we wrote about the second-generation SAS interface with its 6 Gb/s transfer rate and about the very cheap 8-port LSI SAS 9211-8i HBA, designed for entry-level storage systems built on the simplest RAID arrays of SAS and SATA drives. The LSI MegaRAID SAS 9260-8i is a class higher: it carries a more powerful processor with hardware computation of RAID levels 5, 6, 50, and 60 (ROC technology, RAID On Chip), as well as a substantial 512 MB of onboard SDRAM for effective data caching. The controller likewise supports 6 Gb/s SAS and SATA interfaces, and the adapter itself plugs into a PCI Express x8 2.0 bus (5 GT/s per lane), which in theory is almost enough to satisfy eight high-speed SAS ports. All this at a retail price of around $500, only a couple of hundred dollars more than the budget LSI SAS 9211-8i. The manufacturer itself, by the way, places this model in the MegaRAID Value Line, its series of economical solutions.




LSI MegaRAID SAS 9260-8i 8-port SAS controller and its SAS2108 processor with DDR2 memory

The LSI SAS 9260-8i board is low-profile (MD2 form factor), carries two internal Mini-SAS 4X connectors (each allows up to 4 SAS drives to be connected directly, or more via port multipliers), is designed for the PCI Express x8 2.0 bus, and supports RAID levels 0, 1, 5, 6, 10, 50, and 60, dynamic SAS functionality, and more. The controller can be installed in 1U and 2U rack servers (mid- and high-end servers) as well as in ATX and Slim-ATX cases (workstations). RAID is handled in hardware by the built-in LSI SAS2108 processor (a PowerPC core at 800 MHz), paired with 512 MB of DDR2-800 memory with ECC support. LSI promises processor data rates of up to 2.8 GB/s for reading and up to 1.8 GB/s for writing. Among the adapter's rich functionality, worth noting are Online Capacity Expansion (OCE) and Online RAID Level Migration (RLM) (expanding a volume and changing the array type on the fly), SafeStore Encryption Services and Instant Secure Erase (on-disk data encryption and secure deletion), support for solid-state drives (SSD Guard technology), and so on. An optional battery backup module is available for this controller (with it installed, the maximum operating temperature must not exceed +44.5 degrees Celsius).

LSI SAS 9260-8i Controller Key Specifications

System interface: PCI Express x8 2.0 (5 GT/s), Bus Master DMA
Disk interface: SAS-2, 6 Gb/s (supports SSP, SMP, STP, and SATA protocols)
Number of SAS ports: 8 (2 × 4-lane Mini-SAS SFF8087), up to 128 drives via port multipliers
RAID support: levels 0, 1, 5, 6, 10, 50, 60
Processor: LSI SAS2108 ROC (PowerPC @ 800 MHz)
Built-in cache: 512 MB ECC DDR2-800
Power consumption: no more than 24 W (+3.3 V and +12 V supplied from the PCIe slot)
Operating/storage temperature range: 0…+60 °C / −45…+105 °C
Form factor, dimensions: MD2 low-profile, 168 × 64.4 mm
MTBF: > 2 million hours
Manufacturer's warranty: 3 years
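As a reference for the RAID levels listed above, the usable capacity each level leaves can be estimated with a small sketch. These are the standard textbook formulas (nested levels 50/60 are omitted); the function name and the structure are ours:

```python
def usable_disks(level, n):
    """Disks' worth of usable capacity for n disks at a given RAID level
    (standard formulas, minimum disk counts enforced)."""
    rules = {
        0: (1, lambda n: n),        # striping, no redundancy
        1: (2, lambda n: 1),        # mirror: one disk's worth of capacity
        5: (3, lambda n: n - 1),    # single parity
        6: (4, lambda n: n - 2),    # double parity
        10: (4, lambda n: n // 2),  # stripe of mirrors
    }
    min_n, f = rules[level]
    if n < min_n:
        raise ValueError(f"RAID {level} needs at least {min_n} disks")
    return f(n)

# 300 GB drives, as with the Hitachi Ultrastar 15K600 units tested below
for level, n in [(0, 5), (5, 5), (6, 5), (10, 4), (1, 2)]:
    print(f"RAID {level} x{n}: {usable_disks(level, n) * 300} GB usable")
```

The same counts reappear later in the sequential-speed results: roughly speaking, only the "usable" disks contribute to linear throughput.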

Typical applications for the LSI MegaRAID SAS 9260-8i include video stations of various kinds (video on demand, video surveillance, video creation and editing, medical imaging), high-performance computing, digital data archives, and servers of all sorts (file, web, mail, database). In short, the vast majority of tasks encountered in small and medium-sized businesses.

The white-and-orange box, with a frivolously smiling toothy lady's face on the cover (apparently the better to lure bearded system administrators and stern system builders), contains the controller board, brackets for mounting it in ATX, Slim-ATX, and similar cases, two breakout cables with a Mini-SAS connector on one end and four ordinary SATA connectors (no power) on the other (for connecting up to 8 drives to the controller), and a CD with PDF documentation and drivers for numerous versions of Windows, Linux (SuSE and RedHat), Solaris, and VMware.


LSI MegaRAID SAS 9260-8i boxed controller package (MegaRAID Advanced Services Hardware Key mini card is available upon separate request)

With a special hardware key (sold separately), the LSI MegaRAID Advanced Services software technologies become available on the LSI MegaRAID SAS 9260-8i: MegaRAID Recovery, MegaRAID CacheCade, MegaRAID FastPath, and LSI SafeStore Encryption Services (a discussion of these is beyond the scope of this article). Of particular note is MegaRAID CacheCade, which improves the performance of an array of conventional hard drives (HDDs) by using a solid-state drive (SSD) added to the system as a second-level cache for the HDD array (an analogue of hybrid drives), in some cases boosting disk subsystem performance by up to 50 times. Also of interest is MegaRAID FastPath, which reduces the I/O processing latency of the SAS2108 processor (by disabling HDD-oriented optimizations) and thereby speeds up an array of several solid-state drives connected directly to the SAS 9260-8i ports.

It is more convenient to configure and maintain the controller and its arrays from the vendor's management application running under the operating system (the settings in the controller's own BIOS Setup menu are rather sparse, offering only the basic functions). In the manager, a few mouse clicks suffice to create any array and set its operating policies (caching and so on); see the screenshots.




Example screenshots of the Windows manager for configuring RAID levels 5 (top) and 1 (bottom).

Testing

To explore the base performance of the LSI MegaRAID SAS 9260-8i (without the MegaRAID Advanced Services hardware key and its related technologies), we used five high-performance SAS drives with a 15,000 rpm spindle speed and SAS-2 (6 Gb/s) support: 300 GB Hitachi Ultrastar 15K600 HUS156030VLS600 units.


Hitachi Ultrastar 15K600 hard drive without top cover

This lets us test all the basic array levels (RAID 6, 5, 10, 0, and 1), not only with the minimum number of disks for each but also "with room to grow", that is, with a disk added on the second of the ROC chip's two 4-lane SAS ports. Note that the hero of this article has a simpler sibling, the 4-port LSI MegaRAID SAS 9260-4i, built on the same components, so our 4-disk array tests apply equally to it.

The maximum payload sequential read/write speed for the Hitachi HUS156030VLS600 is about 200 MB/s (see chart). Average random access time when reading (according to specifications) - 5.4 ms. Built-in buffer - 64 MB.


Hitachi Ultrastar 15K600 HUS156030VLS600 sequential read/write speed graph

The test system was based on an Intel Xeon 3120 processor, a motherboard with the Intel P45 chipset, and 2 GB of DDR2-800 memory. The SAS controller was installed in a PCI Express x16 v2.0 slot. Testing was carried out under Windows XP SP3 Professional and Windows 7 Ultimate SP1 x86 (clean English-language installations), since their server counterparts (Windows 2003 and 2008, respectively) do not allow some of the benchmarks and scripts we use to run. The tests employed were AIDA64, ATTO Disk Benchmark 2.46, Intel IOmeter 2006, Intel NAS Performance Toolkit 1.7.1, C'T H2BenchW 4.13/4.16, HD Tach RW 3.0.4.0, and Futuremark's PCMark Vantage and PCMark05. The tests were run both on unallocated volumes (IOmeter, H2BenchW, AIDA64) and on formatted partitions. In the latter case (NASPT and PCMark), results were taken both at the physical beginning of the array and at its middle (array volumes of the maximum available capacity were divided into two equal logical partitions). This lets us evaluate the solutions more fairly, since the fastest initial sections of a volume, where most reviewers run their file benchmarks, often do not reflect the situation on the rest of the disk, which can also see heavy use in real work.

All tests were performed five times and the results were averaged. We'll take a closer look at our updated methodology for evaluating professional disk solutions in a separate article.

It remains to add that we used controller firmware version 12.12.0-0036 and driver version 4.32.0.32. Write and read caching was enabled for all arrays and drives. Perhaps the use of newer firmware and drivers spared us the oddities seen in early tests of this same controller elsewhere; in any case, we observed no such incidents. We also do not include the FC-Test 1.0 script in our suite, since we consider its results unreliable: we have repeatedly seen it fail on some file patterns, in particular on sets of many small files under 100 KB.

The charts below show the results for 8 array configurations:

  1. RAID 0 of 5 disks;
  2. RAID 0 of 4 disks;
  3. RAID 5 of 5 disks;
  4. RAID 5 of 4 disks;
  5. RAID 6 of 5 disks;
  6. RAID 6 of 4 disks;
  7. RAID 1 of 4 disks;
  8. RAID 1 of 2 disks.

A four-disk RAID 1 array (see the screenshot above) in LSI's terminology evidently means a stripe of mirrors, usually referred to as RAID 10 (the test results confirm this).

Test results

So as not to overload the review with countless charts, some of them uninformative and tiresome (a sin of certain excitable colleagues :)), we have collected the detailed results of some tests in a table. Those who wish to dig into the fine points (for example, to examine how the test subjects behave in the tasks most critical to them) can do so on their own. We will focus on the most important, key results and on the averages.

First, let's look at the results of "purely physical" tests.

The average random-access read time on a single Hitachi Ultrastar 15K600 HUS156030VLS600 is 5.5 ms. Organized into arrays, this figure changes slightly: it decreases (thanks to effective caching in the LSI SAS9260 controller) for the "mirror" arrays and increases for all the others. The biggest increase (about 6%) is seen on the level 6 arrays, since there the controller must access the largest number of disks at once (three for RAID 6, two for RAID 5, and one for RAID 0, because accesses in this test are in 512-byte blocks, far smaller than the array stripe size).

Random access during writes (512-byte blocks) is much more interesting. On a single disk this parameter is about 2.9 ms (without host-controller caching), but in arrays on the LSI SAS9260 we see it drop significantly thanks to good write caching in the controller's 512 MB SDRAM buffer. Interestingly, the most dramatic effect occurs on RAID 0 (random write access time falls by almost an order of magnitude relative to a single drive!), which should certainly benefit such arrays in a number of server tasks. At the same time, even on the arrays with XOR computation (that is, with a high load on the SAS2108 processor), random writes cause no obvious performance drop, again thanks to the powerful controller cache. Naturally, RAID 6 is slightly slower here than RAID 5, but the difference between them is essentially negligible. The one surprise was the behavior of the plain two-disk "mirror", which showed the slowest random write access of all (perhaps a quirk of this controller's microcode).
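The XOR load that the cache is hiding here can be counted explicitly. A sketch of the classic small-write penalty, using the standard read-modify-write accounting rather than anything measured on this controller:

```python
def small_write_ios(level):
    """Physical disk I/Os needed for one small random write that misses
    the cache (classic read-modify-write accounting)."""
    return {
        0: 1,    # write the data block
        1: 2,    # write both mirror copies
        10: 2,   # write both copies in one mirror pair
        5: 4,    # read old data + old parity, write new data + new parity
        6: 6,    # as RAID 5, but with two parity blocks to read and write
    }[level]

for level in (0, 1, 10, 5, 6):
    print(f"RAID {level}: {small_write_ios(level)} disk I/Os per small write")
```

A write-back cache like the controller's 512 MB buffer absorbs and coalesces these operations, which is why the parity arrays show no obvious random-write penalty in this test.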

The linear (sequential) read and write speed graphs (in large blocks) hold no surprises for any of the arrays: read and write are nearly identical (with controller write caching enabled), and they all scale with the number of disks usefully working in parallel. That is, for the five-disk RAID 0 the speed is five times that of a single disk (reaching 1 GB/s!), for the five-disk RAID 5 it is four times, for RAID 6 three times, for a four-disk RAID 1 twice, and the plain mirror duplicates the graph of a single disk. This pattern is clearly visible, in particular, in the maximum read and write speeds on real large (256 MB) files in large blocks (256 KB to 2 MB), which we illustrate with a chart from the ATTO Disk Benchmark 2.46 test (the Windows 7 and XP results of this test are nearly identical).
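The scaling rule just described can be written down in a few lines. An idealized model, using the ~200 MB/s single-drive figure from the graph above; the function name is ours:

```python
def seq_speed(level, n, single_mb_s=200):
    """Idealized large-block sequential throughput for n disks:
    RAID 0 uses all n disks, RAID 5 loses one to parity, RAID 6 two,
    RAID 10 streams from n/2 mirror pairs, and a plain mirror (RAID 1)
    reads like one disk. 200 MB/s is the Hitachi 15K600 figure."""
    data_disks = {0: n, 5: n - 1, 6: n - 2, 10: n // 2, 1: 1}[level]
    return data_disks * single_mb_s

print(seq_speed(0, 5))    # prints 1000: the "1 GB/s" noted above
print(seq_speed(6, 5))    # prints 600: matches the 64 KB-block RAID 6 figure
print(seq_speed(10, 4))   # prints 400: the four-disk "RAID 1" doubling
```

Real arrays fall somewhat short of this model at the outer tracks and beyond, but as the ATTO results show, the proportions hold up well.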

Here only the reading of files on the 5-disk RAID 6 array unexpectedly fell out of the overall picture (the results were rechecked repeatedly); at a 64 KB read block, however, this array does reach its expected 600 MB/s, so we chalk this up as a quirk of the current firmware. Note also that when writing real files the speed is slightly higher thanks to caching in the large controller buffer, and the gap over reading widens as the array's actual linear speed falls.

As for the interface speed, usually measured by writing and reading the buffer (repeated accesses to the same disk volume address), here we must note that it turned out nearly identical for almost all arrays, because the controller cache was enabled for them (see the table). Write performance for all participants in our test was about 2430 MB/s. Note that the PCI Express x8 2.0 bus has a raw rate of 40 Gb/s (5 GB/s), but after line coding the theoretical payload limit is lower, about 4 GB/s, which confirms that in our case the controller really was operating as PCIe 2.0. The 2.4 GB/s we measured is thus evidently the real bandwidth of the controller's onboard memory (DDR2-800 on a 32-bit data bus, as the ECC chip configuration on the board suggests, gives a theoretical 3.2 GB/s). When reading, caching is not as all-embracing as when writing, so the "interface" speed reported by utilities is, as a rule, lower than the speed of reading the controller's cache (typically 2.1 GB/s for the level 5 and 6 arrays), and in some cases it falls to the read speed of the hard drives' own buffers (about 400 MB/s for a single drive, see the graph above) multiplied by the number of drives working in parallel in the array (exactly the case for RAID 0 and 1 in our results).
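The bandwidth figures in that paragraph follow from simple arithmetic. A sketch, assuming 8b/10b line coding for PCIe 2.0 (the standard encoding for that generation):

```python
# PCI Express 2.0: 5 GT/s per lane; 8b/10b coding leaves 80% as payload
lanes = 8
pcie_raw_gbps = lanes * 5                            # 40 Gb/s raw on the wire
pcie_payload_mb_s = lanes * 5e9 * 8 / 10 / 8 / 1e6   # 4000 MB/s usable

# Controller cache: DDR2-800 (800 MT/s) on a 32-bit (4-byte) data bus
ddr2_mb_s = 800e6 * 4 / 1e6                          # 3200 MB/s theoretical

print(pcie_raw_gbps, pcie_payload_mb_s, ddr2_mb_s)   # prints 40 4000.0 3200.0
```

The measured 2430 MB/s sits below both limits, consistent with the cache memory, not the bus, being the ceiling.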

Well, we have sorted out the "physics" to a first approximation; time to move on to the "lyrics", that is, to tests of real applications. Along the way it will be interesting to learn whether array performance scales as linearly on complex user tasks as it does on reading and writing large files (see the ATTO chart just above). The inquisitive reader, I hope, has already guessed the answer.

As a "salad" before the "lyrical" part of our meal, we serve the desktop disk tests from the PCMark Vantage and PCMark05 packages (under Windows 7 and XP, respectively), plus a similar "track-based" application test from the H2BenchW 4.13 package of the authoritative German magazine C'T. Yes, these tests were originally designed to evaluate hard drives for desktops and inexpensive workstations: they emulate typical tasks of an advanced personal computer, such as working with video and audio, Photoshop, antivirus scanning, games, swap files, application installation, and file copying and writing. Their results therefore should not be taken as the ultimate truth in the context of this article, since multi-disk arrays more often run other workloads. Nevertheless, given that the manufacturer positions this RAID controller for relatively inexpensive solutions as well, this class of test tasks characterizes a fair share of the applications that will actually run on such arrays (the same video work, professional graphics processing, OS and application swapping, file copying, antivirus, and so on). So the importance of these three comprehensive benchmarks in our overall suite should not be underestimated.

In the popular PCMark Vantage, on average (see the chart), we observe a very remarkable fact: the performance of this multi-disk solution hardly depends on the type of array used! Within certain limits, this conclusion also holds for all the individual test tracks (task types) in the PCMark Vantage and PCMark05 packages (see the table for details). This may mean either that the controller's firmware algorithms (for cache and disks) barely account for the specifics of this class of application, or that most of these tasks execute inside the controller's own cache memory, and most likely we are seeing a combination of the two. For the latter case, though, the average performance is not all that high: compare these figures with the results of some "desktop" (chipset-based) 4-disk RAID 0 and 5 arrays and of inexpensive single SSDs on a 3 Gb/s SATA bus (see our earlier review). Against a simple chipset 4-disk RAID 0 (on hard drives half as fast as the Hitachi Ultrastar 15K600 used here), the LSI SAS9260 arrays are less than twice as fast in the PCMark tests, and to even a far-from-fastest budget single SSD they all lose outright! The PCMark05 disk test paints a similar picture (see the table; a separate chart for it would add nothing).

A similar picture (with some reservations) for the LSI SAS9260 arrays emerges in another track-based application benchmark, C'T H2BenchW 4.13. Here only the two structurally slowest arrays (the 4-disk RAID 6 and the plain "mirror") lag noticeably behind the rest, whose performance evidently reaches the "sufficient" level at which these complex access sequences no longer bottleneck on the disk subsystem but on the efficiency of the SAS2108 processor and controller cache. In this context it is pleasing that on tasks of this class the performance of LSI SAS9260 arrays hardly depends on the array type (RAID 0, 5, 6, or 10), allowing more reliable configurations without sacrificing final performance.

However, not everything is rosy: change the tests and check how the arrays handle real files on an NTFS file system, and the picture shifts dramatically. In the Intel NASPT 1.7 test, many of whose built-in scenarios relate quite directly to tasks typical of computers equipped with an LSI MegaRAID SAS9260-8i controller, the arrays line up much as they did in the ATTO large-file read/write test: speed grows in proportion to the array's "linear" speed.

The chart shows the average over all NASPT tests and patterns; the table gives the detailed results. Let me emphasize that we ran NASPT both under Windows XP (as most reviewers usually do) and under Windows 7 (which, owing to certain peculiarities of this test, is done less often). The point is that "Seven" (and its big brother, Windows 2008 Server) uses more aggressive file-caching algorithms of its own than XP. In addition, large files in Windows 7 are copied mainly in 1 MB blocks (XP, as a rule, operates in 64 KB blocks). As a result, the Intel NASPT results differ significantly between Windows XP and Windows 7; in the latter they are much higher, sometimes more than twice as high! Incidentally, we compared NASPT results (and those of the other tests in our suite) under Windows 7 with 1 GB and with 2 GB of system memory installed (there are reports that with more system memory Windows 7 caches disk operations more heavily and NASPT results climb even higher), but within the measurement error we found no difference.

We leave the debate about which OS (in terms of caching policies and so on) is "better" for testing disks and RAID controllers to this article's discussion thread. We believe drives and the solutions built on them should be tested under conditions as close as possible to those of their real operation, which is why we consider the results for both operating systems equally valuable.

But back to the NASPT average performance chart. As you can see, the gap between the fastest and slowest arrays tested here averages a little under threefold. That is not the fivefold gap seen with large-file reads and writes, but it is still considerable. The arrays rank essentially in proportion to their linear speed, which is encouraging: it means the LSI SAS2108 processor handles the data quickly enough, creating almost no bottleneck even when level 5 and 6 arrays are working hard.

In fairness, it should be noted that NASPT also has patterns (2 of the 12) that show the same picture as PCMark and H2BenchW, namely nearly identical performance across all the tested arrays: Office Productivity and Dir Copy to NAS (see the table). This is especially evident under Windows 7, although the trend toward "convergence" (relative to the other patterns) is visible under Windows XP as well. Conversely, PCMark and H2BenchW have patterns where array performance does grow in proportion to linear speed. So things are not as simple and clear-cut as some might like.

At first I wanted to discuss the chart of overall array performance averaged over all the application tests (PCMark + H2BenchW + NASPT + ATTO), shown here:

There is little to discuss here, however: we see that the behavior of arrays on the LSI SAS9260 controller in tests emulating particular applications can vary dramatically with the scenario. So conclusions about the benefit of a given configuration are best drawn from the tasks you actually intend to run. One more professional tool can help greatly here: synthetic IOmeter patterns that emulate one load or another on the storage system.

Tests in IOmeter

We will skip the discussion of the numerous patterns that painstakingly measure speed as a function of access block size, write percentage, random-access percentage, and so on. That is pure synthetics, yielding little of practical value and of mostly theoretical interest, and the main practical points of the "physics" were settled above. More important are the patterns that emulate real work: servers of various types and file operations.

To emulate servers such as a File Server, Web Server, and Database server, we used the well-known patterns of the same names proposed in their day by Intel and StorageReview.com. In all cases we tested the arrays at command queue depths (QD) from 1 to 256, doubling at each step.

In the Database pattern, which makes random disk accesses in 8 KB blocks across the whole array, arrays without parity (RAID 0 and 1) show a significant advantage at a command queue depth of 4 or more, while all the parity arrays (RAID 5 and 6) demonstrate very similar performance (despite a twofold difference between them in linear access speed). The explanation is simple: all the parity arrays posted similar average random access times (see the chart above), and that parameter dominates this test. It is interesting that performance on all arrays grows almost linearly with queue depth up to 128, and only at QD=256 do some cases hint at saturation. The maximum performance of the parity arrays at QD=256 was about 1100 IOps (operations per second); that is, the LSI SAS2108 spends under 1 ms processing each 8 KB chunk (on the order of 10 million single-byte XOR operations per second for RAID 6, with other I/O and cache work running in parallel, of course).
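The per-operation numbers quoted for the Database pattern can be sanity-checked in a few lines. This is rough back-of-the-envelope accounting, not a measurement:

```python
iops = 1100              # parity arrays at QD=256 in the Database pattern
block = 8 * 1024         # 8 KB access size, in bytes

ms_per_op = 1000 / iops              # time budget per operation, in ms
payload_bytes_per_s = iops * block   # payload bytes processed per second

print(round(ms_per_op, 2))    # prints 0.91: the "less than 1 ms" per 8 KB chunk
print(payload_bytes_per_s)    # prints 9011200: on the order of the ~10 million
                              # byte-wise XORs per second cited for RAID 6
```

The XOR figure is of course approximate; the exact count depends on how the firmware organizes the parity computation across the two RAID 6 parity blocks.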

In the File Server pattern, which uses blocks of various sizes for random reads and writes across the entire array, we see a picture similar to Database, except that here the five-disk parity arrays (RAID 5 and 6) noticeably outperform their 4-disk counterparts while performing almost identically to each other (about 1200 IOps at QD=256)! Apparently, adding a fifth drive on the second of the controller's two 4-lane SAS ports somehow evens out the computational load on the processor (through the I/O paths?). It might be worth comparing 4-disk arrays with the drives split in pairs across the controller's two Mini-SAS connectors to find the optimal configuration for arrays on the LSI SAS9260, but that is a task for another article.

In the Web Server pattern, where by its creators' design there are no write operations at all (and hence no XOR computation for writes), things get even more interesting. All three five-disk arrays in our set (RAID 0, 5, and 6) show identical performance here, despite their marked differences in linear read speed and parity overhead, and the same three arrays in 4-disk form are likewise identical to one another! Only RAID 1 (and 10) breaks the pattern. Why is hard to say. Perhaps the controller has very efficient "best disk" selection algorithms (picking whichever of the five or four drives returns the required data first), which in RAID 5 and 6 raises the odds of data arriving from the platters early, letting the processor prepare the necessary computations in advance (recall the deep command queue and the large DDR2-800 buffer). That could ultimately offset the XOR latency and level these arrays with "plain" RAID 0. In any case, the LSI SAS9260 deserves praise for its extremely high Web Server results on parity arrays (about 1700 IOps for the 5-disk arrays at QD=256). Unfortunately, the fly in the ointment is the two-disk "mirror's" very poor performance in all these server patterns.

The Web Server pattern is echoed by our own pattern, which emulates random reading of small (64 KB) files within the entire array space.

Again the results fall into groups: all 5-disk arrays are identical to one another in speed and lead our "race", the 4-disk RAID 0, 5 and 6 are likewise indistinguishable in performance, and only the mirrors stand apart from the crowd (incidentally, the 4-disk mirror, that is, RAID 10, is faster than all the other 4-disk arrays, apparently thanks to the same "best drive" selection algorithm). We emphasize that these regularities hold only at large command queue depths; at a small queue (QD=1-2) the situation and the leaders can be entirely different.

Everything changes when servers work with large files. With today's "heavier" content and newer "optimized" operating systems such as Windows 7, Windows Server 2008 and the like, work with megabyte-sized files and 1 MB data blocks is becoming increasingly important. To assess the server potential of the LSI SAS9260 controller more fully in this situation, we use our new pattern, which emulates random reading of 1 MB files across the entire disk (details of the new patterns will be given in a separate article on methodology).
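The access pattern itself is easy to picture: block-aligned reads at random positions over the whole array. A small sketch of such an IOmeter-style offset generator (the sizes and names are illustrative, not part of the actual test tool):

```python
import random

def random_read_offsets(array_bytes, block_bytes=1 << 20, count=8, seed=42):
    """Generate block-aligned random offsets spanning the whole array,
    the way a random-read pattern with 1 MB blocks would address it."""
    rng = random.Random(seed)
    blocks = array_bytes // block_bytes
    return [rng.randrange(blocks) * block_bytes for _ in range(count)]

# Example: 1 MB reads spread over a (nominal) 1 TB array.
offsets = random_read_offsets(array_bytes=10**12)
assert all(off % (1 << 20) == 0 for off in offsets)
assert all(0 <= off < 10**12 for off in offsets)
```

Because every request lands at an unpredictable position, the drives spend much of their time seeking, which is why the absolute speeds below are far from the linear-read figures.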

As you can see, the 4-disk mirror (RAID 10) leaves the others no hope of taking the lead here, clearly dominating at every command queue depth. Its performance first grows linearly with queue depth, then saturates at QD=16 (at about 200 MB/s). The arrays that are slower in this test reach saturation a little "later" (at QD=32); among them, "silver" and "bronze" go to RAID 0, while the parity arrays end up as outsiders, losing even to the two-disk RAID 1, which does unexpectedly well. This leads us to conclude that even on reads, the XOR load on the LSI SAS2108 processor is very burdensome when working with large, randomly placed files and blocks, and for RAID 6, where it effectively doubles, it is at times simply exorbitant: performance barely exceeds 100 MB/s, that is, 6-8 times lower than in linear reading! The "redundant" RAID 10 is clearly the more profitable choice here.

With random writes of small files, the picture again differs strikingly from what we saw earlier.

Here the performance of the arrays is almost independent of command queue depth (evidently the huge cache of the LSI SAS9260 controller and the sizeable caches of the drives themselves are at work), but it changes dramatically with array type! The undisputed leaders are the RAID 0 arrays, which place no XOR burden on the processor, and "bronze", more than twice behind the leader, goes to RAID 10. All the parity arrays form a very tight group together with the two-disk mirror, losing threefold to the leaders. Yes, this is unquestionably a heavy load on the controller's processor, but frankly I did not expect such a "failure" from the SAS2108. At times even a software RAID 5 on a "chipset" SATA controller (with Windows caching and the PC's central processor doing the calculations) can be faster... Still, the controller steadily delivers "its" 440-500 IOps; compare this with the chart of average write access time at the beginning of the results section.
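The structural reason for this "failure" is the well-known small-write penalty: a small random write on RAID 5 is a read-modify-write cycle costing four disk I/Os (read old data, read old parity, write new data, write new parity), and on RAID 6 roughly six. A back-of-envelope estimate, with the per-drive IOps figure assumed for illustration and all caching ignored:

```python
def raid_random_write_iops(disks, disk_iops, penalty):
    """Rough random-write IOps for an array, ignoring controller and
    drive caches. penalty = disk I/Os per host write:
    RAID 0 -> 1, RAID 1/10 -> 2, RAID 5 -> 4, RAID 6 -> 6."""
    return disks * disk_iops / penalty

raw = 150  # assumed small-block random IOps for a single SAS drive

raid0 = raid_random_write_iops(4, raw, 1)   # 600.0
raid10 = raid_random_write_iops(4, raw, 2)  # 300.0
raid5 = raid_random_write_iops(4, raw, 4)   # 150.0
raid6 = raid_random_write_iops(5, raw, 6)   # 125.0
```

Even with five disks, a parity array cannot escape dividing its raw IOps by the write penalty, which matches the threefold-plus gap between RAID 0 and the parity group seen on the chart.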

Switching to random writes of large 1 MB files raises the absolute speeds (for RAID 0, almost to the values seen in random reading of such files, that is, 180-190 MB/s), but the overall picture remains much the same: the parity arrays are many times slower than RAID 0.

The behavior of RAID 10 is curious: its performance drops as the command queue deepens, though only slightly. No other array shows this effect. The two-disk mirror again looks modest here.

Now let us look at the patterns in which reads and writes occur in equal proportion. Such loads are typical, in particular, of some video servers, of active copying, duplication or backup of files within the same array, and of defragmentation.

First come 64 KB files, accessed randomly across the entire array.

Here a certain similarity to the Database pattern results is obvious, although the absolute array speeds are three times higher, and some performance saturation is already noticeable even at QD=256. The higher share of write operations (compared to the Database pattern) makes the parity arrays and the two-disk mirror clear outsiders, significantly slower than the RAID 0 and 10 arrays.

When switching to 1 MB files, this pattern is generally preserved, although the absolute speeds approximately triple, and RAID 10 becomes as fast as a 4-disk stripe, which is good news.

The last pattern in this article will be the case of sequential (as opposed to random) reading and writing large files.

Here many arrays manage to accelerate to quite respectable speeds in the region of 300 MB/s. And although the gap between the leader (RAID 0) and the outsider (the two-disk RAID 1) remains more than twofold (note that in pure linear reading or writing it is fivefold!), the RAID 5 that makes the top three, and the other XOR arrays that have closed the gap, cannot fail to be encouraging. After all, judging by the list of applications LSI itself gives for this controller (see the beginning of the article), many target tasks will use exactly this pattern of array access. That is definitely worth keeping in mind.

In conclusion, I give a summary diagram in which the indicators of all the IOmeter test patterns mentioned above are averaged (geometrically across all patterns and command queues, without weighting coefficients). Curiously, if the results within each pattern are instead averaged arithmetically with weights 0.8, 0.6, 0.4 and 0.2 for queue depths 32, 64, 128 and 256 respectively (which conventionally reflects the falling share of deep command queues in real drive workloads), then the final normalized performance index across all patterns coincides with the geometric mean to within 1%.
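The two averaging schemes just described can be sketched side by side; the IOps figures below are hypothetical placeholders, not measured results:

```python
import math

def geometric_mean(values):
    """Plain geometric mean over all values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def weighted_arith_mean(values, weights):
    """Arithmetic mean with per-queue-depth weights."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical IOps of one array at QD = 32/64/128/256 in two patterns.
pattern_results = {
    "Database":  [900, 1000, 1100, 1150],
    "WebServer": [1500, 1650, 1700, 1700],
}
weights = [0.8, 0.6, 0.4, 0.2]

# Scheme 1: geometric mean over every (pattern, QD) point.
plain = geometric_mean([v for vs in pattern_results.values() for v in vs])

# Scheme 2: weighted arithmetic mean within each pattern,
# then geometric mean across patterns.
weighted = geometric_mean(
    [weighted_arith_mean(vs, weights) for vs in pattern_results.values()]
)
print(round(plain), round(weighted))
```

On the article's data set these two indices agree to within 1%; whether they do on other data depends, of course, on how the results are distributed across queue depths.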

So the overall average across our IOmeter patterns shows there is no escaping the underlying physics and mathematics: RAID 0 and 10 are the clear leaders. No miracle happened for the parity arrays: although the LSI SAS2108 processor shows decent performance in some cases, on the whole it cannot pull these arrays up to the level of a simple stripe. It is interesting, though, that the 5-disk configurations clearly gain over the 4-disk ones. In particular, the 5-disk RAID 6 is unequivocally faster than the 4-disk RAID 5, even though in terms of raw "physics" (random access time and linear transfer speed) they are virtually identical. The two-disk mirror also disappointed: on average it is equivalent to a 4-disk RAID 6, although a mirror requires no XOR calculations at all. Then again, a simple mirror is obviously not the target array for a fairly powerful 8-port SAS controller with a large cache and a powerful onboard processor. :)

Price Information

The 8-port LSI MegaRAID SAS 9260-8i controller is offered as a full retail kit at a price of around $500, which can be considered quite attractive; its simpler 4-port counterpart is even cheaper. The current average retail prices of the devices in Moscow, as of the time you read this article:

LSI SAS 9260-8i: $571
LSI SAS 9260-4i: $386

Conclusion

Summing up, we will not risk giving one-size-fits-all recommendations on the 8-port LSI MegaRAID SAS9260-8i controller. Everyone should decide whether to use it, and which arrays to build with it, strictly on the basis of the class of tasks to be run. In some cases (on some tasks) this inexpensive "megamonster" can show outstanding performance even on arrays with double parity (RAID 6 and 60), while in other situations the speed of its RAID 5 and 6 clearly leaves much to be desired. The almost universal salvation is a RAID 10 array, which can be organized with nearly the same success on cheaper controllers; however, it is often thanks to the processor and cache of the SAS9260-8i that RAID 10 here runs no slower than a stripe of the same number of disks while providing high reliability. What you should definitely avoid on the SAS9260-8i is the two-disk mirror and 4-disk RAID 5 and 6: these are clearly suboptimal configurations for this controller.

Thanks to Hitachi Global Storage Technologies
for hard drives provided for testing.
