Setting up RAID 0+1 (great tutorial)

piyushp_20:
You're sold. You want a RAID (redundant array of inexpensive disks/drives). Maybe you've witnessed the phenomenal speed of a RAID 0, or perhaps you want the safety net of immediate data backup that a RAID 1 offers. In any case, you're ready to take the plunge. (See other "How To Install" articles for more details on RAIDs 0 and 1.)

But hang on. What about the risks of a naked RAID 0? And isn't a RAID 1 no faster than a single hard drive? Moreover, you'd rather spend your money on bigger hard drives than on an add-in RAID controller card, so you want a RAID with support built into a motherboard. The answer might be RAID 0+1.

Goal

A RAID 0+1 starts with a pair of striped hard drives, meaning that the PC writes part of each file to both drives for nearly double the read/write speed. But that's a little risky because if either drive fails, you might not be able to access any of the data. So the RAID mirrors the data, or copies it, to another RAID 0 drive pair so that there will be an automatic backup at all times. In effect, RAID 0+1 is a RAID 1 made up of two RAID 0 arrays. It combines the speed of RAID 0 with the data redundancy of RAID 1.
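To make that layout concrete, here's a minimal Python sketch of the idea. It's a simplification of our own, not anything from the Silicon Image controller's actual firmware: it shows which physical drives hold each logical block, and why half the raw capacity goes to the mirror.

```python
# Illustrative model of a four-drive RAID 0+1; the layout is our own
# simplification, not the Silicon Image controller's actual scheme.
def raid01_targets(logical_block: int) -> list[tuple[int, int]]:
    """Return (drive, block) pairs holding one logical block of data."""
    stripe_drive = logical_block % 2   # RAID 0: even blocks -> drive 0, odd -> drive 1
    position = logical_block // 2      # where the block lands within that drive
    # RAID 1: drives 2 and 3 keep an identical copy of drives 0 and 1.
    return [(stripe_drive, position), (stripe_drive + 2, position)]

def usable_gb(drive_gb: float, drives: int = 4) -> float:
    """Mirroring spends half the raw space on the backup copy."""
    return drive_gb * drives / 2

print(raid01_targets(0))  # [(0, 0), (2, 0)] -- block 0 lives on drives 0 and 2
print(raid01_targets(1))  # [(1, 0), (3, 0)] -- block 1 lives on drives 1 and 3
print(usable_gb(160))     # 320.0 -- four 160GB drives yield 320GB usable
```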



You need at least four hard drives to make a RAID 0+1, which is why few people do it. (NOTE: Intel's Matrix Storage Technology, built into its 915G, 915P Express, and 925X Express chipsets, provides RAID 0+1's primary benefits in a different manner. Using only two drives, MST divides part of the array into a RAID 0 for speedy app loading and the remaining part into a RAID 1 for personal data safety.) RAID 0+1 is out of most enthusiasts' price range, yet it's not efficient, scalable, or fault-tolerant enough for critical business uses. Of those four drives, RAID 0+1 loses half the storage space to backup data. It also necessitates a computer case that can hold four drives and keep them cool, not to mention a power supply with enough oomph to run them all.


Don't confuse RAID 0+1 with RAID 10, by the way. A RAID 10 uses the same four drives but nests the levels the other way around: it stripes data across two mirrored pairs (a stripe of mirrors) rather than mirroring two striped pairs. Capacity and speed are essentially the same, but a RAID 10 survives more combinations of drive failures, so it's generally preferred when a controller offers it. Either way, for most desktop users it makes more sense to use more of their drives' space in a different type of RAID and make regular backups to DVD or tape.


Here's how we almost succeeded in making a RAID 0+1 PC for less than $1,000. A few component substitutions forced by out-of-stock or defective parts put us over that mark, but we've included tips on bringing the price down.

Background

Four 160GB Maxtor DiamondMax Plus 9 SATA (Serial Advanced Technology Attachment) hard drives (model 6Y160M0) seemed perfect for this ambitious RAID 0+1 system. Fortified with 7,200rpm spindle speeds and ample 8MB caches, these zippy hard drives cost us $105 each.

The $420 hard drive bill forced us to look for less expensive hardware with great features. We bought an Athlon XP 2000+ (a 2500+ cost just $22 more). We also bought a motherboard that ostensibly supported RAID 0+1, although just what type of RAID we'd created remained a mystery until we installed Windows and a RAID monitoring utility. The DFI NFII Ultra Infinity motherboard's box and docs all claimed RAID 0+1 support from its Silicon Image 3114 controller chip, but the preboot SiI RAID utility said it offered RAID 10 instead.


The final necessity was an Antec True430W power supply, which offers enough amperage on its 12V rail (26A) to spin up the computer with all those hard drives. The remainder of our PC parts reflects the bare minimum hardware available at low prices.
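As a rough sanity check, here's a back-of-the-envelope 12V budget in Python. The per-device spin-up currents below are ballpark assumptions of ours, not manufacturer figures from the article; only the 26A and 10A rail ratings come from the text.

```python
# Hypothetical 12V-rail budget at power-on. Only the Antec's 26A rating
# and the stock supply's 10A rating come from the article; loads are assumed.
RAIL_AMPS = 26.0  # Antec True430's 12V rail

assumed_loads = {
    "four hard drives spinning up (~2A each, assumed)": 4 * 2.0,
    "CPU via the 12V connector (assumed)": 6.0,
    "optical drive, floppy, and fans (assumed)": 2.0,
}

total = sum(assumed_loads.values())
print(f"Estimated power-on draw: {total:.0f}A of {RAIL_AMPS:.0f}A")  # 16A of 26A
# The RaidMax case's stock supply offered only 10A on this rail -- not enough.
```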


Two changes could make this system ring up for less than $1,000, which was our original intention. One is to use 120GB drives, which would have saved us about $52 as of this writing. The other is to use a cheaper graphics card, such as the $43 Radeon 9200 SE we originally purchased but had to return. Together, these two modifications will net you a $982 RAID 0+1 PC, not including tax or shipping.

The Build

Our RaidMax case came with a 350W power supply with 10A on the 12V rail, which we knew wouldn't be strong enough to fire up all the hardware we'd bought. We unscrewed it and pulled it from the case. However, we didn't install the new Antec power supply just yet, as we'd learned from working with similar systems that we'd need the room when we installed the CPU's heatsink.

Next, we placed the case on its side. We installed standoffs in the case's motherboard panel, using our DFI NFII's silver-lined mounting holes as our guide. After fitting the included chromed I/O (input/output) shield, or port plate, to the hole in the back of the case, we screwed the mainboard into place.




*www.smartcomputing.com/images/smartcomputing/fullsize/00941149.jpg

[Image: RAID 0+1. LEFT: A RAID 0+1 stripes partial data to a pair of hard drives for speed, like a RAID 0. It also mirrors that same data to another striped RAID 0 for redundancy, like a RAID 1. Capacity: half of A + B + C + D. RIGHT: A PC moves data to/from one unRAIDed hard drive at a time. Data isn't backed up, but one drive's failure won't affect the others and no capacity is lost. Capacity: A + B + C + D.]

We lifted the CPU socket lever to 90 degrees, dropped the Athlon XP 2000+ into place (it will only fit facing one direction), and snapped the lever back down parallel to the socket. Next, we unwrapped the retail CPU's heatsink/fan combo, making sure not to smudge its preapplied thermal putty. The bottom of the heatsink had a recess meant to face the high part of the CPU socket, so like the processor, the heatsink could only face one way. Holding the sink at a slight angle, we snapped one end of its metal clip over the socket's tabs. Next, we set the sink on the Athlon chip and cautiously used a well-fitting, flat screwdriver to press the other end of the metal clip down over that side's tabs, locking the sink down. Finally, we connected the fan cable to the CPU FAN header on the motherboard.



Power to the PC. Now we could install the Antec power supply. This particular Antec came with its main harness neatly sleeved in black mesh, plus a large intake fan that fortuitously ended up right above our CPU. We snapped its 20-pin and 4-pin power leads into the motherboard, turning the connectors until they fit. The Antec also came with a couple of SATA power connectors. A few SATA hard drives require this type of power coupling; more often, they accept either SATA or typical 4-pin Molex connectors. If this describes your hard drive, use one or the other power lead type (never both).

Our matching DIMMs (dual in-line memory modules) of DDR400 (double-data-rate 400) SDRAM (synchronous dynamic random-access memory) from MemoryPRO went in next. The modules could only go in one way, so we pushed them into the DIMM slots with moderate finger pressure on their top edges. We made sure the white end retainers pivoted up to fit in each stick's notches.

Next, we installed our Radeon 9200 graphics card, screwing its bracket down and flipping up the retention lever on its AGP (Accelerated Graphics Port) slot. We attached our monitor's video cable to the rear of the Radeon card and then followed the DFI user manual to connect the front-panel LEDs (light-emitting diodes) and switches to the mainboard. These can be tricky, so use a flashlight, mirror, or magnifying glass if you need to.
Finally, we ran power cords to the PC and monitor and turned the system on. We knew it wouldn't get far without an OS or boot diskette, but at least we found out that the hardware worked. We pressed DEL to enter the BIOS (Basic Input/Output System) Setup program, which you should always check when you're using a new motherboard. We set the clock and date; rearranged the boot order to Floppy, CD-ROM, and then SCSI (BIOSes often treat SATA drives as SCSI [Small Computer System Interface] devices for configuration purposes); and set the Init Display First field to AGP instead of PCI (Peripheral Component Interconnect). We saved our changes, exited, and shut down the PC.



Drive time. We stood the computer on its feet again. Then, from top to bottom, we installed our drives in the front bays of the case. We popped out two plastic panels in the case's fascia for our CD-RW and floppy drives, then secured each drive in its bay with four screws. If you use fewer screws, especially on optical and hard drives, you run the risk of letting uneven vibrations affect the drives' error rates, transfer speeds, and possibly lifespans.

Moving down to our four hard drives, we handled them gently as we slipped them into the lower 3.5-inch bays and bolted them in place. We had already installed an 80mm case fan in front of the drive cage, first flattening a raised section in the middle of the panel with a bolt, nut, and some washers. With so many hard drives and a hot motherboard chipset like our NFII Ultra Infinity's Nforce2, we knew we had to add at least one more fan to supplement the RaidMax case's side fan (which we flipped to blow inward instead of outward).

Once our drives were snug, we connected data and power cables to them. The CD-RW used an ATA/133 cable included with the mainboard to attach to an IDE header on the board, as well as a smaller audio cable to reach the board's distant CD audio input header. The floppy drive took a smaller power connector from our Antec supply. Its data cable, also supplied by DFI, had a twist that went toward the drive and a red stripe that faced Pin 1 on the drive, which is marked with a triangle. Use the end connectors of a floppy drive cable; the middle one is for a second diskette drive (B:).​

Finally, we added either SATA or 4-pin Molex power leads to each hard drive (never both). Being careful not to break any fragile SATA ports, we then ran SATA data cables from the drives to the four ports on the DFI motherboard. We were now ready to create our RAID, install Windows, and test the monster we'd just built.​

RAID creation. We started the PC. At the prompt a few seconds later, we pressed CTRL-S to start Silicon Image's RAID utility. We chose Create RAID Set and then RAID 10. (This eventually proved to be a typo for RAID 0+1, although we didn't find that out until later.) Note that if your particular motherboard doesn't recognize your USB (Universal Serial Bus) keyboard or mouse in the utilities that load before Windows, such as this one, fit the device with a USB-PS/2 (Personal System/2) adapter and restart the system. You might also have to press NUM LOCK before you can type numerals on the right-hand keypad.​

At this point, instead of choosing Auto Configuration, we selected Manual Configuration to get a better understanding of the process. We decided on a 16KB chunk size. A chunk (also called a stripe unit) is the smallest block of data the controller writes to one drive before moving to the next; it works much like a file system's allocation unit, or cluster, which defines the smallest unit of storage the drive uses to store a file. A 16KB size would make the array speedier than would 4KB or 8KB chunks, yet less wasteful of storage space than would 64KB or 128KB units. As an example of that waste on the cluster side, if you saved a 5KB file on a volume with 128KB clusters, it would still take 128KB of space to save it, wasting 123KB.
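Here's that slack-space arithmetic as a quick sketch of our own; the 5KB file is the article's example, and the other sizes are the menu choices it mentions.

```python
import math

def slack_kb(file_kb: float, cluster_kb: int) -> float:
    """KB wasted when a file is stored in fixed-size clusters."""
    allocated = math.ceil(file_kb / cluster_kb) * cluster_kb
    return allocated - file_kb

for cluster in (4, 16, 64, 128):
    print(f"{cluster:>3}KB clusters waste {slack_kb(5, cluster):.0f}KB on a 5KB file")
# 4KB clusters waste 3KB; 16KB waste 11KB; 64KB waste 59KB; 128KB waste 123KB
```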




*www.smartcomputing.com/images/smartcomputing/fullsize/00935045.jpg

[Image: Here's how we flattened an area in front of our hard drive bays to accept an 80mm case fan. We also had good results prying on the bolt with the box end of the wrench.]


Next, we pressed ENTER to select the first, second, third, and fourth drives of the RAID set. We simply chose our drives in the order they were listed. None of our hard drives had Windows or any other data on them yet, so we selected Create Without Data Copy. Choose the Create With Data Copy option if you're converting an existing system to a RAID 0+1 and you don't want to reinstall your OS and apps. We continued and found ourselves back at the main menu, the proud owners of . . . an SiI RAID 0+1 array.

Still, with no way of knowing if we actually had a RAID 10 or 0+1 on our hands, we slipped the Windows XP CD into the drive and pressed CTRL-E to exit the RAID utility. The system rebooted into Windows Setup. A few seconds later, we had to press F6 to install a third-party SCSI or RAID driver (if you miss your chance, reboot and try again). When Setup asked us, we inserted DFI's SATA RAID driver diskette and directed Setup to the driver for WinXP.

Later, when Setup showed us the space available on our RAID, it was 312GB, or roughly half of the 640GB we would have had without RAIDing our four 160GB drives. This supported the RAID 0+1 theory, we thought. We set up two 10GB partitions for Windows and our apps and then a third with 292GB for our data. Next, we installed WinXP on the first partition, choosing the NTFS file system. We had to eject the RAID driver diskette as WinXP Setup restarted the computer, and we also had to tell Setup twice to use those drivers without Microsoft certification.
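The capacity arithmetic checks out, as this quick sketch of ours shows. The exact byte counts are assumptions: drive makers' "160GB" labels use decimal units, and real drives hold slightly more than the label.

```python
# Why four "160GB" drives yield roughly 312GB of usable RAID 0+1 space (our sketch).
NOMINAL_BYTES = 160 * 10**9          # decimal gigabytes, as drive makers count them

usable = NOMINAL_BYTES * 4 // 2      # mirroring halves the raw capacity
print(usable / 10**9)                # 320.0 decimal GB
print(round(usable / 2**30))         # 298 binary gigabytes (GiB)
# Setup's 312GB figure (and Disk Management's later 305GB) falls between these
# because the Maxtors actually hold a bit more than the nominal 160 x 10^9
# bytes and different tools report capacity in different units.
```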

The plot thickens. Once we made it through typing in the product key and selecting a few network and clock settings, we reached WinXP's Desktop. We immediately clicked Start, Control Panel, Switch To Classic View, Administrative Tools, Computer Management, and Disk Management to verify our RAID type and format our partitions. The RAID showed up as one drive, Disk 0, with about half of our drives' total capacity, 305GB. We right-clicked Disk 0 and selected Properties, which told us that Windows considered it a RAID 0+1. However, Disk Management didn't consider any of the drives fault tolerant, which was confusing. We right-clicked the E: and F: partitions in turn, formatted them with NTFS, and then exited Disk Management.

There was one more thing to check. We put the DFI motherboard's installation CD in the optical drive, clicked Tools when the GUI (graphical user interface) popped up, and chose SIL3114 RAID Utility. This installed Silicon Image's SATARaid utility, which, when launched, put a blue icon in the System Tray by the clock. We double-clicked this to open SATARaid's GUI and then clicked Set 0. SATARaid told us that we had built a RAID 0+1 after all. We clicked the Members tab, and it showed us that indeed we had a mirrored set of striped pairs. Case closed: RAID 0+1.

As always, refer to other "How To Install" RAID articles for further details on the components we used and how we put them together. Together these articles will give you a fairly thorough overview of multidrive PC building.


Testing

This PC's results don't compare directly to those of the RAID 0 and 1 PCs we built for other articles in our "How To Install" area, as those two systems had Athlon 64s and a different motherboard.

One surprise on this system, though, was how fast its IOmeter scores were compared to the other PCs'. Silicon Image's 3112 was the fastest SATA controller at its introduction, so perhaps the 3114 on our DFI board holds a similar speed advantage over the VIA 8237 southbridge on our RAID 0 and 1 systems, despite the VIA's faster non-PCI pipe to a speedier CPU.


Finally, we disconnected two drives to make a RAID 0 for comparison, as reported in our "Benchmark Scores" chart. RAID 0+1 was just a hair slower than the RAID 0 champ, yet vastly safer.​



Final Remarks


Despite using so many hard drives, RAID 0+1 is actually cheaper to set up than some more efficient types of RAID because, with the right motherboard, it doesn't need a special RAID controller card. With a good power supply and a roomy case, you can enjoy high speed without sacrificing data safety. That's having your cake and eating it, too.

 
piyushp_20 (OP):
Ya buddy, found this interesting article from the source you mentioned and wanted to share it with all the readers of this forum.
 