
1. Introduction

The reader is assumed to be familiar with the various types of RAID implementations, their advantages and drawbacks. This is not a tutorial, just a set of instructions on how to implement root-mounted RAID on a Linux system. All of the information necessary to become familiar with Linux RAID is included here directly or by reference; please read it before sending e-mail questions.

1.1 Where to get up-to-date copies of this document

The author's latest version of this document can be browsed at the locations listed below. Corrections and suggestions are welcome!

Root-RAID-HOWTO -- OBSOLETE

Available in LaTeX (for DVI and PostScript), plain text, and HTML at:

    http://www.linuxdoc.org/HOWTO/Root-RAID-HOWTO.html

Available in SGML and HTML at:

    ftp.bizsystems.net/pub/raid/

1.2 The more up-to-date Boot+Root+Raid+LILO HOWTO

Available in LaTeX (for DVI and PostScript), plain text, and HTML at:

    http://www.linuxdoc.org/HOWTO/Boot+Root+Raid+LILO.html

Available in SGML and HTML at:

    ftp.bizsystems.net/pub/raid/

1.3 Bugs

As of this writing, the problem of stopping a root-mounted RAID device has not yet been solved in a satisfactory way. A workaround proposed by Ed Welbon and implemented by Bohumil Chalupa is incorporated into this document; it eliminates the need for a long ckraid at each boot for raid1 and raid5 devices. Without the workaround, it is necessary to ckraid the md device each time the system is re-booted. On a large array this can cause a severe loss of availability. On my 6 GB RAID1 device running on a Pentium 166 with 128 MB of RAM, it takes well over half an hour to ckraid :-( after each re-boot. It takes over an hour on my 13 GB RAID5 array with a 20 MB/sec SCSI adaptor.

The workaround stores the status of the array at shutdown on the real boot device and compares it to a reference status placed there when the system is first built. If the statuses match at reboot, the superblock on the array is rebuilt on the next boot; otherwise the operator is notified of the status error and the rescue system is left running with all the raid tools available.

Rebuilding the superblock marks all of the drives as OK, so the system ignores the fact that the array was powered down without mdstop, as if nothing had happened. This only works if all of the drives were OK at shutdown. If the array was operating with a bad drive, the operator must remove the bad drive prior to restarting the md device or the data can be corrupted.
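To make the idea concrete, here is a minimal sketch of the boot-time check, not the actual scripts developed later in this document. The file names raidboot.ref and raidboot.last, the choice of /proc/mdstat as the status source, and the mkraid invocation are assumptions for illustration only; the exact commands depend on your raidtools version and boot layout.

    #!/bin/sh
    # Sketch only: compare the array status saved at shutdown with the
    # reference status recorded when the array was first built.
    # File names and commands are illustrative assumptions.

    REF=/mnt/boot/raidboot.ref     # reference status from the initial build
    LAST=/mnt/boot/raidboot.last   # status written by the shutdown script,
                                   # e.g.  cat /proc/mdstat > raidboot.last

    if [ -f "$LAST" ] && cmp -s "$REF" "$LAST"; then
        # Statuses match: the array went down cleanly, so rebuild the
        # superblock and skip the long ckraid.  The exact flag depends
        # on the raidtools version (e.g. mkraid -f).
        mkraid -f /dev/md0
    else
        # Mismatch: leave the rescue system running so the operator can
        # inspect the array with the raid tools before starting it.
        echo "RAID status mismatch - manual intervention required" >&2
        exec /bin/sh
    fi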

None of this applies to raid0, which does not have to be mdstopped before shutdown.

Proposed final solutions to this problem include a finalrd (analogous to initrd) and an mdrootstop that writes the clean flags to the array during shutdown, while it is mounted read-only. I am sure there are others.

In the meantime, the problem has been bypassed as described above. Please let me know when it is solved more cleanly!

1.4 Acknowledgements

The writings and e-mail from the following individuals helped to make this document possible. Many of the ideas were stolen from the helpful work of others; I have just tried to put it all in COOKBOOK form so that it is straightforward to use. My thanks to:

1.5 Copyright Notice

This document is GNU copyleft by Michael Robinton michael@bzs.org.

Permission to use, copy, and distribute this document for any purpose is hereby granted, provided that the author's/editor's name and this notice appear in all copies and/or supporting documents, and that an unmodified version of this document is made freely available. This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY, either expressed or implied. While every effort has been made to ensure the accuracy of the information documented herein, the author/editor/maintainer assumes NO RESPONSIBILITY for any errors, or for any damages, direct or consequential, resulting from the use of the information documented herein.

