I'll report back here. I remember my PC taking forever to start with those 4 disks powered on, but there's no way to get around that. Thanks again!

You mentioned accessing each of the 4 drives separately to root out which one is bad. I got to the login screen, entered my password, clicked login, and my system proceeded to hang. I left for work and, for the next 16 hours, hoped it would be resolved by the time I got back. It wasn't, so I simply rebooted and was able to log in and get to my desktop.
However, running CDI (CrystalDiskInfo) proved futile: I tried to launch it 3 separate times, and each time my PC crashed after hanging for about 5 minutes, so I was never able to get CDI running. HDS (Hard Disk Sentinel) displayed only 3 of the 4 drives in the spanned volume, so it's obvious there's a big problem with one of the drives.
The only thing worth mentioning about HDS, while a completely different subject, is something I saw in the first screenshot, titled "hdsentinel-disk0". Look at the passage to the right of where the disks are listed; it says one of the healthy disks is running in "PIO mode" and such. Could you shed any light on what that means and what I might want to do to correct it? This is just a side note, so don't worry if you can't help me with that particular item.
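To shed a little light on the PIO side note: PIO (programmed I/O) is the slow, CPU-driven legacy transfer mode, as opposed to DMA. Windows' ATA driver demotes a channel from DMA to PIO after repeated CRC or timeout errors, so a disk stuck in PIO mode is often itself a hint of a flaky drive or cable. The usual fix is to uninstall the affected IDE/ATA channel in Device Manager and reboot so Windows re-detects DMA. As a read-only peek, assuming the registry layout described in Microsoft's old KB 817472 (worth verifying on your machine before trusting it), something like this lists the per-channel timing modes:

```python
# Read-only sketch: list the timing-mode values Windows keeps per IDE/ATA
# channel. The key path and value names follow the old MS KB 817472
# description of the PIO-fallback behaviour -- treat them as assumptions
# and verify them on your own machine.
import winreg

HDC_CLASS = (r"SYSTEM\CurrentControlSet\Control\Class"
             r"\{4D36E96A-E325-11CE-BFC1-08002BE10318}")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, HDC_CLASS) as cls:
    i = 0
    while True:
        try:
            sub = winreg.EnumKey(cls, i)  # "0000", "0001", ... one per channel
        except OSError:
            break
        with winreg.OpenKey(cls, sub) as chan:
            for name in ("MasterDeviceTimingMode", "SlaveDeviceTimingMode"):
                try:
                    val, _ = winreg.QueryValueEx(chan, name)
                    print(f"{sub}\\{name} = {val:#x}")  # low values suggest PIO
                except OSError:
                    pass  # value not present on this channel
        i += 1
```

Yea haha, you don't get notifications on edits XD, and yesterday I was super busy, so I didn't get to take a look.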
One thing you could try is to take one HDD out of the loop and see if it boots up any faster. More than likely, the one whose removal lets it boot faster is the bad one. Alternatively, boot up and check which 3 of the drives are showing and which 4th one is missing. If you remove a drive and the other 3 are still there, you know which one is acting up. Or you can remove all of them, plug them in one by one with a USB adapter, and check the health status of each.
This way you don't have to boot it up all the time.

Sorry, drtweak, I should have been more specific about my findings. First of all, you're wrong about one of the four drives not showing in the Disk Management console, if that's what you were talking about. All four of them are showing, but the last one at the bottom shows a red X with the word "Missing" next to it. So Disk Management can see all four, but the 4th one is apparently missing. There was only one option available, "Reactivate Disk", when I right-clicked where it said "Missing", so I selected it to see what would happen.
Basically, I just got a progress circle for 10 seconds, which then went away with no error messages presented at all. So I guess reactivating the disk isn't an option. The other thing I should've been clearer about is that I already know which disk is the bad one. So when I asked what you thought I should do next, I wasn't referring to trying to identify which drive was bad. Instead, I wanted your advice on what my options might be for recovering the data off of this span, which, I'm guessing, would start with figuring out a way to reactivate that one bad disk.
Any suggestions? Also, I thought I read somewhere that you can convert dynamic disks back to basic without losing any data.
If I remember correctly, it said you had to use some special type of software or something. Any thoughts on that? Any thoughts, ideas, and suggestions are welcome, and thanks for sticking with me this far!

Well, the fact that it says "Missing" means Windows isn't seeing it. It's still displayed because you have a spanned set of drives there and that drive is part of the set, hence why it still shows up.
If you go into Device Manager, it should list all 4 drives. If you're only seeing 3, then one of the drives isn't even being seen by the BIOS. The real question is what is wrong with the failed drive. If it's just a matter of a few bad sectors, we may be able to make a backup image of that drive with some software, restore it to a new drive, and hopefully get it back (something like the sketch below).
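For illustration, here is a rough sketch of that imaging idea: copy the disk block by block and zero-fill whatever won't read. A dedicated tool such as GNU ddrescue does this far more carefully (retry passes, rescue logs), so treat this as a picture of the technique, not a replacement; the device paths are placeholders.

```python
# Rough sketch of "image a dying disk, skipping unreadable sectors".
# Real tools (GNU ddrescue) do this far better; paths below are placeholders.
import sys

BLOCK = 512 * 1024   # copy in 512 KiB chunks
SECTOR = 512         # fall back to single sectors around bad spots

def rescue(src_path, dst_path):
    with open(src_path, "rb", buffering=0) as src, open(dst_path, "wb") as dst:
        pos, bad = 0, 0
        while True:
            src.seek(pos)
            try:
                chunk = src.read(BLOCK)
            except OSError:
                # Read error: retry the region sector by sector so we lose
                # as little as possible; unreadable sectors become zeros.
                chunk = bytearray()
                for off in range(0, BLOCK, SECTOR):
                    src.seek(pos + off)
                    try:
                        piece = src.read(SECTOR)
                    except OSError:
                        piece = b"\x00" * SECTOR
                        bad += 1
                    if not piece:
                        break  # hit the end of the device
                    chunk += piece
                chunk = bytes(chunk)
            if not chunk:
                break  # end of device
            dst.write(chunk)
            pos += len(chunk)
        print(f"copied {pos} bytes, {bad} unreadable sectors zero-filled")

if __name__ == "__main__":
    # e.g. rescue(r"\\.\PhysicalDrive3", r"D:\disk3.img") on Windows, as admin
    rescue(sys.argv[1], sys.argv[2])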
It's not a matter of getting it to be a basic disk or not. Again, because it's a spanned volume, data is written to ALL 4 drives, so you won't find any full files on that one drive, only bits of them, and even then trying to recover them can be a huge pain.
So just because the other 3 drives show up as Disk 0, 1, and 3 doesn't mean they are the drives plugged into SATA ports 0, 1, and 3. We still need to find which of the 4 drives is the bad one. If you go into the BIOS, it should list which drives are connected to which SATA ports, and from there we can use the process of elimination to find the failed drive and then see about recovering the data if possible. Even just look at the program you used to get the SMART status there.
How many WDCs are listed? If it's only 3, that means one drive is NOT being read. We need to find which one it is.
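One hedged way to take stock, assuming you're on the affected Windows machine: dump the model and serial number of every disk Windows can see, then compare against the labels printed on the drives themselves. The snippet below shells out to the standard wmic tool; on newer systems, PowerShell's Get-PhysicalDisk reports the same information.

```python
# Hedged sketch: list the physical disks Windows can see, with model and
# serial, so you can match them against the labels on the drives.
import subprocess

out = subprocess.run(
    ["wmic", "diskdrive", "get", "Index,Model,SerialNumber,Size"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
# A drive that is physically installed but absent from this list (and from
# the BIOS) is the one that is not being read at all.
```

Thank you for the very thorough and informative response, drtweak, much appreciated.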
And I'm pretty sure the 4th, failed drive is a Seagate model. If it ends up being one of the two models I listed above, then I'm assuming my only recourse for finding out which one is bad is to hook each of them up to the SATA-to-USB adapter that I have; whichever one doesn't show up when attached to my PC is the dead one. And to answer your question, yes, the BIOS can only see 3 of them.
Hopefully that 4th disk is in fact a Seagate, so I'll be able to identify the bad one easily. During the first disk-check run, I won't select the option that says "Fix Bad Sectors" or whatever; I'll just run it to find out whether there are bad sectors, and how many.
I realize that if there are a lot, it will take forever for Windows to correct them, so I'll hold off on that unless you recommend it. If there's some other step I should be doing first, please let me know. Also, what if the drive isn't recognized by Windows and Device Manager at all when I attach it separately?
Is there anything I can do at that point, other than chuck it in the garbage? Thanks again for your continued help!

Well, a disk check will not help us here; that is only meant for partitions. As for finding the bad drive, which is probably the one you're talking about, since the screenshot does in fact show 3 1TB WDs in there, yes, you are correct in trying to use a SATA-to-USB adapter or something similar. If we can read it, great (see the sketch below for a quick health check)!
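If smartmontools is installed, smartctl is a quick way to do that health check through the adapter. A minimal sketch follows; the device name is only an example, and many USB-SATA bridges need the extra "-d sat" flag before SMART data will pass through.

```python
# Hedged sketch: query a drive's SMART health through a USB adapter using
# smartctl from smartmontools (assumed installed; device name is an example).
import subprocess

def smart_health(device="/dev/sda"):
    result = subprocess.run(
        ["smartctl", "-d", "sat", "-H", device],
        capture_output=True, text=True,
    )
    print(result.stdout)
    # smartctl encodes the outcome in its exit status as a bit mask;
    # bit 3 set means "disk failing", zero means the health check passed.
    return result.returncode

# "smartctl --scan" lists candidate device names if you're unsure which to use.
smart_health()
```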
If it still doesn't come up even after that, I'd also try a different SATA port. But if it doesn't come up at all, the drive just might be dead, and pushing it further might only make it worse. If we can't get a PC to see the drive, there isn't much more I can help you with; you would have to send it, along with all 4 drives, to a data-recovery pro and see what they could do.

Thanks for the knowledge! I recommend reading the article in full if you want more detail.
Today, however, JBOD is becoming increasingly popular as storage becomes larger and more intricate. Whether the cast members of Jersey Shore will have a similar impact several years down the road is still an open question. If one drive fails, other disks are unaffected.

OP: Want the real poop? Call Promise. It's on your nickel, but they generally will give you a straight answer. If you haven't already, go to MS TechNet and read up on the features of NTFS and dynamic disks; not that dynamic disks are required to mount drives to folders, but they offer other options for volume management.
Note that the original RAID Advisory Board only defined levels 1 through 5, and later added 6, which no company has implemented yet to my knowledge. Spanning is not striping. It is a way of taking a bunch of disks and making them appear as a single logical drive, but the data is stored contiguously from one drive to the next, as opposed to RAID 0 striping, where a written file is distributed over all disks in the array.
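A toy sketch makes the difference concrete: map a logical byte offset to a (disk, on-disk offset) pair under each scheme. The sizes here are invented and real volume managers add metadata and alignment, but the shape of the math is the point.

```python
# Toy illustration: map a logical byte offset to (disk, offset-on-disk)
# for a spanned set vs. a RAID 0 stripe set. Sizes are made up.

DISK_SIZE = 1_000_000_000  # pretend each disk holds 1 GB of data
STRIPE = 64 * 1024         # pretend 64 KiB stripe unit

def spanned(offset, disks=4):
    # Spanning: fill disk 0 end to end, then disk 1, and so on.
    return offset // DISK_SIZE, offset % DISK_SIZE

def striped(offset, disks=4):
    # RAID 0: rotate across all disks every STRIPE bytes.
    stripe_no, within = divmod(offset, STRIPE)
    return stripe_no % disks, (stripe_no // disks) * STRIPE + within

# A 1 MB file at logical offset 2.5 GB:
for fn in (spanned, striped):
    print(fn.__name__, "start:", fn(2_500_000_000),
          "end:", fn(2_500_000_000 + 1_000_000))
# Spanned: the whole file sits on one disk. Striped: it is chopped into
# 64 KiB pieces spread over all four disks, which is why losing any one
# disk of a RAID 0 corrupts every file at regular intervals.
```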
From reading the comments in this thread, it seems that some manufacturers equate JBOD with spanning, and others do not. I'll assume that what you want to know is what happens if your spanning array breaks.
I would think that if you replace the failed drive, you can think of it as having a chunk of a single drive wiped out. The recoverability of the data depends on what got wiped out. If it was the first drive, then most likely all the important disk structures got wiped out too. No doubt you will have disk corruption, but you're much more likely to recover from this kind of damage than from a broken RAID 0, where your data will be corrupted at regular intervals throughout the entire volume.
If the controller dies, generally you should be able to get another one and lose nothing. Neither configuration has any redundancy, so the effects of powering off at a bad moment are no different than doing so with a single drive (unlike RAID 5, where the array can be broken and has to rebuild). Actually, you have one manufacturer, Promise, which states it's spanning in one document and states it's NOT spanning in another.
The two terms are mutually exclusive. Based on the rest of your post, I assume that you're referring to a spanned or concatenated array. This is an array where entire disks or partitions are assembled into a single logical drive.
A few manufacturers have chosen to use the term JBOD to mean spanning, and a few 'reference' web sites have chosen to follow. This does not make it reality. When a controller dies, the impact depends on the failure mode of the controller.
If the controller starts writing bad data in random locations, you can kiss the entire array goodbye; any data you recover could be bad, so why risk it? Fortunately, this is a rare failure mode. If the controller simply stops working, the file system will appear as though the system had been turned off at that point.
Your ability to get at the data will depend on how intact the file-system structure is. If the controller for one of the other disks fails, the results will be similar to the above, but the file-system structure will be less likely to need repair.
When the first disk in the array fails, everything is gone. This is because most file-system structures place the critical information at the beginning of the file-system. When one of the other disks fails, all data on that disk is gone. In addition, any data that is pointed to by directories that reside on that disk will be difficult to locate.
As for overall reliability, it will be similar to a RAID 0 array. Of course, even RAID 1 doesn't guarantee that you won't lose data, as the controller or OS could decide to scramble your data.
Thanks for all the info, everybody!