woolooloo

Everything posted by woolooloo

  1. The first drive went well. It is missing a ton of files, but that is the new drive that was rebuilt. I'm starting on the original drive now and hopefully I'll be able to salvage additional files. It is all media backup, so I can always re-rip, but if the old drive does not do any better that will mean over 2TB of media, so quite a pain. Thanks for the tip.
  2. My array has been kind of neglected for a while. Anyways (I apologize for the long story about the perfect storm), I have been upgrading some drives, trying to consolidate stuff from old 1.5TB drives onto new drives so I can get rid of them. Along the way I found one drive that was basically dead. I was able to get pretty much everything off of it onto another drive that had enough space, so it stayed in the array while I finished consolidating the other drives, but it was empty. Then I found another drive that was failing. Not as bad as the first, but it was a full 3TB drive and I did not have enough free room to move its contents onto other drives. So I ordered a new 4TB drive and put it in to rebuild and expand the file system. Unfortunately, the pretty-much-dead 1.5TB drive still in the array threw all sorts of read errors during the rebuild. I checked the new drive and sure enough, even though unRAID says it is full, Windows shows it only has 250GB of files, and a spot check shows a lot of empty directories and corrupted files. Knowing my array was pretty much bust with the multiple failed drives, I went ahead and nuked it and started a new array with no parity, pulling all the old 1.5TB drives, just so I could access the new drive. I also put in the 3TB drive that was failing. I ran reiserfsck --rebuild-tree /dev/md2 (in maintenance mode) on the new drive and it slowly churned away, finding problems. Realizing it would take all night, I started a second remote session and fired reiserfsck --rebuild-tree off against the failing 3TB drive too. Then this morning I found out that FUCKING Windows decided last night would be a fine time to restart for an update (even though I keep trying to disable that crap), and it killed off both of my remote sessions and killed the reiserfscks mid-process. This morning, after a server reboot, both of those drives show as unformatted. Is there any hope of recovering anything off of those drives?
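     For whatever re-run I attempt, I'm going to start reiserfsck inside a detachable screen session so a dropped remote login can't kill it mid-run again. A minimal sketch, assuming screen is available on the server and the same device name as above:

         # start a named, detachable session on the server itself
         screen -S rebuild
         # inside it, kick off the rebuild again
         reiserfsck --rebuild-tree /dev/md2
         # detach with Ctrl-A d; if the remote session dies, reattach with:
         screen -r rebuild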
  3. I played around with this today. It is a Gigabyte GA-MA780G-UD3H with two x16 slots (one running at x4) and three x1 slots. Two of the x1 slots are disabled if an x4 card is installed in the second x16 slot. Anyways, I had the x8 SAS2LP card in the first x16 slot, and I tried the Syba x1 2-port SATA card in all the other slots (including the second x16 (x4) slot). In some slots, the Syba card's BIOS would show up first and hang. In other slots the SAS2LP would show up, but the Syba card never did. I finally moved the SAS2LP into the second x16 slot (running at x4 even though it is an x8 card) and put the Syba card in the first x16 slot. That boots up properly: the SAS2LP card shows up first, then the Syba card shows up, everything boots, and I can finally access the cache drive. Unfortunately, in this config the x8 SAS2LP is crippled a bit running at x4, but that is what my previous card ran at, so at least I'm not going backwards. I looked through the BIOS and could not find anything that looked like it would change boot priorities or anything else for the PCI-e slots, so I'm out of ideas. I may get another 4TB drive and merge a couple of my older 1.5TB drives, then put the cache on the SAS2LP and get rid of the Syba card altogether. But if anyone thinks of anything else, please let me know.
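     For anyone wanting to confirm what width the card actually negotiated in a given slot, something like this from the unRAID console should show it (02:00.0 is just an example bus address):

         # find the controller's PCI bus address
         lspci | grep -i marvell
         # LnkSta reports the negotiated link speed and width (e.g. Width x4)
         lspci -vv -s 02:00.0 | grep LnkSta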
  4. Some background: my 13-drive (plus parity, plus cache) system had been running with 6 drives on the MB SATA ports, 8 drives on a Supermicro AOC-SASLP-MV8 PCI-e x4 controller, and the final drive on a 2-port PCI-e x1 SATA card. Recently the SASLP went bad and I swapped it out for its replacement, the AOC-SAS2LP-MV8, which is a PCI-e x8 variant. When I booted it up the first time, the system hung on the 2-port PCI-e x1 SATA card: it showed the BIOS screen from that card but never moved on. After reseating everything and trying again, same problem. I swapped the 2-port card to a different PCI-e x1 slot and everything booted up and seemed to be fine. Then I noticed that my cache drive was not showing up - and that is the one on the 2-port card. I've moved the card around to different PCI-e x1 slots (all 3 of them) and even tried the other PCI-e x16 slot that the SAS2LP is not using. Depending on which slot it is in, either the 2-port card shows up before the SAS2LP during boot and the system hangs, or the card never shows up at all. All of this happens before unRAID starts, so I'm not sure there is anything helpful in my log, but I'm attaching it anyways. If anyone has any ideas on how to sort out this apparent conflict between my new SAS2LP and my 2-port card, I would appreciate it. syslog-new setup.txt
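     In case it helps narrow things down, a quick way to check whether the OS even enumerates both controllers once unRAID is up (option-ROM hangs aside) is something like:

         # list every storage controller the kernel can see
         lspci -nn | grep -i -e sata -e sas -e marvell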
  5. Replaced the SASLP controller with the current version of it and was able to get everything back up and running. Thanks for the help debugging everything!
  6. It's happened multiple times now after reboot so I will look to identify which controller is attached to these drives and replace it. Thanks for the insight. I may need some help figuring out how to replace the controller while maintaining the integrity of the array - especially since one of the drives is dead so I can't just rebuild parity.
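     To figure out which controller those drives hang off of, I'll probably map each device back to its PCI path; a rough sketch from the console:

         # each /sys/block entry resolves through the PCI device it sits behind
         for d in /sys/block/sd?; do
             echo "$d -> $(readlink -f "$d")"
         done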
  7. Ok, sorry - I got pulled into jury duty last week, which turned my life upside down, and I'm finally digging my way out. After starting a rebuild, the errors started basically immediately; my syslog grew to 128MB of read errors by the time I stopped it a couple of minutes later. I've truncated it to post here, removing all the repetitive read errors at the end. After stopping the rebuild, unRAID says there are 6 drives missing. Disk 10 is actually the one that I replaced. My IcyDocks hold 4 drives each, but if memory serves, at least one of my SATA cards has 6 drives on it - could that be going bad? syslog-2018-10-17 - truncated.txt
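     For the record, this is roughly how I truncated it before posting (the exact match string is from memory):

         # drop the repeated read-error lines
         grep -v 'read error' /var/log/syslog > syslog-truncated.txt
         # sanity-check how much was removed
         wc -l /var/log/syslog syslog-truncated.txt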
  8. I'm on 5.0.6; I do not use my server that actively and have not gotten around to upgrading to 6. It has been working fine with minimal effort for a couple of years. I recently noticed it had been about a year since I had done a parity check, so I kicked one off the other day. I came back to check the status today and one of my disks was disabled. It was an older 1.5TB drive and I had a spare sitting around, so I went ahead and swapped it out. Since then, I've been trying to get it to rebuild onto the new disk, but 1) the write count on the new drive never increments even though the % complete on the rebuild keeps going up, and 2) after a while other drives start showing a massive number of errors, and if I try to look at their contents, those disks show up as empty. Stopping the array then shows those drives as disabled. A reboot seems to clear it back up and starts another rebuild. If I stop the array, remove the new disk from the array, and start the array again, it shows all of the files on that disk through emulation, so everything still seems intact at this point. I just do not understand what is causing the rebuild to fail. I guess it could be a failing SATA controller or a failing IcyDock or some other hardware component, but I'm not sure of the best way to track it down, and I don't want to nuke multiple disks through repeated attempts. Anyone have thoughts on the best way to approach this? TIA
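     In the meantime, I'll watch the log live during the next rebuild attempt to see which device throws errors first; something simple like:

         # follow the syslog and surface error lines as they happen
         tail -f /var/log/syslog | grep -i error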
  9. I actually think I've found a solution for this. I can manually put 2TB of movies on the drive but not add it as an included disk in the "Movies" share. My understanding is unRAID will see the Movies directory on the disk and add it to the files shown by the Movies share, but it won't make the disk available for new content on the share (except for auxiliary files that might be added to the existing folders based on split level). Woop!
  10. Quoting myself: "That's along the lines of what I was thinking, but it seems like you can only set the min free space for the overall user share. So I think if I let my Movies user share have access to the drive, it will potentially use more than 2TB on this drive, as long as the overall Movies share has at least 1TB available on some combination of drives." The reply to that was: "This is not correct. Min free space is per drive, e.g., if Movies is configured with a 1TB min free space, it will only write to drives that have more than 1TB of free space." Fair enough, I didn't realize that. Still, it can only be set once per share, so it would leave 1TB free on every drive used by the share, right? I only want it to leave me 1TB on 1 of the 10 or however many drives used by the share.
  11. Ok, well it was worth a shot. I guess I'll just let my backups span an additional disk if it runs out of space. Thanks
  12. That's along the lines of what I was thinking, but it seems like you can only set the min free space for the overall user share. So I think if I let my Movies user share have access to the drive, it will potentially use more than 2TB on this drive, as long as the overall Movies share has at least 1TB available on some combination of drives.
  13. Because my array is full, I can't add any new drives; I can only upgrade the old ones.
  14. I currently have a 1TB drive that I have dedicated to backup of other systems in the house as well as a public Share folder that I use when I want to send my wife a large file or vice versa. I'm going to be upgrading this drive to a 3TB drive, but I don't need 3TB for what this disk is dedicated to. It would be perfect if I could keep 1TB of the new drive dedicated to its current purpose and use the remaining 2TB for media storage. It seems like having multiple partitions would be the cleanest way to accomplish this, but I don't see any way to do that in unRAID. If I just let multiple user shares access the entire 3TB, then media storage won't stop at 2TB, it will keep going until it is full and I won't have any space left for my backups. I could obviously have the backups span multiple drives, but I like how I'm able to access it directly through the disk mount instead of through a user share. Is there any other way to accomplish what I want to do? I just moved from 4.7 to 5RC16b, but I don't think this is a question that depends on version.
  15. Quoting the advice: "Not a good backup plan. With the low cost of drives these days, there's no reason not to have an offline copy of all of your static data. I'd certainly think $30-40/TB is well worth the cost for a good backup that would save MANY hours of re-constructing the data!! My backup disks cost me a LOT more than that, but I'm still very glad I have everything backed up." All of my essential data is backed up in multiple locations, including offsite. Having my DVDs and BRs ripped (particularly older ones we've already watched) is only a nice convenience - one that I've already paid for in buying drives for unRAID and spending the time ripping and managing them. Spending thousands more on top of that (I understand that today I could get five 4TB drives to back it all up [still $1000], but an offline solution would have evolved with my unRAID over the years, starting with 500GB drives, so it would have a similar makeup to the unRAID, meaning $1000s in drives), plus MANY hours managing the offline backup - well, I'm comfortable with my choice. Short of a catastrophic failure like a fire that destroys the entire array, I'd probably only lose a small portion of my online collection, and I'm willing to let that go. Even with a fully catastrophic failure, I'd probably just walk away from the setup - I've got two young kids now and different priorities than when I started. So I understand your point of view, but I also understand the choice that I've made. Edit: Not that I take it lightly, which is why I'm very careful with my array and have stuck with 4.7 for so long.
  16. That could well be. I probably will install 5RC16b if it is released. My new 4TB parity drive is ready to go. As for the reply - "Hmmm, really? The first RC was released over 16 months ago! There's been 16 numbered RCs, more if you include A's and B's." - RC or Final may have no bearing on the "capability" of the software, but it does on the stability of the software, and that is what I am most concerned about. Particularly when I have 20TB in my array, and for a very large portion of that my only backup is going back and re-ripping the DVDs/BRs. What if I had decided to make the switch at 5RC15a, when a data-loss issue was introduced?
  17. I've stuck with official releases, so I'm running 4.7 and I'm out of space. I've been waiting for 5 so that I can bring several 3 and 4 TB drives online, the wait has been miserable. At this point I need 5.0 but if the RC is really so close then maybe I just move forward with that. It seems like there will always be one more issue, seemingly the past several release candidates have all promised to be the last. It's been very frustrating from a user's perspective (at least this user, but based on the results of the prior poll many others probably agree). I've got a 4TB drive going through preclear that will finish in a few hours for my new parity drive. I guess if you decide to officially release 5, I'll install that. Otherwise I'll just install whatever the latest RC is.
  18. Quote: "3GB drives are fully supported in v4.7, as well as 4GB, and all drives up to 2000GB ;-)" Touché.
  19. I'm with you, I need 3GB support. I'm tired of buying additional drives when I've got 3GB drives already in the array with 1/3 of their space being unutilized. Plus, the longer this goes on and the more 3GB drives I use as 2GB drives, the more time it is going to take me to convert them all to 3GB drives to recover the lost space. I'm really not looking forward to that as it is. Look, the people with that hardware or thinking of buying that hardware are already SOL and need to wait for a solution. Why make all of us wait? Release 5.0 with a disclaimer for the known issue please.
  20. I ran it again and got this:

          32 sectors had been re-allocated before the start of the preclear.
          32 sectors are re-allocated at the end of the preclear, the number of sectors re-allocated did not change.

      I'm glad that the preclear didn't find any more sectors to re-allocate, but I'm a little disturbed that the count went from 30 to 32 after the last preclear while the drive was sitting unused. I guess I'll go ahead and use the drive, but if anyone finds this really troubling, please let me know.
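     To keep an eye on whether the count creeps up further while the drive is in service, a quick check from the console (assuming the drive is still /dev/sdj, as in the report below):

         # snapshot the reallocated and pending sector counts
         smartctl -A /dev/sdj | grep -i -e reallocated -e pending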
  21. ============================================================================
      ** Changed attributes in files: /tmp/smart_start_sdj /tmp/smart_finish_sdj

          ATTRIBUTE                NEW_VAL  OLD_VAL  FAILURE_THRESHOLD  STATUS       RAW_VALUE
          Raw_Read_Error_Rate      119      100      6                  ok           207253708
          Spin_Retry_Count         100      100      97                 near_thresh  0
          End-to-End_Error         100      100      99                 near_thresh  0
          High_Fly_Writes          85       100      0                  ok           15
          Airflow_Temperature_Cel  66       69       45                 near_thresh  34
          Temperature_Celsius      34       31       0                  ok           34
          Hardware_ECC_Recovered   56       100      0                  ok           207253708

          No SMART attributes are FAILING_NOW

          0 sectors were pending re-allocation before the start of the preclear.
          0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
          0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
          0 sectors are pending re-allocation at the end of the preclear, the number of sectors pending re-allocation did not change.
          0 sectors had been re-allocated before the start of the preclear.
          30 sectors are re-allocated at the end of the preclear, a change of 30 in the number of sectors re-allocated.

      I think everything looks good except the "30 sectors are re-allocated at the end of the preclear" line. I know this is kind of what the preclear is supposed to do, but should I be worried? Most of the logs others have posted show 0 sectors. Is 30 troublesome? Should I run it again and see if it continues to increase? BTW, this is a Seagate recertified drive that they sent me to replace a failed drive.
  22. Any word on when an official 5.0 release is coming out? I've been getting notifications of release candidates since April, I think. I'm running out of room in my 4.7 server, and it has several 3TB drives that are only being used as 2TB drives; I'd really like to reclaim that wasted space rather than buy new drives.
  23. Thanks for all the tips. I have now tried all of those without success. A few other things I have tried:
      * Put in an Intel NIC and disabled the onboard NIC. No change.
      * Put my old SSD boot drive back in place of the one I upgraded to about 6 months ago. It still had the old Win7 install on it from back then, so if it worked it would indicate some sort of recent driver/OS issue. I thought for sure this would fix it, but no change.
      * Put in a 2-port SATA card and moved my DVD and Blu-ray drives to that card, leaving just my SSD boot drive and regular HD data drive on the MB's SATA ports. No change.
      * Replaced the SATA cables to both hard drives. No change.
      * Tried moving files from both my HD and my SSD; same 11MB/sec transfer speed from both.
      * Tried two different (but identical model) HDs, one at a time, to rule out some sort of drive malfunction affecting the bus. Since I also tried 2 SSDs and replaced all the cables, I think I've done what I can there.
      * Ran CrystalDiskMark on both my SSD and HD and compared the results to numbers I recorded on the same system probably a year ago. All disk benchmarks seem to indicate a 20-30% speed loss.

      The fact that going back to my old SSD and OS didn't help seems to point to a hardware rather than software problem, but I can't imagine what else to look at. I also can't copy from my HD to my SSD (or vice versa) at the speed I used to get copying from another computer to this one over the network. On top of that, the system seems unstable and is locking up a lot. Could this point to something on my MB dying?
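     Given the lockups and the across-the-board benchmark drop, the next thing I'll probably do is pull SMART data off the desktop drives; smartmontools has a Windows build, so roughly:

         # overall health self-assessment
         smartctl -H /dev/sda
         # full attribute table; watch the reallocated/pending sector counts
         smartctl -A /dev/sda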
  24. I don't think it is unRAID related, thus the Lounge posting. I am seeing the same problem moving files directly to my HTPC, which is Win7. My unRAID is all SATA, the cache drive is empty, and the log is clear of anything troubling; none of that has changed since I started seeing this. I see the same 11MB/sec copying files to the cache drive as directly to an unRAID disk. Something in the pipeline seems to be limiting me to 11MB/sec, but I can't figure out what. It doesn't seem to be the hard drives in my Win7 desktop, because I can copy files between its internal drives quickly (not as fast as I would expect, but certainly much faster than 11MB/sec). LAN Speed Test indicates the network is not the bottleneck. Ok, here is another data point: I just moved a file from my HTPC to my unRAID and it went at 85MB/sec. So the common denominator appears to be that there is only a problem moving files specifically from my desktop to other computers. That is the computer I ran LAN Speed Test from, and it appears to be fine. I've got the latest (or at most a couple of weeks old) AMD RAID drivers. Besides moving from RAID 0 to RAID 1 to no RAID, I don't think anything else has changed on the machine. The data drives in my desktop are 500GB 7200RPM Sammys. Again, I understand this is not unRAID related; I'm just looking for ideas on what I could try, or even another forum I could go to for help. Thanks.
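     One more thing I plan to rule out, since 11MB/sec is suspiciously close to a saturated 100Mbit link: the desktop's NIC silently negotiating 100Mb instead of gigabit somewhere along the way. Roughly, from the command line (the address below is a hypothetical example):

         # on the Win7 desktop: negotiated link speed, reported in bits/sec
         wmic nic where NetEnabled=true get Name,Speed

         # raw TCP throughput desktop -> HTPC, assuming iperf is installed on both
         iperf -s                  # run on the HTPC
         iperf -c 192.168.1.50     # run on the desktop, pointing at the HTPC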