skoj's Achievements


  1. <EDIT> Please ignore, not an unRAID problem. It turns out a host was scanning a share 24/7. </EDIT> I've had the same issue since upgrading from 6.7.0 to 6.8.1. All Dockers are turned off and no clients are connected. Idle CPU was 5-10% before the upgrade. Has anyone run into this and fixed it? The impact is minor, but I hate to lose the capacity. The other odd thing is that there's network traffic even though all of the disks are spun down and the I/O counters are very nearly zero.
  2. Yeah, that was it. I was testing with nothing plugged in except the board. It worked once I plugged in a couple of fans.
  3. So I bought this motherboard: ASUS P9A-I/C2750/SAS/4L. On paper it's a great board for unRAID, with low power, IPMI, 18 SATA ports, etc., but I can't get the thing to even start POST. I turn it on, the power supply fan spins for a couple of seconds, and then it shuts off. That's it: no video, no beeps, nothing. The CPU fan doesn't even start. The board is getting some power, because the power LED is on and so is the LED indicating IPMI activity. One reviewer says the board is sensitive to power and wouldn't start until he switched from a 650W power supply to a 150W one; apparently the board doesn't draw enough load and the power supply shuts itself down. I'm connecting a Corsair CX430, which I suspect is running into the same issue. I should mention that I'm testing with memory from the motherboard's approved list and only 1 drive connected. Has anyone gotten this board to work? If so, with what power supply?
  4. Added a second parity disk to my server running 6.2.4. Shortly after the parity sync began, I noticed that the number of reads on one drive (Disk 7) is exactly half that of all the other drives. That doesn't seem right; shouldn't all drives have roughly the same number of reads? No errors in the console or logs, and nothing out of the ordinary aside from this.

     Device     Identification  Temp  Reads      Writes     Errors  FS        Size
     Parity     8 TB (sdn)      36 C  1,215,867  191        0
     Parity 2   8 TB (sdh)      35 C  262        1,260,866  0
     Disk 1     2 TB (sdm)      35 C  1,216,048  24         0       reiserfs  2 TB
     Disk 2     4 TB (sdl)      34 C  1,216,051  24         0       reiserfs  4 TB
     Disk 3     6 TB (sdk)      35 C  1,216,056  25         0       reiserfs  6 TB
     Disk 4     8 TB (sdp)      35 C  1,216,045  24         0       reiserfs  8 TB
     Disk 5     3 TB (sdo)      35 C  1,216,049  24         0       reiserfs  3 TB
     Disk 6     4 TB (sdq)      32 C  1,216,040  24         0       reiserfs  4 TB
     Disk 7     4 TB (sdj)      35 C  648,243    25         0       reiserfs  4 TB
     Disk 8     2 TB (sdb)      33 C  1,222,568  24         0       reiserfs  2 TB
     Disk 9     6 TB (sdd)      36 C  1,216,247  25         0       reiserfs  6 TB
     Disk 10    6 TB (sde)      34 C  1,216,252  24         0       reiserfs  6 TB
     Disk 11    8 TB (sda)      35 C  1,216,241  25         0       reiserfs  8 TB
  5. Upgraded from 5.0.4 to 6.2.1 without any trouble. Just wanted to thank limetech and everyone on this board for their hard work on this upgrade and the great documentation. Between the upgrades to unRAID itself, the new webGui, and the community plugins/Dockers, it's like getting a new and improved NAS for free. My only quibble is that parity checks went from 17 to 25 hours, but I expect that adjusting the tunables will take care of that. I'll just live with it until the script for that is ready for 6.2. Thanks again!
  6. @jonathanm Thanks for the referral to donordrives. They repaired the logic boards on the blown drives at a reasonable cost so I didn't wind up losing any data. @limetech Thanks for the e-mail assist as well.
  7. Thanks for the referral, jonathanm. I'll give them a call in the morning. I was in sticker shock from a data recovery firm's estimate when I saw your reply. Replacing all the drives is good advice, as I can't fully trust even the good drives after this. I'm not sure I'll be able to follow it, though, since replacing all of my storage would be rather expensive.
  8. Hi all -- Running unRAID 5. I've lost multiple drives in my array and I'm looking for advice on recovery. I was careless while swapping out a drive cage and plugged in a Molex connector backwards. Now 4 of 12 drives aren't recognized by the BIOS, and I suspect the logic boards on those drives are fried. The array didn't mount since it has 4 missing disks, and I turned the server off to prevent any further damage. I'm looking into data recovery services, and I have some questions about how unRAID works that would affect my next decisions.
     1) Are the individual drives mountable on another server without reconstructing RAID encoding? I vaguely remember hearing that each drive is an independent ReiserFS filesystem that can be mounted on any Linux system.
     2) If the service is able to recover all data on the 4 bad drives, they could clone each disk image to a fresh drive. Is it possible to connect those clones to my server and instruct the array to start with the clones? I suspect this will be problematic, since I see the drive serial numbers in the unRAID config.
     3) If #2 isn't feasible, is this plan an option? a) Start the array with the 8 good disks (understanding that files from the bad disks won't be available). b) Copy the files recovered by the service into the array.
     4) Can you recommend anything else I should try? Are there any options I've neglected to consider?
     5) Can you recommend a recovery service, perhaps one you've used in the past?
     Sorry for the long post. I appreciate any help you can offer.
  9. Successfully upgraded, no issues after 24 hours. Performance seems roughly the same as 5.0rc8a. Motherboard - Supermicro CS2EE (using onboard Realtek R8168 NIC)
  10. Just flashed my M1015 into a SAS2008 in IT mode using the instructions and ZIP file in this thread. It was pretty straightforward, except that I had to try several motherboards until I found one that didn't give me the "Failed to initialize" error. Props to madburg for putting this together, and to everyone who posted their experiences; it would have been a brutal process without this post! Not sure if anyone cares, but I have a CS2EE motherboard and I flashed to firmware version P15, downloaded from LSI's support site. I'll test for a few days, but things look good at first glance: no syslog errors, I can read all drive temps, and I can spin drives up and down from the web console.
  11. Thanks Rajahal. I also think fragmentation is involved, but the impact shouldn't be as extreme as this; I've never seen a modern filesystem lose the ability to write files with 30 GB+ free. Maybe with millions of tiny files, but I have at most 10k files per drive: mostly video files plus the various jpg and nfo files generated by XBMC when you export your data to individual folders. Roughly ~10k files total, with 20% being video files and the rest being the XBMC data. I'm actually using an Explorer replacement called Directory Opus, but I'll give TeraCopy a try too. I checked the SMART reports and they look clean to me, but I might have missed something, so I'll post them tonight. Also, no cache drive is installed.
  12. > FYI - the driver is unloaded/reloaded every time a webGui page is refreshed while the array is Stopped. This is to detect cases of the user hot plugging drives in and out of the server.
     Yeah, this syslog was right after upgrading to rc4. I was checking the partition alignment (per the docs), which involved stopping the array and clicking on each disk on the main page once.
  13. Thank you! That worked. I didn't think I was in compatibility mode since I didn't see the icon for it (the broken-page icon to the left of the refresh button). It turns out that when the "Display Intranet Sites in Compatibility Mode" option is checked, that icon is not displayed at all for intranet sites, regardless of whether compatibility mode is actually in effect.
  14. Apologies for the long post... I upgraded from 4.7 to 5.0-rc3 (and then to rc4). The problem appears on both rc3 and rc4, but not on 4.7. No addons are running; only unmenu is installed, and I've disabled it.
     Since upgrading, I've noticed that a large percentage of file copy operations TO unRAID via SMB are failing. They're either very slow (100-500 KB/sec) or they fail outright (Windows Explorer reports "the specified network name is no longer available" after a few seconds). It could be a network issue, but I don't think so: reading from unRAID works perfectly, and (when I don't run into this problem) writes work perfectly as well. The problem only affects drives that are 99% full (~30-70 GB of 2 TB free). I don't always run into it on those drives, but I never run into it on the drive with 100+ GB free. No errors in the syslog, SMART reports are clean, and reiserfsck --check doesn't report corruption on any of the disks.
     One final data point: unRAID seems to keep writing data to the target file even after the copy errors out on my Windows machine. It's not writing valid data and it's very slow (~100-500 KB/sec), but it's definitely writing something to disk. I confirmed (using lsof) that smbd and shfs both have the file open for writing, and I can see blocks being written to disk by watching the Device Status page on the web interface. It eventually finishes, but it takes 2 hours for a 694 MB file.
     Another thing: unRAID seems to generate far too many read operations when writing the file. See the attached screenshot; there's no way unRAID should generate 340k read operations while writing a 694 MB file to disk. I'm certain there aren't any other clients reading from or writing to unRAID, and there aren't any plugins running.
     With 4.7, I maintained drives with as little as 5-10 GB free. Write performance sometimes suffered, but it never errored out like this. Can anyone help? Having to keep ~100 GB free per disk is a management hassle, and it adds up to a cool terabyte with 10 drives!
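For the mystery traffic in post 1, one way to confirm which host is holding SMB connections is to list established peers on port 445 with iproute2's ss. A minimal sketch; the `extract_peers` helper name and the IPv4 assumption are mine, not from the thread:

```shell
# Print the unique remote hosts with established connections to the SMB port.
# extract_peers is a hypothetical helper; it assumes IPv4 address:port peers.
extract_peers() {
  # Skip ss's header row; the peer address:port is the last field.
  awk 'NR > 1 { split($NF, a, ":"); print a[1] }' | sort -u
}

# Typical use on the server:
#   ss -tn state established '( sport = :445 )' | extract_peers
```

Any address this prints while you believe no clients are connected is a candidate for the 24/7 scanner.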
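On question 1 in post 8: each unRAID data disk is indeed a self-contained ReiserFS filesystem, so any Linux box with reiserfs support can mount it on its own. A sketch, where the device name `/dev/sdx1` and mount point `/mnt/recovery` are examples of mine, not values from the thread:

```shell
# Mount a single array member read-only on another Linux machine.
# /dev/sdx1 and /mnt/recovery are example names; run as root.
DEV=/dev/sdx1        # data partition of the recovered or cloned drive
MNT=/mnt/recovery

mkdir -p "$MNT"
if [ -b "$DEV" ]; then
  # -o ro guarantees nothing on the disk is modified during recovery
  mount -t reiserfs -o ro "$DEV" "$MNT"
else
  echo "no such block device: $DEV"
fi
```

Mounting read-only is the safe default here, since any write to a partially damaged filesystem can make professional recovery harder.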
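Posts 11 and 14 both mention checking SMART reports. The attributes that matter most for a drive that struggles on writes can be filtered out of smartctl's attribute dump; a sketch (smartctl is part of smartmontools, `/dev/sdx` is an example device, and the grep shortlist is my own):

```shell
# Show only the reallocation / pending-sector SMART attributes for one drive.
# /dev/sdx is an example device name.
if command -v smartctl >/dev/null; then
  smartctl -A /dev/sdx | grep -Ei 'reallocated|pending|uncorrect' || true
else
  echo "smartctl not installed"
fi
```

Non-zero raw values on Reallocated_Sector_Ct or Current_Pending_Sector would point at the disk rather than the filesystem.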
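The read-amplification observation in post 14 (and the per-disk read counters in post 4) can be checked from the shell by diffing /proc/diskstats around a test copy. A hedged sketch; `disk_counters` is my own helper name and `sdj` is just an example device:

```shell
# Field 4 of a /proc/diskstats row is reads completed, field 8 is writes
# completed. The optional second argument lets you point at a saved copy
# of the file instead of the live one.
disk_counters() {
  awk -v d="$1" '$3 == d { print $4, $8 }' "${2:-/proc/diskstats}"
}

before=$(disk_counters sdj)    # snapshot before the copy (sdj is an example)
sleep 5                        # start the SMB copy during this window
after=$(disk_counters sdj)     # snapshot after
echo "reads/writes before: $before  after: $after"
```

If the read counter jumps by hundreds of thousands while writing a single file, the filesystem is doing far more seeking than the copy alone explains, which fits the fragmentation theory for a 99%-full drive.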