Everything posted by davis999

  1. Thanks for the reply @johnnie.black. I guess the thing that trips me up is that the 'done' messages from the script were never output. I looked in the syslog and don't see any sign of them there either. It seems like the script just died after the 'dd' command completed. Anyway, given that 'dd' completed, I guess I can assume it did what needed to be done, since I see no other meaningful commands after that. Guess I'll try this with the other drives I plan to remove and see how they go. Btw, I hadn't looked at the script's completion messages until after I sent my first message. The script specifically calls out that a "no space left on device" error may occur and that it is not an issue (i.e. it's expected).
  2. Hi folks. I was attempting to run the "clear an array drive" script in preparation for removing some smaller drives. I marked one 2TB drive with a `clear-me` folder and launched the script. When I checked on it the next day, I saw a "No space left on device" error in the script output. Has anyone seen this before with this script? I'm not clear on what it means, but I'm assuming the script was not successful. Here is the full script output:
         Script location: /tmp/user.scripts/tmpScripts/clear an array drive/script
         Note that closing this window will abort the execution of this script
         *** Clear an unRAID array data drive *** v1.4
         Checking all array data drives (may need to spin them up) ...
         Found a marked and empty drive to clear: Disk 3 ( /mnt/disk3 )
         * Disk 3 will be unmounted first.
         * Then zeroes will be written to the entire drive.
         * Parity will be preserved throughout.
         * Clearing while updating Parity takes a VERY long time!
         * The progress of the clearing will not be visible until it's done!
         * When complete, Disk 3 will be ready for removal from array.
         * Commands to be executed:
         ***** umount /mnt/disk3
         ***** dd bs=1M if=/dev/zero of=/dev/md3 status=progress
         You have 60 seconds to cancel this script (click the red X, top right)
         Unmounting Disk 3 ...
         Clearing Disk 3 ...
         2000357949440 bytes (2.0 TB, 1.8 TiB) copied, 36837 s, 54.3 MB/s
         dd: error writing '/dev/md3': No space left on device
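That final `dd` error is the expected end state rather than a failure: `dd` keeps writing zeroes until the kernel reports the block device is full (ENOSPC). A minimal sketch for double-checking that the whole device was zeroed, assuming the same `/dev/md3` from the output above:

```bash
# blockdev is part of util-linux and included in unRAID
blockdev --getsize64 /dev/md3   # total size of the md device, in bytes
# Compare against the "bytes copied" figure dd printed (2000357949440 above).
# If the two match, every byte was overwritten with zeroes while parity was
# kept in sync, and the drive is ready to be removed from the array.
```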
  3. Ahhh okay. Thanks very much @Squid. That seems to have worked.
  4. Hi folks. I've been having an issue with the Fix Common Problems plugin. I recently upgraded my server (replaced everything but the data drives), and after doing that I installed this plugin to clean up any issues. A few times I've been able to run the plugin successfully and get some feedback, but most of the time when it runs, the "Scanning" dialog shows and never goes away. I noticed this in the logs, which I'm assuming is the culprit:
         May 4 10:28:04 chunk nginx: 2019/05/04 10:28:04 [error] 9719#9719: *4080148 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.60, server: , request: "POST /plugins/fix.common.problems/include/fixExec.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.0.20", referrer: "http://192.168.0.20/Settings/FixProblems"
     Aside from this issue, my server appears to be working great. I'm not really sure how best to troubleshoot this. I've attached diagnostics. Can anyone give me any pointers? Thanks. chunk-diagnostics-20190504-1044.zip
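That log line means nginx gave up waiting for the PHP-FPM backend (the socket named in the log) to return a response, not that PHP crashed: nginx's default `fastcgi_read_timeout` is 60s, so a scan that takes longer than a minute to respond produces exactly this "upstream timed out (110)" error. A quick way to see whether the running config overrides that default, assuming the usual `/etc/nginx` layout (unRAID regenerates its nginx config at boot, so hand edits won't persist):

```bash
# Look for an explicit fastcgi timeout in the active nginx config;
# if nothing is set, nginx falls back to its 60-second default.
grep -R "fastcgi_read_timeout" /etc/nginx/ 2>/dev/null
```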
  5. Thanks @witalit. At any given time, I can have up to 6-8 concurrent streams on my Plex instance, but even then they're not necessarily all transcoding; some would be direct play. That said, I will be adding more load to it as time goes on. VMs are next on the list. Hopefully this will last me a long while, as my previous box did.
  6. For anyone who is curious... I've had my new system up and running for about a month now, and so far I'm loving it. The hardware is pretty much exactly as I stated above. Migrating the data drives over was very smooth. So far it's taken everything I've thrown at it with ease. I used to run Plex on a separate box, but I migrated that over to a Docker container on my unRAID box. In addition, I've also got Docker containers for nzbget, radarr, sonarr, openvpn, duckdns, etc. The system is barely taxed by all of these things running... Plex is the main user, depending on how many concurrent streams are in use. I'm planning on possibly moving a Windows VM over to this as well, so we'll see how that goes. If anyone has any questions, feel free to reply and I'll answer as soon as I can.
  7. Well, I decided to pull the trigger on this build. My current unRAID box won't even boot anymore, so I couldn't wait any longer. I've done enough research that I'm confident this should work out well for my needs. It's going to be a few weeks before I have all the parts. I will report back here when it's all up and running.
  8. Hey everyone, the time has finally come for me to upgrade my unRAID box. Back in 2011, I built my first unRAID box using the budget-build recommendations of the time. The main components:
       • Biostar A760G M2+
       • AMD Sempron 140
       • 2GB RAM
       • Supermicro AOC-SASLP-MV8
       • 15 x mixed hard drives ranging from 3TB to 8TB
       • 2 x 120GB SSDs for cache pool
     After switching to dual parity about a year ago, and with recent versions of unRAID, the performance of my server has gone downhill. I was typically seeing sub-70MB/s write speeds, and parity check averages were around the same. I also have Plex installed on a separate box. When a parity check is running, I can't even watch anything from the unRAID box because performance degrades so much... and that lasts for about 40 hours. To get some extra life out of the box, I tried upgrading to an AMD Phenom II X4 955 and 8GB RAM. Strangely, my speeds got worse. That was a couple of months ago. Now my server is crashing frequently and it's not worth trying to bring it back with the existing hardware. Time for an overhaul.
     With that said, here's what I'm thinking. Previously my unRAID box just served as a NAS. This time, I'm thinking I might want to move Plex into a Docker container on the unRAID box. I also want this box to last a good while again. Knowing that, I've tentatively decided on the following main components:
       • Rosewill RSV-L4500
       • ASRock EP2C602-4L/D16 SSI EEB
       • Dual Intel Xeon 2690 v2
       • 64GB (8 x 8GB) DDR3 1600MHz ECC REG
       • EVGA SuperNOVA 1000 G3, 80 Plus Gold 1000W
       • LSI SAS9300-8i (IT mode) (maybe 2 of these, since SAS3 expander boards seem ridiculously expensive)
       • Asus Hyper M.2 x16 NVMe Expansion Card
       • 2 x Crucial P1 1TB 3D NAND NVMe PCIe M.2 SSD
       • 2 x Noctua NH-D9DX i4 3U CPU Cooler
       • 3 x iStarUSA BPN-DE350SS Trayless 5-in-3 Drive Cages
     As for HDDs, I have a couple of 8TB Ironwolf drives for dual parity, and the rest are a mixture of 3TB-8TB drives (green, blue, red, black, etc.) which I will transfer over to the new build as-is (data intact, hopefully). The main things I'm looking for with this build are:
       1. Faster parity checks (ideally at least ~100-120MB/s) and write speeds
       2. The ability to run Plex in a Docker container
       3. Being able to watch Plex during parity checks without significant degradation (assuming a small number of concurrent streams)
     Looking for feedback on whether this will be sufficient for my needs. Is any of it overkill? Are my goals even realistic? Are there things I'm not considering that I should be? Thanks everyone. I really appreciate any and all feedback.
  9. Alright, so here's the latest... I set up a new flash drive with the latest 6.6.6 release and transferred my backed-up config files, registration key, parity logs, etc. over to the new flash drive. I booted up my server and was able to access the web interface. All my config was retained, including shares, drive assignments, etc. Prior to starting the array, I went through the process of replacing my registration key using Tools --> Registration. After completing the registration update, I started up my array and it's up and running. A parity check was automatically kicked off; I'm not sure what triggered it exactly, but it makes sense to let it finish just to be sure all is well. Going to mark this as resolved. Thanks again for the help @itimpi, @jonathanm, @johnnie.black. Much appreciated. You can't beat the great support of the unRAID community!
  10. No, I don't have the array set to auto-start. That sounds like a dangerous setting, if enabled. The fact that my flash drive is seemingly not in good shape makes me wary of using the files in the "previous" folder. So I just tried something and managed to get the server to boot up successfully. I ran a chkdsk against the flash drive to correct any errors. The files that chkdsk flagged were mainly files in the "previous" folder. I replaced all of the zero-size files with new ones taken from the 6.6.6 release downloaded from the unRAID site, and restored the network.cfg from the 6.6.5 backup that I found. All other config files appear to be readable on the flash drive. With that, I attempted to boot up the server and was able to get into the web interface. I noticed that some settings were lost, mainly the cache drive assignments (not a big concern), and the server name had reverted to 'tower'. The data disk assignments were correct. I did not start the array though. I think it's best if I move my flash drive contents over to a new flash drive before I attempt to get the array back up and running. Anyone have any concerns with doing that? (moving to a new flash drive)
  11. I just took a look at the contents of the flash drive on another machine, and something is definitely wrong. A bunch of the files are showing zero size, including some of the unRAID "bz" files, so I'm wondering if my flash drive has become corrupted. In other news, I did find a backup of my flash drive from 6.6.5, so I think I'll try transferring that to a new flash drive and booting up. The only downside with this backup is that I made a drive change since it was taken, but I'm assuming I should be able to re-assign the drives and mark parity as trusted to get back up and running. The only other thing I'm unclear on is the unRAID license key. I understand it is linked to the flash drive GUID. I think I read that if I boot up with a different flash drive and the same key, unRAID will ask me to migrate the key for use with the new flash drive, or something like that? Can anyone confirm?
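When the flash is mounted on another Linux machine, the damage is easy to enumerate directly (the `/mnt/flash` mount point below is an assumption; on the unRAID server itself the flash is mounted at `/boot`):

```bash
# List every zero-byte file on the flash; on a healthy unRAID flash
# the bz* boot files are each tens to hundreds of MB, never empty.
find /mnt/flash -type f -size 0
ls -lh /mnt/flash/bz*
```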
  12. Just gave that a try @itimpi, but no luck. I've tried all of the safe/GUI mode options and they all cause a reboot. Appreciate the suggestion though.
  13. Hi folks. I just ran an unRAID OS update (6.6.5 --> 6.6.6), and now my server is stuck in an infinite reboot loop. Prior to this it was running strong. The server gets to the unRAID splash screen and then launches unRAID. I see two lines that state "Loading /bzimage... ok" and "Loading /bzroot... ok", then it reboots. Sadly, I don't have a backup of the flash drive from before the update; I mistakenly thought it would be a safe update since it's a minor version increment. I should also mention that before I ran the OS update, I updated a few plugins. I did try a safe mode launch of unRAID, but it has the same issue. Not sure what I can do now. Anyone have any suggestions? I'm afraid I'm in a bad place here. Thanks for any help. -Dave
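One recovery avenue, assuming the updater's rollback files are intact (unRAID's upgrade normally leaves the prior release's boot files in a `previous` folder on the flash, though the posts above show those can be casualties of flash corruption too): mount the flash on another machine and copy them back. A sketch, with the mount point as an assumption:

```bash
# Flash mounted at /mnt/flash on another Linux box
ls -lh /mnt/flash/previous/             # prior release's bzimage/bzroot etc.
cp /mnt/flash/previous/bz* /mnt/flash/  # roll the boot files back to 6.6.5
# Config files elsewhere on the flash are untouched; reboot afterwards.
```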
  14. Happy to report that this approach worked. I powered down, put the old drive back in, and started up the array with 'Parity is already valid' checked. That got me back to the previous state. Then I updated to the latest unRAID (6.6.5) and did the replacement procedure again. This time, after powering up with the new drive, my previous config was retained and I was able to mark the new drive as a replacement and start the array with a data rebuild. All is well. Thanks again @johnnie.black.
  15. Yeah, I'll give that a try and report back here. Thanks for the confirmation on that option @johnnie.black 👍
  16. Hi folks. Really hoping someone here can help me out... While attempting a data disk upgrade (from 1TB to 8TB), I've run into an issue and I'm not really sure of the best course of action. I have an unRAID v6.5 box with dual parity, 11 data disks and 2 small SSD cache drives. All I wanted to do was upgrade one of my older data drives (1TB) to a new larger drive (8TB) and have the data rebuilt onto the newer drive. I followed the directions here (for recent versions of unRAID - not the 4.7 instructions): https://wiki.unraid.net/Replacing_a_Data_Drive
     After rebooting, I logged into the unRAID web interface and found that all of my drives were showing as unassigned. At that point I got a little worried. I decided to set up the assignments as they were before (using the exact same slots for each drive). I assigned the two parity drives, and then, as soon as I assigned the first data drive, I saw a notice beside the parity drives stating they were going to be overwritten upon starting the array. Then my worry increased. :P And now here I am writing this post, not really sure how I should proceed. I grabbed a config/log dump using the diagnostics tool in unRAID, which is attached to this post. I have not rebooted since the first time.
     One path that comes to mind: I could power down the unRAID box, put my old drive back in, start up unRAID and reassign all drives, then start the array with "Parity is already valid" checked. I assume that would get me back to where I was before, at which point I could try this all over again. My fear is that I would run into this same situation. I know another option is to just assign all drives and let parity rebuild across all of them, but that seems unnecessary and I'd like to avoid the array being unprotected for that time period.
     Does anyone have any suggestions on how I should proceed? Is there something wrong with my unRAID box that is causing all drives to show up unassigned after a reboot? I do see some errors in the syslog which I believe are related to the preclear plugin, but I wouldn't think that would interfere with drive assignment. Appreciate any help. Thanks :) chunk-diagnostics-20181127-1237.zip
  17. Hi folks. I've been running unRAID v4.7 for a long time now, and my box has been 100% stable. Recently it has been getting close to full, so I've been trying to preclear a couple of disks so that I can add them to the array. However, I'm running into an issue with a package I installed via unMenu which is supposed to shut down the unRAID server when the temperature of a drive exceeds a certain value (in my case 55 degrees Celsius). It's supposed to send a warning email when a drive reaches a warning threshold (50 degrees), then send another if it reaches the max temp (55) and initiate a shutdown. I've never had heat problems with my server... none of my drives ever exceed 40 degrees, even when they're in use.
     I've got 2 WD Green drives (1.5TB and 2TB) that I was preclearing simultaneously. In two different attempts to run these preclears, I've received an email from this overtemp script saying it was killing processes due to temperature. I didn't even get the warning email. Here are the contents of the overtemp shutdown email:
         /bin/sh: line 1: 3686 Killed /usr/local/sbin/overtemp_shutdown.sh >/dev/null 2>&1
     Unfortunately, I'm unable to verify whether the temps did exceed those values, as the syslog is gone after a reboot, but I can't see how this could have happened. I've been monitoring the drive temperatures throughout the preclear operation and have not seen them go above 32-35 degrees... plus there's the fact that the warning email was never sent. It seems like this overtemp shutdown script is having some kind of issue. Has anyone else had this problem, or any other problems related to this overtemp_shutdown script? Any recommendations on what I should do? I'm thinking of just disabling the package, but I'd like to have something in place to monitor temps... and the way it's working now doesn't make sense to me. Maybe there are alternatives for monitoring temperature? Appreciate any help you can give. Thanx. -Dave
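For a second opinion on what the drives are actually reporting, the temperature can be read straight from SMART, independent of the overtemp package. A sketch, assuming smartctl is available (the preclear script itself relies on it) and that the drive of interest is `/dev/sda`:

```bash
# SMART attribute 194 is the drive's current temperature in Celsius
smartctl -A /dev/sda | grep -i temperature
# Poll every 5 minutes during a preclear to log what the drive really reports
watch -n 300 'smartctl -A /dev/sda | grep -i temperature'
```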
  18. I have a Supermicro drive cage that I'm selling. I've only been using it for a month and it is in perfect working order and mint condition. All parts will be included except for the plastic dummy drives. I got a good deal on some Norco SS-500 drive cages, so I'm filling my case with those instead. I'm in Canada and would like to sell to someone within Canada in order to minimize shipping cost. I checked with Canada Post and it looks like the shipping cost would be ~$15 for regular parcel, or ~$35 for Xpresspost. Please PM me if you're interested. Thanx. Product link: http://www.ncix.com/products/?sku=30786&vpn=CSE-M35T-1B&manufacture=SuperMicro -Dave
  19. Hi folks. I recently purchased the Supermicro CSE-M35T-1 5-in-3 drive cage, and for the most part I'm quite happy with it. One thing I've noticed is that the HDD LEDs only turn on when there is activity for a given drive. Wondering if I should be seeing any other sort of indicators with the LEDs? Is there supposed to be a power indicator (when the drive is not reading/writing)? Would I get more use out of the LEDs if I had been able to use the LED cable that comes with the drive cage? (I couldn't use it because my mobo does not have connections for it.) I'm still not sure what exactly it provides anyway. I couldn't find anything in the manual about when the LEDs should light up. I know the Icy Dock drive cage shows green for power, amber for HDD activity, and red for drive failure. I was hoping I could get similar functionality out of the Supermicro cage. Anyone know more about this? Thanx. -DaViS
  20. Good deal on WD20EARS @ NCIX Canada this week ($74.99): http://ncix.com/products/index.php?sku=49591&vpn=WD20EARS&manufacture=Western%20Digital%20WD&promoid=1281
  21. It seems that running the reiserfsck check with the --rebuild-tree option has fixed the issues I was having with disk1. I have no idea what got my drive into this state. Maybe it is related to the errors I had on the drive (i.e. the current_pending_sector error) when I first ran into problems. The drive cage and SATA cable were fine in the end. Anyway, things are back to working now. Let's hope they stay that way. Will mark this thread as [SOLVED]. Huge thanx to Joe L. for all the help on this!
  22. I've been a bit delayed in getting back to this. I was busy at the hospital taking care of our new baby boy, born Feb. 14. It appears my previous assumptions here were not correct. I had originally thought there was an issue with either the SATA cable or the drive cage, but after running multiple parity checks with/without the drive cage etc., there were no errors. However, I've discovered some new issues... Now that my server is back online, some files/directories are missing. If I telnet into the unRAID server and list those directories, I can see the files/subdirs are there, but for some of them I'm getting a 'permission denied' message. I'm not able to write to these folders/files through the shares either. I read some other posts in the forum where it was suggested to run a reiserfsck check on the drive hosting the problem files. I've done so and received some errors suggesting to run a 'rebuild-tree' operation on the drive. The instruction guide for the reiserfsck check says not to do this unless directed to by the unRAID support crew. Syslog and reiserfsck output are attached. Can someone tell me what I should do here? Thanx very much. -DaViS reiserfsck.txt syslog-2011-02-17.txt
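For context, the sequence being discussed runs against the md device while the filesystem is not mounted (the `/dev/md1` below assumes the problem files live on disk1, as in these posts):

```bash
# Read-only pass: reports corruption and states whether a rebuild is needed
reiserfsck --check /dev/md1
# Destructive repair -- run it only when --check explicitly recommends it,
# and ideally with the drive backed up or imaged first:
reiserfsck --rebuild-tree /dev/md1
```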
  23. I've done some more testing and I think I've found the source of the problem. After re-seating the SATA/power connections, I ran a parity check, as you suggested, Joe L. The parity check completed with 87 errors. So, to try to isolate the issue, I removed the drive that was showing errors from my somewhat-new Supermicro 5-in-3 drive cage and hooked it up to the same mobo SATA slot directly, using a different SATA cable. Then I ran the parity check again. This time it completed without errors, and I was also able to copy files to the problem drive without errors... which was not possible before. So I now know the problem is either the drive cage or the SATA cable... not the HD. I'm going to put the drive back into the drive cage tonight, but with the newer SATA cable. Running a parity check in that configuration should tell me whether the problem is the drive cage or not: if I get errors, I know for sure it's the drive cage; if no errors, then I know it was the older SATA cable. I'll post back with my results. -DaViS
  24. Sounds good. I'll check some connections and then start the array to build new parity. Thanx for all the help, Joe L. I'll let ya know how I make out with this. -DaViS