davis999

Members
  • Posts: 43

  1. Thanks for the reply @johnnie.black. I guess the thing that trips me up is that the 'done' messages from the script were never output. I looked in the syslog and don't see any sign of them there either; it seems like the script just died after the 'dd' command completed. Anyway, given that 'dd' completed, I guess I can assume it did what needed to be done, since I see no other meaningful commands after that point. I'll try this with the other drives I plan to remove and see how they go. By the way, I hadn't looked at the 'done' messages in the script until after I sent my first message. It specifically calls out that a "no space left on device" error may occur and that it is not an issue (i.e. it is expected).
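     For what it's worth, here's a rough, untested sketch of how I figure the result could be double-checked before pulling the drive (the /dev/md3 device is just the one from my case, and it only exists while the array is started, so adjust as needed):

        # Spot-check the first 16 MB of the cleared disk; all-zero data collapses to one line plus '*'
        dd if=/dev/md3 bs=1M count=16 2>/dev/null | od -Ax -x | head
        # Full (very slow) check: compare the whole device against /dev/zero.
        # "cmp: EOF on /dev/md3" with no "differ" line means every byte read back as zero.
        cmp /dev/md3 /dev/zero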
  2. Hi folks. I was attempting to run the "clear an array drive" script in preparation for removing some smaller drives. I marked one 2TB drive with a `clear-me` folder and launched the script. I checked its status the next day and a "No space left on device" error is shown in the script output. Has anyone seen this before with this script? I'm not clear on what it means, but I'm assuming the script was not successful. Here is the full script output:

        Script location: /tmp/user.scripts/tmpScripts/clear an array drive/script
        Note that closing this window will abort the execution of this script
        *** Clear an unRAID array data drive *** v1.4
        Checking all array data drives (may need to spin them up) ...
        Found a marked and empty drive to clear: Disk 3 ( /mnt/disk3 )
        * Disk 3 will be unmounted first.
        * Then zeroes will be written to the entire drive.
        * Parity will be preserved throughout.
        * Clearing while updating Parity takes a VERY long time!
        * The progress of the clearing will not be visible until it's done!
        * When complete, Disk 3 will be ready for removal from array.
        * Commands to be executed:
        ***** umount /mnt/disk3
        ***** dd bs=1M if=/dev/zero of=/dev/md3 status=progress
        You have 60 seconds to cancel this script (click the red X, top right)
        Unmounting Disk 3 ...
        Clearing Disk 3 ...
        2000357949440 bytes (2.0 TB, 1.8 TiB) copied, 36837 s, 54.3 MB/s
        dd: error writing '/dev/md3': No space left on device
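     In case it helps anyone searching later: as far as I can tell, the "No space left on device" line is just what dd prints when it runs off the end of the device, i.e. the clear reached the last byte. A harmless way to see the same message, as a rough sketch against a throwaway loop device rather than a real drive (needs root; the file name and size are arbitrary examples):

        truncate -s 100M /tmp/fakedrive.img              # 100 MB sparse file standing in for a drive
        LOOPDEV=$(losetup -f --show /tmp/fakedrive.img)  # attach it as a loop device, e.g. /dev/loop9
        # Fill the "drive" with zeros; once it is full, dd reports the familiar
        # "dd: error writing '...': No space left on device" and exits.
        dd bs=1M if=/dev/zero of="$LOOPDEV" status=progress
        losetup -d "$LOOPDEV" && rm /tmp/fakedrive.img   # clean up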
  3. Ahhh okay. Thanks very much @Squid. That seems to have worked.
  4. Hi folks. I've been having an issue with the Fix Common Problems plugin. I recently upgraded my server (replaced everything but the data drives), and after doing that I installed this plugin to clean up any issues. A few times I've been able to run it successfully and it's given me some feedback, but most of the time the "Scanning" dialog appears and never goes away. I noticed this in the logs, which I'm assuming is the culprit:

        May 4 10:28:04 chunk nginx: 2019/05/04 10:28:04 [error] 9719#9719: *4080148 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.60, server: , request: "POST /plugins/fix.common.problems/include/fixExec.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.0.20", referrer: "http://192.168.0.20/Settings/FixProblems"

     Aside from this issue, my server appears to be working great. I'm not really sure how best to troubleshoot this, so I've attached diagnostics. Can anyone give me any pointers? Thanks. chunk-diagnostics-20190504-1044.zip
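     In case it's useful context, here's roughly how I've been spotting the timeouts. This is just a sketch: the log path is the stock unRAID syslog, and the 60-second figure is nginx's default fastcgi read timeout, not something I've confirmed from the plugin itself.

        # Pull out the most recent upstream timeouts logged by nginx
        grep "upstream timed out" /var/log/syslog | tail -n 5
        # nginx gives up on the request after its default 60s fastcgi read timeout,
        # so any scan step that runs longer than that shows up exactly like this,
        # even if the PHP process behind it is still working.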
  5. Thanks @witalit. At any given time I can have up to 6-8 concurrent streams on my Plex instance, but even then they're not necessarily all transcoding; some would be direct play. That said, I will be adding more load to it as time goes on. VMs are next on the list. Hopefully this build will last me a long while, as my previous box did.
  6. For anyone who is curious... I've had my new system up and running for about a month now, and so far I'm loving it. The hardware is pretty much exactly as I stated above. Migrating the data drives over was very smooth, and so far it's taken everything I've thrown at it with ease. I used to run Plex on a separate box, but I migrated it to a Docker container on my unRAID box. In addition, I've also got Docker containers for nzbget, radarr, sonarr, openvpn, duckdns, etc. The system is barely taxed by all of these things running; Plex is the main user, depending on how many concurrent streams are in use. I'm planning on possibly moving a Windows VM over to this box as well, so we'll see how that goes. If anyone has any questions, feel free to reply and I'll answer as soon as I can.
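     If anyone wants a feel for what the Plex container involves, it boils down to roughly the following. This is a hand-written approximation rather than my actual template -- the image name, paths, and timezone are just examples, and on unRAID you'd normally set all of this up through the Community Applications template instead of the command line.

        # Host networking so local discovery works; /config holds the Plex metadata,
        # and the media share can be mounted wherever you like inside the container.
        docker run -d \
          --name=plex \
          --net=host \
          -e TZ="America/Toronto" \
          -v /mnt/user/appdata/plex:/config \
          -v /mnt/user/Media:/data \
          plexinc/pms-docker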
  7. Well, I decided to pull the trigger on this build. My current unRAID box won't even boot anymore, so I couldn't wait any longer. I've done enough research that I'm confident this should work out well for my needs. It's going to be a few weeks before I have all the parts. I'll report back here when it's all up and running.
  8. Hey everyone,

     So, the time has finally come for me to upgrade my unRAID box. Back in 2011 I built my first unRAID box using the budget-build recommendations of the time, with the following main components:

       • Biostar A760G M2+
       • AMD Sempron 140
       • 2 GB RAM
       • Supermicro AOC-SASLP-MV8
       • 15 x mixed hard drives ranging from 3TB to 8TB
       • 2 x 120 GB SSDs for the cache pool

     After switching to dual parity about a year ago, and with recent versions of unRAID, the performance of my server has gone downhill. I was typically seeing sub-70MB/s write speeds, and parity check averages were around the same. I also have Plex installed on a separate box; when a parity check is running I can't even watch anything from the unRAID box because performance degrades so much, and that lasts for about 40 hours. To get some extra life out of the box I tried upgrading to an AMD Phenom II X4 955 and 8GB of RAM. Strangely, my speeds got worse. That was a couple of months ago. Now my server is crashing frequently and it's not worth trying to bring it back with the existing hardware. Time for an overhaul.

     With that said, here's what I'm thinking. Previously my unRAID box just served as a NAS. This time I'm thinking I might want to move Plex into a Docker container on the unRAID box, and I also want this box to last a good while again. Knowing that, I've tentatively decided on the following main components:

       • Rosewill RSV-L4500
       • ASRock EP2C602-4L/D16 SSI EEB
       • Dual Intel Xeon 2690 v2
       • 64GB (8 x 8GB) DDR3 1600MHz ECC REG
       • EVGA SuperNOVA 1000 G3, 80 Plus Gold, 1000W
       • LSI SAS9300-8i (IT mode) (maybe 2 of these, since SAS3 expander boards seem ridiculously expensive)
       • Asus Hyper M.2 x16 NVMe expansion card
       • 2 x Crucial P1 1TB 3D NAND NVMe PCIe M.2 SSD
       • 2 x Noctua NH-D9DX i4 3U CPU coolers
       • 3 x iStarUSA BPN-DE350SS trayless 5-in-3 drive cages

     As for HDDs, I have a couple of 8TB Ironwolf drives for dual parity, and the rest are a mixture of 3TB-8TB drives (green, blue, red, black, etc.) which I will transfer over to the new build as-is (data intact, hopefully). The main things I'm looking for with this build are:

       1. Faster parity checks (ideally at least ~100-120MB/s) and write speeds
       2. The ability to run Plex in a Docker container
       3. Being able to watch Plex during parity checks without significant degradation (assuming a small number of concurrent streams)

     Looking for feedback on whether this will be sufficient for my needs. Is any of it overkill? Are my goals even realistic? Are there things I'm not considering that I should be? Thanks everyone. Really appreciate any and all feedback.
  9. Alright, so here's the latest... I set up a new flash drive with the latest 6.6.6 release and transferred my backed-up config files, registration key, parity logs, etc. over to it. I booted up my server and was able to access the web interface; all my config was retained, including shares, drive assignments, etc. Prior to starting the array, I went through the process of replacing my registration key using Tools --> Registration. After completing the registration update, I started the array and it's up and running. A parity check was automatically kicked off. I'm not sure exactly what triggered the parity check, but it makes sense to let it finish just to be sure all is well. Going to mark this as resolved. Thanks again for the help @itimpi, @jonathanm, @johnnie.black. Much appreciated. You can't beat the great support of the unRAID community!
  10. No, I don't have the array set to auto-start. That sounds like a dangerous setting if enabled. The fact that my flash drive is seemingly not in good shape makes me wary of using the files in the "previous" folder. So I just tried something and managed to get the server to boot up successfully. I ran chkdsk against the flash drive to correct any errors; the files that chkdsk flagged were mainly files in the "previous" folder. I replaced all of the zero-size files with fresh copies taken from the 6.6.6 release downloaded from the unRAID site, and I restored network.cfg from the 6.6.5 backup that I found. All of the other config files appear to be readable on the flash drive. With that, I attempted to boot up the server and was able to get into the web interface. I noticed that some settings were lost, mainly the cache drive assignments (not a big concern), and the server name had reverted to 'tower'. The data disk assignments were correct. I did not start the array, though. I think it's best if I move my flash drive contents over to a new flash drive before I attempt to get the array back up and running. Does anyone have any concerns with doing that (moving to a new flash drive)?
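     For anyone hitting the same thing, the restore step was basically just copying fresh files over the truncated ones. Rough sketch from memory, with placeholder paths (the release folder is an extracted copy of the zip from the unRAID site):

        # On another Linux machine with the flash drive mounted:
        cd /path/to/extracted-6.6.6-release
        cp bz* /path/to/mounted-flash/                # overwrite the zero-length bz files
        cp /path/to/6.6.5-backup/config/network.cfg /path/to/mounted-flash/config/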
  11. I just took a look at the contents of the flash drive on another machine, and something is definitely wrong. A bunch of the files are showing zero size, including some of the unRAID "bz" files. I'm wondering if my flash drive has become corrupted. In other news, I did find a backup of my flash drive from 6.6.5, so I think I'll try transferring that to a new flash drive and booting up. The only downside with this backup is that I made a drive change after it was taken, but I'm assuming I should be able to re-assign the drives and mark parity as trusted to get back up and running. The only other thing I'm unclear on is the unRAID license key. I understand it is linked to the flash drive's GUID. I think I read that if I boot up with a different flash drive and the same key, unRAID will ask me to migrate the key for use with the new flash drive, or something like that? Can anyone confirm?
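     For reference, this is roughly how I spotted them (the mount point is just an example for wherever the flash ends up on the other machine):

        # List every zero-length file on the flash drive
        find /mnt/flash -type f -size 0 -print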
  12. Just gave that a try, @itimpi, but no luck. I've tried all of the safe/GUI mode options and they all cause a reboot. Appreciate the suggestion though.
  13. Hi folks. I just ran an unRAID OS update (6.6.5 --> 6.6.6), and now my server is stuck in an infinite reboot loop. Prior to this it was running strong. The server gets to the unRAID splash screen and then launches unRAID; I see two lines that state "Loading /bzimage... ok" and "Loading /bzroot... ok", then it reboots. Sadly I don't have a backup of the flash drive from before the update. I mistakenly thought it would be a safe update since it's a minor version increment. I should also mention that before I ran the unRAID OS update, I updated a few plugins. I did try a safe mode launch of unRAID, but it has the same issue. I'm not sure what I can do now. Does anyone have any suggestions? I'm afraid I'm in a bad place here. Thanks for any help. -Dave
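     One idea I'm considering, in case it helps anyone in the same spot: pull the flash drive, mount it on another machine, and compare the bzimage and bzroot files against a fresh download of the same release. Rough sketch only, with placeholder paths:

        md5sum /path/to/mounted-flash/bzimage /path/to/mounted-flash/bzroot
        md5sum /path/to/extracted-6.6.6-release/bzimage /path/to/extracted-6.6.6-release/bzroot
        # If the sums don't match, the copies on the flash are damaged and can be
        # re-copied from the extracted release before trying to boot again.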
  14. Happy to report that this approach worked. I powered down, put the old drive back in, and started up the array with 'Parity is already valid' checked. That got me back to my previous state. Then I updated to the latest unRAID (6.6.5) and did the replacement procedure again. This time, after powering up with the new drive, my previous config was retained and I was able to mark the new drive as the replacement and start the array with a data rebuild. All is well. Thanks again @johnnie.black.
  15. Yah, I'll give that a try and report back here. Thanks for the confirmation on that option, @johnnie.black 👍