optiman

Members · Posts: 1132

Everything posted by optiman

  1. I upgraded from 6.3.5 with no issues so far. My NTP is pointed to pool.ntp.org. What should I change it to?
  2. Solved! The server has been up for 48 days without a single issue. I will assume the fs conversion did the trick, and I also noticed an increase in performance: file transfers and even deleting files are faster now. Today I upgraded to 6.5 and had to reboot, and everything still looks good. Thanks again for the help, guys!
  3. Wow, it's been a while since I've read this thread, and I'm still on 6.3.5. After reading the past few pages, I have removed the Preclear plugin and then upgraded to 6.4.1. It seems I can just use the Seagate disk tool to confirm the drives are good, and then just add them to my array. I don't see a need to even bother with the script at this point. I think things got simpler.
  4. Yes I did, in the Sickrage docker thread, but nobody replied so no help there. Is there a better thread I should try?
  5. Can anyone point me in the right direction on where to get help with the script errors?
  6. Wow, no help here. Can anyone point me in the right direction, please?
  7. Help please. I am trying to set up an Unraid share/docker and point my HDHomeRun tuner at it for DVR server storage. I installed the docker on my Unraid 6.3.5 server, using the template from post 1. I created a share called hdhomerundvr, shared it via SMB, and made it public access. When the docker installed, it only had one path, and that was for the config or main files. I added an hdhomerundvr path and pointed it to the share I just created. Do I need any other variables? There are currently no ports or IP addresses showing in the config section, which is not normal compared to the other dockers, which all have port and IP information. This makes me think the install did not go as planned. I have the HDHomeRun Extend and I've purchased the DVR service. I've used the Windows version to update the firmware to 20180110. I tried to configure the DVR tab by selecting NAS. The list that comes up does not show any shares or the Unraid server; it shows my Yamaha receiver and a few other items that just have long numbers for names and a question mark, and I'm not sure what those are. I can also go directly to the IP address and see the tuner menu. Under Internet Configuration, it shows Not Detected, none, not tested. The tuner has a valid IP via DHCP, so it should be able to communicate with the internet just fine. What should I do about this part? Are there more detailed instructions on how to configure this docker/Unraid setup and how to set up the DVR? Thanks!
  8. I've had several HGST drives and I've never had any issues with them.
  9. The fs conversion completed; it took 13 days. Everything went as planned, even the last part to remove the extra drive. The only issue was that when I ran a parity check at the very end, my 2nd parity disk was out of sync, meaning I had a bunch of parity errors for parity disk 2 only. All good after the parity check, and I have verified I lost no files. Extra info for those interested: leaving read-only access via SMB shares was fine and did not cause any issues. I set the split level to auto (split as the system wants), and I turned off disk shares, since I don't use them. I will run the server for a month, maybe two, and if no errors return, I will assume this was indeed related to the older file system and that doing the xfs conversion fixed it. If there are no issues, I will mark this thread resolved. Thanks again for the support!
  10. I like the Seagate HDD ST4000NM0035 4TB SATA III 6Gb/s Enterprise 7200RPM 128MB 3.5 inch 512n drives. I have purchased the 6TB and 8TB of this version, but not the 4TB one. From the Seagate lineup, these are supposed to be the fastest. That said, they just released a newer version that costs around $270. These are supposed to be more reliable and faster than the desktop line. https://www.amazon.com/Seagate-ST4000NM0035-Enterprise-7200RPM-128MB/dp/B01FRC1GRQ/ref=sr_1_1_sspa?s=electronics&ie=UTF8&qid=1515865734&sr=1-1-spons&keywords=ST4000NM0035&psc=1
  11. First disk is complete. Holy crap, you were not kidding; it took about 30 hours to copy 4.9TB. Here is the summary from the end of the output file: sent 4,881,463,720,150 bytes received 1,625,786 bytes 48,715,517.38 bytes/sec total size is 4,880,266,563,432 speedup is 1.00. Verify info: sending incremental file list sent 3,680,474 bytes received 7,142 bytes 63.49 bytes/sec total size is 4,880,266,563,432 speedup is 1,323,420.49 (DRY RUN). No errors, and it looks like all the files are there. I am confused by the end report showing bytes "received"? The bytes sent is correct, however. I went ahead and stopped the array, changed disk1 to xfs, started, formatted, and now I'm running rsync again for the next drive. This will take over a week at this rate, but at least I'm on my way. I have a couple of other questions for you guys, and I'm just curious what you are running. For SMB user shares, what split level do you guys like? I have a mix of Top level only, none, and then split all you want. Of course this means my data is spread across all drives. I don't mind splitting it, but it seems it might be good to limit it to maybe just 3 upper directories, or even Top level. Thanks in advance for sharing. Oh, last question: do you guys leave disk shares enabled? The default permission is Public, so no security. I just realized mine was set this way. All this time I thought I had my server locked down; meanwhile the disk shares are wide open. I want to just disable disk shares after I'm done with this conversion, since I only connect to user shares. Thanks again! Edit: I forgot to ask about leaving SMB shares active during the copy process. I have left SMB shares enabled, yet the only access to any files would be movie players with read-only access. Is it an issue to allow read-only SMB access during the copy process, or should I just disable SMB shares until done?
  12. Got it, thanks again. I'm ready to kick this off. I'll report back once finished. Cheers!
  13. Thanks for the info and link. I only want to remove the old disk8 (which will still have all the files on it, as I only COPY the data). I don't care about zeroing it unless that is necessary. I can just delete the files, or perhaps just reformat disk8 to clear it. My goal is to remove disk8 and keep parity. I will just preclear the old disk8 later on and save it as a spare. In the shrink-array instructions, why do I need steps 7 and 8? If I have already deleted all files, the drive should be empty and parity good.
     • Make sure that the drive you are removing has been removed from any inclusions or exclusions for all shares, including in the global share settings.
     • Make sure the array is started, with the drive assigned and mounted.
     • Make sure you have a copy of your array assignments, especially the parity drive. You may need this list if the "Retain current configuration" option doesn't work correctly.
     • It is highly recommended to turn on reconstruct write as the write method (sometimes called 'Turbo write'). With it on, the script can run 2 to 3 times as fast, saving hours! In Settings -> Disk Settings, change Tunable (md_write_method) to reconstruct write.
     • Make sure ALL data has been copied off the drive; the drive MUST be completely empty for the clearing script to work. Double check that there are no files or folders left on the drive. Note: one quick way to clean a drive is to reformat it (once you're sure nothing of importance is left, of course!).
     • Create a single folder on the drive with the name clear-me - exactly 7 lowercase letters and one hyphen.
     • Run the clear an array drive script from the User Scripts plugin (or run it standalone, at a command prompt). If you prepared the drive correctly, it will completely and safely zero out the drive. If you didn't prepare the drive correctly, the script will refuse to run, in order to avoid any chance of data loss.
     • If the script refuses to run, indicating it did not find a marked and empty drive, then very likely there are still files on your drive. Check for hidden files. ALL files must be removed!
     • Clearing takes a long time! Progress info will be displayed in 6.2 or later; prior to 6.2, nothing will show until it finishes. If running in User Scripts, the browser tab will hang for the entire clearing process.
     • While the script is running, the Main screen may show invalid numbers for the drive; ignore them. Important: do not try to access the drive at all!
     • When the clearing is complete, stop the array.
     • Go to Tools, then New Config. Click on the Retain current configuration box (it says None at first), click on the box for All, then click Close.
     • Click the box for Yes I want to do this, then click Apply, then Done.
     • Return to the Main page and check all assignments. If any are missing, correct them. Unassign the drive(s) you are removing. Double check all of the assignments, especially the parity drive(s)!
     • Click the check box for Parity is already valid; make sure it is checked!
     • Start the array: click the Start button, then the Proceed button on the warning popup.
     • Parity should still be valid; however, it's highly recommended to do a parity check.
     Thanks again for your help. I'm trying to do good planning so I don't f*ck this up.
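The "completely empty, then create the clear-me marker" preparation above can be sanity-checked from the console before running the clearing script. This is just a sketch; the function name prepare_clear is mine, not part of the User Scripts plugin, and it only checks emptiness and creates the marker folder:

```shell
#!/bin/sh
# Sketch: verify a drive's mount point is truly empty (hidden files included)
# and create the clear-me marker the clearing script looks for.
# prepare_clear is an illustrative name, not part of any official script.
prepare_clear() {
    disk="$1"
    # find lists anything left on the drive, dotfiles included
    leftovers=$(find "$disk" -mindepth 1 | head -n 5)
    if [ -n "$leftovers" ]; then
        echo "NOT empty, the clearing script will refuse to run:"
        echo "$leftovers"
        return 1
    fi
    # marker folder: exactly 'clear-me' (7 lowercase letters and one hyphen)
    mkdir "$disk/clear-me"
    echo "marker created"
}

# On the real server this would be: prepare_clear /mnt/disk8
```

The `-mindepth 1` is what catches hidden files that a casual `ls` would miss, which is the usual reason the clearing script refuses to run.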
  14. Great, thanks guys! Kids are watching a movie right now, but I will reboot into Safe mode later today and start the conversion. I think I will just erase or clear the contents of disk8 at the end to keep parity. For the New Config - It's been a while since I did that. I stop the array, remove disk8, move disk 9 to slot 8, then do the New Config and tick the box to keep existing configuration, then start array - is that correct?
  15. OK good, so I can leave disk9 as is at the end, even though I won't have a disk8. What about my dual parity drives? Is that an issue? I guess the rest of my instructions/plan look ok?
  16. I have a new drive ready to go, and I have created my plan using the feedback from you guys. That said, before I start, I am sharing my plan so you can tell me if I am missing anything. I do have a couple of questions. I have dual parity drives. Do I need to do anything extra, like remove the 2nd parity drive before I start and put it back at the end? I have 8 data drives to convert, all of them 6TB. I have added an 8TB drive (disk9), which will remain in the array when the conversion is completed. This means I will have an extra 6TB drive at the end of this procedure, so the very last step for me is to remove the empty disk8 drive. I don't have any more room in my case to add another data drive, so one must be removed at the end of this process. Because I am on Unraid 6.3.5, I am using the instructions (with help from this thread) at https://lime-technology.com/wiki/File_System_Conversion
     Share based, no inclusions, preserving parity
     *Use physical server console
     *No New Config needed for this process
     *Boot into SAFE mode, no plugins and no dockers running, no mover
     1. Start copying from the console - from disk1 to the new XFS disk9:
        rsync -arv /mnt/disk1/ /mnt/disk9 > /boot/copy.txt
     2. After the copy completes, run the verify and check the output file (if successful, there should be no files listed):
        rsync -narcv /mnt/disk1/ /mnt/disk9 > /boot/verify.txt
     3. Stop the array, change the now-empty disk1 to the XFS format type, start the array, verify only the empty disk wants formatting, and format the disk.
     4. COPY (not move) the next reiserfs disk to the freshly formatted XFS disk, verify the copy, stop the array, change the desired format type of the disk you just copied from, start the array, and verify only one disk wants formatting. Repeat until all drives are converted.
     Copy plan: disk1 to disk9, disk2 to disk1, disk3 to disk2, disk4 to disk3, disk5 to disk4, disk6 to disk5, disk7 to disk6, disk8 to disk7. disk8 is now empty and can be removed from the array.
     After removing the old 6TB disk8, do I need to move disk9 to the disk8 slot, or does it matter? Thanks!
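One round of the copy-then-verify pair from the plan above can be wrapped in a small helper so every disk gets the exact same treatment. A sketch only: the function name convert_pass and the log-directory argument are my own; on the real server the logs would go to /boot as in the plan:

```shell
#!/bin/sh
# Sketch of one conversion round: COPY (not move) a reiserfs source disk
# to the freshly formatted XFS disk, then checksum-verify with a dry run.
# convert_pass and the log argument are illustrative, not from the wiki.
convert_pass() {
    src="$1"   # e.g. /mnt/disk1 (reiserfs disk being emptied)
    dst="$2"   # e.g. /mnt/disk9 (new XFS disk)
    log="$3"   # e.g. /boot

    # -a archive (preserve attrs), -r recursive, -v verbose; log the copy
    rsync -arv "$src/" "$dst" > "$log/copy.txt"

    # -n dry run with -c checksum compare: a clean pass lists no files
    rsync -narcv "$src/" "$dst" > "$log/verify.txt"
}

# Real usage per the plan: convert_pass /mnt/disk1 /mnt/disk9 /boot
```

The trailing slash on `"$src/"` matters: it copies the *contents* of the source disk into the destination rather than nesting a disk1 folder inside it.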
  17. thanks for the additional info. I'll get started as soon as my new drive is precleared.
  18. OK, I will use rsync. I'll boot into SAFE mode and run the two rsync commands at the console. Thanks for the rsync commands, jonathanm! So: run the first rsync command to copy; when that one finishes, run the second rsync command to verify and confirm the output file has no files listed. Preclearing the new drive now. Thanks!
  19. I have no share inclusions or exclusions, and I don't care where the data ends up. All 8 of my data drives are 6TB. I have a spare 8TB that I want to use for this process, and it can remain in my array afterwards; I will just have a spare 6TB at the end of the conversion. In your steps, would I still need to do the New Config part? I read it's best to boot into SAFE mode, so no plugins, no docker, no Mover, etc. I don't care if my server is offline the entire time; I'm more interested in the fastest process that keeps parity. I was planning to log into my server console and use MC to "copy" and verify, instead of the command-line option. As long as I'm copying from disk share to disk share, this process should work. Do you agree this is ok, or should I reconsider using unbalance or the command-line options?
  20. Thanks guys, point well made. I just wanted to be sure there wasn't another issue going on here that I should address first. I have new drives to preclear, and I'll start working on the plan to convert. I assume the process hasn't changed much from last year, unless somebody has created an even easier one. That is why I deferred this task, as it seemed it would take a long time. I guess it's time to just get 'er done. Thanks!
  21. Thanks for the replies, guys. As pwm points out, I too had not had any reiserfs issues until the recent release updates. Does the htop information agree that this is the cause? How can I confirm my issue is for sure connected to reiserfs, and that converting my drives will solve it? Thanks again and cheers!
  22. I have been running into an issue where, for some reason, my server becomes unresponsive. This can happen at any time, it seems, with no logic as to what is causing it. When this happens, I'm unable to do much at all. I have not made any recent changes to hardware or apps (plugins and dockers). All of that said, I have a suspicion that the Mover may be involved; it was running almost every time my server became unresponsive. I'm running Unraid Pro 6.3.5; my signature has my current hardware specs. I only run two Docker containers, sabnzbd and sickrage. For plugins, I have attached a screenshot showing the plugins installed. This issue started back with Unraid 6.2 and happens every 2 or 3 weeks. The webgui dashboard shows that my CPU and memory are barely being used at all during normal operations. I also have green balls on all drives, and the syslog doesn't show anything that helps me identify the cause. When this occurs, the following happens:
      - SMB shares become unresponsive, no access
      - Webgui unresponsive, even on the server console
      - Command-line commands do not execute; powerdown, shutdown, etc. will not run
      - CPU goes to 100%
      - A cold restart is the only way to shut down or reboot the server. This causes a parity check to start, which in most cases finds zero errors.
      - If a manual copy is in process (or the Mover is running), the active or current directory becomes inaccessible. The only way I can get back into that directory is to use "chmod -R 777".
      - HTOP shows a process that is eating up the CPU, something about "shfs error noatime.big_writes.allow only". I've had no luck searching the forums. I am unable to KILL this process.
      What still works when this happens:
      - I can putty or use the server console to log in to the command line, although I can't do much.
      - I can copy the syslog to the flash drive.
      - I can log out and log back in to the server console, but the webgui will not come up; it just hangs.
      - Docker containers continue to run normally.
      So far I have used the server console to view HTOP, and that is how I know the CPU is at 100%. It shows a process that I do not recognize or understand; see the attached screenshot of HTOP. I cannot KILL the process, not with htop and not from the command line; the system simply ignores the commands. I also tried to just leave the server alone and see if it would recover - no luck. After 3 days it was still screwed up and unresponsive. I'm still using reiserfs (I need to convert this when I have time), so I ran file system checks and reviewed SMART data on all drives - no errors. For my cache pool (btrfs) I ran the SCRUB command. I also ran memtest - no issues. After a fresh reboot I'm able to run tower-diagnostics, and I can PM that and my syslog files to LT support if it helps. This WILL happen again, and I'm concerned with having to do a COLD restart. Because I'm unable to KILL the process that is pushing the CPU to 100%, there's no option but to turn off the power each time this happens - NOT GOOD. My main question is: what can I do the next time this happens to help identify what is causing it? Please help if you can. Thanks!
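Since SSH and the console still accept a login while the box is hung, one answer to "what can I do next time" is to keep a small capture script on the flash drive and run it before the forced power-off. This is a sketch under my own assumptions: the function name capture_hang and the output file names are made up, and it uses only standard Linux tools (ps, top, cp):

```shell
#!/bin/sh
# Sketch: snapshot the runaway process and the syslog to the flash drive
# before a cold restart wipes them. Names and paths here are illustrative.
capture_hang() {
    out="$1"   # e.g. /boot/hang-$(date +%Y%m%d-%H%M%S)
    mkdir -p "$out"

    # top CPU consumers first; the 100% shfs process should appear here
    ps aux --sort=-%cpu | head -n 15 > "$out/ps-cpu.txt"

    # one non-interactive top iteration for load/memory context
    top -b -n 1 > "$out/top.txt"

    # preserve the syslog on flash; ignore errors if it is unreadable
    cp /var/log/syslog "$out/syslog.txt" 2>/dev/null || true
}

# On the server: capture_hang "/boot/hang-$(date +%Y%m%d-%H%M%S)"
```

Writing to /boot means the evidence survives the cold restart, which is exactly the problem with reading htop on screen and then powering off.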
  23. I have both dockers up and running now, sabnzbd and sickrage (binhex-sickrage). I have installed the nzbtomedia script and configured it from scratch - really started over. Now everything is working again, except that I get a script error in sabnzbd, which means it shows the download as failed. However, the download has not failed, and sickrage finds the files and even moves them into my main show directory. The other thing I noticed is that the downloads that actually fail are never added to the Sickrage Manage Failed Downloads section. Here is a nzbtomedia log for a download that worked just fine, where the show was fully added without any issues; however, sabnzbd shows it as a failed download on the main screen.
      [14:29:19] [INFO]::MAIN: Loading config from [/nzbToMedia/autoProcessMedia.cfg]
      [14:29:19] [INFO]::MAIN: Checking database structure...
      [14:29:19] [INFO]::MAIN: Checking if source needs an update
      [14:29:19] [ERROR]::MAIN: Unknown current version number, don't know if we should update or not
      [14:29:19] [INFO]::MAIN: nzbToMedia Version:11.03 Branch:master (Linux 4.9.30-unRAID)
      [14:29:19] [INFO]::MAIN: #########################################################
      [14:29:19] [INFO]::MAIN: ## ..::[nzbToMedia.pyc]::.. ##
      [14:29:19] [INFO]::MAIN: #########################################################
      [14:29:19] [INFO]::MAIN: Script triggered from SABnzbd
      [14:29:19] [INFO]::MAIN: Auto-detected SECTION:SickBeard
      [14:29:19] [INFO]::MAIN: Calling SickBeard:tv to post-process:Outlander.S03E03.All.Debts.Paid.1080p.AMZN.WEBRip.DDP5.1.x264-NTb-postbot.nzb
      [14:29:19] [INFO]::MAIN: SickBeard:tv fork set to sickrage
      [14:29:19] [INFO]::MAIN: FLATTEN: Flattening directory: /downloads/complete/tv/Outlander.S03E03.All.Debts.Paid.1080p.AMZN.WEBRip.DDP5.1.x264-NTb-postbot
      [14:29:20] [INFO]::TRANSCODER: Checking [Outlander.S03E03.All.Debts.Paid.1080p.AMZN.WEBRip.DDP5.1.x264-NTb-postbot.mkv] for corruption, please stand by ...
      [14:29:20] [INFO]::TRANSCODER: SUCCESS: [Outlander.S03E03.All.Debts.Paid.1080p.AMZN.WEBRip.DDP5.1.x264-NTb-postbot.mkv] has no corruption.
      [14:29:20] [POSTPROCESS]::SICKBEARD: SUCCESS: The download succeeded, sending a post-process request
      [14:29:22] [POSTPROCESS]::SICKBEARD: Unable to figure out what folder to process. If your downloader and SickRage aren't on the same PC make sure you fill out your TV download dir in the config.
      [14:29:22] [ERROR]::MAIN: A problem was reported in the /nzbToMedia/nzbToSickBeard.py script. SickBeard: Failed to post-process - Returned log from SickBeard was not as expected.!
      I don't know what the error means, or why I'm now getting the "Unable to figure out what folder to process" message. Both dockers are on the same box, and I have configured Categories in sabnzbd. I'm running the latest version, and I have Unraid 6.3.5.
  24. OK, thank you. I'll try the Unraid general support forum.
  25. Hello binhex,

      Are you still actively supporting your docker containers? I have posted in two threads looking for your support with no response:

      [SOLVED] SickRage Convert - Having trouble installing nzbtomedia Script

      [Support] binhex - SickRage

      Please respond if you are still with us.