lovingHDTV


Posts posted by lovingHDTV

I have a 4TB drive getting a lot of read errors, so I ordered a couple of 14TB drives.  I precleared them both.

     

    I'm following this guide:

    The parity swap procedure - Unraid | Docs

     

I got as far as stopping the array, unassigning the data drive to be replaced, starting the array, verifying it was missing, and shutting down.

     

I then removed the data drive and installed the new 14TB drive that will become my parity.  My current parity drive is 10TB.

     

I booted up and see the new 14TB drive, but now another drive has gone missing.  I've fussed with the power and SATA cables and rebooted several times, but cannot get the newly failed drive to come back up.

     

So how do I re-install my 4TB drive, the one with read errors but at least still working, back into its slot?

     

    I think if I can do this, I can complete the parity swap with the newly failed drive, then after that is done use my second 14TB drive to swap out the original 4TB drive with read errors.

     

I think I can re-install the 4TB drive, then assign it to the proper slot and confirm that yes, this is correct?  I think I've done that before, but wanted to ask before making things worse.

     

A couple of pictures, one before and one after.

     

    thanks

    david

    OrigDriveAssignmentsjpg.jpg

    CurrentDriveAssignments.jpg

I've been running Unraid since the very beginning of time.  A long, long time.  I've rebuilt my server a few times and just went along with the updates.  I've known for a while that my docker setup is bad, but it is now affecting the reliability of my system, so I need to fix it.

     

    It is a mess.

     

/mnt/cache - SSD

/mnt/disks/LocalBackup - HDD

     

    /mnt/user/appdata - set to cache only

     

    docker vdisk location: /mnt/disks/LocalBackup/docker-settings

    default appdata storage location: /mnt/user/appdata (where did this come from?)

    the system share is pointed to Cache pool.

     

    Today:

    I have dockers installed in three locations.

    /mnt/cache/appdata

    /mnt/user/appdata (I assume this is the same thing as /mnt/cache/appdata as it is cache only?)

/mnt/disks/LocalBackup/docker-settings (this is a UD-mounted disk; I moved many dockers here because they were hammering the SSD) - /dev/sdh

     

    It was suggested that I move to multiple cache pools now, instead of relying on UD.

     

    I watched SpaceInvaderOne's YT video on how to create them. All about Using Multiple Cache Pools and Shares in Unraid 6.9 - YouTube

     

    So what is the best way to consolidate this abomination?

     

    What I'd like to accomplish:

I'd like to use the SSD only as the "traditional" cache drive, making writes to the array faster.

     

I'm fine with everything else going onto the HDD.  I'm even thinking of making the HDD a pool of two disks for redundancy, as I have several just sitting on my desk to use.

     

    What should the default appdata storage location be?  Should it be on the array?  I really don't understand why/when this became a thing.  I had always thought cache was cache and not the array, but at some point they got merged or something.

     

    Suggestions wanted.

     

    What I think I need to do:

    1. stop dockers

2. back up everything temporarily onto the array (a sketch of this step follows below)

    3. unmount /dev/sdh

4. create a new cache pool using /dev/sdh

      - will this just mount up as it already has a file system and still maintain all the data?

5. change the appdata share to be on the /dev/sdh pool
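
For step 2, a minimal sketch of the backup I have in mind (the rsync flags are standard; the destination share name on the array is just an example):

    rsync -avh /mnt/cache/appdata/ /mnt/user/appdata-backup/cache-appdata/
    rsync -avh /mnt/disks/LocalBackup/docker-settings/ /mnt/user/appdata-backup/docker-settings/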

     

I think that will get the dockers going, but what do I do for the SSD?  How do I set it up so that the mover uses it for caching shares?  I'm thinking I don't have to do anything, because my cached shares are already set up to use it.

     

    thanks

    david

I've no real need for the BTRFS features.  Not sure why it is set up that way.  I'll look at changing that later.

     

I missed the part in the wiki where it says you can just run it on the command line.
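
For the record, it's just this, run from an ssh or console session on the server:

    powerdown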

     

I'll try powerdown, but I suspect it won't work and I'll have to power cycle it, like last time :(  I did just finish a parity check, so at least I know it was good as of Friday :)

     

Yep, no go on the powerdown.  I can still see the dockers running.

I set up the openvpn container following this article:

    How to route any Docker container on Unraid through a VPN (unraid-guides.com)

     

    However, I cannot access the webGUI directly from the docker page anymore.  

     

I tried both http://[IP]:[PORT:8989]/ and changing it to the Unraid server's IP: http://192.168.1.107:[PORT:8989]/
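
For context, that WebUI field ends up as a container label on the docker run command (the same label appears in the run commands later in this thread), so the two variants I tried look like:

    -l net.unraid.docker.webui='http://[IP]:[PORT:8989]/'
    -l net.unraid.docker.webui='http://192.168.1.107:[PORT:8989]/'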

     

    All I get is: about:blank#blocked

     

    I can get there by manually typing http://192.168.1.107:8989 into the browser.

     

    I'd like to be able to just click on the WebUI option like before.

     

    any ideas?

     

    Here is my current setup:

    image.thumb.png.9f61b46963b1f65e340e81cbc1ddb56d.png

     

    thanks

    david

  5. 6 minutes ago, lovingHDTV said:

I just upgraded and now my 1.17.1 and papermc 1.17.1 servers won't start.

     

    Any idea where I can look at a log file to figure out why?

     

    thanks

    david

In case someone else hits this: I had to manually change the server.config file to point to java-17 instead of java-16.
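
A hypothetical sketch of the edit (the actual key name and java paths in your server.config may differ; look for the line carrying the java path):

    # before (illustrative key name and path):
    JAVA=/usr/lib/jvm/java-16-openjdk/bin/java
    # after:
    JAVA=/usr/lib/jvm/java-17-openjdk/bin/java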

     

    david

For years I've had an sshd docker set up to support remote backups.  In the docker I set a uid:gid and limit access to only the Backup share on the array.  I set up public/private keys for authentication, stored in the appdata directory.

     

Is there a more native approach with ssh today?  I know that some time ago changes were made to the ssh support, but it seems to allow access to the entire server, which I don't want.
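
The kind of restriction I'm after, sketched in stock OpenSSH terms (the user name is hypothetical; note that sshd requires the chroot target to be root-owned, and whether an sshd_config edit survives an Unraid reboot is a separate question):

    # limit one backup account to the Backup share, sftp only
    Match User backupuser
        ChrootDirectory /mnt/user/Backup
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no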

     

    thanks

    david

After a good night's sleep I figured out what is going wrong.  I don't know why it fails, but I have fixed it.

     

My nzbgetvpn docker is failing to start, so I open a console in that docker and run /home/nobody/watchdog.sh to start it.  That gets done as root, because the console login is root.

     

This then means all the downloads are also owned by root:root.

     

This is causing sonarr to fail, even though it appears to have full access to all the files/directories when I log into the sonarr console (because that console login is also root).  I'm sure that as nobody:users it wouldn't have access to remove the original download.

     

    After changing ownership of the downloaded files, it runs fine.
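
The fix itself was just an ownership change, something like this (the path comes from my nzbgetvpn download mapping shown later in this thread; nobody:users is Unraid's stock 99:100):

    chown -R nobody:users /mnt/disks/LocalBackup/docker-settings/nzbgetvpn/downloads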

     

So I just need to figure out how to switch to user nobody in a docker console so I can really test things.
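
One way that should work, as a sketch assuming the stock 99:100 mapping:

    # open a shell inside the container as uid 99 / gid 100 (nobody:users on Unraid)
    docker exec -u 99:100 -it nzbgetvpn bash
    # or, from an existing root console inside the container:
    su -s /bin/bash nobody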

  8. root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='nzbgetvpn' --net='bridge' --privileged=true -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'VPN_ENABLED'='yes' -e 'STRONG_CERTS'='no' -e 'VPN_USER'='' -e 'VPN_PASS'='' -e 'VPN_REMOTE'='jfk-029.ovpn' -e 'VPN_PORT'='1194' -e 'VPN_PROV'='custom' -e 'VPN_PROTOCOL'='tcp' -e 'LAN_NETWORK'='192.168.1.0/24' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:6789]/' -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/nzbget-icon.png' -p '6789:6789/tcp' -v '/mnt/disks/LocalBackup/docker-settings/nzbgetvpn/downloads':'/data':'rw' -v '/etc/localtime':'/etc/localtime':'ro' -v '/mnt/disks/LocalBackup/docker-settings/nzbgetvpn/downloads/complete':'/data2':'rw' -v '/mnt/disks/LocalBackup/docker-settings/nzbgetvpn':'/config':'rw' 'jshridha/docker-nzbgetvpn'

     

I posted in the nzbget support page because I have to get into the console and start nzbget manually to get it to run.

     

I just run the same command that the docker runs when it starts.  Seems like a permissions issue as well.
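
A quick way to compare what's on disk with what the app expects (paths taken from the docker run above; assumes the image ships the usual coreutils):

    ls -ln /mnt/disks/LocalBackup/docker-settings/nzbgetvpn     # host-side numeric owner of /config
    docker exec -it nzbgetvpn ls -ln /config /data              # the same files as the container sees them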

  9. root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-sonarr' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:8989]/' -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/sonarr-icon.png' -p '8989:8989/tcp' -p '9897:9897/tcp' -v '/mnt/disks/LocalBackup/docker-settings/nzbgetvpn/downloads/complete/':'/data2':'rw' -v '/mnt/cache/appdata/sonar/config':'/config':'rw' -v '/mnt/disks/LocalBackup/docker-settings/deluge/downloads/':'/data':'rw' -v '/mnt/user/TV Shows/':'/media':'rw' 'binhex/arch-sonarr'

     

  10. Still trying to recover from my docker drive crashing.  The restore of the appdata didn't actually restore everything to working order.

     

    Now trying to figure out why Sonarr will import a TV show, then delete it, then report:

    
    
    2021-10-19 21:50:27,703 DEBG 'sonarr' stdout output:
    [Warn] ImportApprovedEpisodes: Couldn't import episode /data2/tv/SEAL.Team.S05E01.PROPER.1080p.WEB.H264-STRONTIUM/MfAu8zfkfBJBYcphntDPTgvz.mkv
    
    [v3.0.6.1342] System.UnauthorizedAccessException: Access to the path '/data2/tv/SEAL.Team.S05E01.PROPER.1080p.WEB.H264-STRONTIUM/MfAu8zfkfBJBYcphntDPTgvz.mkv' is denied. ---> System.IO.IOException: Permission denied

     

    I can see it copy the file over to the array, so those permissions are fine, then it just deletes it.

     

It is doing that constantly for all 15 files waiting to be imported.  Just copy/delete, rinse and repeat.

     

Any ideas as to what permission issue it is referring to?
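
One way to check exactly what Sonarr sees, as a sketch (the path is from the log above; 99:100 matches the container's PUID/PGID):

    docker exec -it binhex-sonarr ls -ln /data2/tv              # numeric owners, as the container sees them
    docker exec -u 99:100 -it binhex-sonarr sh -c 'touch /data2/tv/.writetest && rm /data2/tv/.writetest'   # can nobody:users write there?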

     

     

  11. 3 hours ago, lovingHDTV said:

    My docker drive crashed so I had to rebuild all my dockers.

     

When I try to start the docker it connects to the VPN fine, but the server won't start.  I get the message:

    2021-10-19 13:00:15,888 DEBG 'watchdog-script' stdout output:
    [info] Attempting to start nzbget...
    
    2021-10-19 13:00:26,999 DEBG 'watchdog-script' stdout output:
    [warn] Wait for nzbget process to start aborted, too many retries

     

If I get a console for the docker and run nzbget -c nzbget.config -D, it runs just fine.

     

Any ideas on why it won't run when the docker starts?  I'm not sure where to find a log file that might help.

     

    thanks

    david

Oddly, I can just run /home/nobody/watchdog.sh and it will run when I do it from the command line.  When run by the system, nzbget fails to start.

     

     

  12. My docker drive crashed so I had to rebuild all my dockers.

     

When I try to start the docker it connects to the VPN fine, but the server won't start.  I get the message:

    2021-10-19 13:00:15,888 DEBG 'watchdog-script' stdout output:
    [info] Attempting to start nzbget...
    
    2021-10-19 13:00:26,999 DEBG 'watchdog-script' stdout output:
    [warn] Wait for nzbget process to start aborted, too many retries

     

If I get a console for the docker and run nzbget -c nzbget.config -D, it runs just fine.

     

Any ideas on why it won't run when the docker starts?  I'm not sure where to find a log file that might help.
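
The places I'd expect logs, assuming this image follows the binhex layout it was forked from:

    docker logs nzbgetvpn                                       # container stdout, i.e. the DEBG lines above
    tail /mnt/disks/LocalBackup/docker-settings/nzbgetvpn/supervisord.log   # /config/supervisord.log on the host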

     

    thanks

    david

I was just posting that I see that sdi is gone.  Yes, that is where my docker.img is stored, along with all my dockers.

     

    So how best to recover?  Two days ago I installed the new version of the docker backup script and did a complete backup to disk7.

     

I moved all my dockers off my cache drive because I had swapped it to an SSD and it was getting an insane number of writes.

     

How would you recommend I recover?  As I installed docker when it first came out, I've not kept up on the recommended way to do it any longer.  Back then you couldn't have it on a share on the array.  Is that still the case?

     

    thanks

This AM I noticed two dockers had quit running overnight.  I would get Execution Error 403.  I googled around a bit and it seems that my docker image was corrupted, so I followed the instructions to recreate it and re-install all my previous dockers.

     

It never returned from installing that last docker, so the prompt/log window never went away.

     

    I rebooted the server and when it came back up it started a parity check which I have paused.

     

I was able to start the first two dockers, then hit Run All.  Now it is just hanging and nothing is starting.

     

    Not sure what is going wrong, but here are my diagnostics.

     

    thanks,

    david

    tower-diagnostics-20211017-1118.zip

One thing I wanted to be able to see/monitor in the node exporter is network traffic per docker.  Today it lists all the veth<number> values, but I have no way to know how they map to the dockers.

     

I found this plugin: [Plugin] Network Stats - Page 11 - Plugin Support - Unraid

     

That thread has a script that can do the mapping.  Is it possible to apply this mapping to the Prometheus data, so that the Grafana dashboard shows the docker name instead of veth<number>?
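
For reference, a sketch of the usual trick such a script relies on (assumes bridge-mode containers with cat available inside): each container's eth0 reports an iflink number that matches the ifindex of its host-side veth:

    for c in $(docker ps --format '{{.Names}}'); do
      iflink=$(docker exec "$c" cat /sys/class/net/eth0/iflink 2>/dev/null)
      [ -z "$iflink" ] && continue
      veth=$(grep -l "^${iflink}$" /sys/class/net/veth*/ifindex 2>/dev/null | cut -d/ -f5)
      echo "$c -> $veth"
    done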

     

If the veth<number> changes every time the docker restarts, is the historical data valid?

     

    thanks

    david

  16. 9 minutes ago, JonathanM said:

    1. Are you sure you are in the right thread? This is the support thread for a rather old version, it's been deprecated like the title says for years.

    2. My nzb and deluge appdata total right at 100MB for both. Are you sure you don't have something misconfigured, or are keeping stuff you don't need there, like failed downloads or failed extractions?

Hmm, I just clicked the support link in the plugin.  Let me go check.

     

Yep, upgrading to the new plugin.

     

    thanks

Just added an ER605 and a TL-SG2008P.

     

    Had an issue where the ER605 defaults to 192.168.0.1 and I use 192.168.1.1.  There is a support FAQ at TP-Link that tells you how to get Omada to adopt the router.

     

Other than that, not a docker issue; it worked great.  I also just noticed that this docker supports non-host networking, so I can assign a dedicated IP to the docker.  Very nice indeed.

     

    Thanks again