ati


Posts posted by ati

  1. Not quite what I was getting at, but I get it. 

     

    I want to have my unRAID server at 192.168.10.50/24

    I want to have a docker (binhex-delugevpn) at 192.168.10.100/24

    I want to have a docker run through binhex-delugevpn and be accessible at 192.168.10.100 (because the ports are run through binhex-delugevpn). 

     

    They're all on the same network; no layer 3 routing is required, just different IP addresses. I don't want binhex-delugevpn sharing my unRAID server's address at 192.168.10.50/24.
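
    Roughly what I have in mind, purely as a sketch - the "br0" network name, the image, and the addresses are from my own setup, nothing official:

    # unRAID itself keeps 192.168.10.50 (set in the network settings, not in Docker).
    # Give the VPN container its own LAN address on the custom br0 network:
    docker run -d --name binhex-delugevpn \
      --network br0 --ip 192.168.10.100 \
      binhex/arch-delugevpn
    # (the rest of the template's options omitted here)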

  2. I am slowly trying to learn about routing one docker container through another. 

     

    I've watched SpaceInvaderOne's video, which was a great help. I have also read up on the changes to the binhex dockers with regard to passthrough.

     

    What I am most curious about is networking configurations. I typically prefer to use br0 networks for my dockers to keep different workloads on different addresses. Right or wrong, this is just how I have everything currently set up. What I am learning is that if I want to have one docker route through another docker (binhex-delugevpn in my case) I need to use bridge networking. Is that correct, or is there some way around it?

     

    I initially set up binhex-delugevpn on a br0 network on my server and got everything working fine. Now that I've started playing with inter-docker routing, I tried changing my binhex-delugevpn container to bridge, and I can no longer access the UI. As soon as I change it back to br0 it's all good again (probably an unrelated issue).

     

    Is there a way to keep a docker on a br0 network and still route traffic through that docker? I believe it comes down to ports: once a container is in br0 mode, its port mappings are ignored, unlike in bridge networking.
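
    For reference, this is the pattern I think the video was describing, sketched as plain docker commands - the ports and image names are just examples from my notes, not the template defaults:

    # VPN container on the regular bridge, publishing its own web UI port plus the
    # port of the container that will ride through its tunnel:
    docker run -d --name binhex-delugevpn \
      --cap-add=NET_ADMIN \
      -p 8112:8112 \
      -p 8080:8080 \
      binhex/arch-delugevpn
    # (VPN credentials and the other template options omitted)

    # The second container shares the VPN container's network namespace; -p and
    # --network can't be combined with this, which is why it seems to force bridge mode:
    docker run -d --name binhex-jackett \
      --network container:binhex-delugevpn \
      binhex/arch-jackett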

     

     

     

     

  3. It shouldn't be a 3-device pool; I set it up with only 2 drives. I replaced one a while back - could that be the 3rd drive? There are no historical drives listed if I stop the array.

    Plus, the drive it thinks is missing is also reported as present?

     

    [Screenshot of the cache pool attached]

  4. I believe my 2-drive BTRFS cache pool is failing and is stuck in read-only mode. I cannot get my Docker service to start:

    Sep 30 08:35:59 unRAID root: ERROR: unable to resize '/var/lib/docker': Read-only file system
    Sep 30 08:35:59 unRAID root: Resize '/var/lib/docker' of 'max'
    Sep 30 08:35:59 unRAID emhttpd: shcmd (216): /etc/rc.d/rc.docker start
    Sep 30 08:35:59 unRAID root: starting dockerd ...
    Sep 30 08:36:14 unRAID emhttpd: shcmd (218): umount /var/lib/docker

    I am trying to use the mover to clear out my cache drives so I can replace them, but that will not work either. I figured reading from a read-only file system would be fine, but I guess not. It should be moving from cache to disk1.

    Sep 30 08:32:32 unRAID root: mover: started
    Sep 30 08:32:32 unRAID move: move: file /mnt/cache/Movies/MOVIE1.mp4
    Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies/MOVIE1.mp4 error: Read-only file system
    Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies error: Read-only file system
    Sep 30 08:32:32 unRAID move: move: file /mnt/cache/Movies/MOVIE2.mkv
    Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies/MOVIE2.mkv error: Read-only file system
    Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies error: Read-only file system
    Sep 30 08:32:32 unRAID move: move: file /mnt/cache/Movies/MOVIE3.mkv
    Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies/MOVIE3.mkv error: Read-only file system
    Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies error: Read-only file system
    Sep 30 08:32:32 unRAID move: move: file /mnt/cache/Movies/MOVIE4.mkv
    Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies/MOVIE4.mkv error: Read-only file system
    Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies error: Read-only file system
    Sep 30 08:32:32 unRAID move: move: file /mnt/cache/Movies/MOVIE5.mp4
    Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies/MOVIE5.mp4 error: Read-only file system
    Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies error: Read-only file system
    Sep 30 08:32:32 unRAID move: move_object: /mnt/cache/Movies: Read-only file system
    Sep 30 08:32:33 unRAID move: move_object: /mnt/disk1/isos: Read-only file system
    Sep 30 08:32:33 unRAID move: move: file /mnt/disk2/isos/ubuntu-20.04.1-desktop-amd64.iso
    Sep 30 08:32:33 unRAID move: move: create_parent: /mnt/disk2/isos error: Read-only file system
    Sep 30 08:32:33 unRAID move: move_object: /mnt/disk2/isos: Read-only file system

    This issue came about because one of the main drives in my array had some read errors recently. So yesterday I stopped the array, pulled the drive, and replaced it. I started the array and allowed it to rebuild. This morning I noticed my Docker service had failed to start, so I did a little digging. Fix Common Problems called out that my cache drive pool was mounted in read-only mode - I am assuming because of the number of errors? One other strange thing: when I start the array I get a notification that one of the cache pool disks is missing, but it doesn't show as missing after the array starts.

     

    I tried stopping and starting the array again with no change. I just rebooted the server as well, just to see - no change there either.

     

    I'd like to try and move everything off the cache pool into the array so I can replace both cache drives as both have issues. 

     

    Looking for some guidance - I am an unRAID newbie and a little lost with my current situation.
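
    In case it's useful, this is what I was planning to run from the console to gather more detail before doing anything drastic - I'm guessing at the right commands here:

    btrfs filesystem show /mnt/cache   # which devices btrfs thinks are in the pool
    btrfs device stats /mnt/cache      # per-device read/write/corruption error counters
    dmesg | grep -i btrfs              # kernel messages from around when it went read-only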

     

     

    unraid-diagnostics-20200930-0844.zip

  5. 19 hours ago, alturismo said:

    well, i only use automation so ...

     

    but u could try from inside container, just try to move something the way filebot would do, may that brings u closer

     

    open docker shell, cd /storage, check what u see and try to move things there

    Works fine in shell. Issue appears to be with FileBot...

     

    /tmp # cd /storage/temp_files/
    /tmp # touch testfile
    /tmp # cp testfile /storage/Movies/
    /tmp # ls /storage/Movies/ | grep test
    testfile

     

  6. I am running into an issue with permissions that doesn't seem to have come up in this thread via search.

     

    I have 3 shares:

    1 - Temporary storage place on my cache drive (/mnt/user/temp_files)

    2 - Movies folder (/mnt/user/Movies)

    3 - TV Shows folder (/mnt/user/TV Shows)

     

    I have my Docker container set up to pass /mnt/user to the container as /storage. When I bring up the WebUI I can configure FileBot to pull my media files from the folder on my cache drive and tag them, but when I try to copy them to the parity-protected share on the array I get an error. I tried using move in FileBot first and it created all the folders, but never moved the content. When I switch to copy I get the following error:

    [Screenshot of the FileBot error attached]

     

    I am a little lost as to what to do.
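
    For what it's worth, this is what I was going to check next from the unRAID terminal (not from inside the container) - I'm only guessing this is the right direction:

    ls -ld /mnt/user/Movies /mnt/user/temp_files   # owner/group/mode on the shares
    ls -ln /mnt/user/Movies | head                 # numeric UID/GID on existing files

    # unRAID shares normally expect nobody:users (99:100); if the destination is
    # owned by something else, I assume this would put it back:
    chown -R nobody:users /mnt/user/Movies
    chmod -R u=rwX,g=rwX,o=rX /mnt/user/Movies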

  7. Yeah, the disk has some pending sectors. I can understand it dropped because it's failing - that's fair. 

     

    That doesn't explain why the unRAID webUI still wasn't reading correctly on the dashboard page. Plus, if the drive reconnected, why didn't unRAID recognize that it's back and add it back into the cache pool? I'm a little lost as to why there is no notification of a missing disk whatsoever on the main page, like there would be for a data drive.

     

    Just seems like I'm missing something more...

  8. I recently set up a new RAID-1 cache pool for running a few dockers. Nothing fancy, so I used some old mechanical drives. I basically slapped them both into unRAID, assigned them to be cache drives, and it did the rest of the work making the RAID-1 pool. I then used the unBalance plugin to move the default 4 folders to those drives.

     

    Overnight, one of the cache drives went missing (screenshot 1).

     

    What is strange is that the main page in unRAID doesn't show the drive as missing (screenshot 2), but it does show up under the Unassigned Devices section.


     

    What is even more strange to me is that when I go to the main unRAID dashboard it doesn't even show the same Unassigned Devices as the main page does (screenshot 3).


     

    I am super lost and a little confused. I haven't stopped the array or restarted yet, but I am sure that would fix the issue this time around. I am more interested in why it happened and how I can prevent it in the future. What worries me most is that the drive doesn't show as missing in the webUI, and that the main and dashboard pages don't agree on the Unassigned Devices.

  9. I am moving from FreeNAS to unRAID, and in the process I decided to pre-clear my old drives just to ensure their reliability. I had six (6) 4TB WD Red drives in my old RAID-Z2 array on FreeNAS. All the drives were bought together and have been in service for approximately 3.5 years. One of them has had a few errors in the last few months, causing about 60MB to resilver a few times. I was going to replace that drive when I decided to move to unRAID - so that is a known failure.

     

    I am just a little taken aback that, of the remaining 5 drives, every single one has had 1 or more pending sectors during the pre-clear pre-read. 3 of them (so far) failed instantly on the post-read, even after the pending sectors went to zero during the zeroing.

    I am just a little lost and a little bummed that all the drives I was planning on using seem to be failing. What confuses me more is why the pending sector count went to zero without the reallocated sector count increasing. Then, after the pre-clear process finished, the server rebooted (power loss, unfortunately), and when a SMART test ran on the drives they all showed 1 pending sector again.

     

    I ran a short SMART test on a few of the drives. Each one reported back that it had 1 pending sector. One of the drives showed that it failed, one showed that it passed. The only data item unRAID highlighted is the pending sector count. How can they both have the same issue but one passes and the other doesn't?

     

    Maybe I just got a bad batch or something, but I never had any indication of an issue in FreeNAS other than the one known problem drive. It just sucks that they are 6 months outside of the warranty.

     

    Is there any more testing or data I should be gathering to prove or disprove that these drives are indeed failing? I am out of my element here with the SMART data, and the unRAID wiki is a little lacking in this area, causing me to second-guess myself.
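
    This is about the extent of what I know to run for more data (sdX being a placeholder for each drive):

    smartctl -a /dev/sdX           # full attribute dump, incl. pending/reallocated counts
    smartctl -t long /dev/sdX      # kick off an extended self-test
    smartctl -l selftest /dev/sdX  # results once the test finishes
    smartctl -l error /dev/sdX     # the drive's internal error log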

     

     

  10. I have a few questions about how to properly set up the binhex/Plex Docker. I am new to unRAID and even newer to Docker. In the past I have always built my own Ubuntu VM to run Plex in, so this is a bit of a learning curve for me.

     

    As I understand it, you're supposed to pass a single directory which houses all your media - something like '/mnt/user/Media'. I, however, have 3 shares - Movies, TV Shows, and Music. While I could just pass '/mnt/user', giving the Docker container access to all my files makes me very nervous. That goes against everything I was taught: only give a server/process access to what it needs. Additionally, are the media mounts passed to Docker as read-only?

     

    I guess to sum up my question: I'd really like a way to pass multiple media mounts to the Docker container, and I'd prefer to pass them as read-only. Plex has no reason to modify my media files, so why give it the option?
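
    Something like this is what I'm hoping is possible - purely a sketch, and the container-side paths and image tag are my guesses, not the template defaults:

    docker run -d --name binhex-plex \
      --net=host \
      -v /mnt/user/appdata/plex:/config \
      -v /mnt/user/Movies:/media/movies:ro \
      -v "/mnt/user/TV Shows":/media/tv:ro \
      -v /mnt/user/Music:/media/music:ro \
      binhex/arch-plex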

     

    Hopefully what I am asking is possible.

     

    Anyways the Docker container is awesome, thank you! It's pretty cool to just start a Docker and it just works 100%. 

     

     

  11. That sounds like a good way to do it. I could just keep my spare disks on the shelf and only add them to the share as my requirements grow. That will 'force' unRAID to fill up a single drive with a single media type before moving on to the next one that I provide.

     

    With that said, out of curiosity, what does happen when a single drive in a multi-drive share configured with a split level becomes full?

     

    Thanks for the assistance! 

  12. I am very new to unRAID, so I appreciate the assistance. I am migrating my setup over from a FreeNAS build and have a few questions on how best to organize my data. 

     

    On my FreeNAS build I had 2 arrays, a 6 disk 3TB RAIDz2 and a 4 disk 2TB RAIDz1. I used the 6 disk array for my media and the 4 disk array for everything else. 

     

    From what I understand I cannot have multiple arrays in unRAID. That is fine - I can divide things up by share and then assign each share to different disks. What I am a little confused by is how I can organize my media. In a perfect world I'd like to start filling disk 1 with movies until it reaches a maximum fill level, then move on to a new disk, and so on. I prefer this allocation method for recovery and continuity - if I pull out disk 1, I know it will be full of movies. I'd like to do a similar thing with TV shows, starting on disk 2 or so and filling that up before moving on to the next disk (one that isn't used by the movies share).

     

    My questions come in with expansion. Let's say I fill up all my movie disks - can I add another drive to the movies share and have unRAID just begin to fill that drive? Alternatively, I could just have one big media share (as I do now on FreeNAS) and set the split level to level 2 (MEDIA > Movies and MEDIA > TV Shows), but what happens when the drive the movies have been assigned to is full? That is what I don't understand about split level: what happens when a drive assigned by the split level is full?

    Say I have my 6 disks, I do a level 2 split, and disk 3 is picked for movies and disk 4 for TV shows. What happens when I have more than just one disk of movies?

     

    I could always take the no-split-level approach like I had with FreeNAS, but I really like the idea of methodically filling up one drive before the next rather than having content randomly sprinkled across many drives.

     

    Thank you for your help. I have enjoyed reading the forums and wiki about unRAID. I am finding the documentation and support easier to understand than FreeNAS's.