
Posts posted by jafi

  1. I have five NICs, but I only use three of them.

     

    eth0 (DOCKER-LAN)

    IP: 100.46.0.10

    GW: 100.46.0.1

     

    eth1 (LAN)

    IP: 100.64.0.10

    GW: 100.64.0.1

     

    eth2 (VPN)

    IP: 46.64.0.10

    GW: 46.64.0.1

     

    eth3 & eth4 are inactive.

     

    Unraid created the following routing table:

     

    ROUTE            GATEWAY             METRIC
    default          46.64.0.1 via br2   1010
    default          100.64.0.1 via br1  1011
    default          100.46.0.1 via br0  1012
    100.46.0.0/24    br0                 1012
    100.64.0.0/24    br1                 1011
    46.64.0.0/24     br2                 1010

     

    cat /etc/resolv.conf
    # Generated by dhcpcd from br0.dhcp
    domain xxx.com
    nameserver 100.46.0.1

     

    Settings - Docker (advanced)

    IPv4 custom network on interface br0:

    Subnet: 100.46.0.0/24 Gateway: 100.46.0.1 DHCP pool: not set

    IPv4 custom network on interface br1:

    Subnet: 100.64.0.0/24 Gateway: 100.64.0.1 DHCP pool: not set

    IPv4 custom network on interface br2:

    Subnet: 46.64.0.0/24 Gateway: 46.64.0.1 DHCP pool: not set

     

     

    The problem is that with these network settings Apps does not work; even pinging google.com fails. I can fix this temporarily by removing the 100.46.0.0/24 route on br0 (metric 1012) from the routing table and changing /etc/resolv.conf to nameserver 100.64.0.1. With these modifications everything works fine for a while, but soon Unraid changes everything back.
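
    For reference, here is the temporary workaround as shell commands (just a sketch; the values come from the tables above, and Unraid's network scripts soon undo both changes):

    ip route del 100.46.0.0/24 dev br0                                   # drop the br0 subnet route
    sed -i 's/^nameserver .*/nameserver 100.64.0.1/' /etc/resolv.conf    # point DNS at the LAN gateway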

     

    I only have a couple of Docker containers that use the VPN; all the other containers use DOCKER-LAN. Unraid itself should use the LAN for updates, Apps, etc.

     

     

     

  2. I'm moving to new hardware, but I need help with my plan.

     

    I want a completely new Unraid setup on the new server; I won't copy any settings from the old server, but I do want to move all the data (media) over.

     

    Current server

    Array 21.2 TB used of 24 TB (88.3 %)

    16TB HDD (parity)

    16TB HDD (disk1)

    4TB HDD (disk2)

    4TB HDD (disk3)

     

    New server

    I only have one new 16TB HDD, but I want to keep all the old data. My current plan is to remove the parity disk from the old server and create a new array in the new server.

     

    Parity: 16TB HDD (new)

    disk 1: 16 TB HDD (old parity)

     

    After that, the plan is to move the disk2/disk3 data from the old server to the new array, and then move disk2 and disk3 themselves to the new server.

     

    Parity: 16TB HDD (new)

    disk 1: 16TB HDD (old parity)

    disk 2: 4TB HDD (old disk2)

    disk 3: 4TB HDD (old disk3)

     

    After that, the plan is to move the disk1 data from the old server to the new array, and then move the last HDD to the new server.

     

    Array

     

    Parity: 16TB HDD (new)

    disk 1: 16TB HDD (old parity)

    disk 2: 4TB HDD (old disk2)

    disk 3: 4TB HDD (old disk3)

    disk 4: 16TB HDD (old disk1)

     

    Is this an OK array layout? What is the best way to move the data over the LAN? My current array uses XFS; is btrfs the way to go?
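
    For the move itself I was thinking of rsync over SSH, disk by disk (a sketch; the target IP and share path are placeholders, not my real setup):

    rsync -avh --progress /mnt/disk2/ root@NEW_SERVER_IP:/mnt/user/media/   # repeat for each source disk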

     

    I have one extra 1TB NVMe SSD and a 1TB SATA SSD. Does Unraid still benefit from a cache disk (cache pool)?

     

  3. I upgraded to 6.12 and all my VMs were lost. I created new VMs, but there were also problems with containers: I can't launch any of them.

     

    docker start dockersocket
    Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/docker-entrypoint.sh": permission denied: unknown
    Error: failed to start containers: dockersocket

     

    docker start emby
    Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/init": permission denied: unknown
    Error: failed to start containers: emby
    docker start whoogle
    Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/sh": permission denied: unknown
    Error: failed to start containers: whoogle
    docker start vaultwarden
    Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/start.sh": permission denied: unknown
    Error: failed to start containers: vaultwarden

     

    etc..

     

    I tried "Tools - New Permissions", did not help. I even tried to install new dockers, but they fail also.

  4. Cannot install.

     

    Command execution
    docker run
      -d
      --name='gitea'
      --net='IPZD'
      -e TZ="True/Hell"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="666"
      -e HOST_CONTAINERNAME="gitea"
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.webui='http://[IP]:[PORT:3000]'
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/fanningert/unraid-docker-templates/master/fanningert/icons/gitea.png'
      -p '22:22/tcp'
      -p '3000:3000/tcp'
      -v '/mnt/user/appdata/gitea':'/data':'rw' 'gitea/gitea'
    docker: Error response from daemon: error creating overlay mount to /var/lib/docker/overlay2/41d60f1fea4ae0c5b9965a6f0f190f77beafe0355bfbe8bc8126029c62f47a7d-init/merged: too many levels of symbolic links.
    See 'docker run --help'.
    
    The command failed.
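
    "Too many levels of symbolic links" during the overlay mount sounds like something inside Docker's own storage rather than this template; a couple of things I can check (a sketch, assuming the default data root):

    ls -la /var/lib/docker/overlay2 | head   # look for obviously broken layer entries
    df -h /var/lib/docker                    # is the docker image full?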

     

  5. 1 hour ago, itimpi said:

    Do you have a parity drive? If so, what size?

     

    Sorry for my first message.

    I have 2x 4TB drives, one of which is parity. I can't get a new 4TB drive, but I can get a new 6TB. Can I make the 6TB the new parity drive and use the 2x 4TB as disk 1 and disk 2? After the upgrade I would have an array size of 8TB (2x 4TB of data, since parity doesn't count toward capacity)?

  6. Hi,

     

    I changed my CPU from a 5700G to a 3900X. Now I get machine check events. Any idea what the problem is?

     


    Dec  4 20:56:40 Mastermind kernel: mce: [Hardware Error]: Machine check events logged
    Dec  4 20:56:40 Mastermind kernel: mce: [Hardware Error]: CPU 0: Machine Check: 0 Bank 5: bea0000000000108
    Dec  4 20:56:40 Mastermind kernel: mce: [Hardware Error]: TSC 0 ADDR 1ffff8124156a MISC d012000100000000 SYND 4d000000 IPID 500b000000000 
    Dec  4 20:56:40 Mastermind kernel: mce: [Hardware Error]: PROCESSOR 2:870f10 TIME 1670180115 SOCKET 0 APIC 0 microcode 8701021
    Dec  4 20:56:40 Mastermind kernel: mce: [Hardware Error]: Machine check events logged
    Dec  4 20:56:40 Mastermind kernel: mce: [Hardware Error]: CPU 22: Machine Check: 0 Bank 5: bea0000000000108
    Dec  4 20:56:40 Mastermind kernel: mce: [Hardware Error]: TSC 0 ADDR 7f6e61a84894 MISC d012000100000000 SYND 4d000000 IPID 500b000000000 
    Dec  4 20:56:40 Mastermind kernel: mce: [Hardware Error]: PROCESSOR 2:870f10 TIME 1670180115 SOCKET 0 APIC 1b microcode 8701021

     

  7. Hi,

     

    I have used Jellyfin before, but I updated my homelab, so now I am testing this version of Jellyfin because I have an AMD iGPU. I can connect to Jellyfin via the LAN (IP) fine, but not via the reverse proxy (public domain).

     

    With the public domain I get the message "select server", and the URL is https://jellyfin.mydomain.com/web/index.html#!/selectserver.html

     

    I use Nginx Proxy Manager and all other services work fine.

    I have activated "Allow remote connections to this server".

    I also added a custom variable to the template: JELLYFIN_PublishedServerUrl, value: jellyfin.mydomain.com
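
    For completeness, this is roughly the custom location I understand Jellyfin wants behind a proxy (a sketch based on Jellyfin's published nginx examples, not my exact config; LAN_IP:8096 is a placeholder upstream):

    location / {
        proxy_pass http://LAN_IP:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # websocket support, which the web client needs:
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }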

  8. Hi,

     

    Sorry for the "useless" topic, but I need some reassurance.

     

    I have used Unraid for some time now and learned a lot. My goal is to have one computer for a lot of different things. I run several services via Docker, like nginxproxymanager, mariadb, nginx, nextcloud, jellyfin, etc. I also have a couple of VMs, like pfSense, Home Assistant and Windows 10 (for games).

     

    I am about to upgrade my "homelab", and Unraid does not (officially) support multiple arrays or ZFS, so I started to think.

     

    My current homelab is:

    • Ryzen 7 2700
    • 64GB memory
    • 2x 4TB HDD
    • 1x 1TB NVMe
    • 1x 256GB NVMe
    • 1x 128GB SSD
    • GTX 660 Ti

     

    I was using the 4TB drives as the array and the 1TB NVMe for the Windows VM. The 256GB NVMe was cache and the 128GB SSD was for backups.

     

    My new homelab is:

    • Ryzen 7 5700G
    • 64GB memory
    • 2x 4TB HDD
    • 2x 1TB NVMe
    • 1x 1TB SSD
    • RTX 2060 Ti

     

    My new CPU has an iGPU (Vega), and that is for transcoding with Jellyfin. I would still like to use the 2x 4TB as the array for media. But what about everything else? If I could create another array, I would pool the 2x 1TB NVMe and use that for Nextcloud, Docker and other stuff. The 1x 1TB SSD would then be for the Windows VM only, together with the RTX 2060 Ti.

     

    I don't need a backup of my Windows VM, but I would like to back up everything else. I know RAID is not a backup, but for now it is fine.

     

    With something like Proxmox I could have multiple arrays, but I would have to learn a lot of new stuff.

     

    Any ideas?

  9. 2 hours ago, alturismo said:

     

    from page 1 ... may try this

     

    [attached screenshot]

     

    As capt.shitface said in the quote I used, we tried running updater.phar via the console, but it fails.

     

    docker exec -it nextcloud updater.phar
    Nextcloud Updater - version: v20.0.0beta4-11-g68fa0d4
    
    Current version is 22.2.0.
    
    Update to Nextcloud 22.2.3 available. (channel: "stable")
    Following file will be downloaded automatically: https://download.nextcloud.com/server/releases/nextcloud-22.2.3.zip
    Open changelog ↗
    
    Steps that will be executed:
    [ ] Check for expected files
    [ ] Check for write permissions
    [ ] Create backup
    [ ] Downloading
    [ ] Verify integrity
    [ ] Extracting
    [ ] Enable maintenance mode
    [ ] Replace entry points
    [ ] Delete old files
    [ ] Move new files in place
    [ ] Done
    
    Start update? [y/N] y
    
    Info: Pressing Ctrl-C will finish the currently running step and then stops the updater.
    
    [✔] Check for expected files
    [✔] Check for write permissions
    [✔] Create backup
    [✔] Downloading
    [✔] Verify integrity
    [ ] Extracting ...PHP Warning:  require(/config/www/nextcloud/updater/../version.php): failed to open stream: No such file or directory in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 658
    PHP Fatal error:  require(): Failed opening required '/config/www/nextcloud/updater/../version.php' (include_path='.:/usr/share/php7') in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 658
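
    The error points at a version.php that should sit one level above the updater directory, so a quick sanity check (same container name as in the command above):

    docker exec -it nextcloud ls -l /config/www/nextcloud/version.php   # does the file actually exist?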

     

  10. On 2/25/2021 at 9:13 PM, capt.shitface said:

    Got some trouble while upgrading via the web GUI.
    Tried the updater.phar in the Docker container instead. It says:

    [✔] Check for expected files
    [✔] Check for write permissions
    [✔] Create backup
    [✔] Downloading
    [✔] Verify integrity

    [   ] Extracting ...PHP Warning:  require(/config/www/nextcloud/updater/../version.php): failed to open stream: No such file or directory in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 658
    PHP Fatal error:  require(): Failed opening required '/config/www/nextcloud/updater/../version.php' (include_path='.:/usr/share/php7') in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 658
    root@c3af26c4f37e:/data# cd /config/www/nextcloud/updater/

    I'm stuck.

     

    Same here.

    Any news on how to fix this?

  11. 23 hours ago, mgutt said:

    I think this is false.

     

    Try this in advanced:

    location / {
        add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_http_version 1.1;
        # Proxy!
        include conf.d/include/proxy.conf;
    }

     

    With that, the default NPM "location /" rule is completely replaced by your own.

     

     

    Thank you.

     

    For some reason this does not work. I have tried editing the add_header Strict-Transport-Security value in several different ways, but it does not seem to help.

     

    There is a line in the config files that says: "# HSTS (ngx_http_headers_module is required)"

     

    I connected to the Docker container and checked nginx -V, but I can't find that module. Do I need to add it?
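
    This is how I checked (a sketch; the container name is whatever NPM runs as on your system). As far as I understand, ngx_http_headers_module is a core module that is compiled in by default, so it would not normally show up in the configure arguments at all:

    docker exec -it nginxproxymanager nginx -V 2>&1 | tr ' ' '\n' | grep -i header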

  12. I'm too stupid to figure it out myself.

     

    I use NPM & Nextcloud, with the subdomain nextcloud.

    I get a warning from Nextcloud: "The "Strict-Transport-Security" HTTP header is not set to at least "15552000" seconds."

     

    I tried adding "add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";" to the Advanced tab, but it did not work. After that I noticed: "Please note, that any add_header or set_header directives added here will not be used by nginx. You will have to add a custom location '/' and add the header in the custom config there."

     

    But I have a subdomain, not "/".

  13. 9 minutes ago, rix said:

    What commands did you run? If I know how to improve the image, I'll gladly do so ;)

     

    I'm sorry, I don't know much about Docker and stuff like this.

     

    There was a problem with version 1.16.4, so I installed Ubuntu-Playground (via Unraid). After that I used these instructions to install MakeMKV (1.16.5) and the libraries. For some reason I did not get ripper.sh to work with my own container, so after you released 1.16.5 I tried it. But I had problems with missing libraries, which I copied over manually from my own container.

     

    After that I noticed that your image (1.16.5) does not include the 'eject' command, so ripper.sh did not work correctly. I manually downloaded the package from here and installed it with 'dpkg -i <deb file>', and now everything seems to work.
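
    In full, the manual workaround was roughly this (a sketch; the container name and the .deb filename depend on your setup):

    docker exec -it docker-ripper /bin/bash   # open a shell in the container, then inside it:
    dpkg -i eject_*.deb                       # install the manually downloaded package
    eject --version                           # confirm the command now exists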

  14. On 11/3/2021 at 3:37 PM, rix said:

    The automated build for 1.16.5 failed. I fixed the script and a new build should be available within 2 hours.

     

    There still seem to be issues with 1.16.5.

     

    Shared libraries are still missing; I added those manually. But there are some other issues too, because for example the 'eject' command is missing.

  15. The latest version stopped ripping everything; it only creates an empty directory. Before the update everything worked fine.

    I also tried docker-ripper:1.16.4, but there is a problem there as well:

     

    02.11.2021 23:52:21 : Starting Ripper. Optical Discs will be detected and ripped within 60 seconds.
    makemkvcon: error while loading shared libraries: libmakemkv.so.1: cannot open shared object file: No such file or directory
    
    02.11.2021 23:52:21 : Unexpected makemkvcon output:
    makemkvcon: error while loading shared libraries: libmakemkv.so.1: cannot open shared object file: No such file or directory
    
    02.11.2021 23:53:21 : Unexpected makemkvcon output:
    makemkvcon: error while loading shared libraries: libmakemkv.so.1: cannot open shared object file: No such file or directory
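
    A quick way to see what makemkvcon is actually missing (a sketch; the container name is an example):

    docker exec -it docker-ripper sh -c 'ldd "$(command -v makemkvcon)" | grep "not found"'
    docker exec -it docker-ripper ldconfig   # refresh the linker cache if the libs exist but are unlisted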

     

  16. On 10/20/2021 at 6:22 PM, SplitHoirzon said:

    I'm having the problem with the spinner never going away and my CA never loads. Here are my debug logs.

    CA-Logging-20211020-1120.zip

    Is your "default font size" and "minimum font size" same in your browser settings? I have same problem if the values are same. But for example default 16 and minimum 14 and everything works fine. Only tried with Vivaldi.
