darrenyorston

Everything posted by darrenyorston

  1. OK, thanks. Why not just do backups of your database? It seems like you're adding complexity for little reason.
  2. Thanks mate. I've read over the previous posts in the thread and I wouldn't say it's clear. You say not to use the same DB for both sites, which is understandable, and to create a new MariaDB. Do you mean a new database within MariaDB or an entirely new MariaDB container? i.e. I will have two MariaDB containers running, one with the databases for all my other containers (Authelia/Nextcloud/PhotoPrism/WordPress) and another that solely contains a database for the second WordPress site?
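A minimal sketch of what an entirely separate MariaDB container could look like, if that is what was meant; the container name, host port, appdata path, and credentials below are placeholders, not anything from the thread:

      # Hypothetical second MariaDB instance, kept separate from the existing one.
      # Host port 3307 avoids clashing with the existing MariaDB on 3306.
      docker run -d \
        --name=mariadb-wordpress2 \
        -e MYSQL_ROOT_PASSWORD=changeme \
        -e MYSQL_DATABASE=wordpress2 \
        -p 3307:3306 \
        -v /mnt/user/appdata/mariadb-wordpress2:/var/lib/mysql \
        mariadb:latest

The other reading, a second database inside the existing MariaDB container, needs no new container at all: just a CREATE DATABASE plus a dedicated user for the second site.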
  3. Hello. I have WordPress set up and running on my server, published to the web using nginxproxymanager. It's great and works perfectly. I am now trying to work out how to host a second site. I have created a second database in MariaDB, but how do I install a second WordPress container?
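For reference, a rough sketch of what a second WordPress container pointed at that second database could look like; the names, host port, server IP and credentials are assumptions and should be replaced with whatever was actually created in MariaDB:

      # Hypothetical second WordPress instance on its own host port;
      # nginxproxymanager would then proxy the new site's domain to port 8081.
      docker run -d \
        --name=wordpress2 \
        -e WORDPRESS_DB_HOST=192.168.1.100:3306 \
        -e WORDPRESS_DB_NAME=wordpress2 \
        -e WORDPRESS_DB_USER=wp2user \
        -e WORDPRESS_DB_PASSWORD=changeme \
        -p 8081:80 \
        -v /mnt/user/appdata/wordpress2:/var/www/html \
        wordpress:latest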
  4. I'm having difficulty getting Cloudflare, NGINX and any Docker container working together. I have working Docker containers that I want to proxy, Nextcloud being one. I have the Cloudflare DNS Docker container functioning and the appropriate A record for my domain name is showing on Cloudflare. I have created a CNAME for Nextcloud targeted at my domain. I have forwarded ports 80 and 443 on my router to NGINX's ports (1880 and 18443). When I create and select a host in NGINX I am presented with an Error 522 page. According to the error, Cloudflare says "The initial connection between Cloudflare's network and the origin web server timed out. As a result, the web page can not be displayed." Does anyone have an idea how I should go about addressing the problem?
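A 522 generally means Cloudflare reached out on 80/443 but got no answer from the origin, so it is worth testing the forwarding chain independently of Cloudflare. A hedged check using the ports from the post (the LAN IP is an example; replace YOUR-PUBLIC-IP with the actual WAN address):

      # From a machine on the LAN: does nginxproxymanager answer on its own ports?
      curl -I http://192.168.1.100:1880
      # From outside the LAN (e.g. a phone on mobile data): do the router forwards 80->1880 and 443->18443 work?
      curl -I http://YOUR-PUBLIC-IP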
  5. I'm having a problem creating new VMs. I attempted to install ArcoLinux using the default Arch template. The Calamares installer runs fine and the O/S installs. After shutting down the VM, removing the ISO and rebooting the VM, I get the following message:

Starting version 248.3-2-arch
/dev/vda2: clean, 464203/3260416 files, 4273307/13029719 blocks

The cursor flashes, however the VM never progresses beyond this point. If I reboot, the same message appears. The log for the VM is:

-rtc base=utc,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device pcie-root-port,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1 \
-device pcie-root-port,port=0x9,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x1 \
-device pcie-root-port,port=0xa,chassis=3,id=pci.3,bus=pcie.0,addr=0x1.0x2 \
-device pcie-root-port,port=0xb,chassis=4,id=pci.4,bus=pcie.0,addr=0x1.0x3 \
-device pcie-root-port,port=0xc,chassis=5,id=pci.5,bus=pcie.0,addr=0x1.0x4 \
-device pcie-root-port,port=0xd,chassis=6,id=pci.6,bus=pcie.0,addr=0x1.0x5 \
-device pcie-root-port,port=0xe,chassis=7,id=pci.7,bus=pcie.0,addr=0x1.0x6 \
-device pcie-root-port,port=0xf,chassis=8,id=pci.8,bus=pcie.0,addr=0x1.0x7 \
-device pcie-root-port,port=0x10,chassis=9,id=pci.9,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=10,id=pci.10,bus=pcie.0,addr=0x2.0x1 \
-device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
-blockdev '{"driver":"file","filename":"/mnt/user/domains/ArcoLinux/vdisk1.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-1-format,id=virtio-disk2,bootindex=1,write-cache=on \
-netdev tap,fd=33,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:5f:4f:38,bus=pci.1,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=34,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device vfio-pci,host=0000:43:00.0,id=hostdev0,bus=pci.4,addr=0x0,romfile=/mnt/user/isos/vbios/GTX1080.rom \
-device vfio-pci,host=0000:43:00.1,id=hostdev1,bus=pci.5,addr=0x0 \
-device vfio-pci,host=0000:46:00.0,id=hostdev2,bus=pci.6,addr=0x0 \
-device vfio-pci,host=0000:47:00.0,id=hostdev3,bus=pci.7,addr=0x0 \
-device vfio-pci,host=0000:48:00.0,id=hostdev4,bus=pci.8,addr=0x0 \
-device vfio-pci,host=0000:49:00.0,id=hostdev5,bus=pci.9,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2021-08-01 05:20:16.609+0000: Domain id=3 is tainted: high-privileges
2021-08-01 05:20:16.609+0000: Domain id=3 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)

I've not had a problem running VMs with GPU pass-through before. I've tried an install of the same O/S but used VNC instead of passing through the GPU. After install the O/S boots fine and operates through VNC.
I'm presuming it's some issue with the GPU pass-through. I've tried two GPUs and get the same problem. One of the GPUs I have used with GPU pass-through previously. Any advice?
  6. Thanks. I had opened the console within the container; it worked once I opened it as root. I couldn't get it running in the end. It's a bit too much beyond me, unfortunately.
  7. Hello, I am following these instructions. When I input the cat /dev line I get the following message: "tr: range-endpoints of 'O-9' are in reverse collating sequence order". I have looked in the homeserver.yaml document and there is no line beginning "enable_registration..". Also, when I edit the bind address to 0.0.0.0 I am still not able to access the container.
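For what it's worth, that tr error usually means a capital letter O was typed where a zero (0) belongs in the character range. The guide's exact command isn't quoted here, but a command of that style runs cleanly in this form (the 64-character length is just an example):

      # Generate a random alphanumeric secret; the range is a-zA-Z0-9 with a digit zero, not the letter O.
      cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1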
  8. How does RAM transcoding improve performance? Is it only reduced disk I/O or does it have some other effect? I recently updated to a 4K display. I don't seem to be able to stream 4K content from my server (Threadripper 2950X w/126GB memory). Would RAM transcoding improve 4K streaming performance?
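On the RAM-transcoding point: the usual approach is to give the transcoder a tmpfs (RAM-backed) scratch directory, so the benefit is mainly avoiding constant transcode writes to the SSD rather than making a CPU-limited 4K transcode any faster. A sketch of what the container change could look like; the mount point and size are assumptions, and the app's transcoder temp directory still has to be pointed at that path in its own settings:

      # Added to the container's Extra Parameters: a 16 GiB RAM-backed scratch area at /transcode.
      --mount type=tmpfs,destination=/transcode,tmpfs-size=17179869184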
  9. I deleted it and now all is working fine. I don't know how it occurred, as I added the two additional NVMe drives (months) after I had set PCIe ACS override to "Both". I didn't even have them at the time. Thank you for your help.
  10. New diagnostics with PCIe ACS disabled. Two of the NVMe drives disappeared. tower-diagnostics-20210406-2048.zip
  11. This is running 6.9.1 with PCIe ACS set to "Both". Two of the cache NVMe drives disappear if I disable PCIe ACS. According to System Devices all three NVMe drives are on the same bus (c0a9:2263). Nothing else is in that group. PCIe ACS set to "Both" splits them into their own IOMMU groups. tower-diagnostics-20210406-1746.zip
  12. Hello all. I attempted to upgrade my server from 6.8.3 to 6.9.1 and have encountered a problem with my cache disks. I would appreciate some advice on how to proceed. I have 8 WD Red disks in my array and three WD Black NVMe drives in an array cache. The NVMe drives are on the motherboard (a Gigabyte Aorus Xtreme X399 board). I had (have) PCIe ACS override in the VM Manager set to "Both", as I was previously passing through one of the motherboard's USB controllers and a GPU to various VMs. For one reason or another I have stopped using pass-through, as I was having some performance issues. As a result I turned off PCIe ACS override and rebooted the server. When the server restarts, only one of the NVMe drives appears in the cache. The other two drives are missing; they don't show up in Unassigned Devices either. This seems to indicate that some of the NVMe drives need the PCIe ACS override. Is this how it should be configured? I can upgrade to version 6.9.1: all three NVMe drives show up in Unassigned Devices. However, if I try to add them to the cache it wants to format them. I downgraded to 6.8.3, turned on PCIe ACS override and my NVMe cache is working fine. I note the NVMe cache is using BTRFS. I don't know how to proceed with the upgrade to 6.9.1.
  13. Twice in recent weeks I have walked into my study and noticed that the Unraid server is turned off. I hadn't turned it off and we didn't have a power outage. Sleep settings are disabled. I have checked the system log and it seems to only record a log for the current system start. I've looked for historical logs but cannot find any. Is there a way to turn on longer log retention?
  14. One of the disks in my array is showing as disabled. I've run an extended SMART self-test, which reports "Completed without error". However, the following line is highlighted in the Attributes table: 5 Reallocated sector count 0x0033 200 200 140 Pre-fail Always Never 18. SMART overall-health is "Passed". SMART report attached. I do see that Fix Common Problems is reporting that the relevant disk has read errors. How should I proceed? Should I be replacing the disk, or format it and let the array rebuild? tower-smart-20210317-1319.zip
  15. So I think, think, I may have identified the issue. I was checking the VPN certs and I noticed I have two "binhex-delugevpn" folders in my appdata. One has a "." after it, and it doesn't have any OpenVPN details. I copied the same OpenVPN files into it as are in the similarly named folder and now it works. As a result I checked the container details, and whilst the container name is "binhex-delugevpn", its AppData Config Path is "/mnt/user/appdata/binhex-delugevpn." I changed the path to "/mnt/user/appdata/binhex-delugevpn", dropping the ".", and now Sonarr and Radarr work with Proxy enabled. I have deleted the appdata folder with the "." at the end of the folder name. Is there a way for me to check that Sonarr, Radarr, and Lidarr are actually utilising the VPN? I regularly check "iknowwhatyoudownloaded", but is there a more specific way?
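One hedged way to sanity-check which external IP a container is actually using (assuming curl is available inside the binhex image) is to compare what the VPN container sees with what the host sees:

      # External IP from inside the VPN container vs. from the Unraid host directly.
      docker exec binhex-delugevpn curl -s ifconfig.io
      curl -s ifconfig.io

If the two differ and the first matches the VPN endpoint, traffic leaving that container is going out over the VPN; apps that only use Privoxy can be tested through the proxy itself instead.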
  16. I'm not sure if this will answer your question. I have your delugevpn, lidarr, radarr, sonarr, and sabnzbd containers installed. I used Spaceinvader One's YouTube guide to set them up. Deluge has my VPN enabled (PIA). Privoxy is also enabled, though I have not set any browser or PC to utilise it. Sonarr, Radarr, and Lidarr all have SAB set as the download client. All three had the same SAB config; I made sure all three were the same. They have all been working fine. Lidarr still is; Sonarr and Radarr are running and I can access the UI on all. Lidarr still downloads fine, the others don't. I attempted to fix the issue by deleting the SAB download client in Radarr's config. I have since attempted to re-add it, assuming it may be corrupted in some way. Upon clicking "Test" I am shown the following screen. To me this seems to indicate a problem with the host's IP address. I have tried localhost as the address, however I receive the same error. I have attached a picture of the Lidarr config showing the test is successful. Also attached Sonarr proxy settings as requested. Edit: So I noted Lidarr wasn't using a proxy, which I'm assuming is why it's downloading. Turning off Proxy in Sonarr/Radarr and it works, test successful. With Proxy enabled I receive the error message. So it seems to me there is a problem with the proxy path, as it works (including downloading) when Proxy is set to off. I have not edited my delugevpn container. When Radarr/Sonarr connect to SABnzbd does it go through my VPN, as specified in Deluge? Could this be why the test fails locally?
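Since the failures only appear with the proxy enabled, one quick test of the Privoxy path itself from any LAN machine (assuming Privoxy is still on its default port 8118 on the server) is:

      # Ask for the external IP through Privoxy; it should come back as the VPN endpoint's address.
      curl -x http://192.168.1.100:8118 -s ifconfig.io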
  17. I'm running your delugevpn container with Privoxy enabled. There doesn't appear to be any issue with Lidarr downloading via your SAB container; I just tested it and it's all working fine. Just Sonarr and Radarr are reporting the error. So at the moment all I have changed is the Sonarr Settings/General/Proxy Settings field to ignore my server address. I'm still getting the message "Test was aborted due to an error: Unable to connect to SABnzbd, please check your settings".
  18. I followed the link you provided. I removed the Extra Parameters field. Q25 does not apply, as I can view the UI. Q26 does apply: I have added my Unraid server to the Ignored Addresses field of Sonarr's Settings/General Proxy. The error continues. As to Q27, where am I adding the VPN_OUTPUT_PORTS to? Am I adding a variable on the Sonarr docker template config page? This page?
  19. I forced the update on the containers of yours that I'm running. Sonarr and Radarr still don't work, though Lidarr does. I added the line --sysctl="net.ipv4.conf.all.src_valid_mark=1" to Sonarr's Extra Parameters field, but the Sonarr container then disappeared from the list on the Docker tab. I have reinstalled the container but still get the same message. I edited the Settings-General-Proxy Settings of Sonarr's config to ignore my server (192.168.1.100). Still getting the same message.
  20. I tried unchecking "Enable HTTPS", however it does not work. It says the change will require a restart, but when it restarts HTTPS remains enabled.
  21. Hello. I have a problem which developed recently. Both Radarr and Sonarr are reporting an error: Unable to connect to SABnzbd. Lidarr is working fine with the same details for my SABnzbd server. It's been working fine, for well over a year actually, and I have not changed anything on the server. In the host field I have always had my server's IP address (192.168.1.100), with 8080 in the port field. But now 192.168.1.100, localhost or 0.0.0.0 all report the same message when I try to test the connection. The log file has this error:

2021-03-08 12:23:05,405 DEBG 'sonarr' stdout output:
[Error] Sabnzbd: Unable to connect to SABnzbd, please check your settings [v2.0.0.5344]
NzbDrone.Core.Download.Clients.DownloadClientUnavailableException: Unable to connect to SABnzbd, please check your settings ---> System.Net.WebException: Error: ConnectFailure (Connection refused): 'https://localhost:8080/api?mode=version&apikey=a1467c734a844b9b91d5840ac35cb49a&output=json' ---> System.Net.WebException: Error: ConnectFailure (Connection refused) ---> System.Net.Sockets.SocketException: Connection refused
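One detail that stands out in that trace is that Sonarr is actually requesting https://localhost:8080 even though the host field holds the LAN IP. A hedged check of what SABnzbd itself answers on, run from the server console with the URL pieces taken from the log:

      # Does SAB respond on plain HTTP at the LAN address?
      curl -s "http://192.168.1.100:8080/api?mode=version&apikey=a1467c734a844b9b91d5840ac35cb49a&output=json"
      # Is anything answering TLS on the same port (as the https:// in the log would require)?
      curl -skI "https://192.168.1.100:8080/"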
  22. Thanks. I recently had an issue where two of the three SSDs in my cache were offline. I've deleted the oldest.
  23. Fix Common Problems is reporting the following error: "The following files exist within the same folder on more than one disk. This duplicated file means that only the version on the lowest numbered disk will be readable, and the others are only going to confuse unRaid and take up excess space: /mnt/user/system/docker/docker.img disk7 cache" Do I just delete one? Which is on the lowest disk, disk7 or cache? Which should be deleted?
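Before deleting anything, it may help to confirm both copies exist and see which one is current; a quick look from the server console, using the paths from the warning:

      # The copy with the recent modification time is the live image; the other is the stale duplicate.
      ls -lh /mnt/disk7/system/docker/docker.img /mnt/cache/system/docker/docker.img

The system share is normally meant to stay on the cache, so the array copy is usually the stale one, but the timestamps and sizes are the safer guide; stop the Docker service before removing whichever copy is unused.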