Ctofte

Members · 9 posts

  1. I already have notifications set up for monitoring Unraid in general, as well as SMART attributes and drive temperatures - none of the drives show signs of wear yet, so for now everything is good on this point (a small monitoring sketch along these lines follows after this list). My thought was basically to swap a drive as soon as SMART registers any issues with it and then go from there. But good of you to point it out 🙂
  2. Those are some really good points - especially that the wear and tear might not be as critical for Unraid systems, since the drives spin down. In that case I don't think I can justify the roughly 20% premium here in Denmark. Although I already have the hardware to support more disks, fewer disks are easier to maintain - and there is less spin-up/down across the array, since each disk then holds more of the related data in the same folders. I will have to see if any good bargains on the bigger disks come around then 🙂 I guess the plan must be to add a couple of drives now, replace dying drives with bigger ones as they fail, and (hopefully) slowly migrate the array to larger drives one by one.
  3. Hi - My array is nearing its full capacity and I need to evaluate how to proceed. Currently I have 8 drives: 4x 3TB WD Red (WD30EFRX) and 4x 4TB Seagate IronWolf (ST4000VN008), one of which is parity. None of them are Pro drives. From what I can see, both WD Red and IronWolf are rated for up to 8 bays. What are my options for expanding? I am considering moving towards larger drives, like 8/10/12TB, as they are currently similarly priced per TB - though that still requires at least two drives to be swapped, since the parity drive has to be at least as large as the largest data drive and therefore must be upgraded first (a quick capacity calculation is sketched after this list). If I exceed 8 bays, must the drives be Pro-labelled? I could also exchange two of the existing drives for bigger ones and stay below 8 bays, but then I don't gain a lot of new space at first. Are there any other drives I should look at, instead of choosing either WD or Seagate again?
  4. Hi, yes - after I bought the right cable, the drives were detected. I just tried the same 2x 1TB drives I had lying around, and they connected just fine. I haven't had all 8 drives connected at once yet, but they should work.
  5. Shoot... Hadn't seen that note until now. Will order new cables and see if it solves the issue - thanks for the quick reply.
  6. Hi, I just bought a Dell H310 off eBay and it came flashed to LSI IT mode (SAS2008, v. 20.00.07.00). I have connected two drives to it, but they don't appear in "Unassigned Devices". I am using these cables: Mini SAS 36P SFF-8087 to 4 SATA 7 Pin, pinout 2 WL. The card itself does show up in Unraid as the LSI SAS2008. What can I do to make the drives available?
  7. Solution: I ended up deleting the library, as no backups had occurred for the last 6 months, so the database must have been somewhat corrupt for some time.
     Starting Plex Media Server - loop: All of a sudden the Plex docker no longer starts. It worked fine on Friday, but when the server started today, Plex does not come up. On Friday at midnight, the Plex container updated automatically via the AutoUpdateApps app. I am also using the LinuxServer.io LetsEncrypt docker as a reverse proxy, in case that is related - it was updated at the same time. Does anyone have an idea of how to solve this? The Plex Media Server log (\appdata\plex\Library\Application Support\Plex Media Server\Logs) reports that the database image is corrupt. PRAGMA integrity_check outputs the following, and the 'Repair a corrupt database' guide from Plex unfortunately does not solve it - instead it produces an empty database when converting from the .SQL back to .DB (see the database-rebuild sketch after this list).
     Configuration:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/

Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid:    99
User gid:    100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-dbus: executing...
[cont-init.d] 30-dbus: exited 0.
[cont-init.d] 40-chown-files: executing...
[cont-init.d] 40-chown-files: exited 0.
[cont-init.d] 50-plex-update: executing...
No update required
[cont-init.d] 50-plex-update: exited 0.
[cont-init.d] done.
[services.d] starting services
Starting dbus-daemon
Starting Plex Media Server.
[services.d] done.
Starting Avahi daemon
Found user 'avahi' (UID 102) and group 'avahi' (GID 103).
Successfully dropped root privileges.
avahi-daemon 0.7 starting up.
No service file found in /etc/avahi/services.
*** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
*** WARNING: Detected another IPv6 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
Joining mDNS multicast group on interface vethfd8410a.IPv6 with address fe80::240d:a4ff:fec5:7105.
New relevant interface vethfd8410a.IPv6 for mDNS.
Joining mDNS multicast group on interface vethc602a55.IPv6 with address fe80::7c89:38ff:fec7:1d5b.
New relevant interface vethc602a55.IPv6 for mDNS.
Joining mDNS multicast group on interface veth26b4467.IPv6 with address fe80::b8f9:8eff:fe46:dc31.
New relevant interface veth26b4467.IPv6 for mDNS.
Joining mDNS multicast group on interface vethe9c235b.IPv6 with address fe80::1c51:14ff:fe9e:d29a.
New relevant interface vethe9c235b.IPv6 for mDNS.
Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
New relevant interface virbr0.IPv4 for mDNS.
Joining mDNS multicast group on interface docker0.IPv6 with address fe80::42:8dff:fee1:edf7.
New relevant interface docker0.IPv6 for mDNS.
Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
New relevant interface docker0.IPv4 for mDNS.
Joining mDNS multicast group on interface br0.IPv6 with address fe80::f884:27ff:fe6f:ebc9.
New relevant interface br0.IPv6 for mDNS.
Joining mDNS multicast group on interface br0.IPv4 with address 192.168.1.14.
New relevant interface br0.IPv4 for mDNS.
Joining mDNS multicast group on interface bond0.IPv6 with address fe80::76d0:2bff:fe95:2bb8.
New relevant interface bond0.IPv6 for mDNS.
Joining mDNS multicast group on interface lo.IPv6 with address ::1.
New relevant interface lo.IPv6 for mDNS.
Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
New relevant interface lo.IPv4 for mDNS.
Network interface enumeration completed.
Registering new address record for fe80::240d:a4ff:fec5:7105 on vethfd8410a.*.
Registering new address record for fe80::7c89:38ff:fec7:1d5b on vethc602a55.*.
Registering new address record for fe80::b8f9:8eff:fe46:dc31 on veth26b4467.*.
Registering new address record for fe80::1c51:14ff:fe9e:d29a on vethe9c235b.*.
Registering new address record for 192.168.122.1 on virbr0.IPv4.
Registering new address record for fe80::42:8dff:fee1:edf7 on docker0.*.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for fe80::f884:27ff:fe6f:ebc9 on br0.*.
Registering new address record for 192.168.1.14 on br0.IPv4.
Registering new address record for fe80::76d0:2bff:fe95:2bb8 on bond0.*.
Registering new address record for ::1 on lo.*.
Registering new address record for 127.0.0.1 on lo.IPv4.
Server startup complete. Host name is tower.local. Local service cookie is 1672898640.
Starting Plex Media Server.
Starting Plex Media Server.
Starting Plex Media Server.
Starting Plex Media Server.
Starting Plex Media Server.
Starting Plex Media Server.
  8. @1812 - Opening ports is not an issue; that is what I currently do with the Qnap and its FTP server. But I have not yet found a reliable way to run an FTP server on unRAID that is accessible from outside. WebDAV is supported by Kodi, so it could be a solution, though I have not used it before, so I am unsure whether WebDAV would need to be served directly from unRAID - and if so, which docker/plugin needs to be set up (a WebDAV sketch follows after this list)? In terms of a reverse proxy, what solutions are you thinking of? I have already set up the NGINX reverse proxy for Plex and the other dockers I run. The solution should provide an easy way to add/remove users on the shares, so I can keep access restricted.
  9. Hi, I am in the process of migrating from a Qnap system to unRaid. For transferring files (up to 15GB each) I am used to using FTPS on the Qnap, and I have also used FTPS to set up Kodi for remote sources. As far as I understand, the built-in FTP server in unRaid is of no use here, as it gives access to all folders and is quite insecure to expose externally. I have looked into VPN, but it seems like a cumbersome step for tech novices to set up just to download a file or watch something on Kodi - and besides, others should not have access to the entire home network and its devices. 1) How can friends access the files on unRaid, both for downloading and uploading? 2) Is there any way to access the user shares remotely through Kodi? 3) How are user access rights maintained for remote share access? Ideally I would set access per user share and specify read/write rights (an FTPS sketch follows after this list).
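
Sketch for post 1 (SMART monitoring): a minimal, hypothetical example of the kind of check a notification script might run, using smartmontools' JSON output (the -j flag of smartctl 7.x). The device names, attribute IDs, and temperature threshold are illustrative assumptions, not values from the post.

    #!/usr/bin/env python3
    # Hypothetical sketch: poll a few SMART attributes the way a notification
    # script might, using smartctl's JSON output. Devices and thresholds are
    # examples only.
    import json
    import subprocess

    DEVICES = ["/dev/sdb", "/dev/sdc"]          # example device nodes
    WATCHED = {5: "Reallocated_Sector_Ct",      # attributes commonly treated
               187: "Reported_Uncorrect",       # as "replace the drive" signals
               197: "Current_Pending_Sector"}
    TEMP_LIMIT_C = 45                           # arbitrary example threshold

    def check(device: str) -> None:
        out = subprocess.run(["smartctl", "-j", "-A", device],
                             capture_output=True, text=True, check=False)
        data = json.loads(out.stdout)
        for attr in data.get("ata_smart_attributes", {}).get("table", []):
            if attr["id"] in WATCHED and attr["raw"]["value"] > 0:
                print(f"{device}: {attr['name']} raw value {attr['raw']['value']}")
        temp = data.get("temperature", {}).get("current")
        if temp is not None and temp > TEMP_LIMIT_C:
            print(f"{device}: temperature {temp} C exceeds {TEMP_LIMIT_C} C")

    if __name__ == "__main__":
        for dev in DEVICES:
            check(dev)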
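
Sketch for post 3 (capacity math): back-of-the-envelope arithmetic for the upgrade options, assuming Unraid's single-parity layout, where usable space is the sum of the data disks and the parity disk must be at least as large as the largest data disk. The scenarios are examples, not recommendations.

    # Usable capacity for the current array and two example upgrade paths.
    current_data = [3, 3, 3, 3, 4, 4, 4]   # TB; the eighth disk (4 TB) is parity
    print("current usable:", sum(current_data), "TB")            # 24 TB

    # Option A: add two more 4 TB disks (stays within the 8-bay drive rating)
    print("add 2x 4TB:", sum(current_data + [4, 4]), "TB")        # 32 TB

    # Option B: replace the 4 TB parity with a 12 TB disk, reuse the old
    # parity as a data disk, then swap one 3 TB data disk for another 12 TB
    option_b = [3, 3, 3, 4, 4, 4, 4, 12]
    print("12TB parity + one 12TB data:", sum(option_b), "TB")    # 37 TB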
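
Sketch for post 7 (database rebuild): a rough illustration of the check-dump-rebuild idea behind Plex's repair guide, using Python's standard sqlite3 module. Note that Plex ships its own "Plex SQLite" build with custom extensions, so a stock-sqlite round-trip may not preserve every table; the official guide uses the bundled binary instead, and the file names below are placeholders.

    import sqlite3

    SRC = "com.plexapp.plugins.library.db"          # the (possibly corrupt) library DB
    DUMP = "library-dump.sql"                       # readable rows as SQL text
    DST = "com.plexapp.plugins.library.rebuilt.db"  # freshly rebuilt copy

    src = sqlite3.connect(SRC)
    # [('ok',)] means the file is healthy; anything else lists the damage
    print(src.execute("PRAGMA integrity_check").fetchall())

    # Dump whatever is still readable to SQL text; a badly damaged file can
    # abort this loop partway through.
    with open(DUMP, "w", encoding="utf-8") as fh:
        try:
            for statement in src.iterdump():
                fh.write(statement + "\n")
        except sqlite3.DatabaseError as exc:
            print("dump stopped early:", exc)
    src.close()

    # Replay the dump into an empty database.
    dst = sqlite3.connect(DST)
    with open(DUMP, encoding="utf-8") as fh:
        dst.executescript(fh.read())
    dst.close()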
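
Sketch for post 8 (WebDAV): not a recommendation of a specific docker, just an illustration of how a small WebDAV layer with its own user list could expose a single share and then sit behind the existing NGINX reverse proxy. It assumes the third-party WsgiDAV and cheroot packages (pip install wsgidav cheroot); the share path, user, and port are placeholders.

    from cheroot import wsgi
    from wsgidav.wsgidav_app import WsgiDAVApp

    config = {
        "host": "0.0.0.0",
        "port": 8080,
        # expose exactly one user share, nothing else on the server
        "provider_mapping": {"/media": "/mnt/user/Media"},
        # None selects the built-in SimpleDomainController, which reads simple_dc
        "http_authenticator": {"domain_controller": None},
        "simple_dc": {
            "user_mapping": {
                "/media": {"friend": {"password": "change-me"}},
            }
        },
        "verbose": 1,
    }

    app = WsgiDAVApp(config)
    server = wsgi.Server((config["host"], config["port"]), app)
    server.start()    # serves until interrupted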
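
Sketch for post 9 (FTPS): an illustration of per-user, per-share access over explicit-TLS FTP using the third-party pyftpdlib package (pip install pyftpdlib pyopenssl). Users, passwords, paths, ports, and the certificate path are placeholders; on Unraid something like this would normally run inside a Docker container rather than directly on the host.

    from pyftpdlib.authorizers import DummyAuthorizer
    from pyftpdlib.handlers import TLS_FTPHandler
    from pyftpdlib.servers import FTPServer

    authorizer = DummyAuthorizer()
    # read-only user for one share ("elr" = browse, list, download)
    authorizer.add_user("friend", "change-me", "/mnt/user/Media", perm="elr")
    # read/write user for an upload share
    authorizer.add_user("uploader", "change-me", "/mnt/user/Incoming", perm="elradfmw")

    handler = TLS_FTPHandler
    handler.authorizer = authorizer
    handler.certfile = "/config/ftpd.pem"         # TLS certificate + private key
    handler.tls_control_required = True           # force encrypted logins
    handler.tls_data_required = True              # force encrypted transfers
    handler.passive_ports = range(60000, 60010)   # forward these plus the control port

    server = FTPServer(("0.0.0.0", 2121), handler)
    server.serve_forever()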