Popular Content

Showing content with the highest reputation on 08/15/18 in all areas

  1. 1 point
  2. 1 point
    @BCinBC I found a working version number for 1.13.2. Seems like Plex is finally working properly again with this version. :) It's: I found it from randomly searching around in the Plex forums, as I had also deleted my crash reports folder.
  3. 1 point
  4. 1 point
    Awesome! Thanks for the quick update. I pulled the latest image and it fixed both problems. For the icons I had to clear the icons folder under config/, restart the container, and clear the website data in my browser. The IP files work as expected too; the port gets forwarded and set in Deluge. Until the DNS problem gets resolved, I'm happy with this solution. Thanks again!
  5. 1 point
    Yes, this is because the check is done against the hostname. As you are using an IP address for the remote endpoint it doesn't match, and is therefore marked as not supporting port forwarding, even though technically it can. So I've taken a look at the code and relaxed this section a bit: it will now warn and carry on, attempting port forwarding even if the endpoint isn't in the current list, which should get around this issue. You can make the change yourself but it won't be permanent, so I've now included python2-pillow, which according to that ticket should fix the issue. Please pull down the latest image to pick up both of these changes.
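    A minimal sketch of the relaxed check described above (the function name and endpoint list are illustrative, not the container's actual code):

    ```shell
    # Hypothetical sketch: warn and continue instead of aborting when the
    # remote endpoint is not in the known port-forward-capable hostname list.
    KNOWN_ENDPOINTS="nl.example-vpn.com ca.example-vpn.com"

    check_port_forward() {
        local endpoint="$1"
        for known in $KNOWN_ENDPOINTS; do
            if [ "$endpoint" = "$known" ]; then
                echo "supported"
                return 0
            fi
        done
        # Relaxed behaviour: warn, but still attempt port forwarding
        echo "[warn] '$endpoint' not in known list, attempting anyway" >&2
        echo "attempting"
    }
    ```

    With a bare IP address the old behaviour would have stopped here; the relaxed version only logs a warning and proceeds.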
  6. 1 point
    The Docker and GitHub links in the first post explain how to install a specific version of Plex.
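    As an illustration only (the exact mechanism depends on which Plex image you run, so check the linked docs), some Plex Docker images such as linuxserver/plex let you pin a version through an environment variable:

    ```shell
    # Hypothetical example: pin a specific Plex version via the VERSION
    # environment variable supported by some Plex Docker images.
    docker run -d \
      --name=plex \
      --net=host \
      -e VERSION="<specific version string>" \
      -v /mnt/user/appdata/plex:/config \
      linuxserver/plex
    ```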
  7. 1 point
    Yes, it's a realistic setup. If you already have those disks on hand then fine, use them. But if you have to go and buy them, consider that low capacity disks are generally more expensive in terms of cost per gigabyte than mid capacity disks. Of course, high capacity disks are also more expensive, but there's a sweet spot somewhere in the middle (around the 4 to 6 TB mark, at the moment) that you might want to consider. You will need more storage space eventually - everyone does - and having fewer, larger disks is preferable to having many smaller ones. Higher capacity disks are generally faster, and with fewer disks there's less to go wrong.
    
    If your 1 TB disk fails you can replace it with another 1 TB disk, a 2 TB disk, or anything you might find in between those values. The lower bound is the size of the disk that failed and the upper bound is the size of your parity disk. The file system is automatically expanded to fill the disk once the rebuild is complete. If your 2 TB data disk fails you can only replace it with another 2 TB disk. If your 2 TB parity disk fails you can replace it with a disk of capacity 2 TB or greater. So, with your proposed setup, if you wanted to keep a spare it would make sense to make it a 2 TB disk. (Note that there is a way of replacing a failed disk with one that is larger than your existing parity disk, but it involves a "double shuffle" which ends up with the new large disk replacing your parity disk and your old parity disk becoming the replacement for your failed disk.)
    
    Most people who have a cache disk use either a single SSD or a pool containing two or more SSDs. The original purpose of the cache - to speed up writes to the unRAID server - still applies, though there is now another way to speed up writes, called "turbo write", that doesn't need a cache. People who want to use Dockers and VMs find that the cache is a useful place to store the files used by those two systems. Whether you have a cache is entirely up to you. You could start off without one and add one later if you decide that you need one.
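    The replacement-size rule above can be sketched as a simple check (an illustration of the rule, not actual unRAID code; sizes in TB):

    ```shell
    # Illustrative encoding of unRAID's disk-replacement rule:
    # a replacement for a failed data disk must be at least as large as
    # the failed disk and no larger than the parity disk.
    can_replace() {
        local failed_tb="$1" parity_tb="$2" new_tb="$3"
        if [ "$new_tb" -ge "$failed_tb" ] && [ "$new_tb" -le "$parity_tb" ]; then
            echo "yes"
        else
            echo "no"
        fi
    }
    ```

    For example, with 2 TB parity a failed 1 TB disk can be replaced by a 1 TB or 2 TB disk, but a failed 2 TB disk only by another 2 TB disk (or larger via the "double shuffle" mentioned above).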
  8. 1 point
    So, here's a real life example: 2 OSX VMs and 1 OPNsense firewall on a Z400. Both VMs are on (not sleeping) and the monitors shut off after 15 minutes. The server is sitting at 4-8% overall CPU usage, and that is mostly due to OPNsense. The threads with the OSX VMs on them are at 1-3%, probably using less than a light bulb... not an LED light bulb, but still, not that much.
  9. 1 point
    It was just a lucky find on eBay! He is now selling the same drive for £185! Mine has been in and running for a week now.
  10. 1 point
  11. 1 point
    xfs_admin -U generate /dev/sdX1
    
    Replace X with the correct unassigned disk identifier. There were read errors on disk18 during the rebuild of all 3 disks, so there will be corruption on all 3; even if the errors happened during only the first rebuild, all 3 would still be corrupt, since any subsequent rebuild would use a corrupt disk.
  12. 1 point
    This is what my pass-through command looks like:
    
    /dev/disk/by-id/ata-Crucial_CT525MX300SSD1_163313AAD0A9
    
    Also, edit your VM template and make sure your primary vDisk bus is set to SATA.