
Can0nfan

Members
  • Content Count

    505
  • Joined

  • Last visited

Community Reputation

27 Good

About Can0nfan

  • Rank
    Advanced Member
  • Birthday 06/04/1977

Converted

  • Gender
    Male
  • Location
    Calgary
  • ICQ
    38688944
  • MSN Messenger
    cyphus4free@hotmail.com
  • Personal Text
    my ICQ and MSN are dead but i do remember those!

Recent Profile Visitors

1379 profile views
  1. Was just offering my experience and my help. While I was looking to move from the binhex-plexpass docker to the official one (the binhex builds were taking too long for my impatience to get the Plex Pass updates), I noticed that my other server running the official Plex container was getting them on a docker container restart, which is not the typical way to update dockers in unRAID, so I wanted to move to the official one. No offense to you was meant; this was just general information in case the OP was still wondering, because their question wasn't actually answered when I read your post in the thread. Once they are running it, I also wanted to explain that you don't get the usual "update available" in the Docker tab with the official plexpass one: you need the Plex GUI to announce the update, then restart the container, and you can watch the logs and see it update.
  2. You need to edit the container and change two spots: set the repository to plexinc/pms-docker:plexpass and edit Key 4 to plexpass. A third spot needs editing for hardware transcoding: add --device /dev/dri:/dev/dri into the Extra Parameters (this is for an Intel CPU with an integrated Intel GPU with Quick Sync; there is a different parameter for nVidia cards, and that also requires the nVidia unRAID drivers from the Apps tab). Here are the screenshots, and a rough command-line equivalent below this post.
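     For reference, this is roughly what those same settings look like as a plain docker run command. It is only a sketch: the host paths, timezone, and container name are placeholders from my own setup, and the mapping of "Key 4" to the VERSION variable is how I understand the template, so substitute your own values.

       # Sketch only: host paths, TZ and the media mount are placeholders.
       #   --device /dev/dri:/dev/dri  -> Intel iGPU passthrough for Quick Sync transcoding
       #   -e VERSION=plexpass         -> what I believe "Key 4" sets in the unRAID template (track Plex Pass builds)
       #   plexinc/pms-docker:plexpass -> the repository pinned to the plexpass tag
       docker run -d \
         --name=plex \
         --net=host \
         --device /dev/dri:/dev/dri \
         -e VERSION=plexpass \
         -e TZ=America/Edmonton \
         -v /mnt/user/appdata/plex:/config \
         -v /mnt/user/Media:/media \
         plexinc/pms-docker:plexpass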
  3. I have been running Plex Pass using the repository "plexinc/pms-docker:plexpass" with the "plexpass" tag and have been getting constant updates for over a year. To add to it, I am running the plexpass repository, not the stable one, so that is likely where you and I have different points of view. To update the official Plex Inc docker you simply restart the container; it updates and then starts. The update is available as soon as the Plex web GUI announces it in the orange banner. See the commands after this post.
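     If you want to watch that update happen from the command line, something like this does it (assuming the container is named plex; substitute whatever yours is called):

       # Restart the container; the official image checks for and installs the
       # new Plex Pass build on start when it is tracking the plexpass version.
       docker restart plex
       # Follow the logs to watch the new build being downloaded and applied.
       docker logs -f plex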
  4. Not entirely true. I am using "plexpass" and am getting updates regularly; they post the plexpass updates here: https://hub.docker.com/r/plexinc/pms-docker/tags/ (the oldest one listed is only 4 days old as of this writing). A quick way to check from the command line is below.
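     If you would rather check from a shell than the web page, something like this lists the most recent tags. It assumes curl and jq are installed, and that the Docker Hub v2 tags endpoint (which, as far as I know, backs the page linked above) is reachable:

       # Print the push date and name of the latest plexinc/pms-docker tags.
       curl -s 'https://hub.docker.com/v2/repositories/plexinc/pms-docker/tags/?page_size=10' \
         | jq -r '.results[] | "\(.last_updated)  \(.name)"'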
  5. I sold it and sadly the motherboard died shortly after
  6. Not sure they can do that without configuring it as a complete theme. Here is the original Reddit post; I quoted you in it (hope that's ok). This one is a .cfg file and a couple of 3 kB .css files that basically tell Theme Engine how to configure it. So I was basically wondering if Day/Night could be updated to support all the themes that can be imported and used in /boot/config/plugins/theme.engine/themes/. This is my ls -l of that directory (a short script to list just the theme names follows this post):
     -rw------- 1 root root  804 Jan 16 10:05 Ashes-black.cfg
     -rw------- 1 root root 5978 Jan 16 10:05 Ashes-black.css
     -rw------- 1 root root  772 Jan 16 10:05 BrushThreesDark-black.cfg
     -rw------- 1 root root 5357 Jan 16 10:05 BrushThreesDark-black.css
     -rw------- 1 root root  655 Jan 16 10:05 DarkTheme-black.cfg
     -rw------- 1 root root  764 Jan 16 10:05 Dracula-black.cfg
     -rw------- 1 root root 5357 Jan 16 10:05 Dracula-black.css
     -rw------- 1 root root  808 Jan 16 10:05 Grayscale-black.cfg
     -rw------- 1 root root 5982 Jan 16 10:05 Grayscale-black.css
     -rw------- 1 root root  764 Jan 16 10:05 Monokai-black.cfg
     -rw------- 1 root root 5357 Jan 16 10:05 Monokai-black.css
     -rw------- 1 root root  765 Jan 16 10:05 NordDark-black.cfg
     -rw------- 1 root root 5357 Jan 16 10:05 NordDark-black.css
     -rw------- 1 root root  801 Jan 16 10:05 Nova-black.cfg
     -rw------- 1 root root 5564 Jan 16 10:05 Nova-black.css
     -rw------- 1 root root  764 Jan 16 10:05 Rebecca-black.cfg
     -rw------- 1 root root 5357 Jan 16 10:05 Rebecca-black.css
     -rw------- 1 root root  568 Mar 10 13:53 Sanity-black.cfg
     -rw------- 1 root root 3282 Mar 10 13:53 Sanity-black.css
     -rw------- 1 root root  554 Mar 10 13:53 Sanity-white.cfg
     -rw------- 1 root root 3079 Mar 10 13:53 Sanity-white.css
     -rw------- 1 root root  770 Jan 16 10:05 SolarizedDark-black.cfg
     -rw------- 1 root root 5357 Jan 16 10:05 SolarizedDark-black.css
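     To read that listing as theme names rather than raw files, a tiny loop like this prints one name per importable theme (purely illustrative; it just strips the .cfg extension from each file in the directory above):

       # List importable Theme Engine themes by name.
       for f in /boot/config/plugins/theme.engine/themes/*.cfg; do
         basename "$f" .cfg
       done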
  7. OK, thanks, I'll reach out to the creator to see if they can update the names of the files. It does import to the Theme Engine plugin. This is the file naming as it installs (a manual copy sketch follows this post):
     inflating: unraid-sanity-master/CHANGELOG.md
     inflating: unraid-sanity-master/README.md
     inflating: unraid-sanity-master/Sanity-black.cfg
     inflating: unraid-sanity-master/Sanity-black.css
     inflating: unraid-sanity-master/Sanity-white.cfg
     inflating: unraid-sanity-master/Sanity-white.css
     extracting: unraid-sanity-master/VERSION
     inflating: unraid-sanity-master/install.sh
     inflating: unraid-sanity-master/screenshot.png
     Sanity Theme installed!
     Here it is showing in Theme Engine.
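     For anyone who would rather not run install.sh, a manual equivalent (just an illustration; the zip path is a placeholder for wherever you downloaded the release) is to unzip it and copy the .cfg/.css pairs into the Theme Engine folder from my earlier post:

       # Placeholder zip path; adjust to wherever you saved the download.
       unzip /boot/unraid-sanity-master.zip -d /tmp
       # Copy the theme definition files to where Theme Engine picks them up.
       cp /tmp/unraid-sanity-master/Sanity-*.cfg /tmp/unraid-sanity-master/Sanity-*.css \
          /boot/config/plugins/theme.engine/themes/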
  8. @bonienl Any chance of getting the Day/Night plugin to work with custom saved themes? I installed the Sanity theme from GitHub here after seeing a screenshot and link on Reddit, and would love to set the white for day and the black for night like this wonderful plug-in does.
  9. That! And set the little inactive switch at the top to active
  10. I have my Pi-hole server set as my DNS with no issues. Why do you say there is no reason? It's blocking all the malware and ads when I'm remotely connected, just as it does when I'm home. Works great!
  11. Hi, I was reading the security post that was written up and realized I may have my two servers with no SMB security on my disk shares. So I go to Disk 1, set it to Private, then apply and write the setting to the other disks; it gives the success flag, but when I go back to Shares only the next disk is updated, not the rest as I expected. This happened on 6.8.2 on both of my servers. Diagnostics from both are attached: freya-diagnostics-20200219-1629.zip thor-diagnostics-20200219-1629.zip
  12. This is one reason I have staggered the purchase and installation of drives. I have just had two recently purchased Seagate IronWolf drives both exhibit UDMA CRC errors, and due to the time sensitivity of the RMA from the reseller I got them from, I had to replace both last night. Parity is almost done rebuilding, but boy, would it have sucked if one of my other 13 drives had decided to bite it while rebuilding the two current drives.
  13. I have asked for the option to have as much parity as we want, but am constantly asked "why". I said because with a maximum of 28 data drives, the risk of more than two failing at once increases. The future answer is multiple drive pools with up to two parity and 28 data drives per pool, so that will help.
  14. Two servers (formerly three; retired one).
      Primary: 2U 12-bay hot-swap, 110TB data array, 1TB cache pool (2x 1TB SSD), two 8TB parity drives. Using a NetApp DS4246 to get past my server case's 12-drive limit; only just starting to populate it now.
      Secondary: 3U 10-bay hot-swap, 36TB data array, 1TB cache pool (2x 1TB SSD), one 6TB parity drive.
      So I guess, counting both, I'm at 146TB total array space and 2TB cache space.