jakea333's Achievements
  1. I've never had any luck with this plugin keeping itself up to date. I have the setting "Automatically protect new and modified files:" set to "Enabled", but it never seems to work correctly. So far I've just manually rebuilt every few days, which works fine, but I'd really prefer the near real-time protection. I think it might be related to the inotifywait component. When I look at the config I see:

    root@Tower:~# cat /etc/inotifywait.conf
    cmd="a"
    method="-md5"
    exclude="(Domain_Backups/|Podcasts/)"
    disks=""

    Is that "disks" parameter supposed to contain all of my array disks? I've also intermittently seen what I believe is a related error in my syslog:

    Apr 14 20:06:08 Tower inotifywait[22757]: Failed to watch /mnt/disk5; upper limit on inotify watches reached!
    Apr 14 20:06:08 Tower inotifywait[22757]: Please increase the amount of inotify watches allowed per user via `/proc/sys/fs/inotify

    I went to that location and increased the watches from the default 524288 to 1524288 as a rough test, then reapplied the settings in File Integrity, but saw no change. Disk5 contains a backup of my Plex library, which has a very large number of small files; my guess is that it needs more than the default number of watches, but my quick change didn't seem to take. I have the share it's stored in excluded, but I don't believe that matters to the per-disk watches. Any suggestions for what to troubleshoot next? I haven't found much in the way of logs to help me work out what's going on, so I'm a little out of my depth.

    ***UPDATE: It seems I spoke too soon. Since raising the maximum number of watches, my files have been staying up to date. I added a line to my go file to set the maximum to ~2 million each time unRAID boots; I figure that's enough that I won't have to worry about it again. inotifywait uses ~225 MB of RAM on my system, but that seems a small price to pay for the functionality of this plugin. I still see disks="", but the plugin appears to be functioning correctly now.
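    For anyone hitting the same limit, the check-and-raise steps above can be sketched as follows. This is just a sketch of my setup: the disk path and the 2,097,152 value are examples, and you'd pick a limit comfortably larger than your own directory count.

    ```shell
    # Current per-user inotify watch limit (unRAID default is 524288)
    cat /proc/sys/fs/inotify/max_user_watches

    # Rough estimate of the watches a disk needs: inotifywait consumes
    # about one watch per directory, so count the directories on the disk
    # (path is an example; substitute your own mount point)
    find /mnt/disk5 -type d 2>/dev/null | wc -l

    # Raise the limit on the running system (requires root; ~2 million here)
    sysctl -w fs.inotify.max_user_watches=2097152

    # Persist it across reboots: unRAID runs /boot/config/go at boot
    echo 'sysctl -w fs.inotify.max_user_watches=2097152' >> /boot/config/go
    ```

    Each watch costs kernel memory, which is where the ~225 MB of inotifywait RAM usage comes from.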
  2. I generated the same error while trying to create a new external backup drive and testing the mount/unmount script I had assigned. I could remount (only once) each time I rebooted the server. As these were simply backup disks I had just formatted, I went with a different FS. Given the prevalence of XFS in unRAID, perhaps Rob's fix should be implemented in this plugin?
  3. In the GUI, you can go to the Shares tab and click "Compute" under the Size column. I believe this only works for top-level shares, however. Any folder inside a share can easily be checked from Windows via right-click > Properties. You can also use "du" from the command line at any level.
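    As a quick illustration of the du approach (the scratch directory below exists only for the demo; a share path like /mnt/user/Movies works the same way):

    ```shell
    # Create a small scratch tree to measure (demo only)
    mkdir -p /tmp/du-demo/sub
    dd if=/dev/zero of=/tmp/du-demo/sub/file bs=1K count=64 status=none

    # Total size of the tree: -s summarizes, -h prints human-readable units
    du -sh /tmp/du-demo

    # Per-subfolder sizes, one level deep (handy inside a large share)
    du -h --max-depth=1 /tmp/du-demo
    ```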
  4. Thanks, picked up a single to pair with my current parity drive. If you've got an Amex card, there's a $25-back-on-$200 offer at Newegg that brings it down to $214 (not to mention the bonus warranty year and ShopRunner two-day shipping benefits those cards usually provide).
  5. So, I've corrected my issue. I'm still not convinced this is due to gfjardim's plugins, but uninstalling them temporarily did resolve my problem. Basically, I uninstalled both the Preclear and Unassigned Devices plugins, rebooted the server (the cache did not auto-populate), assigned the drive, started the array, stopped the array, unassigned the drive, started and stopped the array again, then finally assigned the drive once more. After that, my cache has persisted through power cycles, even after I reinstalled the plugins. That may not be the most efficient way to do it, but I was essentially trying to emulate the process needed for unRAID to "forget" a disk you want to rebuild onto. I'd done something similar with the plugins installed, and the cache drive never persisted through reboots.
  6. Did you ever find a fix for this? I've seen the same symptoms. I noticed my cache drive was no longer persistent before adding this plugin (I had recently added the Preclear plugin and then noticed the issue when I rebooted to add a new drive). I didn't change the physical cache drive, but I did unassign it to do a secure erase about a month ago, and I probably hadn't rebooted the server between first reassigning the drive after the secure erase and the recent power-down to install the new drive. I'm assuming this isn't related to gfjardim's excellent plugins, but I'm curious whether you've corrected the problem. I've removed the flash drive and checked it in a Windows box (no issues found), and unassigned/reassigned the cache drive multiple times with no change.
  7. Nothing fancy on my end, just plug and play. I picked it up from BPlus via Amazon. I only had a half-size mini PCIe slot to play with, so I made sure to find one of the shorter cards. That left my single PCIe slot available for a graphics card I could pass through in a VM.
  8. I don't believe unRAID supports wireless cards. I'm using the Z87E-ITX board as well, and it's been very solid. I tossed the wireless card into an old laptop and use the mini PCIe slot for a dual SATA card.
  9. (posted in "plex talk")
    A single i7 or Xeon is more than adequate. A few things to consider before making decisions: what quality (bitrate) is your media, how many simultaneous transcoded streams do you realistically expect to see, and how large are your media files? I think a lot of people overestimate the processing power they need for Plex. I have an i5-4570 that hums along nicely with my setup: a Plex Media Server Docker on unRAID that serves 12-14 individuals locally and remotely, plus a Xen-based Windows 8.1 VM that functions primarily as a Plex Home Theater front end.

    Quality is important, as the general recommendation is ~1000 PassMark points per 1080p transcode, so a Haswell i5 will support seven or so simultaneous transcodes. That said, my media is usually lower quality. My largest files are only 4-5 GB and the average is probably closer to 750 MB, which translates to bitrates around 1000-5000 kbps. At that size, only my largest files are even transcoded during remote playback, and they likely require less than the 1000-PassMark recommendation anyway. Even sharing with so many people, I never see more than 6-7 watching at the same time, and usually only 1-2 of those remote connections are transcoded. Now, if you have nothing but 25 GB Blu-ray rips at 20+ Mbps bitrates, your needs may change, especially if every single remote stream has to be transcoded down to ~3 Mbps.

    As for Plex's RAM usage: it doesn't use much unless you point the transcode directory in the Plex settings at /tmp on unRAID (which is stored in RAM). That's where the temporary transcoded files are kept; you can just as easily set it to a regular location on an HDD/SSD. The thing to keep in mind is that Plex doesn't delete the "chunks" as it goes along: it builds up the entire file during the transcode and doesn't delete it until the viewer closes the stream. If you've got large files being transcoded, this can take up a lot of space. I transcode to /tmp on my system because I never really see more than 2-3 transcodes taking up 3-5 GB, depending on where each is in playback. So if you choose to use /tmp, you'll need adequate RAM; I believe it defaults to half the RAM unRAID has available, and I have 16 GB installed, so my /tmp is ~8 GB.

    My point is just that simply running Plex doesn't necessitate some super-beefy system; what matters is the files you'll be serving and how they'll be accessed. Obscene amounts of RAM aren't necessary either, as there's minimal advantage to transcoding to RAM over an SSD unless you've just got the extra resources anyway. For remote streams, most people find the bottleneck is their upload speed rather than the CPU. If you've got something like Google Fiber where that's not the case, just tell your remote users to enable Direct Play so you don't tax the CPU at all.
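    A quick way to sanity-check the /tmp approach before pointing Plex at it (the stream count and per-stream size below are assumptions drawn from my setup, not Plex defaults):

    ```shell
    # /tmp on unRAID is tmpfs, which typically defaults to half of
    # installed RAM; check its total size and current usage first
    df -h /tmp

    # Back-of-the-envelope capacity check: concurrent transcodes times
    # the worst-case size of a fully built transcode must fit in /tmp.
    # Example numbers: 3 streams at up to 5 GB each.
    streams=3
    per_stream_gb=5
    echo "$((streams * per_stream_gb)) GB of transcode space needed"
    ```

    If the total comes out near your /tmp size, transcode to an SSD instead.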
  10. Just a note on the mPCIe cards. I'm using the SATA III version (http://www.hwtools.net/Adapter/PM1061.html) with an ASRock Z87E-ITX and it works flawlessly. Good luck with your project!
  11. I don't see an obvious reason for it not to work. Sorry, here's the default path in my container: /config/Library/Application Support/Plex Media Server/Plug-in Support/Databases/
  12. "Bump. Hoping someone else has this issue as well, or knows whether it's widespread. The Docker works fine, as do all of mine, but Sonarr simply doesn't refresh in real time; I always have to refresh the page manually to see any kind of update (whether it's files added to a show, activity, anything)."
    I see the same thing. It really isn't an issue for me since, like you said, the workaround is to refresh manually. I believe this is widespread.
  13. Unfortunately, Newegg recently dropped ShopRunner from their shipping options (they seem to be pushing "Premier"). It was one of the reasons I used them as my go-to over Amazon and other competitors; now they no longer have that advantage for me. TD still accepts ShopRunner, but they've always been pickier about which products it applies to.
  14. The difference is forward vs. reverse breakout cables. Forward cables connect a controller's SFF-8087 port to individual SATA drives; reverse cables connect four SATA ports on a controller to the SFF-8087 connector on a backplane (like the ones used in the Norco cases).
  15. Yeah, outside this version-specific hiccup, my Xen Windows VM (and the Arch VM before I made the Docker transition) has been absolutely rock solid. There's just minimal motivation to learn a new hypervisor when I can't see any room for improvement. I expect I'll make the transition one day; the unRAID team's focus on KVM will likely dictate it. I'm just in no hurry until detailed guides are posted to make it idiot-proof. For now, I've got a 24/7 Windows VM with GPU passthrough that has run for months with no downtime outside the short period before I rolled back to the 4.4.0 package.