Everything posted by kaiguy

  1. A new version is required any time the unRAID kernel is updated. Zeron is usually pretty darn quick with compiling a new plugin and will update this thread when available.

     Edit: Strange, I see an update for this plugin for the new kernel as of yesterday. Is it not working for others?

     Edit 2: OK, something funky is going on. The plugin has been updated and the new package is downloaded, but for some reason the package isn't properly installed. Here's what the syslog shows after I updated the plugin and rebooted:

     Feb 11 18:28:07 titan logger: plugin: installing: /boot/config/plugins/openVMTools.plg
     Feb 11 18:28:07 titan logger: plugin: skipping: /boot/packages/open_vm_tools-9.10.0.2476743-K4.1.15_unRaid-x86_64-9Zeron.tgz already exists
     Feb 11 18:28:07 titan logger: plugin: skipping: /boot/packages/open_vm_tools-9.10.0.2476743-K4.1.13_unRaid-x86_64-8Zeron.tgz already exists
     Feb 11 18:28:07 titan logger: plugin: skipping: /boot/packages/open_vm_tools-9.10.0.2476743-K4.1.7_unRaid-x86_64-7Zeron.tgz already exists
     Feb 11 18:28:07 titan logger: plugin: skipping: /boot/packages/open_vm_tools-9.10.0.2476743-K4.1.5_unRaid-x86_64-6Zeron.tgz already exists
     Feb 11 18:28:07 titan logger: plugin: skipping: /boot/packages/open_vm_tools-9.10.0.2476743-K4.0.4_unRaid-x86_64-5Zeron.tgz already exists
     Feb 11 18:28:07 titan logger: plugin: running: anonymous
     Feb 11 18:28:07 titan logger: Open-VM-Tools is not available for Kernel 4.1.17 Please update the plugin.
     Feb 11 18:28:07 titan logger: plugin: installed

     If you updated the plugin, the new package likely ended up in your /boot/packages directory. Running "installpkg open_vm_tools-9.10.0.2476743-K4.1.17_unRaid-x86_64-10Zeron.tgz" from that directory works as a temporary fix (a command sketch follows after this post list). I think someone with more plugin experience will need to take a look at the .plg file to figure out what went wonky.

     Edit 3: I went ahead and deleted the old .plg and redownloaded the updated one. Things are looking good now. Thanks.
  2. Are you connecting to SSL servers? If not, I'd try that. And maybe use an uncommon SSL port for your provider (most have a couple of port options). I have found that some ISPs throttle certain ports.
  3. Throwing my hat into the ring as well to +1 this! This would be outstanding. FYI, if/when it does happen, from what I have read, the network type will need to be set to host (see the docker run sketch after this post list). See https://github.com/nfarina/homebridge/issues/309 Edit: Actually, there are probably more challenges to overcome, such as the ability to install the various plugins. If they could live outside of the Docker container, that wouldn't be too difficult, but if not, this could be a hurdle.
  4. Zeron, you're single-handedly the reason I'm still able to run unRAID under ESXi. So thank you so much.
  5. No question here. Just wanted to say thanks again to everyone for their great work. I'm using your containers exclusively and they're performing exceptionally well!
  6. Even though I figured out my issue, I'd still love to know the best practices for identifying what is keeping a device busy and ultimately keeping the array from unmounting. In my case, I stupidly created an SMB mountpoint under the cache drive. I only figured it out after I ran 'mount' to see what mountpoints were still active; once I unmounted it, the array was able to unmount. I believe (and correct me if I'm wrong) the right lsof command for other kinds of array unmounting issues would be: lsof +D /mnt/ to see what open files are still held within a directory under /mnt (again, not helpful in my case). Any other commands that would be of use? (A fuller command checklist is sketched after this post list.)
  7. Hello! Over the last few RCs (and now on 6.1.0), I have noticed that something is keeping my array from unmounting. Specifically, according to the log, something is keeping the cache drive active:

     Sep 1 08:34:12 titan logger: rmdir: failed to remove '/mnt/disk12': No such file or directory
     Sep 1 08:34:12 titan emhttp: shcmd (1703): umount /mnt/disk13 |& logger
     Sep 1 08:34:12 titan logger: umount: /mnt/disk13: not found
     Sep 1 08:34:12 titan emhttp: shcmd (1704): rmdir /mnt/disk13 |& logger
     Sep 1 08:34:12 titan logger: rmdir: failed to remove '/mnt/disk13': No such file or directory
     Sep 1 08:34:12 titan emhttp: shcmd (1705): umount /mnt/cache |& logger
     Sep 1 08:34:12 titan logger: umount: /mnt/cache: device is busy.
     Sep 1 08:34:12 titan logger: (In some cases useful info about processes that use
     Sep 1 08:34:12 titan logger: the device is found by lsof(8) or fuser(1))
     Sep 1 08:34:12 titan emhttp: Retry unmounting disk share(s)...

     I am not sure if I am doing this correctly, but I ran commands such as the following:

     lsof /mnt/cache/*
     lsof /mnt/cache
     fuser /mnt/cache/*
     fuser /mnt/cache

     These gave me nothing. I also ran ps -A and reviewed the results line by line; nothing stood out to me. I can never figure out what's hanging my array, so I just end up issuing a powerdown command. I'm running 4 Dockers from linuxserver (which I wouldn't think would cause a problem), and the following plugins: Powerdown 2.18, Open-VM-Tools (yeah, I know it's not bare metal, but maybe someone can be so kind as to help me), cache_dirs (which I did not see in ps -A; I tried killall cache_dirs but nada), Dynamix active streams, and Community Applications. Nothing of merit in my 'go' file. If someone can give me some correct commands or areas to check when this happens in the future, that would be awesome. Thanks for any insight!
  8. Been using this Docker since its release. Works flawlessly. After a few days, I spun down the VM I was using for PlexWatch and PlexWatchWeb--PlexPy is now doing all the work.
  9. No worries. Thank you both for your replies. I'm just stoked you rolled out a version with multicore par2! I can patiently wait for version 8 to go to release status (but man, that new skin looks real pretty).
  10. Also, I know this is a long shot, but is there any way (via the environment variables or extra parameters, perhaps) to pull an alpha build of sab? (I would have edited my post above to add this question, but I guess this subforum doesn't let me edit my posts.) Thanks!!
  11. I was hoping you guys would release this! Is this version of par2 multicore, by chance?
  12. Yes. I have the PlexPy docker working well, connected to PMS on a different ESXi VM (local). Secure connections is set to preferred (not mandatory) within PMS.
  13. Could be the difference in processor power between your Netgear and your unRAID box. My understanding is there's actually more overhead involved with OpenVPN versus PPTP due to the stronger encryption.
  14. Yeah, I don't see that happening ;-) We need to do more digging on this to see what the best approach would be for Plex. Maybe I need to drop a note to the Plex team to get their feedback. They're actually quite helpful to outside developers, from my understanding. Not sure if there's a good way to get in touch with them aside from their forum. Maybe email plexpass at plex.tv?
  15. I'm not getting those errors. ESXi 5.5, unRAID 6, vmdk boot method. Build details in my sig. If we want to get to the root cause of this, we may need to start attaching screenshots of the unRAID VM config within the vSphere client. I can't do it from work, but if needed, I'll post it later today.
  16. Well, yet another data point. I removed the SSD from the M1015 and did an RDM of the SSD from a mobo SATA port. fstrim still doesn't work. So now I'm thinking it's ESXi that might be getting in the way. No TRIM for me.
  17. I think I may have found some evidence of why I can't seem to get fstrim to work when attached to my M1015 in IT mode: http://comments.gmane.org/gmane.linux.scsi/88189 Essentially this is saying that LSI controllers require "deterministic read after TRIM" capability on the SSD in order to enable TRIM support. hdparm -I shows that my 850 EVO is missing this feature, but apparently the PRO versions have it (an hdparm check is sketched after this post list). I guess it's just my luck that I picked out a cache drive that doesn't work with TRIM with my hardware setup. Edit: Just to check, I updated the firmware on the mobo and the M1015. fstrim absolutely doesn't work with this SSD on the HBA. Oh well.
  18. This is the one I was originally referring to, specifically with the Samsung 840 EVO: http://www.xbitlabs.com/articles/storage/display/samsung-840-evo_6.html
  19. That's a good idea. But I don't even need to do a fresh flash; I should be able to just remove the ESXi flash drive and boot from the unRAID flash. I will probably unplug my ESXi datastore drives from the mobo first, however. I'm kind of surprised that, given the number of people here running a similar Atlas build clone, nobody has tried trimming their SSD cache drive. Or at least hasn't posted about it...
  20. Hmm, well then I'm at a loss as to what could be preventing fstrim from running. Perhaps it's also because I'm running ESXi and the M1015 is passed through (PCI passthrough) to the unRAID VM. Because of my ESXi setup, there's no way for me to pass through a single mobo SATA port, so I'm stuck on the M1015. Some review site did a comparison of SSD speeds (including Sammys) with only garbage collection in use and then with TRIM; the TRIM'd drives were orders of magnitude faster. But maybe I just need to accept the fact that I won't be able to TRIM my cache drive.
  21. My limited understanding (and hopefully someone else will chime in) is that XFS and BTRFS both support TRIM, but it either needs to be enabled at mount time or invoked with the fstrim command.
  22. Anyone know why I would get the following error?

     fstrim: /mnt/cache: FITRIM ioctl failed: Operation not supported

     The Samsung SSD is formatted XFS. Thanks! (A short fstrim/mount sketch follows after this post list.) Edit: Ugh. Is this because it's hooked up to my M1015?
  23. Would transferring a full drive to an empty drive via rsync take about the same time as a parity check (write phase) on a drive of that size? Since the factors differ between setups, I figured that if the answer is yes, I'd have a good idea of how long migrating a drive will take me (an rsync sketch follows after this post list). Ideally I'd like to do a drive every night while sleeping, but my guess is it will take much longer than that.
  24. Just doing some reading here in preparation for my ultimate migration to v6. I must say, I'm surprised there's not a simpler method. Not that this is overly complicated, but there appear to be different schools of thought, little consensus, and lots to keep in mind. For me, I'm wondering how to handle user shares that span multiple disks. For example, if I'm copying from disk1 (RFS) to disk16 (XFS), do I later reassign what was disk16 to disk1, preserving my user share config, or do I keep records of all the changes and ultimately update the user share(s)? If a particular user share spans 6 disks, do I just delete that user share until I'm done migrating all those disks to XFS and then create it again, or am I changing the included-disks config of that user share 6 times? Is any of this even worth the hassle? Thanks!
  25. That update dialog is probably when the app refreshed the unRAID status. To update the actual app to a new version, go to the App Store and check the Updates tab.
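
A minimal sketch for post 1, assuming the plugin already downloaded the matching package into /boot/packages (the filename is the one quoted in the post; confirm what's actually there before running):

     cd /boot/packages
     ls open_vm_tools-*.tgz    # confirm the exact filename downloaded for your kernel
     installpkg open_vm_tools-9.10.0.2476743-K4.1.17_unRaid-x86_64-10Zeron.tgz
     # This only lasts until the next reboot; the .plg still needs fixing so it installs automatically.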
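A hedged sketch for post 3 of what host networking could look like, assuming a homebridge image were available; the image name and host path are placeholders, not a real unRAID template. The only part that matters here is --net=host, which HomeKit/Bonjour discovery needs:

     # Placeholder image name and appdata path; --net=host is the relevant piece.
     docker run -d \
       --name=homebridge \
       --net=host \
       -v /mnt/cache/appdata/homebridge:/root/.homebridge \
       some/homebridge-image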
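For posts 6 and 7, a sketch of the standard Linux commands for finding what is keeping /mnt/cache busy at array stop; nothing here is unRAID-specific:

     # Everything open on the cache filesystem (works because /mnt/cache is a mount point)
     lsof /mnt/cache

     # Open files anywhere under the directory tree (recursive, so it can be slow)
     lsof +D /mnt/cache

     # Processes holding the filesystem, listed per process (-m = treat as mount point, -v = verbose)
     fuser -vm /mnt/cache

     # Nested mounts: an SMB/NFS mount created under /mnt/cache will block the umount
     mount | grep /mnt/cache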
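For post 17, a sketch of how to check whether a drive advertises the TRIM features in question. hdparm -I is standard, though the exact wording of the feature lines varies by drive model:

     # Replace sdX with the cache device
     hdparm -I /dev/sdX | grep -i trim
     # On a drive with full support you'd expect lines along these lines:
     #    *    Data Set Management TRIM supported (limit 8 blocks)
     #    *    Deterministic read ZEROs after TRIM
     # If the "Deterministic read ... after TRIM" line is missing, the linked thread suggests
     # the LSI firmware won't pass TRIM through to the drive.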
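For posts 21 and 22, a sketch of the two usual ways to TRIM an XFS or BTRFS filesystem on stock Linux; either way, the FITRIM "Operation not supported" error means the request isn't making it past the controller:

     # One-off TRIM of all unused space on the mounted filesystem (-v reports how much was trimmed)
     fstrim -v /mnt/cache

     # Or mount with continuous TRIM enabled (example device name; adjust to your setup)
     mount -o discard /dev/sdX1 /mnt/cache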
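For posts 23 and 24, a sketch of the disk-to-disk copy being discussed, run against the disk shares so user shares stay out of the picture; the flags are one common archival set, not a prescription:

     # Copy the contents of the old ReiserFS disk onto the empty XFS disk,
     # preserving times, permissions, ownership, and extended attributes.
     # The trailing slashes matter: the contents are copied, not the directory itself.
     rsync -avPX /mnt/disk1/ /mnt/disk16/

     # Rough sanity check of total size on each disk before reusing the source
     du -sh /mnt/disk1 /mnt/disk16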