kaiguy

Everything posted by kaiguy

  1. Do you have PMS set to generate video preview thumbnails under Settings | Server | Library? This could be causing your high CPU usage.
  2. This could be a Plex-related issue. https://forums.plex.tv/discussion/206468/plex-network-lag I remember seeing other posts about latency issues on the Plex forums over the last few years. One thing to try would be to turn off the Plex docker and see if your latency issues go away. If so, I'd say it's Plex. Then I'd try disabling the DLNA server, if you haven't already.
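     A quick way to run that test from the command line, assuming the container is simply named plex (check docker ps for the actual name):
        docker ps                       # confirm the container name
        docker stop plex                # take Plex offline temporarily
        ping -c 50 <your-gateway-ip>    # re-run whatever latency test you were using
        docker start plex               # bring it back up when done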
  3. A new version is required any time the unRAID kernel is updated. Zeron is usually pretty darn quick with compiling a new plugin and will update this thread when available.
     Edit: Strange, I see an update for this plugin for the new kernel as of yesterday. Is it not working for others?
     Edit2: Ok, something funky is going on. The plugin has been updated, and the new package is downloaded, but for some reason the package isn't properly installed. Here's what it shows in the syslog after I updated the plugin and rebooted:
        Feb 11 18:28:07 titan logger: plugin: installing: /boot/config/plugins/openVMTools.plg
        Feb 11 18:28:07 titan logger: plugin: skipping: /boot/packages/open_vm_tools-9.10.0.2476743-K4.1.15_unRaid-x86_64-9Zeron.tgz already exists
        Feb 11 18:28:07 titan logger: plugin: skipping: /boot/packages/open_vm_tools-9.10.0.2476743-K4.1.13_unRaid-x86_64-8Zeron.tgz already exists
        Feb 11 18:28:07 titan logger: plugin: skipping: /boot/packages/open_vm_tools-9.10.0.2476743-K4.1.7_unRaid-x86_64-7Zeron.tgz already exists
        Feb 11 18:28:07 titan logger: plugin: skipping: /boot/packages/open_vm_tools-9.10.0.2476743-K4.1.5_unRaid-x86_64-6Zeron.tgz already exists
        Feb 11 18:28:07 titan logger: plugin: skipping: /boot/packages/open_vm_tools-9.10.0.2476743-K4.0.4_unRaid-x86_64-5Zeron.tgz already exists
        Feb 11 18:28:07 titan logger: plugin: running: anonymous
        Feb 11 18:28:07 titan logger: Open-VM-Tools is not available for Kernel 4.1.17 Please update the plugin.
        Feb 11 18:28:07 titan logger: plugin: installed
     If you updated the plugin, it likely ended up being downloaded to your /boot/packages directory. Just run an "installpkg open_vm_tools-9.10.0.2476743-K4.1.17_unRaid-x86_64-10Zeron.tgz" in that directory for a temporary fix. I think someone with better plugin experience will need to take a look at the file to figure out what went wonky.
     Edit3: I went ahead and deleted the old .plg and redownloaded the updated one. I think things are looking good now. Thanks.
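     For reference, the temporary fix above would look something like this from the console (the filename is the one from my syslog--adjust it to whatever version the plugin actually downloaded):
        cd /boot/packages
        ls open_vm_tools-*K4.1.17*    # confirm the package is actually there
        installpkg open_vm_tools-9.10.0.2476743-K4.1.17_unRaid-x86_64-10Zeron.tgz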
  4. Are you connecting to SSL servers? If not, I'd try that. And maybe use an uncommon SSL port for your provider (most have a couple of port options). I have found that some ISPs throttle certain ports.
  5. Throwing my hat into the ring as well to +1 this! This would be outstanding. FYI, if/when it does happen, from what I have read, the network type will need to be set to host. See https://github.com/nfarina/homebridge/issues/309 Edit: Actually, there are probably more challenges that would need to be overcome, such as the ability to install the various plugins. If they could live outside of the Docker container, then that wouldn't be too difficult, but if not, this could be a hurdle.
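     If someone does put a container together, a rough sketch of how it might be run--the image name and config path here are just placeholders; the host networking part is the known requirement from the GitHub issue above:
        # Host networking is needed so HomeKit discovery works from inside the container.
        # "some/homebridge-image" and the appdata path are hypothetical--substitute whatever the real container uses.
        docker run -d --name homebridge --net=host \
          -v /mnt/cache/.appdata/homebridge:/config \
          some/homebridge-image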
  6. Hi trurl (or anyone else who'd be so kind to chime in), So I finally got the replacement drive precleared, and I figured, before I swap back in an old drive that did give me read errors at one point (though the SMART report looks fine) and do a new config, why not just see what happens if I rebuild the failed disk without swapping the old disk14 back in. So I rebuilt drive3, and surprisingly enough, everything appears to be working fine--I believe all of my files on that drive are still intact (unRAID rocks!). The weird thing is, though, I don't see any errors in the webgui, but my email notification looked like this:
        Event: unRAID Data rebuild
        Subject: Notice [TITAN] - Data rebuild: finished (411641108 errors)
        Description: Duration: unavailable (no parity-check entries logged)
        Importance: warning
     Could those errors be a carryover from when I was having read errors that for some reason never got cleared out, or is this something I should be concerned about? Thanks for all of your help!
  7. ReiserFS. I haven't done a migration to xfs, except for when I add new disks. Yes, the SMART report was from the old disk14. Not sure why I got read errors during the parity check, but I should have run a SMART report before deciding to pull it. Jumped the gun there. Ok, so let me repeat this back to you just to ensure I'm fully understanding this procedure:
     - Pull newly rebuilt disk14 (4TB), replace with original disk14 (2TB)
     - Put in a blank precleared drive for disk3
     - Keep the array stopped
     - Run "new config" with the trust parity option
     - Pretty sure I'd have to reassign all the drives with new config, though, right?
     - Start array, stop array
     - Unassign disk3
     - Start array, stop array
     - Reassign disk3 and let it rebuild
     Essentially this is tricking unRAID into thinking disk3 initially has data on it when it in fact doesn't, keeping parity intact, and thus allowing that drive to hopefully be rebuilt successfully? Thanks again!
     Edit: And the fact that the old and new disk14 are different sizes won't cause any issues with the parity? I have no clue how drive sizes and parity work as it relates to rebuilding data.
  8. 6.1.6. How would that process go? Pull the rebuilt disk14, put back in the old disk14, replace missing disk3, and do a new config? Thanks!
  9. Hey all, So during my monthly parity check, one of my disks (disk14) started having read errors. Having an extra precleared drive ready to go (old: 2TB, new: 4TB), I stopped the parity check, pulled the drive, replaced it, and started to rebuild the disk. During this rebuild, however, another one of my drives (disk3) started experiencing millions of read errors. The rebuild completed and I'm pretty sure the data is bad (directory names don't match up with their actual contents, files can't be accessed, etc.). So here's the deal. I'm fairly certain that disk3 is fully dead. When it's in a bay, it just sounds like it's repeatedly trying to spin up and then I get a "click." The original disk14 may not be a total loss, however. But I don't think there's much I can do to salvage disk3 now, since parity is probably bonked as well. Does anyone have any recommendations on how I should proceed? Is it pretty safe to say that disk3's data is a total loss? I've never run into anything like this before in the years I've run unRAID, so I guess I was due. Any and all suggestions are much appreciated. Thanks!
     Edit: Well, I put that 2TB back in and ran a SMART test. It seems to be fine? It passed. Attaching it here. So at least I should have disk14's data intact, but that doesn't really help me with disk3, right? And I don't even know what I could do while trying to keep whatever parity I have left, if any. Am I even making sense? titan-smart-20160102-1134.zip
  10. It's been so long since I've configured ESXi that I don't even know if my method is ideal anymore. This is from memory: you will use the SSD as a datastore under ESXi and pass through the "NAS" drive to the unRAID VM via raw device mapping. The Atlas build thread should give you all you need. However, considering unRAID's virtualization capabilities, why do you want to run it under ESXi on your Microserver? Why not just run unRAID bare metal, and use either Dockers or KVM machines if you need them? You could then assign your SSD as an unRAID cache drive and have it do double duty...
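      If memory serves, the raw device mapping part is done from the ESXi shell with vmkfstools--something along these lines (the disk identifier and datastore paths are just placeholders; the Atlas thread has the exact steps):
         ls /vmfs/devices/disks/    # find the identifier of the drive you want to pass through
         # Create a physical-mode RDM pointer file on the SSD datastore, then attach the
         # resulting .vmdk to the unRAID VM as an existing disk.
         vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
           /vmfs/volumes/ssd-datastore/unRAID/nas-disk1-rdm.vmdk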
  11. Zeron, you're single-handedly the only reason I'm still able to run unRAID under ESXi. So thank you so much.
  12. No question here. Just wanted to say thanks again to everyone for their great work. I'm using your containers exclusively and they're performing exceptionally well!
  13. Even though I figured out my issue, I'd still love to know what the best practices are for identifying what is keeping a device busy and ultimately keeping the array from unmounting. In my case, I stupidly created an SMB mountpoint under the cache drive. I only figured it out after I specifically ran 'mount' to see what mountpoints were still active. Once I unmounted it, the array was able to unmount. I believe (and correct me if I'm wrong) the correct way to use lsof for other types of array unmounting issues would be: lsof +D /mnt/ to see what open files are still within a directory under /mnt (again, not helpful in my case)... Any other commands that would be of use?
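      For anyone who lands here later, the rough checklist I've ended up with (standard Linux tools, nothing unRAID-specific--adjust the path to whichever disk refuses to unmount):
         mount | grep /mnt/cache    # nested mounts under the disk; this is what caught my SMB mountpoint
         cat /proc/mounts           # same info straight from the kernel
         lsof +D /mnt/cache         # open files anywhere under the tree (can be slow on large disks)
         fuser -vm /mnt/cache       # processes holding the filesystem busy, with PIDs and users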
  14. Hello! Over the last few RCs (and now on 6.1.0), I have noticed that something is keeping my array from unmounting. Specifically, according to the log, something is keeping the cache drive active:
        Sep 1 08:34:12 titan logger: rmdir: failed to remove '/mnt/disk12': No such file or directory
        Sep 1 08:34:12 titan emhttp: shcmd (1703): umount /mnt/disk13 |& logger
        Sep 1 08:34:12 titan logger: umount: /mnt/disk13: not found
        Sep 1 08:34:12 titan emhttp: shcmd (1704): rmdir /mnt/disk13 |& logger
        Sep 1 08:34:12 titan logger: rmdir: failed to remove '/mnt/disk13': No such file or directory
        Sep 1 08:34:12 titan emhttp: shcmd (1705): umount /mnt/cache |& logger
        Sep 1 08:34:12 titan logger: umount: /mnt/cache: device is busy.
        Sep 1 08:34:12 titan logger: (In some cases useful info about processes that use
        Sep 1 08:34:12 titan logger: the device is found by lsof(8) or fuser(1))
        Sep 1 08:34:12 titan emhttp: Retry unmounting disk share(s)...
     I am not sure if I am doing this correctly, but I ran commands such as the following:
        lsof /mnt/cache/*
        lsof /mnt/cache
        fuser /mnt/cache/*
        fuser /mnt/cache
     which gave me nothing. I further ran ps -A and reviewed the results line by line. Nothing stood out to me. I can never figure out what's hanging my array, so every time I just end up issuing a powerdown command. I'm running 4 Dockers from linuxserver (which I wouldn't think would cause a problem), and the following plugins: Powerdown 2.18, Open-VM-Tools (yeah, I know it's not bare metal, but maybe someone can be so kind as to help me), cache_dirs (which I did not see in ps -A; tried to killall cache_dirs but nada), Dynamix active streams, and Community Applications. Nothing in my 'go' file of merit. If someone can give me some correct commands or areas to check when this happens in the future, that would be awesome. Thanks for any insight!
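     Or would something like the following be the better approach? From the man page, fuser without -m only checks the directory itself, while -m looks at everything on the mounted filesystem:
        fuser -vm /mnt/cache    # -m = any process using any file on that filesystem, -v = show user/PID/command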
  15. Thanks, RobJ. I recall reading about the toggle for SMB shares, but I guess I missed the other part. So many changes to keep track of in these RCs.
  16. Wow, you guys are quick! Noticed you pushed out a commit on github to fix the issue. Confirmed working! Awesome!
  17. I've been in the process of transitioning my containers over to Linuxserver exclusively. One item I've noticed with Sonarr (coming initially from a full install on a VM, then via a Binhex docker) is that this version does not seem to auto-"refresh" as I am used to. For example, if I've had the Sonarr webgui up in a browser and it snatches a release, I would see the activity icon display automatically, and a progress bar would show on the calendar. However, I am noticing that I need to manually refresh my browser to get any changes to the webgui whatsoever. Navigating to other areas within the webgui doesn't even seem to cause a refresh; I actually have to hit the reload button on my browser. Any thoughts? Is this a known issue, perhaps? Thanks!
     Edit: This thread seems like a similar problem: https://forums.sonarr.tv/t/gui-update-issues/4195/30 ... it sounds like it could be an issue with SignalR... I do in fact get an error in Chrome's dev tools:
        Starting signalR                          SignalRBroadcaster.js:9
        SignalR: [connecting]                     SignalRBroadcaster.js:32
        GET http://<myhost>:8989/signalr/negotiate?_=1440564408822 500 (Internal Server Error)    jquery.js:9659
        SignalR: [disconnected]                   SignalRBroadcaster.js:32
     I'll post on github.
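     In case it helps anyone debugging the same thing, the failing request can also be hit outside the browser (replace <myhost> with your server's address); a 500 here would point at the SignalR endpoint itself rather than the browser:
        curl -i "http://<myhost>:8989/signalr/negotiate"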
  18. Makes perfect sense. Thank you, Jon. I guess I've just never noticed that before. Thanks!
  19. I like to keep my cache drive clean, so I have created directories on my cache for my appdata (.appdata) and docker (.docker) folders. Today, I realized that I have .appdata and .docker directories within my user share directory. I don't recall this in the past. I have not actually created any cache-only user shares, just created the directories with the leading dot like I have in previous versions of unRAID. I guess I'm just curious if this is by design. Longtime user, but this one was strange for me. Thanks!
  20. Been using this Docker since its release. Works flawlessly. After a few days, I spun down the VM I was using for PlexWatch and PlexWatchWeb--PlexPy is now doing all the work.
  21. No worries. Thank you both for your replies. I'm just stoked you rolled out a version with multicore par2! I can patiently wait for version 8 to go to release status (but man, that new skin looks real pretty)
  22. Also, I know this is a longshot, but I don't suppose there is any way (via the environment variables or extra parameters, perhaps) to pull an alpha build of sab, by chance? (I would have edited my above post to add this question, but I guess this sub doesn't let me edit my posts.) Thanks!!
  23. I was hoping you guys would release this! Is this version of par2 multicore, by chance?
  24. Tried browsing and searching through all the rc threads, so my apologies if this was already asked and answered. I no longer have a cache drive SMB share after upgrading to rc5 (from 6.0.1) nor do I see any sort of toggle in the settings. Was this removed in 6.1?
  25. Sorry if this is a stupid question, but this is my first time moving up to an rc for 6.1... I thought in rc2 there was a change to non-rotational drives in the webgui. Well, my cache SSD has a "click to spindown" arrow on the Main tab in the webgui. Am I misunderstanding the change to think that this shouldn't be an option? Thanks!