JimPhreak

Everything posted by JimPhreak

  1. OK, so I just re-configured my cache pool this weekend to a 3x480GB Intel 730 SSD pool. My usual transfer speeds didn't increase at all, still hovering around 30-40MB/s tops. However, I did come to a realization... 95% of the time I am transferring files to a cache-only share (Downloads). Any time I transfer to this share the transfer speed is very poor (30-40MB/s). However, if I do a transfer to a non-cache-only share such as Videos (which I rarely if ever do), then my transfers pretty much max out my 1Gbps connection (113MB/s). So my question is: what is it about copying to a cache-only share that would cause such a significant drop?
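One way to narrow this down is to benchmark the cache file system locally on the server, taking the network and SMB out of the picture. A minimal sketch; the /mnt/cache/Downloads path is just an example of where a cache-only share would live, and the default of /tmp makes it safe to try anywhere:

```shell
#!/bin/sh
# Quick local write benchmark to separate disk speed from network/SMB speed.
# conv=fdatasync forces the data to disk before dd reports, so the figure
# reflects actual write throughput rather than the RAM page cache.
# Pass the share's path as $1, e.g. /mnt/cache/Downloads (example path);
# defaults to /tmp so it is harmless to run as-is.
TARGET="${1:-/tmp}/speedtest.bin"

dd if=/dev/zero of="$TARGET" bs=1M count=128 conv=fdatasync 2>&1 | tail -n 1

rm -f "$TARGET"
```

If this local run is fast on the cache-only share but SMB transfers to it are still slow, the bottleneck is in the network/SMB path rather than the pool itself.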
  2. I see. Well, if I have 4 SSDs in a cache pool, what would be the advantage of putting my Plex data on its own SSD outside of the pool? I personally wouldn't want to have my Plex data on an unprotected drive/array as it's very valuable to me. If you have room in cache, that is always easier than dealing with a separate drive that isn't managed by unRAID. I will have room once I add these 2 new SSDs. Can I simply add two new drives to an already-created cache pool without losing the data on the pool as currently configured? It's supposed to work that way, but you might make a backup just in case something goes wrong. Yup, already on it. Thanks!
  3. I see. Well, if I have 4 SSDs in a cache pool, what would be the advantage of putting my Plex data on its own SSD outside of the pool? I personally wouldn't want to have my Plex data on an unprotected drive/array as it's very valuable to me. If you have room in cache, that is always easier than dealing with a separate drive that isn't managed by unRAID. I will have room once I add these 2 new SSDs. Can I simply add two new drives to an already-created cache pool without losing the data on the pool as currently configured?
  4. If I have two drives already in a cache pool, can I add 2 more drives to that pool without losing any data currently on the pool? Is it just as simple as taking down the array and assigning the two new disks to the pool?
  5. I see. Well, if I have 4 SSDs in a cache pool, what would be the advantage of putting my Plex data on its own SSD outside of the pool? I personally wouldn't want to have my Plex data on an unprotected drive/array as it's very valuable to me.
  6. Hey everyone, I'm in the process of upgrading my cache pool (adding two 480GB SSDs to my two current ones) and figured now is a good time to clean up (free up) some space that some of my dockers take up on that pool. My Plex server is HUGE, and most of that is because I have Video Thumbnail Previews enabled on my Movie (1,650+ movies) and TV Show (11,000+ episodes) libraries. Is there a way for me to store that data (which resides inside of the Media directory in the Plex appdata folder) on my array, and if so, is there any reason not to do so? With that data stored in my Plex config folder, my Plex appdata folder takes up over 180GB (35GB without the Thumbnail Previews) of space, so you can imagine why I'd be anxious to free some of that up.
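One common approach for data like this is to move the directory to an array share and leave a symlink at the old location so the application keeps finding it. A sketch; the Plex paths in the usage comment are assumptions about the appdata layout, so check your own:

```shell
#!/bin/sh
# Sketch: relocate a large directory off the cache and leave a symlink
# behind so the application still finds it at the old path.
relocate() {
    src="$1"; dst="$2"
    mkdir -p "$dst"
    cp -a "$src/." "$dst/"   # copy first, so an interrupted move loses nothing
    rm -rf "$src"
    ln -s "$dst" "$src"      # old path now points at the array copy
}

# Example (hypothetical paths -- verify against your own appdata layout):
#   relocate "/mnt/cache/appdata/plex/Library/Media" "/mnt/user/PlexMedia"
```

One caveat: the symlink target has to be resolvable from inside the Plex container, so the destination share needs to be mapped into the container as well, at the same path.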
  7. I used to run Plex inside of a separate VM, but I switched when unRAID moved to Docker in version 6, and to me it's just so much easier to manage via docker than running it on top of an OS. I don't quite understand why I'd see any benefit in terms of performance running Plex in a separate VM instead of inside of my unRAID VM. It's still Plex using my CPU either way.
  8. I got it for $375 on Newegg a month ago; don't know why the price has gone up so much. There is no internal 2.5" mounting bracket, but there is a 3.5" internal mounting bracket which I have used to mount two SSDs inside of a 2-to-1 enclosure. I'm planning to add two more SSDs, and I will just throw them in between the two 3.5" HDD cages with some velcro tape.
  9. I wouldn't say I'm not getting the performance I need; I just want to maximize what I can get out of my CPU for Plex transcoding purposes. When you say you assigned all your cores to all of your VMs, do you mean you assigned all available vCPUs leaving none left, or that you overprovisioned each VM?
  10. This question really is targeting those who are heavy Media Server users, more specifically those whose servers do a LOT of transcoding. I'm trying to determine how to provision my vCPUs on my ESXi box with regard to my unRAID VM so that I get the best possible performance out of that server while not affecting my other VMs. My CPU has 16 vCPU cores available (Xeon D-1540), and my unRAID server (mainly Plex) gets hammered as I do a ton of transcoding (sometimes 6-10 at a time). How are some of you provisioning (or over-provisioning) your vCPUs with regard to unRAID, and how has it affected your other VMs? P.S. I don't have any high-need VMs. I have one that acts as my home DC and two other Windows client VMs. All three are very lightly taxed.
  11. Currently I have an ESXi box (specs below) that has unRAID as my main VM on it. I'm looking to move off ESXi and go bare metal with unRAID (provided I can move my VMs over some way) to leverage all my CPU power, since Plex really hammers my CPU (I often have 10+ remote users connected). Would it be possible to add my Samsung SM951 M.2 SSD to my cache pool (I realize I'd lose about 32GB of space)? Currently my VMs sit on my M.2 SSD, and the two Intel 730s are passed through to my unRAID VM as my cache pool, but I obviously want to make full use of all my hardware. I'm also considering picking up another two Intel 730s to add to my cache pool since I'll need the extra space for VMs, but I'm not sure what I'd do with my SM951 in that case. Would love some suggestions. Hardware: Supermicro X10SDV-TLN4F w/ Xeon D-1540 2.0GHz 8-core CPU, 64GB DDR4 ECC RAM, IBM ServeRAID M1015, 16GB SATA DOM SSD, Samsung SM951 512GB M.2 SSD, Intel 730 480GB SSD (x2)
  12. I'm giving serious consideration to moving my unRAID server off of a VM and going bare metal to ensure it's getting the full use of all my Xeon D-1540 cores (Plex hammers my CPU). However, I run a bunch of Windows VMs (2-3 Win8/10 VMs and two 2012R2 VMs). Is there an easy way for me to move these VMs over to KVM without having to reconfigure everything? Also, how does networking work with KVM? I don't see any options to set the VLANs for the given VMs anywhere within unRAID.
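For the disk images themselves there is a generally workable path: qemu-img, which ships with the QEMU/KVM stack, can convert ESXi's VMDK files to the qcow2 format KVM uses. A sketch with placeholder file names:

```shell
#!/bin/sh
# Sketch: convert an ESXi VMDK disk image to qcow2 for use with KVM/libvirt.
# File names below are placeholders, not real paths from the system.
convert_vmdk() {
    # -p shows progress; -f/-O declare source and destination formats
    qemu-img convert -p -f vmdk -O qcow2 "$1" "$2"
}

# Example (placeholder paths):
#   convert_vmdk /mnt/user/isos/win10.vmdk /mnt/user/domains/win10/vdisk1.qcow2
```

Only the disk converts this way; the VM definition itself (CPU count, NIC model, boot device) still has to be re-created in the new VM manager.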
  13. A short SMART test can pass on disks with pending sectors, but I don't see any obvious issues in the SMART report. Did it complete the SMART extended test without errors? Yes, it passed both short and extended tests.
  14. I moved my server to this case about a month ago and I love it. Pretty much the same case in the OP except for the drive bays.
  15. I know there have been noted issues with regard to the write speed of the BTRFS file system that is used with cache pools on unRAID. I experienced these issues first hand and then decided to try another route, which was to use a 2-to-1 SSD/RAID enclosure that presents a RAID1 mirror to unRAID as a single device. I figured if I could do that and put the XFS file system on it, I'd have better speeds. Well... no dice. I still have the same speed issues (rarely if ever eclipsing 50MB/s). It's making me seriously question why I even have a cache drive/pool. Is there no way to get close to 1Gbps cache pool writing in unRAID 6 yet?
  16. Smart report attached. It passed. WDC_WD30EFRX-68EUZN0_WD-WCC4NNARETAN-20151130-0837.txt
  17. Just got a notification of 192 disk read errors on one of my drives today. The syslog pertaining to these errors is below. Just ran a short SMART test, which passed, and am now running the extended test.

Nov 29 12:39:07 SPE-UNRAID kernel: mpt2sas0: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
Nov 29 12:39:07 SPE-UNRAID kernel: sd 4:0:3:0: [sdh] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Nov 29 12:39:07 SPE-UNRAID kernel: sd 4:0:3:0: [sdh] Sense Key : 0x2 [current]
Nov 29 12:39:07 SPE-UNRAID kernel: sd 4:0:3:0: [sdh] ASC=0x4 ASCQ=0x0
Nov 29 12:39:07 SPE-UNRAID kernel: sd 4:0:3:0: [sdh] CDB: opcode=0x88 88 00 00 00 00 00 0c b4 ce 80 00 00 02 00 00 00
Nov 29 12:39:07 SPE-UNRAID kernel: blk_update_request: I/O error, dev sdh, sector 213175936
Nov 29 12:39:07 SPE-UNRAID kernel: md: disk5 read error, sector=213175872
Nov 29 12:39:07 SPE-UNRAID kernel: md: disk5 read error, sector=213175880
Nov 29 12:39:07 SPE-UNRAID kernel: md: disk5 read error, sector=213175888
[... hundreds of similar "md: disk5 read error" lines (sectors 213175896 through 213177400) and a second I/O error at dev sdh sector 213176448 omitted ...]
Nov 29 12:39:28 SPE-UNRAID kernel: sd 4:0:3:0: attempting task abort! scmd(ffff880108f0ed80)
Nov 29 12:39:28 SPE-UNRAID kernel: sd 4:0:3:0: [sdh] CDB: opcode=0x12 12 00 00 00 24 00
Nov 29 12:39:28 SPE-UNRAID kernel: scsi target4:0:3: handle(0x000c), sas_address(0x4433221103000000), phy(3)
Nov 29 12:39:28 SPE-UNRAID kernel: scsi target4:0:3: enclosure_logical_id(0x500605b006365a00), slot(0)
Nov 29 12:39:32 SPE-UNRAID kernel: sd 4:0:3:0: task abort: SUCCESS scmd(ffff880108f0ed80)
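A wall of errors like the one above can be condensed into a per-disk count and sector range, which is what you'd compare against the pending-sector attributes in the SMART report. A small grep/awk sketch; feeding the syslog on stdin (e.g. from /var/log/syslog) is the assumption:

```shell
#!/bin/sh
# Summarize "md: diskN read error, sector=NNN" syslog lines read on stdin
# into one line per disk: error count plus lowest and highest sector hit.
summarize_read_errors() {
    grep -o 'md: disk[0-9]* read error, sector=[0-9]*' \
        | awk -F'[ =]' '{
            disk = $2; sector = $NF
            count[disk]++
            if (!(disk in lo) || sector < lo[disk]) lo[disk] = sector
            if (sector > hi[disk]) hi[disk] = sector
          }
          END {
            for (d in count)
                printf "%s: %d read errors, sectors %s-%s\n", d, count[d], lo[d], hi[d]
          }'
}

# Example: summarize_read_errors < /var/log/syslog
```

A tight, contiguous sector range like this one usually points at one localized bad region on the platter rather than errors scattered across the disk.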
  18. What does "ipmitool sensor" show from the command line? Also try "ipmitool sdr" and "ipmitool -vc sdr" if that works. I realized my issue is that my unRAID server is a VM inside ESXi, so it's not going to read my IPMI sensors. The plugin works fine on my bare-metal backup unRAID server.
  19. Is there an easy way to pull a drive from unRAID that is formatted with XFS and read that data on a Windows PC? I've searched the forums for this info, but the best I can find is some people posting about some 3rd-party utilities that can read RFS. I haven't seen any mention of XFS.
  20. I wouldn't know how to determine that even if it were?
  21. I haven't had any issues either until this. What do you mean by a reload? I've tried to install the docker from scratch and it's a no-go at this point.
  22. Awesome. Thanks for the quick response aptacla.
  23. Having an issue with my Sonarr docker today. I went into the webGui today to see if all my shows downloaded last night, only to find that one of them was mysteriously missing from my calendar. So I looked at my series editor, and in fact it was gone. I tried to add it, but doing a search just times out and finds nothing. I then went to do an update to see if maybe there was some bug, and the updates page just will not load at all. I tried restarting the docker multiple times, but no resolution. So I figured, why not delete and recreate the docker container in case something got corrupted. Now I can't even get Sonarr to load; the log just sits at this point:

----------------------------------- GID/UID -----------------------------------
User uid: 99
User gid: 100
-----------------------------------
Get:1 http://apt.sonarr.tv master InRelease [6,875 B]
Ign http://mirrors.rit.edu/ubuntu/ trusty InRelease
Get:2 http://mirrors.rit.edu/ubuntu/ trusty-security InRelease [64.4 kB]
[... Get:3 through Get:29 package list fetches from mirrors.rit.edu and apt.sonarr.tv omitted ...]
Get:30 http://mirrors.rit.edu/ubuntu/ trusty/multiverse amd64 Packages [169 kB]
Fetched 21.8 MB in 7s (3,007 kB/s)
Reading package lists...

I also see that the last update log was on Oct. 14th, so it seems that no updates have installed since then.

EDIT: Docker finally loaded, but the webgui is still unresponsive when doing a search for any new show or when clicking on the updates page. When the updates page finally loads after a few minutes, I get these entries in the log after I hit install latest:

[Fatal] NzbDroneErrorPipeline: Request Failed System.Net.WebException: The request timed out
  at System.Net.HttpWebRequest.EndGetResponse (IAsyncResult asyncResult) [0x00000] in :0
  at System.Net.HttpWebRequest.GetResponse () [0x00000] in :0
  at NzbDrone.Common.Http.HttpClient.ExecuteWebRequest (NzbDrone.Common.Http.HttpRequest request, System.Net.HttpWebRequest webRequest) [0x00000] in :0
[Warn] SkyHookProxy: The request timed out System.Net.WebException: The request timed out
  at System.Net.HttpWebRequest.EndGetResponse (IAsyncResult asyncResult) [0x00000] in :0
  at System.Net.HttpWebRequest.GetResponse () [0x00000] in :0
  at NzbDrone.Common.Http.HttpClient.ExecuteWebRequest (NzbDrone.Common.Http.HttpRequest request, System.Net.HttpWebRequest webRequest) [0x00000] in :0
[info] InstallUpdateService: Downloading update 2.0.0.3551
[info] InstallUpdateService: Verifying update package
[Error] InstallUpdateService: Update package is invalid
[Error] InstallUpdateService: Update process failed NzbDrone.Core.Update.UpdateVerificationFailedException: Update file '/tmp/nzbdrone_update/NzbDrone.develop.2.0.0.3551.mono.tar.gz' is invalid
  at NzbDrone.Core.Update.InstallUpdateService.InstallUpdate (NzbDrone.Core.Update.UpdatePackage updatePackage) [0x00000] in :0
  at NzbDrone.Core.Update.InstallUpdateService.Execute (NzbDrone.Core.Update.Commands.ApplicationUpdateCommand message) [0x00000] in :0
[Error] CommandExecutor: Error occurred while executing task ApplicationUpdate NzbDrone.Core.Messaging.Commands.CommandFailedException: Downloaded update package is corrupt ---> NzbDrone.Core.Update.UpdateVerificationFailedException: Update file '/tmp/nzbdrone_update/NzbDrone.develop.2.0.0.3551.mono.tar.gz' is invalid
  at NzbDrone.Core.Update.InstallUpdateService.InstallUpdate (NzbDrone.Core.Update.UpdatePackage updatePackage) [0x00000] in :0
  at NzbDrone.Core.Update.InstallUpdateService.Execute (NzbDrone.Core.Update.Commands.ApplicationUpdateCommand message) [0x00000] in :0
  --- End of inner exception stack trace ---
  at NzbDrone.Core.Update.InstallUpdateService.Execute (NzbDrone.Core.Update.Commands.ApplicationUpdateCommand message) [0x00000] in :0
  at NzbDrone.Core.Messaging.Commands.CommandExecutor.ExecuteCommand[ApplicationUpdateCommand] (NzbDrone.Core.Update.Commands.ApplicationUpdateCommand command, NzbDrone.Core.Messaging.Commands.CommandModel commandModel) [0x00000] in :0

EDIT #2: I tried completely deleting the docker container, as well as deleting (well, moving out of my appdata folder to test) the Sonarr config folder. When I try to re-create the Sonarr docker I get the following message while it's creating:

Warning: mkdir(): File exists in /usr/local/emhttp/plugins/dynamix.docker.manager/include/CreateDocker.php on line 23
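The updater's "Update package is invalid" error usually means the downloaded tarball was truncated or corrupted in transit. The same style of check can be reproduced by hand; a sketch (the /tmp path in the example comes from the log, the function name is mine):

```shell
#!/bin/sh
# Check whether a file is a readable gzip'd tar archive. gzip -t verifies the
# compressed stream end to end and tar -tzf walks the archive index, so a
# truncated or corrupted download fails one or both checks.
is_valid_targz() {
    gzip -t "$1" 2>/dev/null && tar -tzf "$1" >/dev/null 2>&1
}

# Example against the file named in the log:
#   is_valid_targz /tmp/nzbdrone_update/NzbDrone.develop.2.0.0.3551.mono.tar.gz \
#       && echo "package ok" || echo "package corrupt -- delete and re-download"
```

If the file fails the check, deleting it and letting the updater re-download is the usual fix; if it keeps failing, that points at a proxy or network issue between the container and the update server.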