• Posts

  • Joined

  • Last visited

Everything posted by DanielCoffey

  1. Remember that your Parity drive must always match (or exceed) the capacity of the largest data drive, and as you increase the size of the largest drive in the array, the time to perform a Parity Check increases. To give a data point, 8TB drives take around 16h for a Parity Check, and the time is fairly linear with drive size.
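The "fairly linear" scaling above can be turned into a rough estimator. This is only a back-of-the-envelope sketch derived from the single 8TB ≈ 16h data point quoted in the post; `parity_check_hours` is a hypothetical helper, and real durations vary with drive speed, controller, and tunables.

```python
# Rough parity-check duration estimate, assuming the roughly linear
# scaling mentioned above (~16 hours for an 8TB largest drive).
HOURS_PER_TB = 16 / 8  # derived from the 8TB ≈ 16h data point

def parity_check_hours(largest_drive_tb: float) -> float:
    """Estimate parity-check time in hours from the largest drive size."""
    return largest_drive_tb * HOURS_PER_TB

print(parity_check_hours(8))   # the original data point: 16.0 hours
print(parity_check_hours(14))  # a larger drive: 28.0 hours
```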
  2. Since this is a Release Announcement thread, you might have a better chance of getting help if you raise a new thread in either the General Issues or KVM forums.
  3. Thank you - I will be able to follow that. If I get into difficulty I will come back and ask.
  4. I am currently set up with a pair of 512GB SSDs in a btrfs cache pool and I would like to know the smoothest way of moving over to a single M.2 SSD, preferably without having to redo my Docker settings if possible. My Forum Signature is up to date. I do understand that I should run Mover first. Thank you.
  5. The Basic license will allow you to grow your array in the future, and you still get the full Docker and VM support that the two larger licenses share.
  6. On the NAS in my signature I came across a bottleneck today and I am interested in knowing what might have caused it. This is a lightweight NAS that serves one PLEX Docker and also acts as an on-site backup for my desktop PC.

     My wife was watching a movie that was already encoded correctly for the Apple TV and, without thinking, I decided to back up a large number of small files to the backup share while the movie was playing. Very quickly the movie started to stutter, then froze until the file transfer was complete. The movie was streaming from one of the HDDs. I know from experience that Plex only pushes the i3-8300 to between 10% and 25% load as a chunk is requested.

     The file transfer was 4GB in size with about 8500 small files, going to the Backup share (Cache = Yes) via FTP (FileZilla). The source files were on my desktop NVMe drive and went over Cat6 LAN through a 1Gbps switch. The NAS cache drives are a pair of SSDs on the motherboard's SATA connectors; the HDDs go through the SAS9207-8i card.

     When I heard that the movie was stuttering I looked at the Dashboard and saw a couple of CPU cores pegged at 100% and the other two at about 80%. I also wondered if I might have saturated the motherboard's SATA lanes. Since it is only a 4-core CPU I do not have any core pinning.

     What do you think might have caused the bottleneck, and what might I be able to do so that it is less likely to happen again?
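One way to see why thousands of small files hurt far more than raw bandwidth suggests is a quick back-of-the-envelope calculation using the numbers from the post above. The 50 ms per-file figure is an illustrative assumption (covering FTP command round-trips and filesystem metadata writes), not a measurement from this NAS.

```python
# Back-of-the-envelope: per-file overhead can dwarf raw transfer time
# when a transfer consists of thousands of small files.
total_bytes = 4 * 1024**3        # ~4GB transferred in total
file_count = 8500                # number of small files
link_bytes_per_s = 1e9 / 8       # 1Gbps link ~= 125 MB/s raw

raw_transfer_s = total_bytes / link_bytes_per_s

per_file_overhead_s = 0.050      # assumed 50 ms per file (illustrative)
overhead_s = file_count * per_file_overhead_s

print(f"raw data time: {raw_transfer_s:.0f}s")   # prints "raw data time: 34s"
print(f"per-file overhead: {overhead_s:.0f}s")   # prints "per-file overhead: 425s"
```

Even with a generous overhead estimate, the point stands: the link itself is idle most of the time while the CPU and disks churn through per-file work, which fits the pegged cores seen on the Dashboard.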
  7. And remember that if you have a cache drive, movies will sit on it and contribute to it filling up until the Mover process runs according to its schedule (or you manually kick it off yourself) at which point they should be sent over to the array shares.
  8. I am at 2560x1440 on Win10 if that matters. If you want me to check it on other platforms I have a Mac Mini (also at 1440p) running Safari browser and an iPad.
  9. VERSION: 6.8.1. ISSUE: UI on Dashboard page - the "thumb down" dropdown menu for SMART options hides the last menu item, which cannot be scrolled into view. BROWSER: Firefox 72.0.1 (64-bit). I had several CRC errors on my first cache device, which displayed the yellow "error" thumb down icon. When I clicked on the error thumb to reveal the Attributes/Capability/Identity and Ignore (or Cancel?) menu, the final menu item was not visible even when I scrolled the window down as far as possible. The last Ignore (or Cancel?) option was tucked behind the menu footer. I was able to click on the top sliver of the menu item to activate it and clear the errors, but I feel the full text should have been displayed. See attached images for the UI issue and active Plugins. nas-plex-diagnostics-20200117-1949.zip
  10. I have just tried 6.8.0-rc1 to compare its parity checks to both 6.7.2 and the Tunables script, since my array is very straightforward, and the result is interesting: it shows there is still room for improvement. The first thing I did before upgrading from 6.7.2 was reset all the Disk Settings tunables back to default (including nr_requests, which now defaults to auto). My array is 2x WD Red 8TB dual parity and 6x WD Red 8TB data with 2x 512GB SSD for cache. 6.7.2 default - 17h15 to 17h30; 6.7.2 tunables - 15h30; 6.8.0-rc1 default - 16h45. I have added a screenshot of the 6.7.2 Tunables and also the 6.8.0-rc1 default values for comparison, and I think there is still scope for improvement. I suspect that on my server up to an hour could still be tuned out.
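Taking the slower 6.7.2 default figure (17h15) as the baseline, the reported times work out to the following relative savings (a quick calculation on the numbers quoted above; `to_minutes` is just a hypothetical helper):

```python
# Convert the reported parity-check times to minutes and compare them
# against the 6.7.2 default baseline of 17h15.
def to_minutes(hours: int, minutes: int) -> int:
    return hours * 60 + minutes

baseline = to_minutes(17, 15)   # 6.7.2 default
tunables = to_minutes(15, 30)   # 6.7.2 with tuned values
rc1      = to_minutes(16, 45)   # 6.8.0-rc1 default

print(f"tunables saved {(baseline - tunables) / baseline:.1%}")  # prints "tunables saved 10.1%"
print(f"rc1 defaults saved {(baseline - rc1) / baseline:.1%}")   # prints "rc1 defaults saved 2.9%"
```

So the tuned 6.7.2 run is still roughly an hour and a quarter faster than the rc1 defaults, which is consistent with the suspicion that more could be tuned out.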
  11. If you have a spare PCIe slot you also have the option of adding an HBA card such as the LSI SAS 9201-8i, which has two SAS ports. Each of these can be connected to four SATA drives using a SAS-to-SATA breakout cable, giving a total of eight more SATA ports. There are reputable resellers on eBay who will sell tested and flashed cards ready to plug in and go. EDIT: This does assume you have spare drive slots in your unRAID license, of course.
  12. Perfect - I didn't know you could do that. Thanks!
  13. On my 6.7.2 Array with a 2xSSD Cache Pool, one of the cache drives threw a CRC Error during a Mover run and I would like to know how to clear the warning. I had just manually transferred about 13GB from my main PC to the backup share and clicked the Mover to distribute it all according to the share preferences, and I got a popup saying that the first Cache drive had experienced one CRC Error. Looking at the SMART Report for that device does show one CRC Error Count, but all other values are healthy. A quick SMART test passes. The device is still flagged in yellow on the Dashboard and I would like to know whether it can be cleared or whether it is stuck that way.
  14. Here's a tip based on personal experience... make sure the array is not due to go to sleep while the Preclear is running. In my case it watched the array for 30 minutes, decided all was quiet, ignored the unassigned devices and shut down. Oh poop! Time to start over I guess. EDIT : well colour me impressed! When I woke the array, the Preclear docker simply resumed from where it had left off and was up and ticking by the time I reopened the WebUI.
  15. That is just what I needed, thank you. So just as with other Dockers, we can close the Docker window and the process inside it will carry on running. We can pop back later and reopen the WebUI and take a fresh peek at how it is getting on. Once it is completely finished we are free to stop the Docker if we want. Cheers.
  16. Please excuse the possibly idiotic question, but can I close the Docker command prompt window while the preclear is running (like you could with Screen), or must it remain open in order for the script to continue? On an 8TB drive it will of course need to run for quite a while, and this was not mentioned in the FAQ.
  17. Good point, thanks. I will report back with what I receive.
  18. Well I have to say I am REALLY impressed with WD's warranty support! Despite changing the SAS-SATA cable, trying the drive on a new HBA card and deleting the partition, it still did its odd shutdown thing one time in four. It still showed a slower transfer rate between 0-1TB and that odd wobble between 5-6TB compared to all the other seven identical drives. All the while it still performed perfectly well as a functioning hard drive, in that it never lost data. First WD had me run their own SMART data-collecting tool, which the drive of course passed. They advised that they couldn't really accept the DiskSpeed Docker results as evidence in a warranty case since it wasn't their own tool, but agreed that it was clear the drive had "something odd" about it. They just turned round and said that I should send it in and they would replace it with a new one. Not bad service for a two-year-old drive (which of course has a three-year warranty). I feel that their opinion was that it was simply not worth the effort of sending the drive to a technician to be examined and analysed. Just replace it and move on. Customer happy? Yup.
  19. There we go - tidied up my post. I will be rerunning the script soon, as I have swapped one WD Red 8TB out of the array because it is behaving oddly on shutdown, and the DiskSpeed docker showed it had odd behaviour and performance. Sadly WD are being very slow to acknowledge my Warranty case so I may have to escalate it on Monday.
  20. Hello folks - I need some help getting back into the web GUI of my array after a router change. I used to have the typical FTTC modem to wireless router running IPv6, with DHCP issuing a static address to my unraid box, and accessing the web GUI via that address was fine. I have just changed to a 4G modem/router running IPv4 (at the moment) and of course the DHCP server is not set up to issue static addresses yet. I quickly found the dynamic address the server had been assigned and accessed it, but immediately ran into a problem. My Windows 10 desktop browser changed the address into the "long" address that the box seemed to be using, and gave the Server Not Found error: We can’t connect to the server at 47451451017659140b69fc701f291711f8f06834.unraid.net. What do I need to change or reset in order to gain access to the web GUI again, please? I do have a monitor on the server and I do have PuTTY.
  21. I can answer the last part... no, you don't have to set Tunables to default before using the tool. The first test is performed using "current". It then takes a peek at "default". Only then does it start probing the possible values that may affect the performance. If you want to set the values back to Default at any time, simply go to the Disk Settings page and delete the value in the fields that say "User Defined" and hit Apply. It will automatically reset them to Default for you.
  22. Good point, thanks. Now I have taken that drive off and also changed one of the SAS-SATA cables, I will observe the array. If it does it again I'll replace the card. Fortunately I can replace it like for like as I am not saturating it at all.
  23. I have completed an Extended SMART test on the drive that has been giving me issues. Please could someone have a look at the results to see if there is anything that would indicate issues with the drive. The drive itself is not in the array now, so I am free to pull it and submit it for warranty replacement, but I would like to know if there is something I could point to in the SMART report rather than just relying on the DiskSpeed results. The data on the drive has never been compromised (as far as I am aware), but it really hates shutting down (and I do recall one or two lockups on boot a fair while ago which may be related). WDC_WD80EFZX-68UW8N0_VK1DZHAY-20190822-0554.txt