DanielCoffey

Members
  • Posts

    254
  • Gender
    Male
  • Location
    South Ayrshire, UK

DanielCoffey's Achievements

Contributor (5/14)

13 Reputation

  1. Remember that your Parity drive must always match (or exceed) the capacity of the largest data drive, and as you increase the size of the largest drive in the array, the time to perform a Parity Check increases. To give a data point, 8Tb drives take around 16h for a Parity Check and it is fairly linear with drive size (see the rough timing sketch after this list).
  2. Since this is a Release Announcement thread, you might have a better chance of getting help if you raise a new thread in either the General Issues or KVM forums.
  3. Thank you - I will be able to follow that. If I get into difficulty I will come back and ask.
  4. I am currently set up with a pair of 512Gb SSDs in a btrfs cache pool and I would like to know the smoothest way of moving over to a single M.2 SSD, preferably without having to redo my Docker settings if possible. My Forum Signature is up to date. I do understand that I should run Mover first. Thank you.
  5. The Basic license will allow you to grow your array in the future and you do still get the full Docker and VM support that the larger two licenses also share.
  6. On the NAS in my signature I came across a bottleneck today and I am interested in knowing what might have caused it. This is a lightweight NAS that serves one Plex Docker and also acts as an on-site backup for my desktop PC. My wife was watching a movie that was already encoded correctly for the Apple TV and without thinking I decided to back up a large number of small files to the backup share while the movie was playing. Very quickly the movie started to stutter, then froze until the file transfer was complete. The movie was streaming from one of the HDDs. I know from experience that Plex only causes the i3-8300 to go between 10% and 25% load as a chunk is requested. The file transfer was 4Gb in size with about 8500 small files. It was going to the Backup share which is Cache = Yes via FTP (FileZilla). The source files were on my desktop NVMe drive and went over Cat6 LAN through a 1Gb switch. The NAS cache drives are a pair of SSDs which are on the motherboard's SATA connectors. The HDDs go through the SAS9207-8i card. When I heard that the movie was stuttering I looked at the Dashboard and saw the CPU with a couple of cores pegged at 100% and the other two at about 80%. I also wondered if I might have saturated the motherboard's SATA lanes. Since it is only a 4-core CPU I do not have any core pinning. What do you think might have caused the bottleneck and what might I be able to do so that it is less likely to happen again? (A rough small-file transfer estimate is sketched after this list.)
  7. And remember that if you have a cache drive, movies will sit on it and contribute to it filling up until the Mover process runs according to its schedule (or you manually kick it off yourself) at which point they should be sent over to the array shares.
  8. I am at 2560x1440 on Win10 if that matters. If you want me to check it on other platforms I have a Mac Mini (also at 1440p) running Safari browser and an iPad.
  9. VERSION: 6.8.1 ISSUE: UI on Dashboard page - "thumb down" dropdown menu for SMART options cannot see last menu item and cannot scroll down to it. BROWSER: Firefox 72.0.1 (64-bit) I had several CRC errors on my first cache device which displayed the yellow "error" thumb down icon. When I clicked on the error thumb to reveal the Attributes/Capability/Identity and Ignore (or Cancel?) menu, the final menu item was not visible even when I scrolled the window down as far as possible. The last Ignore (or Cancel?) option was tucked behind the menu footer. I was able to just click on the top bit of the menu item to activate it and clear the errors, but I feel the text should have been displayed. See attached images for the UI issue and active Plugins, plus diagnostics: nas-plex-diagnostics-20200117-1949.zip
  10. I have just tried 6.8.0-rc1 to compare its parity checks to both 6.7.2 and the Tunables script since my array is very straightforward, and the result is interesting and shows there is still room for improvement. The first thing I did before upgrading from 6.7.2 was reset all the Disk Settings tunables back to default (including nr-requests which defaults to auto now). My array is 2x WD Red 8Tb dual parity and 6x WD Red 8Tb data with 2x 512Gb SSD for cache. Parity check times: 6.7.2 default - 17h15 to 17h30; 6.7.2 tunables - 15h30; 6.8.0-rc1 default - 16h45 (see the throughput sketch after this list). I have added a screenshot of the 6.7.2 Tunables and also 6.8.0-rc1 default values for comparison and I think there is still scope for improvement. I suspect that on my server we could find up to an hour in there that could be tuned out.
  11. If you have a spare PCIe slot you also have the option of adding an HBA card such as the LSI SAS 9201-8i which will have two SAS ports. Each of these can be connected to four SATA drives using a SAS to SATA cable, giving a total of eight more SATA ports. There are reputable resellers on eBay who will sell tested and flashed cards ready to plug in and go. EDIT : This does assume you have spare slots in your unRAID license of course.
  12. Perfect - I didn't know you could do that. Thanks!
  13. On my 6.7.2 Array with a 2xSSD Cache Pool, one of the cache drives threw a CRC Error during a Mover run and I would like to know how to clear the warning. I had just manually transferred about 13Gb from my main PC to the backup share and clicked the Mover to distribute it all according to the share preferences, and I got a popup saying that the first Cache drive had experienced one CRC Error. Looking at the SMART Report for that device does show one CRC Error Count but all other values are healthy. A quick SMART test passes. The device is still flagged in yellow on the Dashboard and I would like to know if it can be cleared or if it is stuck that way? (See the SMART check sketch after this list.)
  14. Here's a tip based on personal experience... make sure the array is not due to go to sleep while the Preclear is running. In my case it watched the array for 30 minutes, decided all was quiet, ignored the unassigned devices and shut down. Oh poop! Time to start over I guess. EDIT : well colour me impressed! When I woke the array, the Preclear docker simply resumed from where it had left off and was up and ticking by the time I reopened the WebUI.
  15. That is just what I needed, thank you. So just as with other Dockers, we can close the Docker window and the process inside it will carry on running. We can pop back later and reopen the WebUI and take a fresh peek at how it is getting on. Once it is completely finished we are free to stop the Docker if we want. Cheers.
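
Rough timing sketch (re item 1): a back-of-envelope Python snippet, purely illustrative, that extrapolates from the single 8Tb-takes-about-16h data point quoted above and the roughly linear scaling claimed there; real times vary with drive speed, controller, and tunables.

# Rough parity-check duration estimate, assuming the roughly linear
# scaling described in item 1 (8TB largest drive ~= 16 hours).
HOURS_PER_TB = 16 / 8  # ~2 hours per TB of the largest drive

def estimated_parity_check_hours(largest_drive_tb: float) -> float:
    """Estimate parity-check time from the largest drive in the array."""
    return largest_drive_tb * HOURS_PER_TB

if __name__ == "__main__":
    for size_tb in (4, 8, 12, 16):
        hours = estimated_parity_check_hours(size_tb)
        print(f"{size_tb}TB largest drive -> ~{hours:.0f}h parity check")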
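
Small-file transfer estimate (re item 6): a quick, purely illustrative Python calculation of why 8500 small files behave so differently from one large sequential stream on a gigabit link. The 5 ms per-file overhead is an assumed figure standing in for FTP open/close and filesystem metadata work, not a measurement; the point is that per-file costs stack on top of raw throughput and keep the CPU and cache drives busy while they do.

# Back-of-envelope estimate of a bulk FTP transfer of many small files
# over a 1Gb/s link, versus the same bytes moved as one sequential stream.
LINK_MBPS = 1000 / 8            # ~125 MB/s theoretical for gigabit Ethernet
TOTAL_GB = 4                    # total payload from the post
FILE_COUNT = 8500               # number of small files from the post
PER_FILE_OVERHEAD_S = 0.005     # assumed per-file cost (FTP open/close, metadata)

raw_seconds = (TOTAL_GB * 1024) / LINK_MBPS
overhead_seconds = FILE_COUNT * PER_FILE_OVERHEAD_S

print(f"Sequential stream:   ~{raw_seconds:.0f}s")
print(f"Per-file overhead:   ~{overhead_seconds:.0f}s extra")
print(f"Small-file transfer: ~{raw_seconds + overhead_seconds:.0f}s total")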
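
Throughput sketch (re item 10): converting the quoted parity-check durations into an average read rate per drive makes the remaining headroom easier to see. This Python snippet simply restates the figures from that post, assuming 8TB means 8x10^12 bytes and that every drive is read in full, in parallel.

# Convert the quoted parity-check durations into average MB/s per drive.
DRIVE_TB = 8
DRIVE_MB = DRIVE_TB * 1_000_000  # 8TB expressed in MB (decimal)

runs = {
    "6.7.2 default":     17.375,  # midpoint of 17h15-17h30
    "6.7.2 tunables":    15.5,
    "6.8.0-rc1 default": 16.75,
}

for label, hours in runs.items():
    rate = DRIVE_MB / (hours * 3600)
    print(f"{label:20s} {hours:5.2f}h  ~{rate:.0f} MB/s average")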
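
SMART check sketch (re item 13): the usual advice is to confirm the UDMA CRC count (SMART attribute 199) is not still climbing and then acknowledge the warning from the drive's thumb-down menu; the raw value itself never resets. A minimal Python sketch of the check, assuming smartctl is installed, the script runs as root, and /dev/sdb is the cache device in question (all assumptions to adapt to your system).

import subprocess

# Read SMART attribute 199 (UDMA_CRC_Error_Count) for a device.
# The raw value never resets; what matters is whether it keeps increasing.
DEVICE = "/dev/sdb"  # assumed device node; substitute your cache SSD

def crc_error_count(device: str) -> int | None:
    out = subprocess.run(
        ["smartctl", "-A", device],   # needs root to query SMART data
        capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == "199":  # attribute ID for UDMA CRC errors
            return int(fields[-1])          # RAW_VALUE is the last column
    return None

if __name__ == "__main__":
    print(f"{DEVICE}: UDMA_CRC_Error_Count = {crc_error_count(DEVICE)}")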