AgentXXL

Everything posted by AgentXXL

  1. Then I suspect my speed drops were related to the older 6TB and 4TB drives, which are 5400 rpm and have less cache than newer models. Regardless, the lower speeds only occurred while those smaller drives were part of the parity calculation. I am only using a 4-core i7-6700K right now, with unRAID and other dockers/tasks pinned to the 1st core (2 threads with HT) and Plex pinned to the other 3 (6 threads with HT) - there's a small topology sketch below for anyone curious how the thread pairs map out. I'll be upgrading my motherboard, CPU and RAM when I save up enough money in a couple of months, likely to a 16-core Ryzen 2 setup. I may also replace the PCIe 2.0 LSI controller with a PCIe 3.0 variant to improve disk performance, but for now I'm content, as I've been able to stream my highest-bitrate 4K titles (86.5Mbps average according to Plex) with no issues.
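     For anyone wondering how those thread pairs line up for pinning, here's a rough Python sketch (purely illustrative, not something unRAID needs) that reads the standard Linux sysfs topology files to show which logical CPUs share a physical core; on a 4-core/8-thread chip like the 6700K it typically prints pairs such as 0,4 / 1,5 / 2,6 / 3,7.

        # Illustrative sketch: list hyper-thread sibling pairs from sysfs so you
        # can see which two logical CPUs belong to each physical core before
        # deciding on CPU pinning. Paths are standard on recent Linux kernels.
        import glob

        pairs = set()
        for path in glob.glob('/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list'):
            with open(path) as f:
                pairs.add(f.read().strip())

        for pair in sorted(pairs):
            print(pair)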
  2. I've been watching this thread as I am in the midst of upgrading my parity drive to a new 10TB Ironwolf. What's interesting is that it started out with speeds around 120MB/s and, as the parity rebuild/sync continued, it got progressively slower, dropping to 31MB/s at the lowest I saw. I suspect this is because it was taking more time to calculate the parity across all drives in the array when dealing with actual data vs empty/zeroed space.

     My array currently has 2 x 8TB, 1 x 6TB, and 4 x 4TB drives. When the parity rebuild started, all drives were spun up and being read, since parity is created by reading the sectors from every drive. Now that the rebuild is at the 72% mark, I noticed that the 4TB and 6TB drives have all spun down, i.e., the rebuild is past the size of those drives and now only needs to read the 2 x 8TB drives. My parity rebuild speed is back to 112MB/s. I suspect once it hits the 80% mark it'll go even faster, as I don't have any 10TB data drives in the array, so the last 2TB of parity should be null.

     Once the parity rebuild completes, I have 2 more new and precleared 8TB drives to add, which again shouldn't impact the parity as they are zeroed. Then I can start migrating the data from my 5 x 10TB USB drives (UD mounted) to the array. As each 10TB drive is finished copying to the array, it'll be shucked and inserted into a reserved slot in my configuration. I'll then preclear them and add them to the data pool, giving me more storage for the next 10TB drive. I have another 6TB and one more 4TB drive whose data has to be migrated as well, so in the end my data array will consist of 5 x 10TB, 4 x 8TB, 2 x 6TB and 5 x 4TB drives - 114TB of raw storage, about 80% full by my estimate.

     In any case, I think the slow-down you were seeing @Jomo was related to your read errors, but in general a parity rebuild/resync will slow down while it's reading sectors across all drives in the array. My 4TB drives dropped out of the calculation at 40% and the 6TB drive at 60%; once parity for those drives is complete, the speed is only limited by the 2 x 8TB drives. Makes sense to me, so I hope this scenario and explanation (plus the rough sketch below) helps others understand the speed of a parity rebuild/resync.
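     To make the 40%/60%/80% drop-off points concrete, here's a tiny Python sketch (just a model of the reasoning above, nothing unRAID actually runs) with my drive sizes hard-coded; it shows which data drives still need to be read at a few sample positions along the 10TB parity drive.

        # Model of the explanation above: at a given position along the parity
        # drive, only data drives at least that large still contribute, so the
        # rebuild speed is bounded by the slowest drive still being read.
        data_drives_tb = [8, 8, 6, 4, 4, 4, 4]   # my current array, in TB
        parity_tb = 10

        for position_tb in [2, 5, 7, 9]:          # sample points along the parity drive
            active = [d for d in data_drives_tb if d > position_tb]
            pct = position_tb / parity_tb
            print(f"{pct:.0%} through parity: {len(active)} data drive(s) still being read: {active}")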
  3. If you haven't upgraded unRAID to 6.7.1 stable (released two days ago), you may want to consider a fresh rebuild of your unRAID using default paths, reconfiguring your containers/VMs appropriately. It's a long process for sure, but for me it was easier. I've been a FreeNAS user for years, but recently chose to move to unRAID as I really like the ease of configuration for Docker and VMs. FreeNAS was often an exercise in frustration trying to get containers and VMs to run well.
  4. Essentially the same process. Many advise against using anything other than the default appdata share, but it shouldn't really matter as long as you configure your Docker containers and VMs to use the correct path to the folder(s) on your cache drive. Alas, plugins like CA Backup/Restore Appdata might not be configurable to use a non-standard location like 'apps'.
  5. If you have your Plex and other Docker containers using the default appdata share, you can use a utility like Krusader to manually backup the folders and files. You could also install CA Backup/Restore appdata plugin (highly recommended) and then configure it to backup your appdata share to your array or to a disk mounted via Unassigned Devices.
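     If you'd rather script the manual route than click through Krusader, something along these lines works from the unRAID console or a User Scripts entry. The paths are only examples (my Plex appdata folder and a backup share), so substitute your own, and stop the Plex container first so the databases aren't being written mid-archive.

        # Example only: tar up a Plex appdata folder to a share on the array.
        # /mnt/user/appdata/plex and /mnt/user/backups are assumed paths -
        # point them at your actual appdata location and backup destination.
        import pathlib
        import tarfile
        import time

        src = pathlib.Path('/mnt/user/appdata/plex')
        dest = pathlib.Path('/mnt/user/backups') / f"plex-appdata-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"

        with tarfile.open(dest, 'w:gz') as tar:
            tar.add(src, arcname=src.name)   # everything lands under a single 'plex/' folder

        print(f"Backed up {src} to {dest}")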
  6. One other possibility, though this hasn't helped me yet, is to go into the unRAID Settings tab and, under Docker settings, disable Docker by setting the first option to No. Once applied, you then have the option to delete the docker.img file and click Apply again. Then re-enable Docker so a fresh docker.img is created, and re-install your Plex container. This can correct corruption in the image file.
  7. BTW - before you play around too much further, I recommend making a backup (manual or otherwise) of the Plex Media Server folder, which is usually configured to reside in the appdata share. If something has gone wrong with just the container, you don't want the hassle of potentially needing to rebuild your libraries from scratch.
  8. It's a default option and usually you want it pointed at a fast storage device and/or a ramdisk. As I use Direct Play on my local network it's not as critical for me, but any remote access to your Plex server will most likely require a valid /transcode location (there's a rough illustration of the mapping below). I use the stock Plexinc docker container, not the linuxserver.io version.
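     For reference, the /transcode setting is just a host path bound into the container. Expressed through the Docker SDK for Python it amounts to roughly the following - illustrative only, since on unRAID you'd set the same mapping in the container template rather than launching it by hand, and /tmp/plex-transcode is only an example of a RAM-backed or fast-SSD location.

        # Rough equivalent of mapping a fast host path to /transcode for the
        # official plexinc/pms-docker image. On unRAID this is normally done in
        # the Docker template, not in code; the host path here is an example.
        import docker

        client = docker.from_env()
        client.containers.run(
            'plexinc/pms-docker',
            detach=True,
            name='plex-example',
            volumes={'/tmp/plex-transcode': {'bind': '/transcode', 'mode': 'rw'}},
        )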
  9. When re-installing the Plex (or any other Docker) container, it's good practice to verify the paths and variables needed by that container. Sounds like the mountpoint for /transcode has been invalidated, so you may just need to edit that path to point where you want. I'd walk through all the options for your container in the Advanced view mode to make sure all paths/variables that you might have changed are valid. I had a similar issue, as I just upgraded to my full unRAID Pro license yesterday after a 26-day trial. When I did the upgrade, part of the process was to create a new unRAID USB key, and that replaced my working trial build (which had the NVidia drivers) with the stock build. As the NVidia drivers and tools weren't part of my new install, my Plex docker container also failed when trying to re-install. Once I re-installed the unRAID NVidia plugin and rebooted, my Plex Docker container re-installed and started successfully.
  10. This happens occasionally for various reasons, creating orphaned containers. I haven't looked at your diagnostics, but the simple fix is to go back to the Apps tab and on the top left you'll find 'Previous Apps'. Click on it and it should show the Plex docker container you used and the option to re-install. If you customized your Plex Docker container and your template wasn't damaged, it'll re-install using your last template and all should be good.
  11. Last night we had a small lightning show and saw numerous power bumps and one small outage. I had a pre-clear underway that was 55% through the zeroing stage. My UPS settings told the system to shut down when the outage occurred, although it was so short that it really wasn't necessary, so I'll relax those settings later. Of course I was miffed at the thought of having to restart the pre-clear procedure.

     After I waited for the storm to settle down, I powered up and saw the drive sitting in UD with the preclear icon. So I clicked on it to start the process over. To my very happy surprise, it then asked me if I wanted to RESUME the preclear. Of course I did!!

     I've asked other questions in other threads that led to me needing to reboot my unRAID to fix an issue, and I've always waited until any preclear in progress completed, sometimes meaning long delays before I could reboot. I haven't read through all the docs or this support thread, but knowing that preclears are resumable is something that should be mentioned prominently in the feature list. Perhaps I missed it, but I just wanted to share in case others didn't know this was possible.

     But one question: are there specific conditions where the preclear is resumable? I'd assume it would only work after a clean shutdown or reboot, but perhaps not if there was a crash due to kernel panic or some other reason. Regardless, I was happy that I hadn't lost my preclear progress on the drive!

     Dale
  12. And I've just deleted some of my posts that didn't need to be in the thread. Thanks!
  13. For anyone that cares, I decided on the following:

     1. Created a new config with the new 10TB parity drive, without re-assigning the potentially failing disk.
     2. Started the array; the parity rebuild on the new 10TB drive is underway.
     3. Mounted the potentially failing disk in UD. This time it let me... not sure why it wouldn't previously.
     4. Slowly copying data off the potentially failing disk while the parity rebuild is in progress. I realize that the entire array is unprotected until the parity rebuild completes - current estimate is about 24 hrs.
     5. Started a preclear on the 2 month old 8TB drive that was previously used for parity, just to ensure it's OK. I'll add it to the array pool once the pre-clear is done.

     So for now, I have a working unRAID that is actually running 6.7.1 stable. I'll slowly re-install any necessary plugins that aren't working, and the Docker containers and VM can wait until the parity rebuild is complete. A little painful, but that's the risk I took when adding the older 8TB drive that is potentially failing. It did pass the pre-clear when I originally added it to the array, but being 4.5 years old, I shouldn't have used it. Lesson learned. So far the data is copying off the old drive with no errors, so with any luck I'll get it completed by the time the parity rebuild finishes.
  14. Dang it.... the drive was seen after I re-inserted it into the system, but now unRAID says 'replacement disk inserted'. And if I start the array, it's going to do a full parity check or rebuild based on the messages I'm seeing. I think I'll run out and grab another new 8TB drive, attach both to my laptop, and try to copy the data off it before doing anything else. It is an older 8TB that occasionally threw UDMA CRC errors, and I don't think I want to trust it in unRAID for a parity check or data rebuild. I'd feel more comfortable making an 'offline backup'.

     My other option is to consider it 'lost data', delete the drive from the array and go ahead with replacing the parity drive (a 2 month old 8TB) with the new 10TB Ironwolf, then let the parity rebuild occur, which has to happen anyway since I'm upgrading the parity drive. While parity is rebuilding on the new parity drive, I could attach both the old 8TB parity drive (still almost new) and the potentially failing, now un-trustworthy 8TB, using the former as the recovery destination and the latter as the source. With luck this would let me recover all the data to a known good and trusted drive, after which I can attach it via UD and migrate the data back to the array. Of course this would mean waiting for the parity rebuild on the new 10TB Ironwolf to finish before I'm able to copy the data back. A long process either way.

     A few options.... think I'll take a break and think about it.

     Dale
  15. Yes, I did physically re-arrange the drives. I can try another slot on the hot-swap SATA backplane in my case, but other channels on that same row are functional. Still, could be one bad channel in the SFF-8087 to SFF-8087 cable for that row. I do have a new spare cable so I can try swapping it out as well. Any reason to check the drive/filesystem since it's mountable on the Linux laptop?
  16. OK, I have successfully migrated to a full unRAID Pro license on the new USB, and re-organized my drives with the grouping and empty slots for planned additions (as I migrate data off the UD mounted devices, they will be pre-cleared and then added to the array pool). HOWEVER: one of my 8TB drives won't mount under unRAID, and shows errors and a red X beside the slot it resides in. The logs state that the filesystem (XFS) isn't clean. I've moved that drive into a USB enclosure and it won't mount under UD either. So I then attached it to my Linux Mint laptop, which is able to mount it, and I can see the data.

     As I can likely re-copy the data to unRAID over the network, and I have enough free space to allow this, is that my best option rather than letting it try to rebuild? If I hadn't already added the empty but new 8TB drive to the array, I suspect I could have inserted it and let the drive rebuild from parity. Something tells me it will be faster just to leave the array as is and do the copy over the network. Can I use fsck from my Mint laptop to check and repair the XFS filesystem and then try re-inserting it into unRAID? Suggestions?

     Dale
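     In case it helps whoever answers: what I have in mind from the Mint laptop is roughly the following. For XFS the tool is xfs_repair rather than the generic fsck, the partition has to be unmounted first, and /dev/sdX1 is only a placeholder for whatever the drive enumerates as (check lsblk).

        # Sketch of an XFS check/repair from the laptop. Replace the placeholder
        # device with the real partition and make sure it's unmounted first.
        import subprocess

        device = '/dev/sdX1'   # placeholder - find the real one with lsblk

        # -n = no-modify mode: report what xfs_repair would fix without changing anything
        subprocess.run(['sudo', 'xfs_repair', '-n', device], check=False)

        # If the dry run looks reasonable, repeat without -n to actually repair:
        # subprocess.run(['sudo', 'xfs_repair', device], check=True)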
  17. Thanks again @Squid - reported the other post to the mods so they can delete it. Looks like the pre-clear on my new 8TB has finished so I'm ready to begin the migration plan. Hopefully I'll be back soon reporting all went well!
  18. Will do. As you replied to the post I want to delete, and then I replied to it, I'll post that here (looks like it won't keep the quote formatting from the other post, so I've separated your comments and mine with blank lines):

     7 minutes ago, Squid said:
     99% of the time, no issues will result from copying the entire /config folder over

     As I want to re-order the config and disk/slot assignments, I'm assuming I don't want the super.dat file. Or if I did, I could just go to Settings and choose New Config?

     7 minutes ago, Squid said:
     If you only have a single parity drive, then when making a new config, you can select "Parity is already valid". With dual parity drives though, you will always have to rebuild parity from scratch since you're rearranging the disk slots. Regardless, you should always run a parity check to confirm everything is hunky dory. And make damn sure that when assigning drives that you've got the parity drives assigned correctly.

     I'm currently running a single parity drive, but upgrading it to a 10TB in the process of re-doing my disk/slot arrangement. I plan to leave a drive bay (with another motherboard SATA port feeding it) empty for a 2nd parity drive once I'm able to afford it. Would I be better to migrate to the full license with the existing parity disk and then do the parity upgrade?
  19. As I want to re-order the config and disk/slot assignments, I'm assuming I don't want the super.dat file. Or if I did, I could just go to Settings and choose New Config? I'm currently running a single parity drive, but upgrading it to a 10TB in the process of re-doing my disk/slot arrangement. I plan to leave a drive bay (with another motherboard SATA port feeding it) empty for a 2nd parity drive once I'm able to afford it. Would I be better to migrate to the full license with the existing parity disk and then do the parity upgrade?
  20. Why not? If I post a question, I certainly want to be notified of replies. I have seen the 'Follow' option on some posts, but not on all. I can understand though if it's a longer thread and you don't want notification about ALL replies.
  21. My 12-step (apologies to AA) unRAID Migration Plan:

     1. Fresh unRAID OS build created on new 32GB USB key - done. Backups of the appdata share and the old USB key completed also.
     2. Copy the following items from /config on the old key to the new USB key to migrate most of my settings: config/ident.cfg and config/network.cfg, config/share.cfg and the folder config/share, config/passwd, config/shadow, and config/smbpasswd, and the config/plugins/dockerMan/templates-user folder. These are based on this article: https://wiki.unraid.net/Files_on_v6_boot_drive (a rough sketch of this copy step follows the list).
     3. As I’m re-organizing the disk/slot layout, I’m not copying config/super.dat, but should I copy the file config/disk.cfg to retain my disk settings?
     4. Before swapping the USB key and changing the disk/slot layout, stop the array and un-assign the 8TB parity drive. Also change all Docker containers and VMs so they DON'T auto-start. Power down the unRAID system.
     5. Replace the USB key with the new one and re-organize the disks to my liking, grouping them by capacity and leaving some free slots for planned additions. Replace the 8TB parity drive with the new 10TB Ironwolf.
     6. Power on the system and wait for unRAID to boot. At this point I need to purchase my Pro license as the old trial key won’t work on the new USB. I could request a new trial key, but I’m satisfied with my choice to move to unRAID so purchasing the license is the way I’ll go.
     7. Assign the new parity drive, data disks and existing cache SSD. Leave blank (un-assigned) slots for known future disk additions. Start the array and let the parity build start.
     8. I’ve read a procedure where you can copy your old parity drive data to a new parity drive, but I think ‘build parity from scratch’ is a better option, even though it’s estimated to take a LONG time.
     9. Don’t add new data to the array while the parity is rebuilding. Re-install plugins and docker containers. Walk through all settings and make any changes necessary. Change my docker containers and VM to use the previous 8TB parity drive (mounted via UD and formatted) as temporary storage until the parity build is complete.
     10. Once the parity build is complete, migrate the data from the temporary storage drive mounted via UD to the array. Then run a pre-clear on the 8TB temporary drive so it too can be added to the array.
     11. At this point I’ll have enough free space on the array to start migrating my 5 x 10TB USB drives (mounted via UD) to the array, one at a time. As each 10TB drive migration is completed, run a pre-clear on it and add it to the array, ensuring enough free space to migrate the next 10TB drive.
     12. Once all 5 x 10TB drives have been shucked and added to the array, I should have a fully functional unRAID with 100TB+ of storage! Woohoo!

     Does this plan sound reasonable? Specifically looking for an answer to the question in item #3 - should I or should I not copy the file config/disk.cfg to the new USB key? Any comments or suggestions appreciated! Thanks!

     Dale
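     For step 2 above, here's a rough sketch of the copy, assuming the old and new flash drives are mounted at the example paths shown (they won't be exactly this on your system). It just walks the file and folder list from that wiki article; requires Python 3.8+ for dirs_exist_ok.

        # Example only: copy the settings files listed in step 2 from the old
        # unRAID USB key to the new one. Mount points are placeholders.
        import pathlib
        import shutil

        old = pathlib.Path('/mnt/old-usb/config')
        new = pathlib.Path('/mnt/new-usb/config')

        files = ['ident.cfg', 'network.cfg', 'share.cfg', 'passwd', 'shadow', 'smbpasswd']
        folders = ['share', 'plugins/dockerMan/templates-user']   # folder names as listed in step 2

        for name in files:
            shutil.copy2(old / name, new / name)

        for name in folders:
            shutil.copytree(old / name, new / name, dirs_exist_ok=True)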
  22. I have the Mover tuning plugin already, but I'll go update it and let you know what happens. Thanks again! Hope my donation made it to you!
  23. Sorry to hijack the thread, but just one more question: when using the Krusader docker to move data to the array, I've been copying from mountpoints I added to the Krusader config for my UD attached devices. For example, I created a new path in the Krusader config for my UD attached drive called MoviesA. I added the container path as /MoviesA and the host path as /mnt/disks/MoviesA. But when copying from /MoviesA (in the left panel of Krusader), I've used /media/General/MoviesA/ as the path in the right panel of Krusader, where General is the share name of my main array share. What is the difference between using the /media mountpoints and using the /mnt/user0 mountpoints? I haven't found an explanation of the differences in the /media mountpoints in my searches. Thanks!
  24. I did try the 'reconstruct write' mode with my array, but speed was still abysmal. My MB/CPU/RAM are an Asus ROG Maximus VIII Gene (microATX), an i7-6700K and 32GB of RAM. The LSI controller is in a x8/x16 slot. I did not try going to the shares via /mnt/user0 - I always wondered why they were listed under a 2nd user. How does using the /mnt/user0 mountpoints differ from the /mnt/user mountpoints? EDIT: just found that user0 is just the array disks, whereas user is the array plus cache, so using user0 ensures the cache is bypassed. Good to know!

     I'm soon (hopefully later today) going to be re-doing my unRAID config with a re-org of the slots where I place disks, mostly to satisfy my OCD of wanting all drives of the same capacity lumped together. I'm also replacing my 2 month old 8TB parity drive with a new 10TB Ironwolf, as the 50TB I'm migrating lives on 5 x 10TB USB drives that are full but that I want to add to my unRAID after migrating their data to the array. That requires the parity disk upgrade. But the main reason for the re-config is that I'm moving to a new USB key and purchasing my unRAID Pro license - only 4 days left in my trial, and while I could request an extension, I'm quite happy with the overall use of unRAID. At least when Mover isn't required to move large amounts of data. The ease of Docker/VM configuration and the many useful plugins for unRAID make it a winner over my long-standing and stable FreeNAS, which was always a pain to configure for Docker/VMs.
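     For anyone else who was tripped up by the same thing, the distinction as I understand it (corrections welcome) boils down to this tiny illustration, using my General share as the example:

        # Same share seen through two different mount points (my understanding):
        views = {
            '/mnt/user/General':  'user share view including the cache drive plus all array disks',
            '/mnt/user0/General': 'user share view of the array disks only, so writes bypass the cache',
        }
        for path, meaning in views.items():
            print(f'{path:22} -> {meaning}')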
  25. I'll add my name to the list of those seeing poor response from unRAID's webgui and Docker containers whenever Mover is running. As I'm still migrating 50TB of data attached via the Unassigned Devices plugin, this is a major bottleneck. I copy about 950GB to my 1TB cache SSD, pause the copy, then manually initiate Mover, during which the system becomes unresponsive and Plex is unusable. If I just let the copy continue, it gets worse when the scheduled Mover run starts, which is why I pause the copy after each successive fill of the cache SSD. 6 drives are attached to my motherboard SATA connectors, and 16 more slots are provided via an LSI 9201-16i feeding my hot-swap SATA backplane.