rockard

Members
  • Content Count: 19
  • Joined
  • Last visited

Community Reputation
  2 Neutral

About rockard
  • Rank: Newbie
  1. I still don't understand how the disk came to be mounted again, or where the 16 GB filesystem came from, but I just gave up, ignored it, and proceeded with the next steps. Looks good so far.
  2. I wrote this as a follow-up to a previous post, but since I got no response I'll try a new post. I have a disk that is causing problems, so I wanted to remove it from the array. I found https://wiki.unraid.net/Shrink_array and started to follow the steps in "The 'Clear Drive Then Remove Drive' Method". After 48+ hours, the "clear an array drive" script finally finished, and I wanted to continue with the next step. However, at this point I find that the disk has been mounted again and a parity check is running. According to the main tab, the fs size is 16 GB, and 1.21 GB is used. An ls -al at
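     A minimal sketch of how the mount and filesystem size can be double-checked from the console, to confirm whether the cleared disk really got remounted (the disk slot and mount point are assumptions; adjust to the actual disk number):
        findmnt /mnt/disk3    # shows the source device and fs type if the disk is mounted
        df -h /mnt/disk3      # reported filesystem size and usage
        ls -al /mnt/disk3     # any files or directories left on it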
  3. Memtest and extended SMART tests found no problems, but nevertheless the Reported Uncorrect count kept rising and Unraid reported read errors on the disk, so I decided I wanted to remove it from the array. I found https://wiki.unraid.net/Shrink_array and started to follow the steps in "The 'Clear Drive Then Remove Drive' Method". After 48+ hours, the "clear an array drive" script finally finished, and I wanted to continue with the next step. However, at this point I find that the disk has been mounted again and a parity check is running. According to the main tab, the fs size is 16 GB, and 1.21 GB is u
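     For reference, a rough sketch of how the extended test and the rising error counter can be checked with smartctl (the device name /dev/sdX is a placeholder):
        smartctl -t long /dev/sdX                             # start the extended (long) self-test
        smartctl -a /dev/sdX | grep -i reported_uncorrect     # current Reported_Uncorrect raw value
        smartctl -l selftest /dev/sdX                         # results of completed self-tests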
  4. Alright, good tip about the Syslog Server! I just enabled it, so hopefully there will be better diagnostics next time it happens! Thank you! No, I haven't. My reading of what happens is that my Plex docker is waiting for disk I/O, caused in some way by my gaming VM trying to reserve something that is in use by Docker. I don't think memory has anything to do with it, but I am in no position to rule anything out. I'll do that when there's a chance; still waiting for the extended SMART reports.
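     As a hedged sketch (not from the original post), processes that are stuck waiting on disk I/O show up in uninterruptible sleep and can be spotted like this:
        # List processes in uninterruptible sleep (state D), typically blocked on I/O
        ps -eo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /^D/'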
  5. Thanks for responding! Good point, I haven't tried extended tests on the other disks, so I'll start them immediately! It took a long time to finish last time, so I expect it to run overnight and be ready some time tomorrow. Don't understand this part. Do you mean you are repeatedly running parity checks? Sorry for being unclear, I'll describe the timeline and hope that clears it up: I was forced to do a hard reset due to a docker container stuck waiting for IO (as mentioned in my other post), and that in turn forced a parity check, which takes around 24 hours. When that f
  6. Sigh. So I decided to try to make use of my cache drive after all; I thought there must be some way it can be useful. So I stopped all my docker containers, moved the storage paths that I cannot have unprotected (like my database) out of the appdata share, made a backup with the backup app mentioned, set the appdata share to prefer cache, and ran Mover. The speeds were not impressive, but acceptable, so I decided to keep this setup. Because of the hard resets mentioned in my other posts, parity checks have been running more or less constantly since then, and since I used the Mover Tuning app t
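     A quick, hedged way to confirm where the appdata share actually ended up after Mover ran (paths assume the standard Unraid mount points):
        du -sh /mnt/cache/appdata /mnt/disk*/appdata 2>/dev/null   # size of appdata on the cache vs. each array disk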
  7. Did I post this in the wrong forum? If that is the case, is there a way to move it?
  8. I gave in and did a hard reset, so I guess this is more or less moot. I'm quite certain it will happen again though, so if somebody could help me understand why it happens, I'd be thankful. In the meantime, I will just keep the gaming VM on at all times, so this can't happen.
  9. Hi, I have a docker container (plexinc/pms-docker) that is "stuck" and can't be stopped; I think it's waiting for IO. Is there any way to get it out of this state without a hard reboot? I want to avoid those as much as possible, since parity checks take 20+ hours and make the system unreliable. A little context: I am running "Unraid Nvidia" and have two GPUs, a GTX 1080 passed through to a gaming VM, and a GTX 1660 set as visible to Plex to be used for transcoding. The gaming VM also has a USB controller passed through. This is the second time that I've tried to start my
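     A sketch of what can be tried before resorting to a hard reset (the container name "plex" is an assumption); note that a process in uninterruptible sleep cannot be killed until the I/O it is waiting on completes or fails:
        docker stop -t 30 plex                           # SIGTERM, then SIGKILL after 30 s if the container doesn't exit
        docker kill plex                                 # immediate SIGKILL if stop itself hangs
        pid=$(docker inspect -f '{{.State.Pid}}' plex)   # PID of the container's main process
        grep ^State: /proc/$pid/status                   # "D (disk sleep)" means it is blocked on I/O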
  10. Hi, I have a docker container (haugene/transmission-openvpn) that autostarts, despite the autostart setting being "off". Can somebody tell me what I am doing wrong? I've attached a diagnostics zip. I'm also wondering if there is a way to edit a container without it automatically starting when I save the changes. Thanks! /Rickard EDIT: Found a FAQ in the Docker Engine forum: eru-diagnostics-20200422-1328.zip eru-diagnostics-20200422-1741.zip
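     One possible cause (an assumption, not confirmed in the post) is the container's own Docker restart policy, which is separate from Unraid's autostart toggle; it can be inspected and cleared like this (the container name is a placeholder):
        docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' transmission-openvpn   # e.g. "always" or "unless-stopped"
        docker update --restart=no transmission-openvpn                               # stop the Docker daemon from auto-starting it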
  11. Hi again, thanks for an extensive answer! My mover finally finished, and after reseating all cables at both ends I booted the machine and started an extended SMART test. It's been running for over an hour now and has completed 10%, so I'll leave it overnight again and will probably have to kill it in the morning. My issue with slow mover speeds is moot, I guess, since it finally finished and I will get rid of the cache or move to something other than Unraid, so the following are only my own reflections on cache drives in Unraid and can be ignored. Thanks for e
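     The progress of a running extended test can be polled with smartctl, roughly like this (the device name is a placeholder):
        smartctl -c /dev/sdX | grep -A1 "Self-test execution"   # shows the percentage of the test remaining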
  12. Hi, thanks for your reply! I really appreciate it! This is what I mean by "I don't understand how to use the cache". Wouldn't that leave my always-on dockers and VMs completely unprotected, since Mover won't move things that are in use? Sorry, I meant they are Yes, and I set it as such because the description is "Mover transfers files from cache to array", which is what I want if I want to get rid of the cache disk. It has space now because everything has been off for over 24 hours and Mover has been running for over 10 hours, moving things off the cache. I
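     To see what is still holding files open on the cache (and would therefore be skipped by Mover), something like this can be used, a sketch assuming the cache is mounted at /mnt/cache:
        lsof +D /mnt/cache 2>/dev/null | head -n 20   # processes with open files under the cache mount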
  13. <TLDR>I'm getting ~20MB/s read/write speeds when Mover is moving from the SSD cache to the array; what am I doing wrong?</TLDR> Hi all, I'm very new to Unraid and the forums, so I apologize in advance for any frustration I will cause with my ignorance. I'm currently evaluating Unraid to see if it is for me, and in the beginning it all looked so promising that I decided to take the plunge and move everything I have in terms of storage into Unraid. I also bought a new 1TB SSD to use as a cache drive, so now my disk setup is: Parity 1: ST8000DM004 8TB SATA
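     To separate a slow disk from a slow transfer path, the raw sequential read speed of the individual drives can be checked with hdparm, a minimal sketch (device names are placeholders):
        hdparm -t /dev/sdb   # buffered sequential read speed of one array disk
        hdparm -t /dev/sdc   # repeat for each drive involved in the move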