Everything posted by citizen_y

  1. Just a quick update - I really struggled to solve this, but after reading other posts from people having issues with a slow-performing ZFS drive in the array, I decided to try migrating my one ZFS drive to XFS. That turned out to be a struggle in itself: even copying the content from the ZFS drive to another XFS-formatted drive, the best throughput I could get was 2-3 MB/s. I finally buckled down, copied only the irreplaceable content, and let the rest go when I erased that drive and reformatted it as XFS. After doing so, I ran another speed check and, voila, the drive was now showing speeds in line with the other drives (4-5x what it had been under ZFS). I then started another parity build; it's not done yet, but it has been maintaining speeds of 170-260 MB/s (only really slowing as it reaches the later portions of any given drive in the array). I'll report back when it completes to confirm this was the fix - but right now it's looking like another case of a ZFS drive in an array causing exceedingly slow performance.
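     For anyone wanting to reproduce the before/after comparison without the DiskSpeed docker, a rough command-line read test along these lines should show the same gap (the device name is just a placeholder for whichever disk is under test):

         hdparm -t /dev/sdX                                                            # quick buffered sequential-read benchmark
         dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct status=progress     # ~4 GB raw read, bypassing the page cache

     This only measures raw sequential reads from the start of the disk, so treat it as a sanity check rather than a full benchmark.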
  2. If I pause the rebuild and reboot, will the rebuild restart from the beginning or will it maintain its position?
  3. Thanks much - I tried exploring that, but the File Activity plugin wasn't showing any disk activity. Also, when I look at the read/write speed per disk, the parity writes are equivalent to the reads on all of the other disks except for disk 4, which is showing a slightly higher read speed. I ran the DiskSpeed docker prior to starting the parity build and did find disk 4 performing much worse than the other drives, but nothing even close to this slow. Do you think it's prudent to run the DiskSpeed test again while the parity rebuild is still running? One other item of note: the CPU load on the system seems quite high given that I don't have Docker or VM services running - is there anything I should look at there that may be related to the slow rebuild speed?
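     In case it helps narrow this down, the checks that seem most relevant while the rebuild is running are roughly the following (note that iostat is not part of stock Unraid, so that one assumes the sysstat tools are installed, e.g. via a plugin):

         top -o %CPU      # sort by CPU to see which processes are actually busy during the rebuild
         iostat -xm 5     # per-device throughput and utilisation every 5 seconds, if sysstat is available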
  4. Hello, I initiated a parity rebuild due to the replacement of a failing drive. The rebuild started relatively quickly but eroded to ~30-40 MB/s after the first day. Unfortunately, for the last two days the rebuild has been averaging around 2-3 MB/s. I'm not seeing any drive-related errors in the logs. I have shut down all VM and Docker services and have disabled all shares, but I am still seeing this exceptionally slow speed. I previously ran a drive speed check and did see that disk4 (the ZFS disk I have in the array) had an average speed of less than 20% of the speed of the rest of the drives. I assume this is part of the issue and want to test migrating this disk away from ZFS and potentially removing it from the server entirely, but I need to finish this parity rebuild first. What else should I be considering to try to get this parity rebuild back up to a reasonable (or at least viable) speed? citizenur-diagnostics-20240516-2334.zip
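     For reference, the kind of check I can run on the suspect disk while the rebuild continues (where /dev/sdX stands in for whichever device disk4 maps to) would be something like:

         smartctl -a /dev/sdX | less                                    # full SMART report for the slow disk
         grep -iE 'error|timeout|reset' /var/log/syslog | tail -n 50    # recent kernel/driver complaints, if any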
  5. Thanks for this - when I went to run it without the "-n" option I got the following error. Any recommendations on how to proceed?
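     For context, this is roughly the sequence I was following in maintenance mode (the exact device name depends on the Unraid version - newer releases use /dev/mdXp1 rather than /dev/mdX - and the GUI check button picks the right one automatically):

         xfs_repair -n /dev/md1p1    # read-only check, reports problems but changes nothing
         xfs_repair /dev/md1p1       # actual repair run, with the array started in maintenance mode

     If the repair run refuses because of a dirty log, xfs_repair normally suggests mounting the filesystem first or using -L, which zeroes the log and can lose the most recent metadata changes - so I want to confirm before going that route.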
  6. Hey, thanks for replying. Right now I don't have any disk assigned as disk 1 (that was the disk that was failing). I have a disk currently in preclear that I intend to assign as disk 1 and rebuild from parity once the preclear is completed. The array is currently stopped. Given that situation, is it viable for me to run a filesystem check on disk 1? If so, is the right step to bring the array online in maintenance mode with the disk 1 slot empty and run the filesystem check against the emulated disk 1? I just want to make sure I'm clear on the steps to take so I can avoid losing the data if possible.
  7. Of note - I did just find a diagnostic I pulled about a week ago, which should have the full config in it. Prior to today, none of my disk assignments had changed for months. Is it possible for me to use the config backup from that diagnostic to restore my array configuration and rebuild from parity? If so, what steps would I follow to do so? I've sketched below how I'd expect to pull the old assignments back out.
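     Assuming the diagnostics archive actually contains the disk-to-slot mapping (the file layout inside the zip may vary by Unraid version, and the filename below is just a placeholder for the week-old diagnostics), my rough plan would be:

         unzip -d diag_backup diagnostics-YYYYMMDD.zip        # extract the old diagnostics
         grep -ril 'diskId\|rdevName' diag_backup/            # find whichever file records the slot-to-serial assignments

     The idea is only to confirm which serial number was assigned to which slot before doing anything destructive like a New Config.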
  8. Hello, I had a drive that looked like it was failing, so I shut down my server and replaced it. When I booted back up, I started a preclear on the new disk, but needed access to the array for a short time, so I started the array with the replaced disk missing and emulated. While the array was starting I began to see errors like the following in my system log: "kernel: ata5: COMRESET failed (errno=-16)". When the array started, none of the shares were listed, so I stopped the array and rebooted. When the system came back up, the array started automatically with the missing disk still missing, but it was no longer being emulated. I did not see any errors in the log after the reboot, but I immediately stopped the array since things did not look right. Have I now lost everything that was on the original disk, or will it be recovered from parity when I add the new precleared disk to the array? Are there any steps I should take to keep the data from the failing disk recoverable from parity? citizenur-diagnostics-20240509-2117.zip
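     If it helps with diagnosing this, the COMRESET errors can be tied back to a specific drive with something along these lines (ata5 is the port number from my log line):

         grep -i comreset /var/log/syslog | tail -n 20    # how often the link is resetting
         dmesg | grep -i 'ata5'                           # which physical device sits behind ata5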
  9. Did you ever figure this out? I'm having the exact same issue.
  10. Hello, I am currently using Unraid version 6.12.9. Whenever my Unraid server boots, I get repeated errors from nginx failing to bind to ports 80 and 443 with "98: Address already in use". I have pasted the logs below (with my internal network IPs and hostname removed). Is this expected behavior? If not, I'm fairly confident the fix is something straightforward and obvious that I'm just missing - I'd appreciate any help. Thanks.

      Apr 1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to 127.0.0.1:80 failed (98: Address already in use)
      Apr 1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_1]:80 failed (98: Address already in use)
      Apr 1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_2]:80 failed (98: Address already in use)
      Apr 1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to 127.0.0.1:443 failed (98: Address already in use)
      Apr 1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_1]:443 failed (98: Address already in use)
      Apr 1 16:53:29 [MY_HOSTNAME] nginx: 2024/04/01 16:53:24 [emerg] 7599#7599: bind() to [MY_INTERNAL_IP_ADDRESS_2]:443 failed (98: Address already in use)

      (the same six bind() failures then repeat several more times at 16:53:30 and 16:53:31)
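      If it's useful, I can also check what is actually holding those ports when this happens - something along these lines (ss is part of stock Unraid; lsof may or may not be available):

          ss -tlnp | grep -E ':80 |:443 '    # list listeners on 80/443 along with the owning process
          lsof -i :80 -i :443                # same information via lsof, if installed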