Community Answers
-
JorgeB's post in (Solved) After reboot, Parity Drive disabled? was marked as the answer
Disk was already disabled at boot time, so we can't see what happened, but the disk looks OK, so this is most likely a power/connection issue.
-
JorgeB's post in added old parity drive to array mistakenly thinking it was the same process as adding any new disk was marked as the answer
Preserve all; the emulated disk will disappear, and you can then reassign disk12 as disk11 if you want.
-
JorgeB's post in Cache Pool Corruption was marked as the answer
With all that corruption best way forward would be to re-format the pool, is all data you want or can recover already backed up?
-
JorgeB's post in Disk Disabled advice was marked as the answer
Diags are from after rebooting, so we can't see what happened, but SMART looks OK and there are some UDMA CRC errors, so this is most likely a cable problem.
-
JorgeB's post in Docker Containers Randomly Pausing / Shutting Down was marked as the answer
Apr 19 00:00:55 unServer kernel: BTRFS critical (device loop2): unable to find logical 4611686018630778880 length 4096
Apr 19 00:00:55 unServer kernel: BTRFS critical (device loop2): unable to find logical 4611686018630778880 length 16384
Docker image is corrupt, delete and recreate.
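For reference, recreating the image from the command line looks roughly like this (a sketch only; the image path assumes the default Unraid location and the `rc.docker` script of Unraid's Slackware base, adjust to your setup):

```shell
# Stop the Docker service first (Settings > Docker in the GUI does the same)
/etc/rc.d/rc.docker stop

# Delete the corrupt image; this path is the Unraid default and is an
# assumption here, check Settings > Docker for your actual image location
rm /mnt/user/system/docker/docker.img

# Re-enable Docker in Settings > Docker; a fresh image is created on start,
# then reinstall your containers via Apps > Previous Apps
```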
-
JorgeB's post in Quarterly parity check has detected errors was marked as the answer
Apr 12 06:10:23 tower kernel: md: recovery thread: P incorrect, sector=11515649200
Apr 12 06:24:24 tower kernel: md: recovery thread: P incorrect, sector=11797234176
Apr 13 07:55:05 tower kernel: md: recovery thread: P incorrect, sector=19001710760
Apr 13 21:12:23 tower kernel: md: recovery thread: P corrected, sector=9239094240
Apr 13 23:04:06 tower kernel: md: recovery thread: P corrected, sector=11515649200
Apr 14 00:26:27 tower kernel: md: recovery thread: P corrected, sector=13144112592
Apr 14 01:31:14 tower kernel: md: recovery thread: P corrected, sector=14377903704
Apr 14 01:38:38 tower kernel: md: recovery thread: P corrected, sector=14514929456
Apr 14 22:04:40 tower kernel: md: recovery thread: P incorrect, sector=9239094240
Apr 15 01:18:45 tower kernel: md: recovery thread: P incorrect, sector=13144112592
Apr 15 02:58:28 tower kernel: md: recovery thread: P incorrect, sector=14377903704
Apr 15 04:09:49 tower kernel: md: recovery thread: P incorrect, sector=14514929456
Not all sectors are the same; some might have been wrongly corrected, which is why they were detected again. This suggests a hardware issue, most commonly RAM related.
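One way to see which sectors repeat across checks is to tally them from the syslog (a sketch; `/var/log/syslog` is the usual Unraid location, adjust if yours differs):

```shell
# Pull every sector= value from parity-check messages and count how many
# times each one appears; sectors that alternate between "corrected" and
# "incorrect" across runs point at flaky hardware rather than a bad disk
grep 'md: recovery thread' /var/log/syslog \
  | grep -o 'sector=[0-9]*' \
  | sort | uniq -c | sort -rn
```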
-
JorgeB's post in Both parity drives disabled. One disabled a few days ago and then both are disabled today. was marked as the answer
Both disks were already disabled at last boot, so we can't see what happened, but the disks look OK, so just re-sync:
https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
You can do both at the same time.
-
JorgeB's post in Unraid zfs pool size was marked as the answer
Assuming raidz it will be a little less than 8TB usable.
-
JorgeB's post in HDD read error, looking for secound opinion on SMART ext results [solved] was marked as the answer
SMART looks OK and this is usually a power/connection problem:
Apr 19 01:06:35 Oracle kernel: sd 11:0:11:0: Power-on or device reset occurred
Replace cables and if the emulated disk11 contents look correct you can rebuild on top.
-
JorgeB's post in I/O ERROR | BTRFS ERROR was marked as the answer
Btrfs is finding a lot of data corruption. Start by running memtest; you can clearly see one example of a bit flip:
Mar 27 09:19:18 Tower kernel: BTRFS error (device nvme0n1p1): parent transid verify failed on 533586477056 wanted 17184512195 found 4643011
17184512195 = 010000000000010001101101100011000011
    4643011 = 000000000000010001101101100011000011
So RAM issues for sure.
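You can confirm it's a single flipped bit by XORing the two transids from the log; the result is an exact power of two (bit 34):

```shell
wanted=17184512195
found=4643011

# XOR leaves only the bits that differ between the two values
diff=$(( wanted ^ found ))
echo "$diff"                     # 17179869184 = 2^34

# A power of two ANDed with itself minus one is zero,
# confirming exactly one bit differs
echo $(( diff & (diff - 1) ))    # 0
```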
-
JorgeB's post in Device is Disabled was marked as the answer
Disk was already disabled at last boot, so we cannot see what happened, but the disk looks healthy. Since the emulated disk is mounting, and assuming its contents look correct, you can rebuild on top. It might be a good idea to replace/swap cables before doing it, to rule them out if it happens again.
https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
-
JorgeB's post in SOS: Cache disk "Unmountable: Wrong or no file system" was marked as the answer
Apr 18 07:24:57 void emhttpd: shcmd (13188): mount -t xfs -o noatime,nouuid /dev/sde1 /mnt/cache
Apr 18 07:24:57 void kernel: XFS (sde1): Invalid superblock magic number
Now it's trying to mount as xfs; post the output of:
xfs_repair -v /dev/sde1
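If you want to see what it would do before anything is written, xfs_repair has a no-modify mode (a sketch; the device name comes from the log above and must match your cache device, and the filesystem must be unmounted, i.e. array in maintenance mode):

```shell
# -n: check only, report problems but modify nothing on the device
xfs_repair -n /dev/sde1

# Then run the actual repair with verbose output
xfs_repair -v /dev/sde1
```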
-
JorgeB's post in New hard drives not detected by unraid was marked as the answer
Make sure it's not the 3.3V issue; Google "SATA 3.3V pin".
-
JorgeB's post in Preclearing a drive, is this normal in main logs was marked as the answer
Disable spin down while they are being precleared.
-
JorgeB's post in My BTRFS nvme cache drive is stuck as read-only was marked as the answer
Filesystem is crashing during the balance due to corruption, recommend backing up and re-formatting the pool.
-
JorgeB's post in Daily Out Of Memory errors was marked as the answer
Last OOM event is from April 2nd:
Apr 2 00:58:09 Paradigm kernel: Out of memory: Killed process 4206 (java) total-vm:9880052kB, anon-rss:4203604kB, file-rss:0kB, shmem-rss:0kB, UID:100 pgtables:8436kB oom_score_adj:0
You'll need to reboot to clear the FCP warnings.
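To check whether new OOM kills are still happening after the reboot, you can grep the syslog for them (a sketch; `/var/log/syslog` is the usual Unraid location):

```shell
# Show each OOM event with the process the kernel killed
grep -o 'Killed process [0-9]* ([^)]*)' /var/log/syslog
```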
-
JorgeB's post in Unmountable: Wrong or no file system was marked as the answer
Docker image is corrupt, delete and recreate. Also note that the log tree issue can re-occur, so make sure backups are up to date. If it does re-occur, the same command should still work, but at that point I'd recommend backing up and re-formatting the pool.
-
JorgeB's post in Unmountable: Wrong or no file system was marked as the answer
Now start the array in normal mode and the disk should mount again.
-
JorgeB's post in Array will no longer start was marked as the answer
Yes.
Only manually, and if everything is using /mnt/user it will still keep working.
-
JorgeB's post in Flash failure - No backup - Attempted new Flash - Blacklisted was marked as the answer
See if this helps:
https://wiki.unraid.net/Manual/Changing_The_Flash_Device
If it doesn't contact support with the new flash drive GUID and they can do the key replacement for you.
-
JorgeB's post in Encrypted drive won't mount was marked as the answer
That's because the fs is set to encrypted, but according to the output above it's not. With the array stopped, click on disk1, change the fs to zfs (or auto), and start the array; if it doesn't mount, post new diags.
-
JorgeB's post in Wrong empty space annouce on the dashboard was marked as the answer
As mentioned, used space on the dashboard will be wrong; it counts the 1TB used for parity plus the actual 410GB used.
-
JorgeB's post in Unraid crashes whole network was marked as the answer
Lots of different crashes going on; this looks more like a hardware issue to me.