Community Answers
-
JorgeB's post in Unraid Boot No IP Address Assigned was marked as the answer
Link detected: no
No link is being detected on eth0
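A hedged sketch of how that link state is typically read from the console with ethtool; `parse_link` and `check_link` are helper names invented for this example, not Unraid tooling.

```shell
parse_link() {
  # pull the value out of ethtool's "Link detected: yes/no" line
  awk -F': ' '/Link detected/ {print $2}'
}

check_link() {
  # e.g. check_link eth0 -> "no" points at the cable, switch port, or NIC
  ethtool "$1" 2>/dev/null | parse_link
}
```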
-
JorgeB's post in btrfs cache unmountable was marked as the answer
Apr 2 12:00:59 Tower kernel: BTRFS warning (device nvme0n1p1): devid 2 uuid 73cb595a-267c-41db-8255-e0208da3535d is missing
The pool was already missing a device; you cannot remove another one until that is fixed. The problem is that there's a lot of data corruption detected on the pool, so the missing device may fail to delete. Try this to see if you can get the existing pool back: stop the array, then type on the console:
btrfs-select-super -s 1 /dev/nvme1n1p1
Then unassign the remaining pool member, start the array, stop the array, and re-assign both pool members. Start the array again; if the pool mounts, run a correcting scrub and post the results together with new diags.
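The console part of the steps above can be sketched as follows. This is illustrative, not a script to paste blindly: the device path is the surviving member from the log, the `/mnt/cache` mount point is an assumption, and `DRY_RUN=1` (the default here) only prints each command instead of executing it.

```shell
DRY_RUN=${DRY_RUN:-1}
run() {
  # with DRY_RUN=1, print the command instead of running it
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# 1. with the array stopped, fall back to backup superblock 1
#    on the remaining pool member
run btrfs-select-super -s 1 /dev/nvme1n1p1

# 2. (GUI) unassign the remaining member, start array, stop array,
#    re-assign both members, start array

# 3. if the pool mounts, run a correcting scrub (mount point assumed)
run btrfs scrub start /mnt/cache
```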
-
JorgeB's post in not sure what happened was marked as the answer
Apr 26 06:46:47 Tower shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed.
You need to reboot for the shares to come back.
Also in case you missed it:
Apr 11 14:13:05 Tower kernel: md: disk25 read error, sector=607448736
Apr 11 14:13:05 Tower kernel: md: disk25 read error, sector=607448744
Apr 11 14:13:05 Tower kernel: md: disk25 read error, sector=607448752
Looks more like a power/connection problem, and disk24 was already disabled at boot.
-
JorgeB's post in config missmatch, newbi problem was marked as the answer
Enable destructive mode on the UD plugin settings.
-
JorgeB's post in Unable to map shares was marked as the answer
Go to the share(s) and make sure they are being exported; the default is No.
-
JorgeB's post in Existing Server + New Config process will the array be usable during parity build? was marked as the answer
Yes.
Yes, but ideally they will be on a pool; array performance will be degraded during the sync.
-
JorgeB's post in ERRORS: Emulating two drives during a rebuild! :-( was marked as the answer
Disks 2 and 6 also dropped; this is usually a power/connection problem, but it could also be a bad PSU.
-
JorgeB's post in Server freezes weekly forcing unclean shutdowns was marked as the answer
Try switching to ipvlan (Settings -> Docker Settings -> Docker custom network type -> ipvlan (advanced view must be enabled, top right)).
-
JorgeB's post in BTRFS errors on cache pool was marked as the answer
Jul 11 10:11:35 Nexus kernel: BTRFS: device fsid 653150f2-f281-4987-bb45-f3e553c14b9c devid 2 transid 534361 /dev/nvme2n1p1 scanned by udevd (1391)
Jul 11 10:11:35 Nexus kernel: BTRFS: device fsid 653150f2-f281-4987-bb45-f3e553c14b9c devid 1 transid 542551 /dev/nvme1n1p1 scanned by udevd (1402)
I assume you mean devid 2 (nvme2n1)? The other device will be out of sync.
If yes, it's always good to make sure backups are up to date. If you mounted devid 2 by itself read/write, the best way forward is to wipe the other device and then re-add it to the pool.
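Before wiping anything, the out-of-sync state can be confirmed by comparing superblock generations, which correspond to the transids in the syslog above. A small sketch, assuming the pool is unmounted; `generation_of` is a helper name invented for this example, the real data comes from `btrfs inspect-internal dump-super`.

```shell
generation_of() {
  # dump-super prints a "generation <n>" line; extract <n>
  grep '^generation' | awk '{print $2}'
}

# usage (console, as root, pool unmounted):
# btrfs inspect-internal dump-super /dev/nvme1n1p1 | generation_of
# btrfs inspect-internal dump-super /dev/nvme2n1p1 | generation_of
# the member with the higher generation is the one to keep
```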
-
JorgeB's post in kernel: bio error: err: 10 was marked as the answer
Never seen that before, but since it's a recent model look for a BIOS update.
-
JorgeB's post in Restarting my server best practice was marked as the answer
This is a good place to start, mostly for the user shares:
https://wiki.unraid.net/Manual/Overview
For the filesystem, just select which one you want and re-format the disks.
-
JorgeB's post in Disk 2 disabled;unable to stop array was marked as the answer
Try typing 'reboot' in the console; it will initiate a clean shutdown. Then post new diags after array start.
-
JorgeB's post in UNMOUNTABLE: WRONG OR NO FILE SYSTEM was marked as the answer
Yep, if xfs_repair cannot fix the filesystem that's the best option.
-
JorgeB's post in Unmountable: Unsupported partition layout was marked as the answer
That's expected when moving disks from a RAID controller; it's one of the reasons they are not recommended. This can help:
https://forums.unraid.net/topic/84717-solved-moving-drives-from-non-hba-raid-card-to-hba/?do=findComment&comment=794399
-
JorgeB's post in Lost password Username webgui was marked as the answer
Repeat the recover-root-password procedure; if it's done correctly it will ask for a new password at first login.
https://wiki.unraid.net/Manual/Troubleshooting#Lost_root_Password
-
JorgeB's post in Kernel: BUG: Bad page cache in process php-fpm81 pfn:fd010f was marked as the answer
Wouldn't worry for now.
-
JorgeB's post in Cache Unmountable: No file system was marked as the answer
No, it will delete all data.
According to this the pool wasn't redundant:
Apr 23 10:01:50 Paco kernel: BTRFS warning (device nvme1n1p1): chunk 85963309056 missing 1 devices, max tolerance is 0 for writable mount
So you cannot remove a device. Type:
btrfs-select-super -s 1 /dev/nvme0n1p1
Then stop the array, unassign all cache devices, start the array, stop the array, re-assign both cache devices, start the array, and post new diags.
-
JorgeB's post in Does encryption affect parity? was marked as the answer
Depends on what you changed before; any change to a disk's filesystem will make parity invalid, at least partially. If the changes were small, you can check "Parity is already valid" and run a correcting check, but it will take the same time as a sync. If they were large, like a disk added or removed, you should re-sync instead, since it will be faster.
-
JorgeB's post in Recover from double drive issues was marked as the answer
If the emulated disk is mounting and contents look correct, to rebuild again, after checking/replacing cables:
https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
-
JorgeB's post in zfs value was marked as the answer
btrfs is usually reliable for single and raid1 filesystems, assuming good/reliable hardware. Still, zfs, if nothing else, is better at recovering, for example, from a raid1 dropped device. So if it were me I would probably convert the cache pool, but not the array for now, in part because there are still some write performance issues with zfs when used on the array.
-
JorgeB's post in Docker service failing, docker image growing, dockers failing to start, exit, and run was marked as the answer
I suspected it would help, since before, the docker image was failing to mount because it was in use, which means it failed to unmount at array stop. Things look normal for now; post new diags if it happens again.
-
JorgeB's post in Cloning (?) a server was marked as the answer
Copying within the array with parity installed will be very slow; use UD and copy some data to each disk instead, you just need to limit each copy to the disk's capacity. Those disks can then be assigned to a new Unraid array and it will import the existing data.
-
JorgeB's post in (6.11.5) Able to access login screen, but no other part of web gui was marked as the answer
Looks like this issue:
-
JorgeB's post in Out of memory errors detected on your server - Diagnostic Attached was marked as the answer
If it's a one-time thing you can ignore it; if it keeps happening, try further limiting the RAM for VMs and/or docker containers. The problem is usually not just about not enough RAM but more about fragmented RAM. Alternatively, a small swap file on disk might help; you can use the swapfile plugin:
https://forums.unraid.net/topic/109342-plugin-swapfile-for-691/
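For reference, a hedged sketch of what a swap file setup looks like done by hand; the plugin linked above automates the equivalent. The path and size are assumptions, and `DRY_RUN=1` (the default here) only prints the commands. Note that on a btrfs cache the file must be made NOCOW while still empty, which the plugin also handles.

```shell
SWAPFILE=/mnt/cache/swapfile
DRY_RUN=${DRY_RUN:-1}
run() {
  # with DRY_RUN=1, print the command instead of running it
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run touch "$SWAPFILE"
run chattr +C "$SWAPFILE"        # btrfs: swap files must not be copy-on-write
run fallocate -l 4G "$SWAPFILE"  # size is an assumption; match your workload
run chmod 600 "$SWAPFILE"
run mkswap "$SWAPFILE"
run swapon "$SWAPFILE"
```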