Community Answers
-
JorgeB's post in Strange errors Buffer I/O error! was marked as the answer
Those are normal for new array drives; just ignore them, once the drives are added and formatted the errors will be gone.
-
JorgeB's post in 6.10.3 - Possible corruption on flash drive was marked as the answer
Those files are definitely corrupt; you can just delete them and they will be recreated, but with default share settings, so adjust if necessary.
-
JorgeB's post in BTRFS ERROR in cache pool. One device failed to restart was marked as the answer
NVMe device dropped offline:
Apr 10 21:20:23 Nexus kernel: nvme nvme2: I/O 203 QID 7 timeout, aborting
Apr 10 21:20:23 Nexus kernel: nvme nvme2: I/O 204 QID 7 timeout, aborting
Apr 10 21:20:23 Nexus kernel: nvme nvme2: I/O 205 QID 7 timeout, aborting
Apr 10 21:20:23 Nexus kernel: nvme nvme2: I/O 206 QID 7 timeout, aborting
Apr 10 21:20:54 Nexus kernel: nvme nvme2: I/O 203 QID 7 timeout, reset controller
Apr 10 21:21:24 Nexus kernel: nvme nvme2: I/O 24 QID 0 timeout, reset controller
Apr 10 21:22:28 Nexus kernel: nvme nvme2: Device not ready; aborting reset, CSTS=0x1
Apr 10 21:22:28 Nexus kernel: nvme nvme2: Abort status: 0x371
### [PREVIOUS LINE REPEATED 3 TIMES] ###
Apr 10 21:22:59 Nexus kernel: nvme nvme2: Device not ready; aborting reset, CSTS=0x1
Apr 10 21:22:59 Nexus kernel: nvme nvme2: Removing after probe failure status: -19
Apr 10 21:23:29 Nexus kernel: nvme nvme2: Device not ready; aborting reset, CSTS=0x1
Apr 10 21:23:29 Nexus kernel: nvme2n1: detected capacity change from 1953525168 to 0
Power cycle the server and, if the device comes back online, you can re-add it to the pool, assuming the pool was raid1; if it wasn't, post new diags after the reboot. Also see here for more info.
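If the device does come back after the power cycle, a quick sanity check from the console, assuming the pool is mounted at /mnt/cache (adjust to your pool name), would be something like:
btrfs filesystem show /mnt/cache   # confirm all pool members are listed again
btrfs device stats /mnt/cache      # per-device error counters, should stop increasing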
-
JorgeB's post in Disk16 is missing when viewing /media it also has tons of system files I wouldnt expect to see was marked as the answer
You need to run xfs_repair again without -n or nothing will be done.
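The easiest way is to clear the -n from the options box in the GUI's check filesystem section; as a rough console sketch with the array started in maintenance mode it would be something like this (disk16 in this case, and on newer releases the device name carries a p1 suffix, e.g. /dev/md16p1, so adjust as needed):
xfs_repair -v /dev/md16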
-
JorgeB's post in Stuck on 6.10.3, no GUI when I update was marked as the answer
That looks like this issue:
-
JorgeB's post in Files not moving from cache to array and becoming unmovable was marked as the answer
The cache pool is detecting data corruption; run a correcting scrub and post the output, it's also a good idea to run memtest.
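If you'd rather use the console than the GUI's Scrub button, a correcting scrub on a pool mounted at /mnt/cache (adjust the path to your pool) looks roughly like:
btrfs scrub start /mnt/cache    # corrects bad copies where the redundancy allows it
btrfs scrub status /mnt/cache   # progress and error counts once it finishes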
-
JorgeB's post in Moving HDDs from a USB enclosure to a SAS-HBA internally was marked as the answer
You can try this, note that it will only work if parity is still valid:
-Tools -> New Config -> Retain current configuration: All -> Apply
-Check all assignments and assign any missing disk(s) if needed, including all the disks that were in the enclosure; assigned slots must be the same as before to keep parity2 valid.
-IMPORTANT - Check both "parity is already valid" and "maintenance mode" and start the array (note that the GUI will still show that data on the parity disk(s) will be overwritten; this is normal as it doesn't account for the checkbox, and it won't be overwritten as long as the box is checked)
-Stop array
-Unassign the disk you want to rebuild
-Start array (in normal mode now); ideally the emulated disk will now mount and its contents will look correct, if it doesn't, run a filesystem check on the emulated disk
-If the emulated disk mounts and contents look correct stop the array
-Re-assign the disk to rebuild and start array to begin.
-
JorgeB's post in No Disks Detected | Dell Perc H310, First Setup was marked as the answer
The HBA is in IT mode, so the main suspect would be the cables: you need forward breakout cables; reverse breakout cables look the same but won't work.
-
JorgeB's post in Unraid Version: 6.12.0-rc2 - zfs guide to steps to create zpool with the integrated zfs in rc2 was marked as the answer
Click on "Add pool", select number of slots you want then OK
Assign all pool devices
Click on the first pool device, change fs to zfs, select pool type and topology you want
Back on main, start array, format pool.
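Once the pool is formatted you can double check it from the console; these standard ZFS commands should reflect the topology you selected:
zpool status    # pool layout and device health
zpool list -v   # capacity per vdev/device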
-
JorgeB's post in Large CPU Usage + Ethernet Usage was marked as the answer
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
18247 nobody    20   0 1200.8g   7.2g   7.1g R  13.3  23.3  79:44.77 qbittorre+
This is using a lot of CPU, try shutting it down.
-
JorgeB's post in Parity disabled with read errors during array shrinking was marked as the answer
https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
If you are going to remove that disk just remove it now, and instead of the above do a new config without it (Tools -> New config) then start array to begin parity sync.
-
JorgeB's post in Out of memory? was marked as the answer
If it's a one time thing you can ignore it; if it keeps happening, try further limiting the RAM for VMs and/or docker containers. The problem is usually not just about not having enough RAM but more about fragmented RAM; alternatively, a small swap file on disk might help, you can use the swapfile plugin:
https://forums.unraid.net/topic/109342-plugin-swapfile-for-691/
The docker image went read-only due to not enough space; reboot, and if that doesn't work just recreate it.
-
JorgeB's post in Best way to refer to files in cache? was marked as the answer
If it's going to be used as cache only, you can use the disk path to bypass FUSE.
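As an illustration, assuming a pool named cache and a share called appdata (names here are just examples):
/mnt/user/appdata    # user share path, goes through FUSE (shfs)
/mnt/cache/appdata   # direct pool path, bypasses FUSE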
-
JorgeB's post in WEB UI crashing after some days was marked as the answer
There's nothing relevant logged before the crash, which usually suggests a hardware problem. One thing you can try is to boot the server in safe mode with all docker/VMs disabled and let it run as a basic NAS for a few days; if it still crashes it's likely a hardware problem, if it doesn't, start turning on the other services one by one.
-
JorgeB's post in 2 data drives (connected by PCI-E SATA card) marked "missing" all of a sudden - no system config change - Unraid 6.11.5 was marked as the answer
You can, for example, boot an Unraid trial key on a different computer, then mount them with UD and copy the data over SMB.
-
JorgeB's post in (SOLVED) Help with recurring server crash was marked as the answer
Mar 30 20:42:02 M93p kernel: macvlan_broadcast+0x10a/0x150 [macvlan]
Mar 30 20:42:02 M93p kernel: macvlan_process_broadcast+0xbc/0x12f [macvlan]
Macvlan call traces are usually the result of having dockers with a custom IP address and will end up crashing the server; switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan (advanced view must be enabled, top right)).
-
JorgeB's post in Unable to start Docker after reboot, Server keeps going unresponsive after a while was marked as the answer
Start here; if it doesn't help, enable the syslog server and post that after a crash.
-
JorgeB's post in /var/log is getting full (currently 79 % used) was marked as the answer
btw, I assume that log file is created by this plugin:
Mar 26 13:42:42 unraid-server root: | Installing new package /boot/config/plugins/snmp/net-snmp-5.9.3-x86_64-1.txz
Mar 26 13:42:42 unraid-server root: +==============================================================================
Mar 26 13:42:42 unraid-server root:
Mar 26 13:42:42 unraid-server root: Verifying package net-snmp-5.9.3-x86_64-1.txz.
Mar 26 13:42:42 unraid-server root: Installing package net-snmp-5.9.3-x86_64-1.txz:
Mar 26 13:42:42 unraid-server root: PACKAGE DESCRIPTION:
Mar 26 13:42:42 unraid-server root: # net-snmp (Simple Network Management Protocol tools)
Mar 26 13:42:42 unraid-server root: #
Mar 26 13:42:42 unraid-server root: # Various tools relating to the Simple Network Management Protocol:
-
JorgeB's post in TRIM on encrypted SSD cache? was marked as the answer
Yes.
Not sure since I don't use encryption so don't really care, but maybe about 2 years.
-
JorgeB's post in Unmountable disk present was marked as the answer
Start the array in normal mode, disk should mount now.
-
JorgeB's post in Unable to format an unmountable drive was marked as the answer
Apr 10 22:37:40 Tower root: The new table will be used at the next reboot or after you
Apr 10 22:37:40 Tower root: run partprobe(8) or kpartx(8)
Reboot the server.
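A reboot is the simplest option; alternatively the message itself points at re-reading the partition table in place, if those tools are present on your install, e.g. with sdX being the affected device:
partprobe /dev/sdX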
-
JorgeB's post in Cache Drive - Power Outage/Surge was marked as the answer
If the log is the only problem this might help:
btrfs rescue zero-log /dev/sde1
-
JorgeB's post in how to setting up cache..not work was marked as the answer
By default, adding a second device will create a raid1 pool, so only 120GB would be usable in this case; if you want to use the full capacity of both devices, convert to the single profile:
https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421
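The linked FAQ entry has the exact steps; as a rough sketch, the conversion is a btrfs balance with the target profiles, e.g. for a pool mounted at /mnt/cache (path and profile choices here are assumptions, follow the FAQ for your case):
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache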
-
JorgeB's post in Can't connect after added new NIC was marked as the answer
Try deleting/renaming /boot/config/network.cfg and network-rules.cfg and reboot.
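From the console or over SSH that would be, for example:
mv /boot/config/network.cfg /boot/config/network.cfg.bak
mv /boot/config/network-rules.cfg /boot/config/network-rules.cfg.bak
reboot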