Posts posted by tronyx
6 minutes ago, JorgeB said:
This shows the UUID change didn't complete, so it's not just a case of both pool members having different UUIDs; that scenario I could reproduce (by changing the UUID with one of the devices disconnected) and fix. I'm not sure about this one since I can't create an identical situation, so let's try the following:
First, try changing the UUID again for only one of the devices; the other one still has the same FSID, so let's see if it changes for both automatically:
btrfstune -u /dev/sdf1
If this one doesn't work, try the other one:
btrfstune -u /dev/sde1
If neither works, try manually changing both to the same UUID. If it works for the 1st one, to change the 2nd device you must temporarily disconnect the 1st one or it will complain that the UUID already exists:
btrfstune -U 81fe799a-b046-40f8-b9af-fe886306ba0d /dev/sdf1
If it fails, try the other one:
btrfstune -U 81fe799a-b046-40f8-b9af-fe886306ba0d /dev/sde1
If successful for one of them, disconnect or use UD to detach that device, then do it for the other one, then reboot and post the new btrfs fi show output.
Changing the UUID for the 1st drive failed, but the 2nd drive worked:
root@Morgoth:~# btrfstune -u /dev/sdf1
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sdf1: Input/output error
ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sde1: Input/output error
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
warning, device 1 is missing
WARNING: it's recommended to run 'btrfs check --readonly' before this operation.
The whole operation must finish before the filesystem can be mounted again.
If cancelled or interrupted, run 'btrfstune -u' to restart.
We are going to change UUID, are your sure? [y/N]: y
Current fsid: d5cddf19-cc3e-42bc-8607-4e4242a02d61
New fsid: d5cddf19-cc3e-42bc-8607-4e4242a02d61
Set superblock flag CHANGING_FSID
ERROR: failed to write bytenr 956768075776 length 16384: Input/output error
ERROR: btrfstune failed
root@Morgoth:~# btrfstune -u /dev/sde1
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sde1: Input/output error
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
WARNING: it's recommended to run 'btrfs check --readonly' before this operation.
The whole operation must finish before the filesystem can be mounted again.
If cancelled or interrupted, run 'btrfstune -u' to restart.
We are going to change UUID, are your sure? [y/N]: y
Current fsid: d5cddf19-cc3e-42bc-8607-4e4242a02d61
New fsid: d5cddf19-cc3e-42bc-8607-4e4242a02d61
Set superblock flag CHANGING_FSID
Change fsid in extents
Change fsid on devices
Clear superblock flag CHANGING_FSID
Fsid change finished
root@Morgoth:~# btrfs fi show
Label: none  uuid: fc5ecaa9-cdce-4b18-afe2-0d3640b5e669
	Total devices 2 FS bytes used 661.49GiB
	devid    1 size 894.25GiB used 722.03GiB path /dev/sdc1
	devid    2 size 894.25GiB used 722.03GiB path /dev/sdb1

Label: 'Plex_Docker_AppData'  uuid: 8a3a773d-d3f3-46b2-9783-b5c5c517d2b7
	Total devices 2 FS bytes used 127.39GiB
	devid    1 size 465.76GiB used 253.03GiB path /dev/nvme0n1p1
	devid    2 size 465.76GiB used 253.03GiB path /dev/nvme1n1p1

Label: 'Torrents'  uuid: d5cddf19-cc3e-42bc-8607-4e4242a02d61
	Total devices 2 FS bytes used 1.22TiB
	devid    1 size 3.64TiB used 1.23TiB path /dev/sde1
	devid    2 size 3.64TiB used 1.23TiB path /dev/sdf1
I was then able to mount the Pool and everything seems to be good to go again. Thank you so much!
-
8 minutes ago, JorgeB said:
That's strange, since I don't see a way to change the UUID of a single pool member when all are connected. Post the output of:
btrfs fi show
Here is the output:
root@Morgoth:~# btrfs fi show
Label: none  uuid: fc5ecaa9-cdce-4b18-afe2-0d3640b5e669
	Total devices 2 FS bytes used 663.39GiB
	devid    1 size 894.25GiB used 722.03GiB path /dev/sdc1
	devid    2 size 894.25GiB used 722.03GiB path /dev/sdb1

Label: 'Plex_Docker_AppData'  uuid: 8a3a773d-d3f3-46b2-9783-b5c5c517d2b7
	Total devices 2 FS bytes used 127.39GiB
	devid    1 size 465.76GiB used 253.03GiB path /dev/nvme0n1p1
	devid    2 size 465.76GiB used 253.03GiB path /dev/nvme1n1p1

ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sdf1: Input/output error
ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sde1: Input/output error
Therein lies the issue which I'm trying to fix. -
I checked the syslog, however, and see the following:
May 24 10:40:16 Morgoth ool www[20370]: /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.settings 'uuid_change'
May 24 10:40:36 Morgoth kernel: BTRFS: device label Torrents devid 1 transid 34810 /dev/sde1 scanned by udevd (29068)
May 24 10:40:43 Morgoth kernel: BTRFS: device label Torrents devid 2 transid 34810 /dev/sdf1 scanned by udevd (29069)
May 24 10:40:57 Morgoth unassigned.devices: Warning: shell_exec(/sbin/btrfstune -uf '/dev/sde1') took longer than 20s!
May 24 10:40:57 Morgoth unassigned.devices: Changed partition UUID on '/dev/sde1' with result: command timed out
-
2 hours ago, JorgeB said:
I was going to say that it would probably be a good idea to change them for all pool members, but I tested and it does change them for all pool devices:
May 25 12:40:35 Tower15 unassigned.devices: Changed partition UUID on '/dev/sdg1' with result: Current fsid: cf6ba21f-646f-4793-9145-29d965e34c2b New fsid: f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd Set superblock flag CHANGING_FSID Change fsid in extent tree Change fsid in chunk tree Clear superblock flag CHANGING_FSID Fsid change finished
May 25 12:40:35 Tower15 kernel: BTRFS: device fsid f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd devid 2 transid 12 /dev/sdg1 scanned by udevd (29766)
May 25 12:40:35 Tower15 kernel: BTRFS: device fsid f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd devid 3 transid 12 /dev/sdb1 scanned by udevd (29765)
May 25 12:40:35 Tower15 kernel: BTRFS: device fsid f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd devid 1 transid 12 /dev/sde1 scanned by udevd (29764)
@tronyx were all devices connected when you changed the UUID?
Yes, all drives were connected. -
4 minutes ago, dlandon said:
New version of UD. The notable change is with root shares. Changes were made in the configuration to be more aligned with how other SMB and NFS shares are configured. This change will cause any existing root shares to fail to mount. The fix is to remove any root shares and add them back. Save any script files you have on the root shares as they will be deleted when the root share is removed.
The other change fixes pools failing to mount with a false "share name is already being used" message. I broke this yesterday while making a change to address a case where a duplicate share name was not detected. Should be all sorted out now.
Thank you for dealing with this so quickly. My one pool mounted fine, but the other is having issues now, and I think it might be because I changed the UUID of one of the disks within UD to try to resolve the issue earlier today.
May 24 21:00:11 Morgoth unassigned.devices: *** dev /dev/sde1 mountpoint Definitely_Not_Torrents
May 24 21:00:11 Morgoth unassigned.devices: Mounting partition 'sde1' at mountpoint '/mnt/disks/Definitely_Not_Torrents'...
May 24 21:00:11 Morgoth unassigned.devices: Mount cmd: /sbin/mount -t 'btrfs' -o rw,noatime,nodiratime,space_cache=v2 '/dev/sde1' '/mnt/disks/Definitely_Not_Torrents'
May 24 21:00:11 Morgoth kernel: BTRFS error (device sde1): unrecognized or unsupported super flag: 34359738368
May 24 21:00:11 Morgoth kernel: BTRFS error (device sde1): dev_item UUID does not match metadata fsid: d5cddf19-cc3e-42bc-8607-4e4242a02d61 != 62b54680-89b9-47a3-a1be-a770935f18df
May 24 21:00:11 Morgoth kernel: BTRFS error (device sde1): superblock contains fatal errors
May 24 21:00:11 Morgoth kernel: BTRFS error (device sde1): open_ctree failed
May 24 21:00:13 Morgoth unassigned.devices: Mount of 'sde1' failed: 'mount: /mnt/disks/Definitely_Not_Torrents: wrong fs type, bad option, bad superblock on /dev/sde1, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. '
May 24 21:00:13 Morgoth unassigned.devices: Partition 'Definitely_Not_Torrents' cannot be mounted.
May 24 21:00:13 Morgoth unassigned.devices: Disk with ID 'CT4000MX500SSD1_2223E63A58B0 (sdf)' is not set to auto mount.
Would you happen to know if there is some way I can fix this? -
5 hours ago, dlandon said:
I see the problem and will issue an update to UD as soon as I figure out what is happening.
Is there anything I can do in the meantime to get them mounted? Server is pretty much useless without being able to mount these pools. -
14 minutes ago, dlandon said:
I see the problem and will issue an update to UD as soon as I figure out what is happening.
Thanks for confirming and for all your hard work! -
Updated the Plugin to 2023.05.23 and now my UD Pools won't mount, generating an error in the syslog that the name is reserved and cannot be used.
May 24 10:56:57 Morgoth unassigned.devices: Error: Device '/dev/sde1' mount point 'Definitely_Not_Torrents' - name is reserved, used in the array or a pool, or by an unassigned device.
May 24 10:56:57 Morgoth unassigned.devices: Disk with serial 'CT4000MX500SSD1_2223E63A58B2', mountpoint 'Definitely_not_Torrents' cannot be mounted.
May 24 11:02:21 Morgoth unassigned.devices: Error: Device '/dev/nvme0n1p1' mount point 'Plex_Docker_AppData' - name is reserved, used in the array or a pool, or by an unassigned device.
May 24 11:02:21 Morgoth unassigned.devices: Disk with serial 'Samsung_SSD_970_EVO_500GB_S5H7NC0N331833E', mountpoint 'Plex_Docker_AppData' cannot be mounted.
I have two Unassigned Devices pools which have been working for a few years now, but they will no longer mount, stating the mount name is reserved or in use. I have tried stopping the array, removing the /var/state/unassigned.devices/share_names.json file, and starting the array again, but I see the same result. The issue persists through a reboot as well. I suspect there may be an issue with the share-name duplicate check that was added in the most recent update. -
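For anyone retrying the state-file removal, it can be scripted defensively. This is a minimal sketch; only the /var/state/... path comes from the post, while the clear_ud_state helper name and the backup step are my own additions, demoed here against a throwaway file:

```shell
#!/bin/sh
# Hypothetical helper: back up UD's share-name state file, then remove it.
clear_ud_state() {
    state_file="$1"
    if [ -f "$state_file" ]; then
        cp "$state_file" "$state_file.bak"   # keep a copy in case removal doesn't help
        rm "$state_file"
        echo "removed"
    else
        echo "missing"
    fi
}

# On the server (with the array stopped) you would run:
#   clear_ud_state /var/state/unassigned.devices/share_names.json

# Demo against a throwaway file so the logic can be exercised anywhere:
demo_dir=$(mktemp -d)
echo '{}' > "$demo_dir/share_names.json"
clear_ud_state "$demo_dir/share_names.json"   # prints "removed"
```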
WARNING! These examples may crash your computer and delete all files if executed inappropriately. Before removing any files, ensure you have a backup of all critical files. I am not responsible for any data loss.
Login to the Server via SSH and run the following command to find the files across all shares:
find /mnt/user/ -type f -name "*.partial"
Then, if everything looks okay, you can run this command to find AND delete them:
find /mnt/user/ -type f -name "*.partial" -delete
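If you want to verify the pattern before pointing it at /mnt/user/, the two-step workflow above can be rehearsed on a throwaway directory tree; the share names and file names below are made up for the demo:

```shell
#!/bin/sh
# Build a scratch tree with a couple of fake .partial files.
root=$(mktemp -d)
mkdir -p "$root/share1" "$root/share2"
touch "$root/share1/movie.mkv.partial" "$root/share2/song.flac" "$root/share2/iso.partial"

# Step 1: list the matches and review them.
find "$root" -type f -name "*.partial"

# Step 2: only after reviewing, delete them.
find "$root" -type f -name "*.partial" -delete

# Nothing with the .partial suffix should remain; other files are untouched.
find "$root" -type f -name "*.partial" | wc -l
```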
-
Anyone else having their Telegraf container not start w/ the following errors?
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
I believe it is an issue with this command now for some reason:
/bin/sh -c 'apk update && apk add smartmontools nvme-cli && telegraf'
EDIT: Looks like pinning the image to the 1.19.2-alpine tag fixed it, so they must've broken something.
-
1 minute ago, stFfn said:
where do you see the domain.com?
could you please pinpoint it to me?
i see organizr.* in the config.
*sigh* We don't see it anywhere. Rox was simply making sure you were, in fact, browsing to organizr.domain.com and not just domain.com. Just a measure of validation, that's all.
-
Just now, stFfn said:
why would i use organizr.domain.com?
of course im using my own domain.. where did you see that i use organizr.domain.com?
the swag and organizr dockers are up to date.
i dont know how anything else with nginx could interfere? port 443 is redirected to swag. how could anything else be taking over only this docker but not the others that im running?
We didn't see it, but Rox asked you because people have done that before and, since nothing is standing out as obvious to us as to what the issue could be, we're asking everything we can think of.
Either your containers are not up to date or you have another Nginx instance running either in another container or installed natively on the System that is running Docker.
root@suladan:~# docker exec -it organizr nginx -v
nginx version: nginx/1.20.1
root@suladan:~# docker exec -it swag nginx -v
nginx version: nginx/1.20.1
-
49 minutes ago, stFfn said:
Hello? didnt you see my other messages?
Yes, we did, but you didn't answer the question. Are you LITERALLY browsing to organizr.domain.com or are you replacing domain.com with your ACTUAL domain?
Also, those error pages show Nginx version 1.18. Both the SWAG and Organizr containers are running Nginx version 1.20 so you're either VERY out of date or you're running something else that is using Nginx.
-
14 minutes ago, coblck said:
I take it the boot/config/modprobe file is already present on the flash drive.
That is a directory, not a file, but yes, it exists already, or at least it should. If not, create the directory.
-
2 minutes ago, coblck said:
Yeah i understood the intel part 😄 , so if i login into unraid from my w10 pc and open a console window and type touch /boot/config/modprobe.d/i915.conf that's it ?
That should be it, per the docs.
-
8 minutes ago, coblck said:
This command "touch /boot/config/modprobe.d/amdgpu.conf # create an empty file"
Im sorry for the hassle just not very clear to me.
So within config/modprobe.d folder create a file with "touch /boot/config/modprobe.d/amdgpu.conf # create an empty file" using notepad ??
No, the command is this: touch /boot/config/modprobe.d/amdgpu.conf
That will create an empty file called amdgpu.conf in the /boot/config/modprobe.d/ directory. You need to create the file that corresponds to your GPU type: if it is an Intel integrated GPU, as in my case, create i915.conf instead of amdgpu.conf. This empty file overrides the default one that blacklists the driver.
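To make the step concrete, here is a sketch. PREFIX is a hypothetical variable of mine so the commands can be tried against a scratch directory first; on the actual server you would operate on the real /boot/config/modprobe.d/ path directly:

```shell
#!/bin/sh
# Create the empty override file that un-blacklists the i915 driver.
# PREFIX defaults to a scratch directory for safe experimentation.
PREFIX="${PREFIX:-$(mktemp -d)}"

mkdir -p "$PREFIX/boot/config/modprobe.d"          # ensure the directory exists
touch "$PREFIX/boot/config/modprobe.d/i915.conf"   # empty file overrides the blacklist

ls -l "$PREFIX/boot/config/modprobe.d/"
```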
-
5 minutes ago, coblck said:
Ok thanks will have a look, would this command need to be entered in the terminal window after every reboot
What command? The doc outlines creating a file on the flash drive which is persistent between reboots.
-
47 minutes ago, coblck said:
I see you use the iGPU for Plex transcoding. I have an i7400 with iGPU and use mine mainly for Plex. Updating tomorrow and just wondering if I have to change anything within the Plex docker parameters like I originally did to get it to use the iGPU for hardware transcoding. What exactly did you add or remove from the /boot/syslinux/syslinux.cfg file?
Thanks
After upgrading to 6.9, I followed the docs to whitelist the i915 driver, as it's now blacklisted by default, as outlined HERE. Then I removed the original entry I had made in the /boot/config/go file to get things working on 6.8.3, and rebooted.
-
6 minutes ago, coblck said:
Did you remove the extra lines in the go file or leave them
I removed them as they are no longer needed, per the 6.9 docs. Also, please see my edited reply above as I found the real issue and resolved it.
-
For anyone else that may encounter the same issue, I was able to resolve my issue on 6.9 by removing the nomodeset commands in my /boot/syslinux/syslinux.cfg file, like so:
root@Morgoth:~# cat /boot/syslinux/syslinux.cfg
default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label unRAID OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest
I do not remember explicitly adding them myself previously, but it is very possible that I did to get things working on 6.8.3.
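If someone needs to make the same edit, the flag can be located and stripped with standard tools. A sketch against a generated sample file (on the server, back up the real /boot/syslinux/syslinux.cfg before editing it in place):

```shell
#!/bin/sh
# Strip a stray 'nomodeset' from the append lines of a syslinux.cfg-style file.
# Demoed on a generated sample rather than the real config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
label unRAID OS
  kernel /bzimage
  append initrd=/bzroot nomodeset
EOF

grep -n nomodeset "$cfg"           # show where the flag appears first
sed -i 's/ nomodeset//g' "$cfg"    # remove the flag in place
grep -c nomodeset "$cfg" || true   # count should now be 0
```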
-
Was running 6.8.3 with functional iGPU/QS transcoding w/ Plex. Upgraded to 6.9 this morning and now the /dev/dri directory is missing. Found the section in the Wiki that explains the drivers being blacklisted, created the necessary file to whitelist it ( /boot/config/modprobe.d/i915.conf ), rebooted the server, but /dev/dri is still missing. Removed the original lines I had previously added to the go file in 6.8.3 and rebooted, but still no dice. The Server had been rebooted several times before with no issues and nothing has changed within the BIOS settings. My Server is essentially useless at the moment, so any help would be greatly appreciated.
Mobo: Supermicro X11SCH-F
CPU: Intel Xeon E-2246G
Everything points to the fact that it SHOULD be working:
root@Morgoth:~# lsmod | grep -i 'i915'
i915                 1712128  0
iosf_mbi               16384  1 i915
drm_kms_helper        163840  1 i915
drm                   356352  2 drm_kms_helper,i915
intel_gtt              20480  1 i915
video                  45056  1 i915
i2c_algo_bit           16384  2 igb,i915
backlight              16384  3 video,i915,drm
i2c_core               45056  8 drm_kms_helper,i2c_algo_bit,igb,i2c_smbus,i2c_i801,i915,ipmi_ssif,drm
root@Morgoth:~# modinfo i915
filename:       /lib/modules/5.10.19-Unraid/kernel/drivers/gpu/drm/i915/i915.ko.xz
license:        GPL and additional rights
description:    Intel Graphics
author:         Intel Corporation
The output of the above is the same on both 6.8.3 and 6.9, although lspci -k no longer shows the driver in use like it did on 6.8.3:
00:02.0 Display controller: Intel Corporation HD Graphics P630
	Subsystem: Super Micro Computer Inc Device 1b11
	Kernel driver in use: i915
	Kernel modules: i915
Compared to 6.9:
00:02.0 Display controller: Intel Corporation CoffeeLake-S GT2 [UHD Graphics P630]
	Subsystem: Super Micro Computer Inc Device 1b11
	Kernel modules: i915
Running modprobe i915, which should no longer be necessary anyway, doesn't make /dev/dri show up either.
Doesn't seem to matter whether or not the array is started either. Diagnostics are attached in case they might be of any help.
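For repeatedly checking this across reboots, the /dev/dri test can be wrapped in a tiny function. check_dri is a hypothetical helper of mine, parameterized so it can be exercised on any directory, not just a box with the driver loaded:

```shell
#!/bin/sh
# Report whether a DRI device directory is present.
# With no argument it checks the real /dev/dri.
check_dri() {
    if [ -d "${1:-/dev/dri}" ]; then
        echo "present"
    else
        echo "missing"
    fi
}

check_dri              # on a box where the driver failed to load, prints "missing"
fake=$(mktemp -d)      # simulate a populated /dev/dri for the demo
check_dri "$fake"      # prints "present"
```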
-
This is awesome! Thanks so much for putting it together. I don't suppose you'd be willing to share your dashboard for this for easy replication of the above panels?
-
Pardon my ignorance here, but what is the benefit of this? Is this something that one *SHOULD* be doing? Are there any disadvantages to this?
-
Been trying to get RDP-BOINC running with Rosetta@home since Sunday and every time I try to connect I receive a "this project is temporarily unavailable" message. I can connect to other projects fine, but I want to add the Rosetta one. Anyone seeing the same issue or have any suggestions?
Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array
in Plugin Support
Posted
Booted up my Unraid Server today after very carefully moving the Server to my new place, and my two NVMe drives are not showing up. No other issues with any of the array drives or other unassigned devices that are installed. I see the following in the syslog:
root@Morgoth:~# grep -i nvme /var/log/syslog
Aug 29 19:08:08 Morgoth kernel: nvme nvme0: pci function 0000:09:00.0
Aug 29 19:08:08 Morgoth kernel: nvme nvme1: pci function 0000:0a:00.0
Aug 29 19:08:08 Morgoth kernel: nvme nvme0: Device not ready; aborting initialisation, CSTS=0x0
Aug 29 19:08:08 Morgoth kernel: nvme nvme1: Device not ready; aborting initialisation, CSTS=0x0
Aug 29 19:08:08 Morgoth kernel: nvme nvme0: Removing after probe failure status: -19
Aug 29 19:08:08 Morgoth kernel: nvme nvme1: Removing after probe failure status: -19
Already tried adding nvme_core.default_ps_max_latency_us=0 pcie_aspm=off at the end of the append line of the boot options for the default boot option, but it did not help. lshw shows them both as UNCLAIMED as well, but lspci sees them, or at least their controllers:
09:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
0a:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
I also tried upgrading from 6.11.5 to 6.12.3 and even reseating the drives, but nothing has helped. Are they both just magically dead?
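For the record, the relevant failures can be filtered out of a syslog capture with a single grep. This sketch runs against a small sample built from the lines above; on the server you would grep /var/log/syslog directly:

```shell
#!/bin/sh
# Filter NVMe probe failures out of a syslog capture.
# Demoed on a sample log assembled from the messages in the post.
log=$(mktemp)
cat > "$log" <<'EOF'
Aug 29 19:08:08 Morgoth kernel: nvme nvme0: pci function 0000:09:00.0
Aug 29 19:08:08 Morgoth kernel: nvme nvme0: Device not ready; aborting initialisation, CSTS=0x0
Aug 29 19:08:08 Morgoth kernel: nvme nvme0: Removing after probe failure status: -19
EOF

# Show only the failure lines, not routine probe messages.
grep -iE 'nvme.*(not ready|probe failure)' "$log"

# Count them; the sample has two failure lines.
grep -icE 'nvme.*(not ready|probe failure)' "$log"
```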