Posts posted by ailliano
-
1 hour ago, Kilrah said:
You've probably set /mnt/user/data as being part of the "Appdata Sources" and shouldn't do that. What's your reason for adding it?
I didn't, lol.
-
Is there any way to keep the container from running as root? I have PUID and PGID configured, but the container is still writing media files as root.
-
12 hours ago, EDACerton said:
- --accept-routes: causes Tailscale to use the routes that are being advertised by subnet routers (most frequently used by clients to access servers/devices that can't run Tailscale).
- --accept-dns: causes Tailscale to handle DNS resolution... this allows clients to enter (for example) "unraid" to access the "unraid" device on Tailscale.
Both features are generally more useful for clients than servers. (Unless you need to initiate connections from Unraid to something else on Tailscale, but most use cases rely on a client initiating the connection to Unraid).
As for why disabling those fixed your issue -- some combinations of the "advanced" network settings for Tailscale work on Unraid, and some don't work so well. In this case, I think it's a combination of the Synology subnet router + accept-routes that causes things to go awry.
This is an effect of the way network configuration works in Unraid. Unraid doesn't use tools like NetworkManager or systemd-resolved to manage network configurations. This means that applications like Tailscale have to manually update the network configuration to work -- which can run into problems if multiple things are all trying to do that.
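The split described above can be sketched as two typical invocations. This is only an illustration: build_args is a hypothetical helper, and the 172.18.108.0/24 subnet is the one used elsewhere in this thread.

```shell
# Hypothetical helper mapping the roles described above to flag sets.
build_args() {
  case "$1" in
    # A "server" node advertises routes/exit node; it usually does not
    # need to accept routes or MagicDNS from the rest of the tailnet:
    server) echo "--advertise-routes=172.18.108.0/24 --advertise-exit-node --accept-routes=false --accept-dns=false" ;;
    # A "client" node consumes what subnet routers and MagicDNS offer:
    client) echo "--accept-routes --accept-dns" ;;
  esac
}

# e.g. on the server:  tailscale up $(build_args server)
echo "tailscale up $(build_args server)"
```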
@EDACerton Thank you for the great explanation, it makes a lot of sense.
I have another host in a different location that I want to VPN into and navigate the network from, so I advertise it as an exit node. In this case, however, Tailscale is installed in WSL, which has a separate network (172.27.16.0/20) from the Windows host's 192.168.0.0/24. I tried advertising both routes, but I still have no access to the internet or the 192 network; I'm sure that since WSL adds another layer, I'm not reaching something network-wise.
The setup is: WindowsHost (192.168.0.0/24) <> WSL Tailscale (172.27.16.0/20) <> Internet
command:
sudo tailscale up --advertise-exit-node --advertise-routes=192.168.0.0/24,172.27.16.0/20 --reset
Thank you
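Two things worth checking with a command like the one above: the --advertise-routes list must be comma-separated with no space (a space makes the shell split the argument, silently dropping the second route), and WSL2 needs IPv4 forwarding enabled for subnet routing to work. A sketch, where join_routes is an illustrative helper, not a Tailscale feature:

```shell
# Build a --advertise-routes value: comma-separated, NO spaces.
join_routes() { local IFS=,; echo "$*"; }

ROUTES=$(join_routes 192.168.0.0/24 172.27.16.0/20)
echo "--advertise-routes=$ROUTES"

# Then, inside WSL (forwarding is required for subnet routing):
#   sudo sysctl -w net.ipv4.ip_forward=1
#   sudo tailscale up --advertise-exit-node --advertise-routes=$ROUTES --reset
```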
-
1 minute ago, EDACerton said:
The network config might be a little weird after making a bunch of changes, you can try restarting the daemon with
/etc/rc.d/rc.tailscale restart
If that doesn’t work, I’d just reboot.
Just tried that. NICE, no DNS issues, and I can get in too.
You're the best !
Last question: why did
tailscale up --accept-routes=false --advertise-exit-node --advertise-routes=172.18.108.0/24 --accept-dns=false
work instead of my regular command?
What do --accept-routes=false and --accept-dns=false fix? I'm trying to understand what they do in the first place; I have another Unraid at a different site I need to set up.
-
3 minutes ago, ailliano said:
Do you have any other subnet routers that are advertising the same route?
I have a synology running tailscale too
sudo tailscale up --advertise-exit-node --advertise-routes=172.18.108.0/24 --reset
Can you access Unraid via its Tailscale address?
Yes
If you set accept routes to false, can you get back in locally?
yes I can just tried
root@Loki:~# tailscale up --accept-routes=false --advertise-exit-node --advertise-routes=172.18.108.0/24 --accept-dns=false
Some peers are advertising routes but --accept-routes is false
Okay, seems like everything is good; even Dockers can find updates now. The only issue is my log getting spammed with:
May 10 18:24:31 Loki tailscaled: 2023/05/10 18:22:57 dns: resolver: forward: no upstream resolvers set, returning SERVFAIL
May 10 18:24:31 Loki tailscaled: 2023/05/10 18:22:57 dns: resolver: forward: no upstream resolvers set, returning SERVFAIL
May 10 18:24:31 Loki tailscaled: 2023/05/10 18:22:57 [RATELIMIT] format("dns: resolver: forward: no upstream resolvers set, returning SERVFAIL")
May 10 18:24:31 Loki tailscaled: 2023/05/10 18:23:07 [RATELIMIT] format("dns: resolver: forward: no upstream resolvers set, returning SERVFAIL") (22 dropped)
May 10 18:24:31 Loki tailscaled: 2023/05/10 18:23:07 dns: resolver: forward: no upstream resolvers set, returning SERVFAIL
May 10 18:24:31 Loki tailscaled: 2023/05/10 18:23:07 dns: resolver: forward: no upstream resolvers set, returning SERVFAIL
May 10 18:24:31 Loki tailscaled: 2023/05/10 18:23:07 [RATELIMIT] format("dns: resolver: forward: no upstream resolvers set, returning SERVFAIL")
[the same SERVFAIL / RATELIMIT pair repeats roughly every ten seconds through 18:24:31]
May 10 18:24:31 Loki tailscaled: 2023/05/10 18:24:31 dns: resolver: forward: no upstream resolvers set, returning SERVFAIL
-
1 minute ago, EDACerton said:
A couple questions…
Do you have any other subnet routers that are advertising the same route?
Can you access Unraid via its Tailscale address?
If you set accept routes to false, can you get back in locally?
Do you have any other subnet routers that are advertising the same route?
I have a synology running tailscale too
sudo tailscale up --advertise-exit-node --advertise-routes=172.18.108.0/24 --reset
Can you access Unraid via its Tailscale address?
Yes
If you set accept routes to false, can you get back in locally?
yes I can just tried
root@Loki:~# tailscale up --accept-routes=false --advertise-exit-node --advertise-routes=172.18.108.0/24 --accept-dns=false
Some peers are advertising routes but --accept-routes is false
-
3 minutes ago, EDACerton said:
This seems like a weird issue with Tailscale, my best guess being that it has something to do with all of the following being enabled:
- Accept Routes
- Accept DNS
- Advertised Route
- DNS server is inside the advertised route
Do you need for your server to be able to resolve MagicDNS addresses? If not, you could probably just run
tailscale set --accept-dns=false
and make the problem go away.
Okay, just tried. Now I'm having another issue: when I do
tailscale up --accept-routes --advertise-exit-node --advertise-routes=172.18.108.0/24 --accept-dns=false
I lose connection to Unraid locally; SSH dies right after the command, and all my Dockers become unreachable as well.
-
9 minutes ago, EDACerton said:
The DNS queries aren't being rate limited, the error logging is what's being rate limited.
Please post diagnostics. That gives me more info that I can use to try and help.
Sure!
-
I noticed some of my Dockers with the infamous "not available" problem. I looked at the logs, and it seems that the Tailscale plugin is rate-limiting DNS queries?
I use tailscale up --advertise-exit-node --accept-routes --advertise-routes=mysubnet/24
May 10 17:11:13 Loki tailscaled: 2023/05/10 17:10:03 [RATELIMIT] format("dns udp query: %v")
May 10 17:11:13 Loki tailscaled: 2023/05/10 17:10:12 [RATELIMIT] format("dns udp query: %v") (1 dropped)
May 10 17:11:13 Loki tailscaled: 2023/05/10 17:10:12 dns udp query: context deadline exceeded
May 10 17:11:13 Loki tailscaled: 2023/05/10 17:10:12 dns udp query: context deadline exceeded
May 10 17:11:13 Loki tailscaled: 2023/05/10 17:10:12 [RATELIMIT] format("dns udp query: %v")
May 10 17:11:13 Loki tailscaled: 2023/05/10 17:10:32 [RATELIMIT] format("dns udp query: %v") (5 dropped)
May 10 17:11:13 Loki tailscaled: 2023/05/10 17:10:32 dns udp query: context deadline exceeded
May 10 17:11:13 Loki tailscaled: 2023/05/10 17:10:32 dns udp query: context deadline exceeded
May 10 17:11:13 Loki tailscaled: 2023/05/10 17:10:40 dns udp query: context deadline exceeded
May 10 17:11:13 Loki tailscaled: 2023/05/10 17:10:40 dns udp query: context deadline exceeded
After I do tailscale down and check for updates, all my Dockers are green again.
Any suggestions?
-
2 minutes ago, dlandon said:
Unless you are using AD, leave it blank.
Correct, I usually leave it blank since I don't have a domain.
-
5 minutes ago, dlandon said:
This is the command that is used on Unraid to mount your remote share:
/sbin/mount -t cifs -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=3.1.1,credentials='/tmp/unassigned.devices/credentials_Unraid' '//DS918/Unraid' '/mnt/remotes/DS918_Unraid'
The credentials file contents are:
username=
password=
domain=
Try the command on Unraid, modify the options, etc., and see if one of them is incompatible with the Synology.
I'm wondering if you didn't set a domain when you set up the share.
Thanks for checking. What domain would I use? Is that the WORKGROUP?
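One way to experiment with the domain field is to write a credentials file by hand and pass it to mount. This is a sketch: make_credentials is an illustrative helper, the password is a placeholder, and WORKGROUP is the usual value on a Synology that isn't joined to Active Directory.

```shell
# Illustrative: generate a CIFS credentials file in the same format UD uses.
make_credentials() {  # usage: make_credentials <user> <password> <domain>
  printf 'username=%s\npassword=%s\ndomain=%s\n' "$1" "$2" "$3"
}

make_credentials unraid 'secret' WORKGROUP > /tmp/smb_creds_test
cat /tmp/smb_creds_test

# Then try, e.g.:
#   mount -t cifs -o vers=3.1.1,credentials=/tmp/smb_creds_test \
#     '//DS918/Unraid' '/mnt/remotes/DS918_Unraid'
```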
-
24 minutes ago, dlandon said:
I just realized I've not gotten your diagnostics. You just posted a log snippet.
Please post diagnostics so I can see more of your setup.
Sure!
-
Just now, dlandon said:
You don't use a password on the remote share? Just a username?
It will ask for the password once I submit that.
-
Just now, dlandon said:
Can you post the mount command from the Linux VM so I can compare?
sudo mount -t cifs -o username=unraid //172.18.108.100/Unraid DS918/
Very simple, and it worked right away.
-
22 minutes ago, dlandon said:
Which version of SMB was successful in mounting to the Synology?
UD attempts to mount the remote share with the following sequence of SMB versions:
- None to see if the remote server offers a version it supports.
- then 3.1.1
- then 3.0
- then 2.0
- finally 1.0
The next version is tried if the previous version fails.
It may be that the Synology needs a specific version not included here, and I need to add another version to try.
Maybe NFS would be a better solution to mount the Synology remote share in your case.
Synology shows that the Linux VM is connected with SMB3, but I'm not sure about the specific version. Even then, I have the Synology set to allow SMB2 and above (everything except v1.0), which should be compatible between Unraid and Synology. I can do NFS again, but I was trying to take advantage of multi-channel support.
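For reference, the fallback order described in the quote above can be expressed as the vers= option UD would pass to mount on each attempt. This is just a sketch mirroring that list, not UD's actual code:

```shell
# Emit the mount 'vers=' option for each attempt, in UD's order:
# default negotiation first, then 3.1.1, 3.0, 2.0, and finally 1.0.
ud_vers_sequence() {
  for v in default 3.1.1 3.0 2.0 1.0; do
    if [ "$v" = default ]; then
      echo "(no vers= option: let the server pick)"
    else
      echo "vers=$v"
    fi
  done
}
ud_vers_sequence
```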
-
46 minutes ago, dlandon said:
I don't think so. I think you are confusing Windows browsing of shares with Unraid doing a CIFS mount. They are different.
I have no issues mounting remote shares and have not heard of anyone else having issues like what you are having. I'll keep looking, but I don't see anything.
I think the CIFS mount error is coming from the Synology and not from Unraid. The research I did was of no help with this error code, so I don't have any other suggestions.
I was able to successfully mount the Synology share with CIFS on a Linux VM, and it's using SMB3. Unraid is the only one that can't mount this Synology, and I'm not sure where else to look.
-
1 minute ago, dlandon said:
This is confusing. How does Windows mount the Synology? Navigating the share how?
A Windows host can mount a Synology share without issue; I can either map a drive in Windows Explorer, or go into the network and navigate the share there.
With that said, it means the Synology is not the issue and the ports are open; otherwise Windows wouldn't be able to mount the same folder I'm trying to mount in Unraid, correct?
-
36 minutes ago, dlandon said:
This is your issue:
May 1 17:42:54 Loki kernel: CIFS: VFS: cifs_mount failed w/return code = -512
SMB uses port 445, but CIFS also needs port 139. Since you have PCs that can connect, I suspect there is an issue with port 139. Check the firewall on the Synology and verify both ports (445 and 139) are open in the firewall. If they are, toggle them off and back on again. Then see if Unraid can connect. I did a little research and found this was a solution on another server that was not a Synology NAS, but the same principle might apply.
What changed you may ask? Well it seems that Microsoft and samba are in a continuous cat and mouse chase here. Each update in Windows and samba can bring on new issues as Microsoft and samba up their security. Any update on a server or client can bring new challenges.
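The port check suggested above can be done from the Unraid shell without extra tools, using bash's built-in /dev/tcp. A sketch; NAS_IP is this thread's Synology address, adjust as needed:

```shell
# Probe a TCP port; prints "open" or "closed or filtered".
check_port() {  # usage: check_port <host> <port>
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed or filtered"
  fi
}

NAS_IP=172.18.108.100   # the Synology from this thread
for p in 445 139; do
  printf 'port %s: %s\n' "$p" "$(check_port "$NAS_IP" "$p")"
done
```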
I did the following upon your suggestion @dlandon
- Rebooted the Synology NAS
- Checked ports 139 and 445 (open); the firewall on the Synology is OFF
- Rebooted Unraid
- Windows can mount the Synology without issue, as a drive, not just navigating the share.
I believe the issue is Unraid or the plugin.
-
On 5/2/2023 at 9:11 AM, ailliano said:
Anything I can do in the Synology ? @dlandon
-
41 minutes ago, dlandon said:
Check on the Synology and see if there are settings that control CIFS mounts. I assume the Windows and Mac PCs were only browsing the shares and not mounting them with CIFS.
Anything out of the ordinary? I never changed anything here, and Unraid always worked on previous versions.
-
12 hours ago, dlandon said:
Your server is not responding to a mount command, or it is taking too long. Is port 445 open to your remote server? It looks like you are accessing the remote server through the Internet. Check with your Internet provider to be sure it's not being blocked. Hopefully you are using a VPN to connect the two servers.
Did you let UD search for the remote shares on that server or set it up manually? You should set up the remote share using the IP address for the server, then let UD find the available shares. If you don't get a list of the shares, port 445 is probably not open.
The server is on the same subnet: Unraid at 172.18.108.10 and the Synology at 172.18.108.100. I tried both the automatic search and the IP option; shares load fine with both choices.
-
Hello, I'm unable to mount my Synology 918+ share via SMB 3/2 on my Unraid. I've tried mounting it with the IP only and with the local DNS name, and I changed the password to have no special characters, but it still will not mount. Any help is appreciated.
* NFS works fine.
* The same share works fine on Windows or Mac, so UD is the only one that can't mount it.
May 1 17:42:24 Loki kernel: CIFS: Attempting to mount \\172.18.108.100\Unraid
May 1 17:42:34 Loki unassigned.devices: Warning: shell_exec(/sbin/mount -t 'cifs' -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,credentials='/tmp/unassigned.devices/credentials_Unraid' '//172.18.108.100/Unraid' '/mnt/remotes/172.18.108.100_Unraid' 2>&1) took longer than 10s!
May 1 17:42:34 Loki kernel: CIFS: VFS: \\172.18.108.100 Send error in SessSetup = -512
May 1 17:42:34 Loki kernel: CIFS: VFS: cifs_mount failed w/return code = -512
May 1 17:42:34 Loki unassigned.devices: SMB default protocol mount failed: 'command timed out'.
May 1 17:42:34 Loki unassigned.devices: Mount SMB share '//172.18.108.100/Unraid' using SMB 3.1.1 protocol.
May 1 17:42:34 Loki unassigned.devices: Mount SMB command: /sbin/mount -t cifs -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=3.1.1,credentials='/tmp/unassigned.devices/credentials_Unraid' '//172.18.108.100/Unraid' '/mnt/remotes/172.18.108.100_Unraid'
May 1 17:42:34 Loki kernel: CIFS: Attempting to mount \\172.18.108.100\Unraid
May 1 17:42:44 Loki kernel: CIFS: VFS: \\172.18.108.100 Send error in SessSetup = -512
May 1 17:42:44 Loki kernel: CIFS: VFS: cifs_mount failed w/return code = -512
May 1 17:42:44 Loki unassigned.devices: Warning: shell_exec(/sbin/mount -t cifs -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=3.1.1,credentials='/tmp/unassigned.devices/credentials_Unraid' '//172.18.108.100/Unraid' '/mnt/remotes/172.18.108.100_Unraid' 2>&1) took longer than 10s!
May 1 17:42:44 Loki unassigned.devices: SMB 3.1.1 mount failed: 'command timed out'.
May 1 17:42:44 Loki unassigned.devices: Mount SMB share '//172.18.108.100/Unraid' using SMB 3.0 protocol.
May 1 17:42:44 Loki unassigned.devices: Mount SMB command: /sbin/mount -t cifs -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=3.0,credentials='/tmp/unassigned.devices/credentials_Unraid' '//172.18.108.100/Unraid' '/mnt/remotes/172.18.108.100_Unraid'
May 1 17:42:44 Loki kernel: CIFS: Attempting to mount \\172.18.108.100\Unraid
May 1 17:42:54 Loki kernel: CIFS: VFS: \\172.18.108.100 Send error in SessSetup = -512
May 1 17:42:54 Loki kernel: CIFS: VFS: cifs_mount failed w/return code = -512
May 1 17:42:54 Loki unassigned.devices: Warning: shell_exec(/sbin/mount -t cifs -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=3.0,credentials='/tmp/unassigned.devices/credentials_Unraid' '//172.18.108.100/Unraid' '/mnt/remotes/172.18.108.100_Unraid' 2>&1) took longer than 10s!
May 1 17:42:54 Loki unassigned.devices: SMB 3.0 mount failed: 'command timed out'.
May 1 17:42:54 Loki unassigned.devices: Mount SMB share '//172.18.108.100/Unraid' using SMB 2.0 protocol.
May 1 17:42:54 Loki unassigned.devices: Mount SMB command: /sbin/mount -t 'cifs' -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=2.0,credentials='/tmp/unassigned.devices/credentials_Unraid' '//172.18.108.100/Unraid' '/mnt/remotes/172.18.108.100_Unraid'
May 1 17:42:54 Loki kernel: CIFS: Attempting to mount \\172.18.108.100\Unraid
May 1 17:43:04 Loki kernel: CIFS: VFS: \\172.18.108.100 Send error in SessSetup = -512
May 1 17:43:04 Loki kernel: CIFS: VFS: cifs_mount failed w/return code = -512
May 1 17:43:04 Loki unassigned.devices: Warning: shell_exec(/sbin/mount -t 'cifs' -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=2.0,credentials='/tmp/unassigned.devices/credentials_Unraid' '//172.18.108.100/Unraid' '/mnt/remotes/172.18.108.100_Unraid' 2>&1) took longer than 10s!
May 1 17:43:04 Loki unassigned.devices: SMB 2.0 mount failed: 'command timed out'.
May 1 17:43:04 Loki unassigned.devices: Mount SMB share '//172.18.108.100/Unraid' using SMB 1.0 protocol.
May 1 17:43:04 Loki unassigned.devices: Mount SMB command: /sbin/mount -t 'cifs' -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=1.0,credentials='/tmp/unassigned.devices/credentials_Unraid' '//172.18.108.100/Unraid' '/mnt/remotes/172.18.108.100_Unraid'
May 1 17:43:04 Loki kernel: Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers
May 1 17:43:04 Loki kernel: CIFS: VFS: Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers
May 1 17:43:04 Loki kernel: CIFS: Attempting to mount \\172.18.108.100\Unraid
May 1 17:43:04 Loki kernel: CIFS: VFS: cifs_mount failed w/return code = -95
May 1 17:43:04 Loki unassigned.devices: SMB 1.0 mount failed: 'mount error(95): Operation not supported Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)'
-
[PLUGIN] ZFS Master
in Plugin Support
Posted
Hey all, please help.
I never removed anything; all 3 drives are 2 months old. I tried to bring the device online but got:
zpool online nvme_cache /dev/nvme0n1p1
warning: device '/dev/nvme0n1p1' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present
  pool: nvme_cache
 state: DEGRADED
status: One or more devices has been removed by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state.
action: Online the device using 'zpool online' or replace the device with 'zpool replace'.
  scan: scrub repaired 0B in 00:02:45 with 0 errors on Wed Jul 26 12:19:49 2023
config:
        NAME                STATE     READ WRITE CKSUM
        nvme_cache          DEGRADED     0     0     0
          raidz1-0          DEGRADED     0     0     0
            /dev/nvme0n1p1  REMOVED      0     0     0
            /dev/nvme1n1p1  ONLINE       0     0     0
            /dev/nvme2n1p1  ONLINE       0     0     0
errors: No known data errors
In Unraid they all show green, no errors
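Since zpool online left the device faulted, the replace path that the status output suggests would look roughly like this. This is a sketch only: the pool and device names are the ones from this thread, the destructive command is left commented out, and the block is guarded so it only does anything where zpool exists. Verify the device node is correct (e.g. with zpool status and ls /dev/nvme*) before running a real replace.

```shell
POOL=nvme_cache
DEV=/dev/nvme0n1p1   # the REMOVED member from the status output above

if command -v zpool >/dev/null 2>&1; then
  # Replacing a device "with itself" forces a resilver onto the same slot:
  #   zpool replace "$POOL" "$DEV" "$DEV"
  # then watch resilver progress with:
  zpool status "$POOL"
else
  echo "zpool not available on this machine"
fi
```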