KombatJam

Community Answers

  1. I did a bit of digging around. I was able to add the second gateway by setting the primary NIC to priority (metric) 1 and the other to 5. Works well for now. Will update if things go wrong after a reboot / the update to 6.12.0. Cheers
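For anyone finding this later, the same priority idea can be sketched with route metrics from the console. The interface names and gateway addresses below are made-up examples, not from my setup, and adding routes needs root:

```shell
# Two default routes; the kernel prefers the lower metric and falls
# back to the higher one if the first interface goes down.
ip route add default via 192.168.1.1 dev eth0 metric 1
ip route add default via 192.168.2.1 dev eth2 metric 5
ip route show default    # the lower-metric route is the preferred one
```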
  2. Well shit. There we go. Thank you so much. Solved the issue and I'm back online. Marking as solved. Cheers
  3. What type of HBA are you using? A cheaper Amazon one or a server-grade HBA? Or are you running an LSI card in IT mode? Cheaper HBA cards can have issues when you load them up with drives. I tried a cheap $100 HBA and it threw a ton of errors and dropped disks. Switched to a $40 LSI card from eBay and I haven't had any issues since.
  4. I'm not sure what is going on, but it's super frustrating. I basically got disconnected from the internet on the Unraid server and on any interface using the bridge network; br0 works great. The LAN config is right as far as I can tell. Are there some routes that got broken? I haven't touched anything in there except when it went to shit. I have 4 NICs: 2 onboard set up as failover, plus 1x X520 D2 card (so 2 more NICs). Those have a different subnet going into them, but they aren't assigned to anything; they'll be passed over to a VM. I should also mention this issue happened before I put this dual NIC in there. I just upgraded from 1 to 2 for OPNsense etc. Here are the diagnostics. I hope somebody can help me with this. I can't update Dockers, and WireGuard is broken (but a VM running it and a Docker container with its own IP both work). Thanks! tower-diagnostics-20230314-2043.zip
  5. The issue I'm having is that nothing hosted on the Unraid server IP (applications or WireGuard) is accessible from outside the network. Yet if I change the IP type from the bridge to its own IP on the network, it then works fine. I'm not sure what caused it; I didn't make any changes before it stopped working. After that I tried to simplify the network: I had 2 interfaces in failover, set it to single, and removed the second NIC / subnet just in case. But if I spin up the exact same Unraid config on Proxmox, it starts working fine. The only difference there is that the interface went from physical to virtual, yet it's using the same subnet and everything else as before. Just a different interface on the same network, wire, etc. The rest of the network runs just fine. I'm perplexed, as it just started "not" working.
  6. I have a weird issue I can't seem to figure out, but I hope it's something stupid I'm missing. Basically, it was working fine one night, but I had to change a few things on my router and noticed it stopped working. Thinking it was the router, I factory reset it and also swapped in a spare one I had, just to see. Same issue. Not long ago I was playing around with Proxmox and decided to try running Unraid from it. It worked great: VMs, shares, etc. all worked, and to my surprise the shares started working as well. Not sure what it could be; it was working fine for years. Attached are the diagnostics in hopes that somebody can help me figure this out. tower-diagnostics-20230222-1714.zip
  7. So I have a 4-disk array (3 data) and had one of the drives die. I replaced it and started a rebuild; it failed about 50% in and the disk went offline. The disk was on a molex-to-SATA adapter, so I swapped to a new direct-from-PSU cable and changed from the HBA header to SATA with a brand new cable. I know it's a mess of drives; it's temporary so I can dump its content and rebuild a new array. The drive that's not working is the WD42PURZ. I've tried a handful of things: reboot, changing the SATA connector / controller, a new power cable, unassigned devices, wipe, format XFS. SMART passes, the extended test passes, and I have 20 days of uptime. Here are the diagnostic logs; hopefully it helps. Thanks. Edit: found the info. Solved: https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself (use at own risk)
  8. So I decided to link an outside data store into my shares folder for Plex using the following command: ln -s /mnt/zfs/Plex/ /mnt/user/Plex, which fails with: ln: failed to create symbolic link '/mnt/user/Plex': No space left on device. To be clear, this originally worked, but I had the share misconfigured and had Dockers writing to the array disks instead of the ZFS pool. Unraid basically created a folder in place of the symlink and moved the link inside that folder. Thinking it was an easy fix, I just deleted the entire folder and tried to create the link again, which is when I get the error above. If I try a different name than "Plex", the link works and functions as it should. I tried taking the array offline, rebooting, etc., but it keeps giving me the error. I tried the unlink command, as well as find -xtype l, with no success. Hopefully somebody has seen this and has a fix.
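For what it's worth, a plain directory sitting where the link should go will quietly swallow ln -s rather than be replaced, and on Unraid /mnt/user is a merged view of the individual disks, so a leftover folder can survive on any one of them (checking ls -d /mnt/disk*/Plex may be worth a try). A minimal sketch of the folder-in-the-way behavior, using throwaway temp paths instead of the real mounts:

```shell
workdir=$(mktemp -d)                                # stand-ins for /mnt/zfs and /mnt/user
mkdir -p "$workdir/zfs/Plex" "$workdir/user/Plex"   # user/Plex = leftover real folder

# With the folder in the way, ln drops the link *inside* it instead of replacing it:
ln -s "$workdir/zfs/Plex" "$workdir/user/Plex"
ls -l "$workdir/user/Plex"                 # shows a nested Plex -> .../zfs/Plex link

# Clear the leftover folder first, then the link lands where intended:
rm -rf "$workdir/user/Plex"
ln -s "$workdir/zfs/Plex" "$workdir/user/Plex"
readlink "$workdir/user/Plex"              # now the share path itself is the link
```

The "No space left on device" from the FUSE /mnt/user layer is a different failure mode, but checking each /mnt/diskN for a stale folder with the same name is the usual first step.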
  9. I am running my Unraid server on an X99 Deluxe II with 3x NVMe drives just installed. Please let me know the best course of action. I did find advice to update syslinux.cfg with append initrd=/bzroot pci=nommconf, but this has not resolved the issue. Since installation the log fills up immediately on reboot with PCIe errors. Here are the log entries (this same block repeats over and over, many times per second):
Jan 7 21:39:23 Tower kernel: pcieport 0000:00:02.0: AER: Corrected error received: 0000:03:00.0
Jan 7 21:39:23 Tower kernel: nvme 0000:03:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
Jan 7 21:39:23 Tower kernel: nvme 0000:03:00.0: device [2646:2263] error status/mask=00000001/0000e000
Jan 7 21:39:23 Tower kernel: nvme 0000:03:00.0: [ 0] RxErr
Thanks!
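For corrected Physical-Layer RxErr spam like this, another commonly suggested kernel parameter (a general Linux option, not something confirmed for this exact board) is pcie_aspm=off, which disables PCIe link power management. On Unraid the append line lives in syslinux/syslinux.cfg on the flash drive; a sketch of the edited stanza, where the label text will differ per install:

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot pcie_aspm=off
```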
  10. I've been having weird issues with Plex lately and wanted to refresh all metadata, but when I try to modify anything in the metadata folder, I get "You need permission from Nobody". Docker Safe New Perms doesn't touch this, so my question is: would somebody have a valid folder tree / set of permissions? That would let me change it manually, or a plugin that could do it would work too. I am not sure I trust the New Perms tool, as there are a few warnings about running it against the appdata folder, but unfortunately, that is where my issue lives. Attached is a copy of my diagnostics in case it points to a solution. If you do spot something, I'd love to know where / how you spotted it so I can learn and fix it myself in the future. Thanks! tower-diagnostics-20180703-1404.zip Edit: Looked around and noticed some people had issues if the appdata content was hosted on the cache drive but cloned to the array by the mover script. They say to point the container to /mnt/cache instead of /mnt/user, but I can't see that being the proper answer, no?
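In case it helps anyone landing here, the ownership and modes can be reset by hand from the console. The nobody:users owner and 775/664 modes are Unraid's usual share defaults, but the appdata path below is a guess at a typical layout, so substitute your own before running:

```shell
# Hypothetical appdata location; point this at your actual Plex metadata tree.
appdata=/mnt/user/appdata/plex

chown -R nobody:users "$appdata"             # Unraid's default share owner
find "$appdata" -type d -exec chmod 775 {} + # directories: rwxrwxr-x
find "$appdata" -type f -exec chmod 664 {} + # files: rw-rw-r--
```

Stop the Plex container first, and note that some containers expect to run as a different user, in which case this would need adjusting.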
  11. So the data transfer is all done: throw in a few gigabit connections, 4 USB 3 externals, and about 8TB moved in no time. I rebuilt the array by removing the parity drive (temporarily, until the data was done), enabled turbo write, and transferred all the stuff (going from 30-40MB/s to 150-200MB/s (x2) is a big plus). Once done, I set the cache minimum free space properly this time (second attempt, as I added one too many zeros), enabled the parity check, and it's currently at 80% done. I'd have been transferring data until Saturday afternoon if I hadn't found this out! Thanks again for all your help; it really saved me a headache, as well as getting things right in the early stages vs down the road.
  12. Oh I see, that makes total sense. I basically set it and forgot it. I'll sit down and configure that properly tonight. Thanks for the guidance; it should help me fix this issue.
  13. I hope this is the correct place to put this. So I am looking for a way / script that can automatically disable a share's ability to use the cache during a write operation. Basically, I am migrating from a 5-disk array to Unraid, and when the cache disk fills up (a 250GB SSD vs 9TB of data), writes start throwing "not enough space" errors. I was wondering if there is a script I can run that would disable that share's access to the cache when it hits, let's say, 85%, so writes go straight to disk again, start the mover process, and once usage is back down to ~10%, re-enable the cache. I tend to do this manually when I can, but I still have 3-4 more days of transferring, so if I can script this, it would be a big time saver. Cheers
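Nobody posted a script at the time, but the logic is simple enough to sketch. Everything Unraid-specific here is an assumption to verify on your own box: the per-share config file under /boot/config/shares/ with its shareUseCache= line, the stock mover at /usr/local/sbin/mover, and whether Unraid picks up the edited setting without re-applying it in the GUI. Treat it as a starting point to run from cron or the User Scripts plugin:

```shell
#!/bin/bash
# Cache watchdog sketch: stop a share from caching when the pool is nearly
# full, kick off the mover, and re-enable caching once it has drained.
SHARE_CFG=/boot/config/shares/Media.cfg   # hypothetical share name -- adjust
CACHE_MOUNT=/mnt/cache
HIGH=85                                   # % used: disable cache + start mover
LOW=10                                    # % used: re-enable cache

# Current cache usage as a bare number, e.g. "87".
pct=$(df --output=pcent "$CACHE_MOUNT" | tail -1 | tr -dc '0-9')

if [ "$pct" -ge "$HIGH" ]; then
    sed -i 's/^shareUseCache=.*/shareUseCache="no"/' "$SHARE_CFG"
    /usr/local/sbin/mover &               # drain cached files to the array
elif [ "$pct" -le "$LOW" ]; then
    sed -i 's/^shareUseCache=.*/shareUseCache="yes"/' "$SHARE_CFG"
fi
```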