localhost

Everything posted by localhost

  1. You're getting clean shutdowns/reboots until a container has an error? And you can stop the array with all disks cleanly unmounting in normal operation? I wasn't able to cleanly shut down or stop Unraid after 12.4 until bypassing FUSE; since then I've had no issues with disks failing to unmount. That hasn't resolved the stability issues though; it's just one of the issues since 12.4 sorted.
  2. TBH I can't really help on the crashes. I'm experiencing what many others are reporting on this version of Unraid: unexplained high CPU load, then a slow GUI, usually followed by a crash in the near future. Then it's fine after a reboot until the cycle repeats. I've been asking myself for some time why I run Unraid on a primary server, and really I think my solution is going to be TrueNAS. Unraid serves me well as a VM host, but over the years it has been flaky as a services server, and since I've been exclusively ZFS for a long time now, I think it's time. PS: I just caught it bogging down and rebooted before a crash could come; still, there is nothing interesting in the syslog.
  3. Are your dockers pointing to /mnt/cache/appdata or /mnt/user/appdata? If /user, do you have exclusive shares turned on, and does the appdata share show as exclusive? Either pointing to /cache or turning on exclusive shares fixed the unmounting issue for me, and my issue was not exclusive to Deluge (I've just moved to qbittorrentvpn, as Deluge's performance drops off significantly as the number of active downloads grows). A quick way to check which path is actually in play is sketched below. This hasn't solved my issues entirely; I've had one crash since. However, that's down from 1-2 crashes per day to one in the last couple of days. The syslog server didn't write the log to the designated share, so I've set it to mirror to the flash and am waiting for the next one. This system was 100% stable on 11.5 for a long time and has crashed frequently since updating to 12.4 a few days ago, so I'm expecting a software bug, unless 11.5 was just so much better that it was able to hide a hardware fault. I've read about the issues with macvlan; I use VLANs and custom networks for some dockers. I switched to ipvlan before the last crash, so I'm interested to see what the log shows for the next one. I may have to experiment with the network config further; I've just been avoiding it the last couple of days. It's been a long time since I worked in IT, and after a decade of the stupid hours the motorsport industry requires, I'm lucky if I can remember where I live some days, so I need to do some refreshing first really...
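      The sketch I mean (not verified on every 6.12 build, but my understanding is an exclusive share gets exposed as a plain symlink instead of going through FUSE; the container name here is just an example from my setup):

          # On an exclusive share, /mnt/user/appdata should be a symlink
          # straight to the pool rather than a FUSE path.
          ls -ld /mnt/user/appdata      # an 'l' type with '-> /mnt/cache/appdata' means exclusive
          readlink /mnt/user/appdata

          # Which host path each container is actually bound to:
          docker inspect --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' binhex-delugevpn | grep appdata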
  4. I'm wondering if 6.12 has a FUSE filesystem issue. Enabling exclusive shares in the global share settings has allowed me to cleanly stop the array for the first time since updating from 6.11.5.
  5. This has been my experience too since going from 11.5 to 12.4; I haven't had a clean shutdown since. My drive formats/config are as they were before the update, and I'd been running ZFS without issue before 6.12.x for a long time. The cache is BTRFS, though.
  6. I'm pretty sure it's something in the Docker config, but new settings don't seem to want to stick. A totally stable system now needs a hard reset every few hours since the update. I'd like to create a fresh USB, carry over the disk config & network settings (though that's more a nice-to-have), then recreate the Docker setup in the new directory format. My USB is a mess anyway; I've got old redundant config files from 2017 onwards. So I'd just like someone to confirm which are the current relevant config files I'll need to copy over, and whether I can bring over my docker templates.
  7. Well OK, after it not working for ages (I did remove and reinstall it yesterday too, with no change), I log in today and it's back. No reboot in between. Oh well, that's IT.
  8. I'm having the no-status (---) issue now. The settings screen isn't showing anything. Do you plan any more updates? I just had to do a cold reboot, hence the error, but it hadn't been working for a while before that. Thanks. zpool.txt
  9. Don't worry, I fixed it. It looks like it was the update.
  10. I have an issue with VLANs which has just appeared. I hope I'm OK to just link to my reddit thread here: Thanks
  11. I have had connection issues on the desktop app lately to my usual (PIA) servers. Switching servers fixes it, but I assume it's some teething issues.
  12. Hi, I'm having file permission issues on files/dirs created by delugeVPN. The UMASK is set to 000 and files are created under the nobody user; however, permissions are always incorrect and I am unable to access my files. I created a thread about my issue here: And was advised this would be the best place to ask. Has anybody else experienced this? I have tried searching the thread but no luck yet. Currently all my other dockers are unable to work with delugeVPN-created dirs (how I've been checking the umask is sketched below). Thanks
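      To help narrow it down, this is roughly how I've been checking whether UMASK=000 is actually taking effect inside the container (the container name is just what mine is called; adjust to suit):

          # Is the umask really 000 inside the container?
          docker exec binhex-delugevpn sh -c 'umask'     # expect 0000

          # umask bits are masked off the base permissions (666 for files,
          # 777 for dirs), so with 000 a new file should come out as 666:
          docker exec binhex-delugevpn sh -c 'umask 000; touch /tmp/t; stat -c "%a" /tmp/t'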
  13. Hi all, for as long as I've run an Unraid box I've had weird permissions issues. I'm due to replace my primary NAS with a much more powerful machine and have been trialling other options (maybe Proxmox, but for my use I'm leaning towards NethServer) unless I can get this sorted now. I'll need another licence upgrade for the new machine, so it's time to shit or get off the bog. My issues have always been related to docker-created directories/files. I used to run AD but have since shrunk my network and gone back to a workgroup; the issues have been the same in both cases. Here are my docker settings vs the actual permissions (the comparison commands I've been using are below). I haven't run chmod for a few days, and as you can see the new dirs are missing the expected permissions for umask 000. On top of that, I am connecting to the shares with a user that has access to all shares. This seems to be working correctly, as I have a couple of private shares that I would otherwise not be able to access; however, even this user can't access those Nov 23 & 24 dirs (from Windows). If someone could explain my error I would be eternally grateful. I like the Unraid platform in general, but I find this issue to be unique to it and just won't deal with it anymore.
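      The comparison I mean, roughly (container name and paths are just examples from my setup):

          # What UID/GID does the container write as?
          docker exec binhex-delugevpn id

          # What actually landed on disk? %u:%g is the numeric owner, %a the mode.
          stat -c '%a %u:%g %U:%G %n' '/mnt/user/media/nov 23' '/mnt/user/media/nov 24'

          # With nobody:users ownership, my understanding is the group needs
          # rwx on dirs for other containers and SMB users to get in.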
  14. For my primary storage/docker server 'Sector001'
  15. I've been running Pi-hole on my Pi for a while now and it's great. I decided, since I have an Unraid server, that I really should just be running it in a docker. However, I can't start it; it looks to have a conflict on port 53, but looking at netstat I don't see any open connections using the port. Does Unraid run dnsmasq or something on that port? I can't seem to find any options for a DNS server (what I checked is below). Thanks
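      For completeness, what I ran. Note that plain netstat without -l only lists established connections, not listeners, which may be why I saw nothing:

          # Listening sockets with the owning process (run as root for the PID column):
          netstat -tulpn | grep ':53'
          ss -tulpn | grep ':53'

          # Anything holding port 53 open, TCP or UDP:
          lsof -i :53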
  16. It's all rebuilt and back to normal now; I'll keep an eye on it. Thanks
  17. Thanks for the suggestion. It's a brand new disk and cable now, so I should be good in the short term, but I'll consider that for sure.
  18. Dayum, we were on the same page just then. After making the post I went out to my shed and grabbed some SATA cables as a last resort. I've swapped the cable for parity 2 and it's now rebuilding. I never understood this, even almost a decade ago when I used to work in IT: how the hell does a SATA cable just fail? It's obviously not the first time I've seen it, but I'm always left in disbelief. It's a low-power signal cable that hasn't been touched, and it just fails... Anyway, I'll see how it goes, and if I have any more problems with the cache I'll do the same. I refuse to believe two cables have failed simultaneously... Thanks for your time.
  19. Hi UNRAID community, I've been having some problems for the last week or so that I've been unable to resolve. To try and avoid a wall of text I'll add some bullet points for things as they happened:
      • Parity 2 went offline showing 2000+ errors. I could not spin the disk up, and a reboot would make it disappear entirely. I assumed disk failure.
      • Swapped a new disk in, rebuilt successfully - all seemed resolved.
      • Tested the 'failed' disk - it passed all extended tests.
      • Transmission stopped being able to write to the cache (BTRFS SSD); errors went from I/O error to read-only. I assumed FS corruption and decided to switch the cache to XFS. Reformatted & it worked for a bit.
      • Parity 2 went offline - exactly as before.
      • Swapped the Parity 2 disk again to test the new disk - after a reboot the cache had no filesystem and needed formatting.
      Now when I try to rebuild the array it always fails after a few minutes on parity 2, and the cache keeps going unreadable. I haven't pulled the cache drive for tests yet, but it passes SMART. I have also run a couple of passes of memtest just in case, which passed. Any help would be much appreciated. (The SMART checks I ran are below.)
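      The SMART checks were along these lines (the device name is an example; yours will differ):

          # Kick off an extended self-test, then read the results once it finishes:
          smartctl -t long /dev/sdb
          smartctl -a /dev/sdb     # check the self-test log and error counters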
  20. That's right, it's the linuxserver release. I'll follow the link now, thanks.
  21. I have had a look at the container settings and this is how it currently is:
  22. I'm struggling to get Transmission to write files/folders I can actually access. I have been looking around for a solution, including on this forum; the only thing I saw which seemed relevant was to adjust the umask option in Transmission's settings.json file. I have done this and set it to 2 as per someone's suggestion, but this hasn't changed anything for me, so currently I have to open a terminal and use chmod to change the permissions before I can access any of the files. I don't really understand how umask translates to permissions either (my current understanding is sketched below). Any insight on this would be much appreciated. Thanks
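      For anyone following along, the umask arithmetic as I understand it, plus a gotcha I suspect I hit: I believe the daemon rewrites settings.json on shutdown, so edits made while it's running get overwritten (treat that as my assumption). Something like:

          # Stop the container first, or settings.json edits may be clobbered on exit.
          docker stop transmission                 # container name is an example

          # "umask": 2 in settings.json is octal 002. The umask bits are removed
          # from the base permissions (666 for files, 777 for dirs):
          #   files: 666 & ~002 = 664 (rw-rw-r--)
          #   dirs:  777 & ~002 = 775 (rwxrwxr-x)
          grep umask /mnt/user/appdata/transmission/settings.json   # path is an example

          docker start transmission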
  23. Oh OK, I'll do that then. Thanks for the advice. I'm not too concerned about losing, say, 24 hours' worth of the appdata share; I just didn't want to have to reconfigure everything. Now if I can just get Transmission to write my downloads with permissions I can access, I'll be all green lights again. Thank you
  24. I was under the impression the cache is not protected by the parity, which was why I didn't want important files on it. Am I wrong on this?
  25. Hi all, I've been cleaning house a bit on my server this week. I replaced an SSD which was in the array with an HDD and added a second parity; all went smoothly. As part of this clean-up, one thing that's been bugging me for ages is some files seemingly stuck on the cache. I installed the cache about a year ago and was a bit enthusiastic when adding it to shares: I added it to appdata. I realised later I didn't want that data on there, set the 'Use cache' option to No, then left it assuming the mover would move it all back later. I checked today and can see there are two shares' data on the cache that I don't want there: appdata and system. I never turned the cache on for system, though. I assume using Dolphin to move the files back may break some dockers etc., so what is the proper procedure here to get these files back on the array (my working understanding is sketched below)? TIA
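      As I understand the mover's behaviour - it only moves cache to array for shares set to 'Yes', and it skips open files - the procedure would be roughly:

          # 1. Stop the Docker and VM services (Settings) so nothing holds files open.
          # 2. Set each affected share's 'Use cache' to Yes, so mover moves cache -> array.
          # 3. Run the mover, then set 'Use cache' back to No once it's done:
          mover        # or press Move on the Main tab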