xaositek

Everything posted by xaositek

  1. Just a heads up - this isn't the first time we've been bitten by this issue. I personally always pop open the GitHub page and check what the commits were to make sure it isn't this type of revert. GitHub: https://github.com/linuxserver/docker-unifi-controller You can see it happen here, when it reverted from 6.2 to 6.1: https://github.com/linuxserver/docker-unifi-controller/commit/6eff0f3d19534437a2b3a823d47e94e8487f93cf
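     A minimal sketch of that check from the command line (assumes git is installed; the clone depth and commit count are arbitrary) - skim the latest commits before pulling an update and watch for a version going backwards:

        # Grab just the recent history of the image repo
        git clone --depth 20 https://github.com/linuxserver/docker-unifi-controller.git
        cd docker-unifi-controller
        # One line per commit - a version bump that goes backwards is a revert
        git log --oneline -20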
  2. Downgraded to unRAID 6.9.1 and all drives immediately spun down, after they wouldn't spin down all day under 6.9.2.
        Apr 8 20:43:00 cobblednas emhttpd: spinning down /dev/sdi
        Apr 8 20:43:00 cobblednas SAS Assist v0.85: Spinning down device /dev/sdi
        Apr 8 20:43:03 cobblednas emhttpd: spinning down /dev/sdf
        Apr 8 20:43:03 cobblednas SAS Assist v0.85: Spinning down device /dev/sdf
        Apr 8 20:43:03 cobblednas emhttpd: spinning down /dev/sdd
        Apr 8 20:43:03 cobblednas SAS Assist v0.85: Spinning down device /dev/sdd
        Apr 8 20:43:03 cobblednas emhttpd: spinn
  3. Performed the same check as SimonF for reference.
        root@cobblednas:~# date && cat /sys/block/sdf/sdf1/stat
        Thu Apr 8 13:05:17 CDT 2021
        352 6073 57504 1456 549 6102 53208 7431 0 6344 8887 0 0 0 0 0 0
        root@cobblednas:~# date && cat /sys/block/sdf/sdf1/stat
        Thu Apr 8 13:05:34 CDT 2021
        352 6073 57504 1456 549 6102 53208 7431 0 6344 8887 0 0 0 0 0 0
        root@cobblednas:~# dat
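     The same idea as a small loop, for anyone who wants to check idleness without eyeballing the numbers (a minimal sketch assuming standard shell tools and that /dev/sdf is the drive under test - identical counters across the interval mean no I/O touched the disk):

        # Sample the partition's I/O counters twice, 30 seconds apart
        before=$(cat /sys/block/sdf/sdf1/stat)
        sleep 30
        after=$(cat /sys/block/sdf/sdf1/stat)
        [ "$before" = "$after" ] && echo "idle: no I/O in the last 30s" || echo "busy"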
  4. Completely fair point, @SimonF. From what I could observe across my two unRAID servers, the issue was limited to the server with SAS drives. My other unRAID box, which has only SATA drives, is working as expected.
  5. I found this on my system as well, across all my SAS drives. I've sent diagnostic info over to Doron, who maintains the "Spin Down SAS Drives" plug-in.
  6. Looks like something in unRAID 6.9.2 is resulting in drives not spinning down. Even trying to spin them down manually doesn't help. Here's the Tools > System Log output:
        Apr 8 08:12:12 cobblednas emhttpd: spinning down /dev/sde
        Apr 8 08:12:12 cobblednas emhttpd: spinning down /dev/sdi
        Apr 8 08:12:12 cobblednas SAS Assist v0.85: Spinning down device /dev/sde
        Apr 8 08:12:12 cobblednas SAS Assist v0.85: Spinning down device /dev/sdi
        Apr 8 08:12:33 cobblednas emhttpd: read SMART /dev/sde
        Apr 8 08:12:33 cobblednas emhttpd: read SMART /dev/sdi
        Apr 8 08:15:15 co
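     For anyone wanting to test outside the GUI, a minimal sketch of a manual spin-down attempt on a SAS drive (assumes the sg3_utils tools are available; /dev/sde is just the example device from the log above) - this issues the SCSI START STOP UNIT command directly:

        # Ask the drive to stop its spindle...
        sg_start --stop /dev/sde
        # ...and spin it back up afterwards
        sg_start --start /dev/sde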
  7. Take the glory!! It's awesome work and thank you to @ljm42 for calling it out! I've been using this daily since I stood up my second unRAID server and the craftsmanship is great. I updated and was able to reissue keys for my four devices in less than 10 minutes.
  8. Noticed I can view messages but can no longer delete them. Here are the log files (logs are from a recreation, so timestamps are a bit off) and a screenshot.
        ==> mail.log <==
        Mar 19 11:10:22 ce466793ae2b dovecot: imap-login: Login: user=<xaositek@mymailserver.info>, method=PLAIN, rip=192.168.55.11, lip=127.0.0.1, mpid=14851, TLS, session=<LugE9OW9toLAqDcL>
        Mar 19 11:10:22 ce466793ae2b dovecot: imap(xaositek@mymailserver.info)<14851><LugE9OW9toLAqDcL>: Error: Mailbox INBOX: link(/data/domains/mymailserver.info/xaositek/Maildir/cur/1615003435.M3
  9. unRAID is Slackware-based, not FreeBSD-based. Anyone feel free to correct me if I'm wrong, but I don't believe this changes anything for us.
  10. Mistakenly attempted to add a new peer yesterday and the whole thing came crumbling down. I removed the entire /boot/config/wireguard folder, uninstalled the plug-in, and tried again. Now when I try to create my initial tunnel, the page just refreshes but no settings are saved. The /boot/config/wireguard folder is not created either. What (and where) are the logs that would be relevant to troubleshooting this issue? I can post them pretty quickly. Edit: Figured out that some remaining iptables entries in the FORWARD chain, and also the WIREGUARD chain altogether, wa
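     For anyone hitting the same wall, a minimal sketch of the kind of cleanup that resolved it (assumes iptables; the rule spec shown is hypothetical - check your own iptables -S output for the actual leftovers before deleting anything):

        # List every rule so stale WireGuard entries are visible
        iptables -S
        # Delete a leftover FORWARD rule by its exact spec (example only)
        iptables -D FORWARD -i wg0 -j ACCEPT
        # Flush and remove the orphaned WIREGUARD chain
        iptables -F WIREGUARD
        iptables -X WIREGUARD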
  11. I saw someone else comment on the new Git directory on /boot/ ... If we uninstalled the plug-in, can we remove /boot/.git/ and /boot/.gitattributes? Are there any other new files created that we should consider removing? Shouldn't the plug-in clean up after itself?
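     Assuming the plug-in is already uninstalled and nothing else uses that repo (worth confirming before deleting anything on the flash drive), the cleanup itself would just be:

        # Remove the leftover git metadata from the flash drive
        rm -rf /boot/.git
        rm -f /boot/.gitattributes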
  12. Hmm, well, I tried this and didn't really care for the need to expose port forwarding... Now I'm stuck with the cryptic Unraid.net DNS hostname even after I've signed out and removed the plugin. How can I go back to the local hostname? That's sufficient for me. Edit: Figured it out - I could go into Management Access and set Use SSL/TLS to No, and that put local DNS names back into effect.
  13. Upgraded my Production and Backup servers from 6.9.0-RC2 - silky smooth and working well! Great job @limetech!!! You guys rock!
  14. Every time someone asks, they push it back 2 weeks 🤪
  15. Confirmed: I swapped from v5.6 to v5.7 and it worked. To get "latest" to work I did have to remove and re-add it, but all is working now!
  16. Thank you! This fixed it up immediately.
  17. Hey @spants - just wanted to bring attention here in case a rollback is necessary. Can we do anything to assist in troubleshooting?
  18. Thank you for confirming it wasn't just my bad luck or a misconfig.
  19. Anyone else have their PiHole break with the update that came out today? It was humming along great and now it's broken. I even did a complete removal: deleted the template, removed all AppData, and restored from a Teleporter backup. Ideas? I'm not sure where the logs get stored, but along the header of the homepage I see "Lost Connection to API" and all DNS resolution has stopped.
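     In case it helps anyone else debug, a minimal sketch of where I'd look first (assumes the container is named pihole - adjust to your template's name; the in-container paths are Pi-hole's standard log locations):

        # Container-level output
        docker logs --tail 100 pihole
        # Pi-hole's own logs inside the container
        docker exec pihole tail -n 50 /var/log/pihole.log
        docker exec pihole tail -n 50 /var/log/pihole-FTL.log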
  20. I am fairly new to the unRAID space (started at v6.8.3) and was an adopter of v6.9-beta25. I don’t think it is at all stagnant; in fact, for most bugs you find, there is community support to help you resolve the issue pretty quickly. LimeTech is doing a great job and is still very much a front runner in terms of stability and features. The question I have is: “Is there a feature you’re missing that emerges in 6.9, or do you feel like the project is suddenly going to collapse?” I’ve been running 6.9-rc2 on two servers for many weeks now and they hum righ
  21. @saarg - I was curious: it looks like Alpine 3.13.1 is out at this point. I know LinuxServer.IO tends to rebase everything across the board in a rolling fashion, but I apologize, I don't know where to look for the status on that, or whether it has even begun for any packages. Thanks in advance!
  22. I just checked my updates page and RC2 is still the current version at the moment.
  23. YOU ARE AMAZING!!! I had it set to two slots because I didn't think it would actually matter. That fixed it immediately. Thank you Thank you Thank you!!!!
  24. I have two unRAID servers. The first was built on 6.8.3, and its Cache drive is an NVMe drive running the XFS filesystem. The second server, built on 6.9-RC2, has a single 10k RPM 300 GB hard drive which I am using as a Cache drive. But now unRAID is forcing me to use BTRFS, which seems to be highly unstable, and I would prefer XFS. I've tried to manually partition and format the drive, but unRAID is having none of it. Any ideas on how to put an HDD in a Cache pool but keep it XFS formatted?
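     For anyone curious, a minimal sketch of what a manual attempt might look like (assumes /dev/sdX is the 300 GB drive and that it isn't assigned to the array - unRAID expects its own partition layout, so it may still reject the result; these are just the generic Linux steps):

        # Wipe old signatures, create one partition, format it XFS
        wipefs -a /dev/sdX
        parted -s /dev/sdX mklabel gpt mkpart primary 1MiB 100%
        mkfs.xfs -f /dev/sdX1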
  25. I have two certs and four separate proxy hosts defined, but today I noticed that I started getting the following error when I try to create a new cert. I have redacted my email address and the domain used.
        Error: Command failed: /usr/bin/certbot certonly --non-interactive --config "/etc/letsencrypt.ini" --cert-name "npm-8" --agree-tos --email "MYEMAIL@MAC.COM" --preferred-challenges "dns,http" --domains "SUB.DOMAIN.COM"
        Traceback (most recent call last):
          File "/usr/bin/certbot", line 11, in <module>
            load_entry_point('certbot==1.4.0', 'console_scripts', 'cert