brandon3055

Members
  • Posts: 55
  • Joined
  • Last visited
  • Days Won: 1

brandon3055 last won the day on February 2, 2023

brandon3055 had the most liked content!


brandon3055's Achievements

Rookie (2/14)

Reputation: 21

  1. I think the sdc errors were a bit of a red herring. I have done some more investigation, and it looks like the issue is the last docker container I added. It seems to have a memory leak or something, because it slowly consumes more and more RAM until the system eventually locks up. The thing that threw me off is Telegraf: it makes it look like there is around a gig of RAM free, but apparently that's not the case.
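     A quick way to confirm which container is slowly eating RAM (a minimal sketch; "suspect-container" is a hypothetical name) is to sample Docker's per-container stats from the console:
        # one-shot snapshot of per-container memory use
        docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
        # re-check a single container every minute; a steadily climbing figure points at a leak
        watch -n 60 'docker stats --no-stream suspect-container'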
  2. The boot drive is definitely sdc. I currently have 2 other flash drives installed, which are using sda and sdb. One of those is my dummy array (I'm using a raidz2 pool as my main storage).
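     For anyone wanting to double-check which device letter the boot USB ended up on, a minimal sketch (output columns chosen just for readability):
        lsblk -o NAME,TRAN,SIZE,MOUNTPOINT,LABEL
        # the unraid boot USB is the flash drive labelled UNRAID and mounted at /boot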
  3. Hi guys, just wondering if someone can confirm my suspicion here. I recently built a new unraid NAS, and it's been running great for a few weeks now. At least until two nights ago, when the system randomly became unresponsive. I could still access mounted shares just fine, but the web UI, SSH and my docker apps were all unresponsive. In the end, I had to do a hard reset.
     This prompted me to finally get remote syslog up and running, as well as Telegraf. So when it happened again last night, I actually got some useful information. This is what the syslog shows immediately before the lockup (sdc is my boot USB). The Telegraf data seems to support this; the last thing it shows is a sharp spike in iowait.
     I initially assumed this was caused by a docker container I installed a few hours before the first lockup, but this data seems to point squarely at my boot USB. The USB is a Cruzer Fit 16GB which worked flawlessly in my previous unraid NAS for several years. The first thing I did the first time this happened was create a flash backup, so worst case I can recover. I'm just looking for a second opinion. I have attached my syslog and diagnostics from immediately after the last lockup. data-diagnostics-20230521-2249.zip syslog-10.0.0.133.log
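     For anyone repeating the check, something along these lines pulls the USB-related errors out of the remote syslog (a minimal sketch; the log path simply matches the attached file name and will differ on your syslog server):
        grep -iE 'sdc|i/o error|blk_update_request' syslog-10.0.0.133.log | tail -n 50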
  4. I am running Z2. For my use case, any more than that would be excessive. These are WD Red Plus drives, so they should be pretty reliable, and all important data on this server will be backed up to a secondary server made from my old NAS.
  5. This was also a major contributing factor when I originally switched to unraid. But now, with a more stable income and the ever-decreasing cost of drives, it made sense for me to build a new array from scratch. It should cover my needs for a few years, and by the time I need to upgrade again I will most likely be ready to retire those drives to my backup server and upgrade to a set of new, higher-capacity drives.
  6. I have two reasons for this. First is reliability. If that USB drive randomly fails, the entire system fails. Granted, this is already the case with the boot USB, but unraid does everything possible to minimise load on that USB, and if it fails, the system will continue running until the next reboot. I don't know what happens if the only array disk randomly decides to fail. The other reason is there does not seem to be a way to turn off the warning icons that show up when the array is unprotected. They get annoying after a while, especially the favicon.
  7. I have been running zfs for a couple of weeks now. It's nice! The thing I missed most when I switched to unraid was the write speed. Granted, it's not often you need it, but on those rare occasions when you need to transfer multiple hundreds of gigabytes... it's so nice! Not to mention snapshots and self-healing! It's just rather annoying that I need the parasitic USB drives in the main array just so I can turn it on. I really hope we get an option to use a ZFS pool as the main array. Those drives serve absolutely no purpose; they just consume two slots, which I would have much rather used on additional drives in my zfs pool.
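     For context, the snapshot and self-healing features mentioned above come down to a couple of one-liners; a minimal sketch, assuming a pool named tank and a dataset named media (both names hypothetical):
        zfs snapshot tank/media@before-cleanup   # instant point-in-time snapshot
        zpool scrub tank                         # read and verify every block, repairing from redundancy
        zpool status tank                        # shows scrub progress and any repaired or unrecoverable errors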
  8. There may be a way to do this in a "per user" way, but from what I understand the way these rules are set up is per IP, as in the IP address of the client you wish to give access to. So the first step is to make sure all your clients have static IP addresses on your local network. Static IPs can be configured in the client device's network settings, or, better, via your router if your router supports assigning static IPs to connected devices. Then your NFS rule for your unraid share would look something like this:
     192.168.1.128(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash)
     with the IP at the start being the IP of the connected client. (Note that appending a mask like /24 would match the whole 192.168.1.x subnet rather than just that one client; leave the mask off, or use /32, to restrict the rule to a single host.) If you want to specify multiple clients, then simply separate them with a space, e.g.
     192.168.1.128(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash) 192.168.1.125(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash) 192.168.1.127(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash)
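     A quick way to confirm the rule took effect from the client side (a minimal sketch; the server IP, share name and mount point are hypothetical):
        showmount -e 192.168.1.10                                        # list exports and which hosts may mount them
        sudo mount -t nfs 192.168.1.10:/mnt/user/MyShare /mnt/test       # test mount
        touch /mnt/test/nfs-write-test && rm /mnt/test/nfs-write-test    # verify write access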
  9. Thank you. That explains my confusion. That post says "NFS rules on the client", but unless the terminology is reversed with file sharing, the client is my Arch system, not the unraid server.
  10. So I just discovered the hard way that my backup system has not been working since I updated around the 4th. At the same time, I discovered that the system I have in place to alert me that my backups aren't working does not cover a situation where root has no write access to the share.
     My setup is as follows. My main desktop is running Arch, and I have all of my shares mounted via fstab using the following:
     10.0.0.133:/mnt/user/Backup /mnt/share/Backup nfs defaults,nolock,soft 0 0
     That has worked perfectly fine for years, but it seems since the update, the root user on Arch no longer has write access to files or folders owned by my normal non-root user. Since the update, any files or folders created by root on Arch are created as 65534:65534.
     I have already done some investigating and found the following post, which seems to identify the issue. But I must be missing something, because 'no_root_squash' is apparently not a valid NFS option, at least not via fstab. Furthermore, supposedly the mount options used by UD resolve this issue, but I have two unraid systems now running 6.11.5, and when I mount my Backup share on my second unraid system via UD I see the same issues when modifying files from the client system.
     Given how long 6.10 has been out, I'm hoping someone has figured out a simple solution to this. Any help would be appreciated.
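     Worth noting for anyone who hits the same wall: 65534:65534 is the classic nobody/nogroup that root gets squashed to, and no_root_squash is a server-side export option (see exports(5)), not a client mount option, which is why fstab rejects it. A minimal sketch of what the rule would look like on the server, assuming the share's NFS rule field accepts standard exports syntax (the client IP is hypothetical):
        # applied on the server (exports-style rule), not in the client's fstab
        /mnt/user/Backup  10.0.0.50(sec=sys,rw,no_root_squash,anonuid=99,anongid=100)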
  11. TBH, I do feel this thread has evolved far beyond its original purpose. It has turned into a fun and interesting back-and-forth community discussion about 6.12, and I think the occasional little fun code only adds to that. Yes, the codes did get a little out of hand at one point, but in my opinion, the only thing that really detracts from this thread is the haters who complain about the codes.
  12. You know what, I agree. Too many codes... The obvious solution is to provide a harder code that will take longer to crack. That should slow things down a little! Good luck
  13. Yeah, in the end I just disabled the cache on all shares, rsync'd everything to my backup share, removed the cache pool, and then restored everything to the appropriate shares.
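     For anyone doing the same dance, the copy-off/copy-back steps look roughly like this (a minimal sketch; the destination path is hypothetical and depends on your share layout):
        rsync -avh --progress /mnt/cache/ /mnt/user/Backup/cache-evacuation/   # copy everything off the pool
        # remove/rebuild the cache pool in the web UI, then restore
        rsync -avh --progress /mnt/user/Backup/cache-evacuation/ /mnt/cache/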
  14. Going to have to continue this in the morning, but I removed the bad drive and the cache is now readable. However, it looks like it has gone read-only as a result of having no space left, so the mover is unable to do its job. At the very least I can access the file now and can manually copy everything off if I have to. evergreen-diagnostics-20230209-0027.zip
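     If it helps anyone else diagnosing a read-only pool: multi-device cache pools are btrfs, and a full or failing pool commonly flips itself to read-only. A minimal sketch of the checks, assuming the pool is mounted at /mnt/cache:
        btrfs filesystem usage /mnt/cache    # allocated vs free space, including unallocated
        btrfs device stats /mnt/cache        # per-device error counters
        dmesg | grep -i btrfs | tail -n 30   # recent kernel messages explaining the read-only flip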
  15. I already checked and replaced the cables to both SSDs and it had no effect. Power connections also look good, but I don't have a free SATA power cable to rule it out completely. The first report attached to this post was generated while the server was attempting to start (via SSH). The second was generated when the GUI finally loaded. evergreen-diagnostics-20230208-2225.zip evergreen-diagnostics-20230208-2235.zip
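     For reference, grabbing diagnostics over SSH while the GUI is down is a one-liner (a minimal sketch; /boot/logs is the usual output location but may vary by version):
        diagnostics                # writes a timestamped diagnostics zip
        ls -lt /boot/logs | head   # confirm the newest zip before copying it off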