Everything posted by brandon3055

  1. I think the sdc errors were a bit of a red herring. I have done some more investigation, and it looks like the issue is the last Docker container I added. It seems to have a memory leak or something, because it slowly consumes more and more RAM until the system eventually locks up. The thing that threw me off is Telegraf: it looks like there is around a gig of RAM free, but apparently that's not the case.
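     For anyone hitting something similar, a quick way to narrow down which container is leaking is to watch per-container memory directly. This is just standard Docker tooling rather than anything Unraid-specific:

         docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"

     Running that every so often over a few hours makes a slow leak fairly obvious.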
  2. The boot drive is definitely sdc. I currently have two other flash drives installed, which are using sda and sdb. One of those is my dummy array (I'm using a raidz2 pool as my main storage).
  3. Hi guys, just wondering if someone can confirm my suspicion here. I recently built a new Unraid NAS, and it's been running great for a few weeks now. At least until two nights ago, when the system randomly became unresponsive. I could still access mounted shares just fine, but the web UI, SSH and my Docker apps were all unresponsive. In the end, I had to do a hard reset. This prompted me to finally get remote syslog up and running, as well as Telegraf. So when it happened again last night, I actually got some useful information. This is what the syslog shows immediately before the lockup (sdc is my boot USB). The Telegraf data seems to support this: the last thing it shows is a sharp spike in iowait. I initially assumed this was caused by a Docker container I installed a few hours before the first lockup, but this data seems to point squarely at my boot USB. The USB is a Cruzer Fit 16GB which worked flawlessly in my previous Unraid NAS for several years. The first thing I did the first time this happened was create a flash backup, so worst case I can recover. I'm just looking for a second opinion. I have attached my syslog and diagnostics from immediately after the last lockup. data-diagnostics-20230521-2249.zip syslog-10.0.0.133.log
  4. I am running Z2. For my use case, any more than that would be excessive. These are WD Red Plus drives, so they should be pretty reliable. And all important data on this server will be backed up to a secondary server made from my old NAS.
  5. This was also a major contributing factor when I originally switched to Unraid. But now with a more stable income and the ever-decreasing cost of drives, it made sense for me to build a new array from scratch. It should cover my needs for a few years, and by the time I need to upgrade again I will most likely be ready to retire those drives to my backup server and upgrade to a set of new, higher-capacity drives.
  6. I have two reasons for this. First is reliability. If that USB drive randomly fails, the entire system fails. Granted, this is already the case with the boot USB, but Unraid does everything possible to minimise load on that USB, and if it fails, the system will continue running until the next reboot. I don't know what happens if the only array disk randomly decides to fail. The other reason is there does not seem to be a way to turn off the warning icons that show up when the array is unprotected. They get annoying after a while, especially the favicon.
  7. I have been running ZFS for a couple of weeks now. It's nice! The thing I missed most when I switched to Unraid was the write speed. Granted, it's not often you need it, but on those rare occasions when you need to transfer multiple hundreds of gigabytes... it's so nice! Not to mention snapshots and self-healing! It's just rather annoying that I need the parasitic USB drives in the main array just so I can turn it on. I really hope we get an option to use a ZFS pool as the main array. Those drives serve absolutely no purpose; they just consume two slots, which I would have much rather used on additional drives in my ZFS pool.
  8. There may be a way to do this in a "per user" way, but from what I understand these rules are set up per IP, as in the IP address of the client you wish to give access to. So the first step is to make sure all your clients have static IP addresses on your local network. Static IPs can be configured in the client device's network settings, or better yet via your router if it supports assigning static IPs to connected devices. Then your NFS rule for your Unraid share would look something like this: 192.168.1.128(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash) with the IP at the start being the IP of the connected client. (Note that appending /24 to the address would open the rule to the whole 192.168.1.0/24 subnet rather than a single client.) If you want to specify multiple clients, simply separate them with a space, e.g. 192.168.1.128(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash) 192.168.1.125(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash) 192.168.1.127(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash)
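     Once a rule is saved, it's worth sanity-checking it from the client before touching fstab. A minimal check, using my own server IP and Backup share purely as an example (showmount comes with nfs-utils on most distros):

         showmount -e 10.0.0.133
         sudo mkdir -p /mnt/share/Backup
         sudo mount -t nfs 10.0.0.133:/mnt/user/Backup /mnt/share/Backup

     showmount lists each export along with the addresses allowed to use it; if your client's IP isn't in that list, the rule (or the client's static IP) is what needs fixing.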
  9. Thank you. That explains my confusion. That post says "NFS rules on the client"; unless the terminology is reversed with file sharing, the client is my Arch system, not the Unraid server.
  10. So I just discovered the hard way that my backup system has not been working since I updated around the 4th. At the same time, I discovered that the system I have in place to alert me that my backups aren't working does not cover a situation where root has no write access to the share. My setup is as follows: my main desktop is running Arch, and I have all of my shares mounted via fstab using the following. 10.0.0.133:/mnt/user/Backup /mnt/share/Backup nfs defaults,nolock,soft 0 0 That has worked perfectly fine for years, but it seems since the update, the root user on Arch no longer has write access to files or folders owned by my normal non-root user. Since the update, any files or folders created by root on Arch are created as 65534:65534. I have already done some investigating and found the following post, which seems to identify the issue. But I must be missing something, because 'no_root_squash' is apparently not a valid NFS option, at least not via fstab. Furthermore, supposedly the mount options used by UD resolve this issue, but I have two Unraid systems now running 6.11.5, and when I mount my Backup share in my second Unraid system via UD I see the same issues when modifying files from the client system. Given how long 6.10 has been out, I'm hoping someone has figured out a simple solution to this. Any help would be appreciated.
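      In case it helps anyone who lands here with the same 65534:65534 problem: as far as I can tell, 'no_root_squash' is a server-side export option, not a client mount option, which is why fstab rejects it. A sketch of where each piece would go, with 10.0.0.50 standing in as a placeholder for the client's static IP:

          # Unraid side: the share's NFS rule (under the share's NFS security settings)
          10.0.0.50(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash)

          # Arch side: /etc/fstab stays a plain NFS mount
          10.0.0.133:/mnt/user/Backup  /mnt/share/Backup  nfs  defaults,nolock,soft  0  0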
  11. TBH, I do feel this thread has evolved far beyond its original purpose. It has turned into a fun and interesting back-and-forth community discussion about 6.12, and I think the occasional fun little code only adds to that. Yes, the codes did get a little out of hand at one point. But in my opinion, the only thing that really detracts from this thread are the haters who complain about the codes.
  12. You know what, I agree. Too many codes... The obvious solution is to provide a harder code that will take longer to crack. That should slow things down a little! Good luck
  13. Yeah, in the end I just disabled cache on all shares, rsync'd everything to my backup share, removed the cache pool and then restored everything to the appropriate shares.
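      For anyone doing the same migration, the copies themselves are nothing fancy. Roughly what that looks like, with 'cache-rescue' being just an illustrative folder name and appdata standing in for whichever shares had data on the cache:

          # Pull everything off the cache pool into the backup share
          rsync -avh --progress /mnt/cache/ /mnt/user/Backup/cache-rescue/

          # After removing the pool, push the data back into the appropriate user shares
          rsync -avh --progress /mnt/user/Backup/cache-rescue/appdata/ /mnt/user/appdata/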
  14. Going to have to continue this in the morning, but I removed the bad drive and the cache is now readable. However, it looks like it has gone read-only as a result of having no space left? So the mover is unable to do its job. At the very least I can access the files now and can manually copy everything off if I have to. evergreen-diagnostics-20230209-0027.zip
  15. I already checked and replaced the cables to both SSDs and it had no effect. Power connections also look good, but I don't have a free SATA power cable to rule it out completely. The first report attached to this post was generated (via SSH) while the server was attempting to start. The second was generated when the GUI finally loaded. evergreen-diagnostics-20230208-2225.zip evergreen-diagnostics-20230208-2235.zip
  16. Hi guys. Earlier tonight I noticed a bunch of my Docker containers were down. Unsurprisingly, it was because my cache filled up again. So I did the usual: started the mover, then almost immediately got impatient and told the system to reboot so I could get my Dockers up and running again. Only this time the system never came back up (at least not the web UI). So I checked dmesg via SSH and found it was continuously spamming this. From what I understand this is usually caused by bad SATA connections, but at this point I have tried re-seating the cables, replacing the cables and switching to different SATA ports on the motherboard. It changed nothing. Usually after a while the errors will stop and the web GUI will load, but as soon as I try to access files on cache it starts up again and files are inaccessible. My guess is one of the drives is failing, but I have seen both drives mentioned in the errors, so I have no idea which one. It's a mirrored cache pool, so if I can figure out which drive is failing it should be a simple matter of disconnecting the bad drive in order to get the system back up and running, right? Any advice would be most appreciated. P.S. If you're wondering why my NAS is named what it is: it's because it's slow and it tends to get stuck and bog down the network. So I guess it's just living up to its name... Edit: Looks like it's sdb. But not sure if I should just remove it or attempt a scrub... evergreen-diagnostics-20230208-2054.zip
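      For reference, a couple of standard commands that help pin down which member of a mirrored btrfs pool is misbehaving, assuming the pool is mounted at /mnt/cache and the suspect drive is /dev/sdb as in my case:

          # Per-device error counters (write/read/flush/corruption/generation) for the pool
          btrfs device stats /mnt/cache

          # SMART health of the suspect SSD
          smartctl -a /dev/sdb

      The device with climbing error counters (or failing SMART attributes) is the likely culprit.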
  17. -... . - .- / ---.. -.-.-- / -.-. .- -. .----. - / .-- .- .. - -.-.--
  18. WWF5ISBCZXRhNyEhISEgTGVzIEdPT09PTyEhISEhISE=
  19. Well, they could do this. But then what about all the other license details? They are important too. The fact is you should always read the license details before purchasing anything. If you don't do that, then that's not Unraid's fault. And it's not like it's a 100-page wall of text you have to dig through, like some companies would provide. It's very clear and easy to understand.
  20. Will it be possible to automatically spin down an entire raidz pool when inactive? Example use case would be a pool that is used for nightly backups.
  21. That is a good question. Will it be possible to import existing ZFS pools created by the ZFS plugin or by another system like TrueNAS?
  22. You make it sound like I'm suggesting Unraid should abandon its storage technology. I'm not. Unraid's storage technology is one of its best features, especially for home/DIY servers where the ability to expand over time can be extremely useful. But as I said, Unraid is so much more than just its storage technology. Its community application support alone puts it miles ahead of TrueNAS (for me, anyway). But striped arrays do still have their uses. Once you get to the point where you're building out a new NAS fully loaded with new drives, expandability may not be such a big concern, and you may want the additional speed and unbeatable bitrot protection that comes with a striped ZFS array. I'm not saying Unraid should abandon its storage system. I'm simply saying it would be nice to have the option of using a striped ZFS array.
  23. That wouldn't be a bad thing in my opinion. Unraid has a lot more to offer than just its storage technology. Don't get me wrong, Unraid's storage system is great, but I think it would be better if it was optional. That would allow Unraid to be a more general-purpose hypervisor with a lot of really nice features. It sounds like I'm going to have to go the route of using a flash drive as disk1, which is what I was already planning to do if I went with the ZFS plugin. It's just such a nasty hack, which I was hoping the official ZFS support would allow me to avoid.
  24. That's really my main concern. Will Unraid let me use a RAID-Z as my main array without also having to have a 'traditional' Unraid array? If I still need a 'normal' array, then the official ZFS support does not really add any value over the ZFS plugins already available. At least not for me...