Jclendineng

Everything posted by Jclendineng

  1. Mine is slow to non-responsive now as well; I had a crash yesterday after upgrading to RC3.
  2. No idea what happened; this hasn't happened before, and I just upgraded from RC2 to RC3, so I'm attaching a report and diagnostics. If I can add more info, let me know. I didn't make any config changes after the upgrade except switching Docker to the ZFS implementation. unraid-diagnostics-20230415-1822.zip
  3. I had a complete server crash. I'll post a bug report, but I didn't have any indication of a cause: I upgraded, and a while later the server did a complete hard reset. Edit: a diff on the changelog would be nice…the current changelog isn't really a changelog. IIRC previous RCs listed, in brackets, the RC a change was added in; I vote that make a reappearance!
  4. Clarification: in ZFS Master, when I set up Docker in ZFS mode, the created datasets are listed as "legacy" with the option to promote. What does that mean exactly?
  5. I'm seeing this error when navigating to folders in 6.12-RC2:

     Mar 31 09:12:28 Unraid nginx: 2023/03/31 09:12:28 [error] 8110#8110: *2809 open() "/usr/local/emhttp/plugins/dynamix.file.manager/javascript/ace/mode-log.js" failed (2: No such file or directory) while sending to client, client: ip, server: , request: "GET /plugins/dynamix.file.manager/javascript/ace/mode-log.js HTTP/1.1", host: "ip", referrer: "http://ip/Shares/Browse?dir=%2Fmnt%2Fuser%2Fbackups%2Fappdata%2F2023-03-31%4002.00"

     I go to "Shares" and click through to "Backups", then "appdata", and that does work, but the log produces this error. Assuming it can be ignored since everything works, but I didn't see anyone else posting anything like it, so I thought I would post it.
  6. The unraid-api process is pegging 1-3 CPU cores at 100% all the time; is this expected?

     7955 root 20 0 14.4g 3.7g 50432 R 310.2 1.5 3485:00 unraid-api - 3 cores here?!

     Also, "My Servers" is throwing errors: JSON.parse: unexpected character at line 1 column 1 of the JSON data

     Edit: This was potentially resolved by a reboot. That said, I hadn't done anything on it; it had been sitting running since RC2, so I'm not sure what caused "My Servers" to freak out... Post-reboot:

     8498 root 20 0 10.9g 155760 50196 S 1.0 0.1 0:06.53 unraid-api - normal usage

     unraid-diagnostics-20230331-0853.zip
  7. I'll file a bug report if this is indeed a bug, but unraid-api is constantly using 100% of 1-2 entire CPU cores. It is always 100% of one specific core plus 1 or 2 random others... expected?

     7955 root 20 0 14.4g 3.7g 50432 R 310.2 1.5 3485:00 unraid-api - 3 cores here?!

     Edit: I filed a bug report, as I'm also seeing errors with "My Servers"; maybe related.
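     For anyone wanting to check the same thing on their own box, a quick sketch using procps `ps` (column layout can vary slightly between versions):

     ```shell
     # List the five busiest processes by CPU usage. A %CPU above 100
     # means the process is using more than one core, as with the
     # unraid-api readings quoted above.
     ps -eo pid,pcpu,etime,comm --sort=-pcpu | head -n 6
     ```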
  8. "Cache" in this sense only means "which drives the writes land on first," so for ZFS you would select pools as "Cache Only." I have 2 pools, an SSD pool and a platter-disk pool. I select each dataset (or share) and mark it as <Pool> Only in the cache settings, so each dataset is tied to the pool it's on. You should create a pool, create datasets, then mark those datasets as only writing to THAT pool (called "cache" for the time being). An unrelated question: would it be possible to get CORS settings in the UI? I know reverse proxies are not supported, but we all use them, and it would be nice to add the reverse proxy as an allowed origin.
  9. FYI, the Kiwix docker image was deleted, so the app in CA no longer functions. Edit: the new repo would be: ghcr.io/kiwix/kiwix-serve:latest
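     A hypothetical run command against the new repo until CA is updated; the host path, port mapping, and the `'*.zim'` glob (which tells kiwix-serve to serve every ZIM file it finds in /data) are placeholders to adapt, not from the original post:

     ```shell
     # Sketch only: run kiwix-serve from the new GHCR location.
     # /mnt/user/zims is an example host path holding your .zim files.
     docker run -d --name kiwix \
       -v /mnt/user/zims:/data \
       -p 8080:8080 \
       ghcr.io/kiwix/kiwix-serve:latest '*.zim'
     ```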
  10. Changed Status to Solved. Changed Priority to Minor.
  11. I fixed it...it was a 100% unrelated issue with a UniFi switch, go figure. I always say there are no coincidences...this was one, and I am ashamed to say it's resolved :)
  12. I am using macvlan and switching to ipvlan doesn’t help…
  13. Just ran into this issue myself and posted a bug report.
  14. I had a system lockup earlier, and when it came back all my Docker containers had lost network access. I have network from Unraid itself (I can ping/nslookup, etc.), but not from any Docker container. I have a bridge on a 10Gb connection and 2 VLANs I use in Docker. Edit 1: I'm reinstalling Unraid; will report back. Maybe some corruption on the flash drive; I don't know what else would make Docker networks randomly stop working. Edit 2: I reinstalled and same issue, so…maybe a freak switch issue?? Edit 3: Oh, and the hard lockup WAS macvlan related; I switched to ipvlan to remedy it. unraid-diagnostics-20230322-1613.zip
  15. Interesting! I'm assuming because you're running up against the NIC limit?
  16. Can I ask why this recommendation? IMO macvlan is superior in many situations. I'm curious what your thoughts are; I have on-prem DNS and DHCP servers, so that's the only way I can assign IPs (though I suppose I could do ipvlan + DHCP shenanigans to get it to work…)
  17. The ZnapZend plugin lets you schedule snapshots. The original ZFS plugin page has all the information on setup and use. Note that it doesn't actually start up properly on 6.12, so I had to add a user script to start it at array start. Works great; I'm taking snapshots this way. ZFS Master is also a must-have for the integration; it adds a lot of GUI features still missing in Unraid, and it will also show you the snapshots made.
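     For reference, a minimal sketch of the kind of user script meant above, assuming the User Scripts plugin set to run "At Startup of Array"; the exact znapzend flags and log path are assumptions to adapt to your install:

     ```shell
     #!/bin/bash
     # Start the ZnapZend daemon at array start if it isn't already
     # running (workaround for it not starting on its own on 6.12).
     # --logto path is an example, not from the original post.
     if ! pgrep -x znapzend >/dev/null; then
         znapzend --daemonize --logto=/var/log/znapzend.log
     fi
     ```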
  18. This fixed mobile display for me, awesome.
  19. Question for everyone: has anyone tried an NVMe drive as a single ZFS vdev? I am getting surprisingly terrible speeds. Pre-update I was close to 3 GB/s; post-update I am getting 220 MB/s MAX. The only change was that btrfs was changed to ZFS after the update. Thoughts? I know ZFS will have a penalty for SSD/NVMe due to overhead, but not that severe. Polling the group to see if anyone else has NVMe drives to test with.
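     If anyone wants to compare numbers, a rough sequential-write sanity check with `dd`; the target path here is a harmless placeholder (on Unraid you'd point it at the pool mount, e.g. under /mnt/<pool>/), and `fio` would give more rigorous numbers:

     ```shell
     # Write 256 MiB and flush to disk before reporting the rate, so
     # the number reflects the device rather than the page cache.
     dd if=/dev/zero of=/tmp/speedtest bs=1M count=256 conv=fdatasync
     rm -f /tmp/speedtest   # clean up the test file
     ```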
  20. All very true, but ZFS implicitly trusts RAM and will use all available RAM up to a set point, so IMO that makes it more important to use ECC? Since ZFS + non-ECC will not correct errors at the RAM level...
  21. It seems to me (could be wrong) that a ZFS pool is better than the array/parity anyway, unless you have multiple drives of differing sizes. Unraid's draw is being able to use mixed drives and have it just work, but if all your drives are the same size and you can do ZFS, I would think it would be a definite upgrade? It's complicated, and can get just about as complicated as you like, which is an issue, but I have a server running 6.12 now and setup was really user friendly; the average user would have no issues IMO. Guidance is definitely a good thing, though. I was pretty surprised at how well it's working so far.
  22. On a brand new install? OK, that's one way to look at it. Let's go with that, then; I'll call it my fault until it's updated. Appreciate the help!