pellen

Members
  • Posts: 11
  • Joined
  • Last visited

pellen's Achievements

Noob (1/14)

Reputation: 3

  1. I had this issue as well (with a Cruzer Blade) and thought I would share the solution that worked for me. I first tried wiping all partitions of the drive in Disk Management in Windows, but ran into strange issues and the drive couldn't be formatted. Instead I used diskpart (from cmd) to clean all the partitions and then create a new one. After that the Unraid USB flash creator worked fine. Follow this guide if you're not familiar with diskpart: https://www.qualityology.com/tech/properly-delete-a-partition-on-usb-drive-using-diskpart/ (the rough command sequence is also sketched below). And be careful! Make sure you select the correct disk before cleaning it!
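     For reference, this is roughly the diskpart sequence that guide walks through. The disk number is a made-up example (pick yours from the "list disk" output), and the notes after the arrows are annotations, not part of the commands:

         diskpart
         list disk                  <- identify the USB drive by its size
         select disk 2              <- "2" is just a placeholder; double-check it!
         clean                      <- wipes ALL partitions on the selected disk
         create partition primary
         exit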
  2. Sorry for the slow response. It's a white-box server: an E5-2660 v0 on a SuperMicro X9SRi-3F motherboard with 32GB of registered ECC RAM.
  3. My server crashed as well during the plugin installation. Had a look through IPMI and could see a kernel panic. I will upload the diagnostics when it's up and running again. EDIT: attaching the diagnostics file. cradle-diagnostics-20210601-1622.zip
  4. I've seen this as well. Every time the Unraid.net plugin is updated, the Theme Engine plugin behaves like this. The workarounds I've found are to either restart the machine or uninstall and reinstall Theme Engine.
  5. I noticed this same issue with starting/stopping/restarting containers. I updated the docker folder plugin earlier today, so it's up to date, and noticed this issue 15 minutes ago. This is running on 6.8.3. I did a quick uninstall/reinstall of the plugin and now it works again.
  6. If this issue is due to some write amplification, it could be worth checking how much logging the different containers do and how big the log files are. If a 10MB log file is constantly written to, could it be that instead of writing a few bytes per log update, it rewrites the whole file? I noticed that I had debug logging enabled in my Plex container (official), so I had 60+MB of logs just for the "Plex Media Server.X.log" files within the last 24h. It looks like Plex creates a new log file when reaching ~11MB, and with debugging it seems to have been printing logs every 1-2 seconds. Rewriting a whole log file of a couple of MB every 2 seconds would cause a lot of unnecessary wear. A quick way to spot the heaviest log writers is sketched below.
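     Here's a rough one-liner along those lines. It assumes logs live somewhere under /mnt/cache/appdata and end in .log, which is just how my setup happens to look; adjust the path and pattern for yours:

         # log files modified in the last hour, largest first (size in bytes)
         find /mnt/cache/appdata -name '*.log' -mmin -60 -printf '%s %p\n' | sort -rn | head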
  7. I'm seeing the same thing! This morning I received an email notification from Unraid that my /var/log mount was getting full. It looks like this "worker process exited on signal 6" spam filled up the syslog, and then it started throwing another error:
     Feb 7 19:02:45 Cradle nginx: 2020/02/07 19:02:45 [crit] 22541#22541: ngx_slab_alloc() failed: no memory
     Feb 7 19:02:45 Cradle nginx: 2020/02/07 19:02:45 [error] 22541#22541: shpool alloc failed
     Feb 7 19:02:45 Cradle nginx: 2020/02/07 19:02:45 [error] 22541#22541: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
     Feb 7 19:02:45 Cradle nginx: 2020/02/07 19:02:45 [error] 22541#22541: *5228283 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
     From what I noticed the webGUI started acting a bit strange: the CPU load on the dashboard constantly showed 0% and the Stats tab didn't show anything either. I pruned the syslog after making copies to the array, and restarted nginx (roughly the steps sketched below). After that everything works OK again, but I'm still seeing the occasional "worker process exited on signal 6" in the syslog, so I'm guessing it's just a matter of time until the shit hits the fan again. I installed the Nextcloud container 1-2 weeks ago, so this does indeed seem to be related as you say.
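     For what it's worth, this is roughly what the prune-and-restart step looked like on my box. The share name is just an example, and the nginx rc script path may differ between Unraid versions, so check before running:

         # keep a copy of the syslog on the array, then truncate the live file
         cp /var/log/syslog /mnt/user/backups/syslog-$(date +%Y%m%d)
         : > /var/log/syslog
         # restart nginx via its rc script (verify the path on your version)
         /etc/rc.d/rc.nginx restart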
  8. I love how easy it is to get up and running while still being incredibly flexible and powerful! I would love to see a mobile-friendly UI, as I tend to access Unraid from my phone pretty often.
  9. Big thanks for the tip!! I will look into this. I did try it out, and I never saw it showing file activity for the appdata folder, even though I had it enabled for Cache... With find -printf "%TY-%Tm-%Td %TT %p\n" | sort -n | tail I can clearly see that files are being modified several times a minute, but it doesn't show in File Activity. (An alternative way to watch these writes live is sketched below.)
     root@Cradle:/mnt/cache/appdata# find -printf "%TY-%Tm-%Td %TT %p\n" | sort -n | tail
     2018-12-17 08:57:14.9395287050 ./PlexMediaServer/Library/Application Support/Plex Media Server/Logs/Plex Media Scanner.log
     2018-12-17 08:57:14.9425285960 ./PlexMediaServer/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db-shm
     2018-12-17 08:57:14.9505283030 ./tautulli/logs/plex_websocket.log
     2018-12-17 08:57:23.3892202060 ./sonarr/logs.db-shm
     2018-12-17 08:57:48.9892855440 ./radarr/nzbdrone.db-shm
     2018-12-17 08:57:52.8431448400 ./sonarr/nzbdrone.db-wal
     2018-12-17 08:57:59.2369114070 ./letsencrypt/log/nginx/access.log
     2018-12-17 08:57:59.4869022820 ./letsencrypt/fail2ban/fail2ban.sqlite3
     2018-12-17 08:58:23.8050147460 ./sonarr/nzbdrone.db-shm
     2018-12-17 08:58:24.7049819000 ./PlexMediaServer/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log
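     If you can get inotify-tools onto the box (I believe the NerdPack plugin carries it, but I haven't double-checked), inotifywait gives a live view of the same writes instead of polling timestamps:

         # print every modify/create/delete under appdata as it happens
         inotifywait -m -r -e modify,create,delete \
             --timefmt '%H:%M:%S' --format '%T %w%f %e' \
             /mnt/cache/appdata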
  10. So I've managed to get it down to ~10GB a day now. I checked how often files in appdata were modified using find -printf "%TY-%Tm-%Td %TT %p\n" | sort -n | tail and from there could see which containers modified files most often (a quick per-container count is sketched below). Apparently radarr had its log level set to trace, which caused a constant stream of logs, and unifi did lots of writes to its logs and database as well. bazarr was quite active too. Setting a normal log level in radarr and stopping unifi and bazarr brought the writes down to roughly 10GB a day. I still don't understand how such small writes can accumulate to double-digit gigabytes a day. Is every log/database file rewritten as soon as it's modified?
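      The "which container is busiest" check was nothing fancier than counting recently modified files per top-level appdata directory, something like this (run from /mnt/cache/appdata; it counts files, not bytes, so it's only a rough indicator):

          # files modified in the last 10 minutes, per container directory
          for d in */; do
              printf '%6d  %s\n' "$(find "$d" -type f -mmin -10 | wc -l)" "$d"
          done | sort -rn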
  11. Alright. I put a Samsung 850 EVO mSATA 250GB in my unRAID server as a cache drive a couple of weeks ago, as I got it dirt cheap. A few days ago I stumbled upon the SMART data and noticed that it had already passed 3TBW. I'm using the cache for downloads that then move over to the array, and I also keep appdata and system on the cache to keep the array drives from spinning up all the time. I made a simple script that extracts the SMART TBW value every night (roughly the sketch below), and it's increasing by roughly 50-60GB every day, even though activity over the latest week has been close to zero (i.e. no downloads or other stuff put on the array), so that leaves appdata or system. I tried digging deeper with iotop to find what causes so many writes when nothing is happening. This is a snippet after running iotop for ~20 minutes, during which there was no activity in unRAID except the docker containers running as usual:
      DISK READ : 0.00 B/s | Total DISK WRITE : 49.99 K/s
      Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 178.53 K/s
        PID  PRIO  USER  DISK READ  DISK WRITE>  SWAPIN    IO      COMMAND
       7144  be/4  root   84.00 K    552.05 M    0.00 %  0.01 %  shfs /mnt/user -disks 31 2048000000 ~big_writes,allow_other -o remember=0
       7427  be/0  root    0.00 B    264.48 M    0.00 %  0.03 %  [loop2]
       7450  be/4  root    0.00 B     52.22 M    0.00 %  0.05 %  [btrfs-transacti]
      16673  be/4  root    0.00 B      8.20 M    0.00 %  0.00 %  [kworker/u32:15-bond0]
       3409  be/4  root    0.00 B      7.48 M    0.00 %  0.00 %  [kworker/u32:0-btrfs-worker]
      Could someone help me out with what I'm looking at here? As I've understood it, loop2 is the docker image? But what is shfs doing? Is it the combined writes made in my various containers? These are the containers currently running: bazarr, letsencrypt, mariadb, mediawiki, nzbget, organizr, PlexMediaServer, radarr, sonarr, speedtest, tautulli, transmission, unifi. I've tried stopping them one by one and letting iotop run for a while to see if there was a significant difference in writes, but there wasn't really any container that stood out. Stopping them all did make a big difference though, but that's not a good solution. So, is 50-60GB of writes a day considered normal, or is something off? Any tips or tricks to tweak any of the containers? I know you might think this is a non-problem, since the 850 EVO 250GB should handle 75TBW (probably a bit more according to SSD endurance test reviews), but I really think it's odd that a bunch of low-activity containers could write this amount of data every day... it can't just be logs, right?
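      A rough sketch of the kind of nightly check I described, not the exact script I run. /dev/sdb and the output path are placeholders, and the 512-byte unit for SMART attribute 241 (Total_LBAs_Written) is what my 850 EVO reports, so check how your own drive counts before trusting the numbers:

          #!/bin/bash
          # nightly SSD write logger (sketch) - adjust device and output path for your setup
          DEV=/dev/sdb
          LBAS=$(smartctl -A "$DEV" | awk '$1 == 241 {print $NF}')   # raw Total_LBAs_Written
          GB=$(( LBAS * 512 / 1000000000 ))                          # 512-byte LBAs -> GB
          echo "$(date +%F) ${GB} GB written" >> /mnt/user/backups/ssd-tbw.log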