Everything posted by BRiT

  1. Did you start off with an entirely new/fresh database? Or was it restored from an earlier backup?
  2. As typical, @Squid is correct. No good reason to duplicate a well supported plugin.
  3. What else would you be doing over the holidays? 🍬 🦃 🕯️ 🕎 🎅 🎆
  4. I believe there is already a separate rclone plugin.
  5. The 6.8-RC notes state they upgraded to FUSE-3, so maybe rclone needs to be recompiled for this new target?
  6. Did changing these tunables affect your system performance much on the older versions? Mine only saw minor improvements, so I suppose I'll be alright. But seeing how vastly different other systems are, I would think the new Unraid will still need some sort of knobs to tune.
  7. That isn't a fix. While it may be an acceptable workaround for some, it's not for others.
  8. So this is something to report to Limetech, and they will have to make the changes required to get solid performance on a wider variety of hardware?
  9. Try using different drives for all your reads -- two of your reads are from the same drive, disk 9.
  10. Give it some time. It takes hours of work to get each new version ready for distribution.
  11. Looks like the 6.8 series will present a new set of challenges on tuning... In particular:
  12. As stated earlier, new and current info is available at the active thread:
  13. In your share configurations, two in particular have "prefer" set for the Use Cache setting. This means that when Mover runs, it will copy data from the array drives to the cache drive. Double check that those shares are not using up the space. d----e.cfg t-------e.cfg
  14. So that files created by the Docker container can have the proper ownership (user and group) set without you having to run any sort of "Fix Permissions" script in Unraid. Not all systems running a Docker container (Emby or Plex, for instance) will have files owned by nobody/users, and not all systems map the user to UID 99 or the group to GID 100. Remember, Docker containers are not Unraid specific; they can run on other Linux systems or even Windows.
  15. Anyone with physical access to the USB flash drive and the array system can get at any files that are not encrypted. They can wipe the password fields on the flash drive entirely from a different system, insert the modified flash drive into the Unraid system, and turn it on. After boot-up, they will have the system without passwords. The way to protect against that is drive encryption with a passphrase only you know, stored on nothing they have physical access to. However, I can't answer whether your brother is smart enough to do this.
  16. Report issues like that to the Plex development team; it sounds like a core Plex issue, not a docker container issue.
  17. Stop the container. Start the container. Check processes again.
  18. Supporting more than 28 data drives with just 2 parity drives is questionable from a sensibility standpoint. To genuinely protect more than 28 data drives one should really have multiple array pools, each with their own set of parity drives. This requires substantial development efforts from where things stand today.
  19. From your diagnostics file: your dockers went crazy, spammed the /var/log/docker.log file(s), and caused your current trouble.
      Looking at system/df.txt:
      tmpfs 128M 128M 0 100% /var/log
      Looking at system/folders.txt, under /var/log:
      -rw-rw-rw- 1 root root 127782912 Oct 2 18:26 docker.log.1
  20. Pro = Unlimited attached storage devices.
  21. Excuse the formatting as I'm browsing from a tablet.
  22. I don't think Read or Write at that level is what you or I thought it was. I think those columns count how many shares the user is explicitly in the Read or Write list of. Here is the code behind it:

      foreach ($users as $user) {
        $name = $user['name'];
        $list = "<a href=\"$path/UserEdit?name=".urlencode($name)."\" class=\"blue-text\" title=\"$name settings\">".truncate($name,20)."</a>";
        $desc = truncate($user['desc'],40);
        if ($list=='root') {
          $write = '-';
          $read = '-';
        } else {
          $write = 0;
          $read = 0;
          foreach ($shares as $share) {
            if (strpos($sec[$share['name']]['writeList'],$list)!==false) $write++;
            if (strpos($sec[$share['name']]['readList'],$list)!==false) $read++;
          }
        }
        if ($user['passwd']!='yes') $list = str_replace('blue-text','orange-text',$list);
        echo "<tr><td></td><td><i class='icon-user'></i>$list</td><td>$desc</td><td>$write</td><td>$read</td><td></td></tr>";
      }

      https://github.com/limetech/webgui/blob/master/plugins/dynamix/DashStats.page
  23. Some more info on what I was thinking of: Settings / Disk Settings / Tunable (poll_attributes): This defines the disk SMART polling interval, in seconds. A value of 0 disables SMART polling (not recommended).
  24. Someone more knowledgeable can speak to this, but have you looked at what the UI's refresh polling delay is set to? There's a setting somewhere that controls how frequently it polls for drive temperature and status. Maybe that affects this as well?
  25. Your /var/log is full. Your dockers started going crazy on 2019-09-19, logging the following message ad infinitum into /var/log/docker*. That eventually used all 128 MB available for logs. You didn't notice any issues starting over 10 days ago?

      time="2019-09-19T02:48:22.229554249+01:00" level=error msg="Error replicating health state for container 984fdb87d3deaeaf2575e46990b56a93d1660a02d4d8145812a9865c69897be4: open /var/lib/docker/containers/984fdb87d3deaeaf2575e46990b56a93d1660a02d4d8145812a9865c69897be4/.tmp-config.v2.json483062491: read-only file system"
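A footnote to the ownership discussion in post 14: file metadata stores only numeric IDs, and the user/group names you see are resolved against the local /etc/passwd at display time, which is why UID 99 shows as "nobody" on Unraid but may map differently (or not at all) elsewhere. A minimal Python sketch of that distinction (illustrative only, not Unraid-specific):

```python
import os
import pwd
import tempfile

# A file records only a numeric uid/gid; the name shown (e.g. "nobody")
# is resolved against the local /etc/passwd at display time. A container
# writing as uid 99 / gid 100 produces files that may resolve to a
# completely different name, or to no name at all, on another host.
with tempfile.NamedTemporaryFile() as f:
    st = os.stat(f.name)
print("stored numerically:", st.st_uid, st.st_gid)
print("resolved locally as:", pwd.getpwuid(st.st_uid).pw_name)
```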
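On the physical-access scenario in post 15: a passphrase defeats the stolen-flash-drive attack because the decryption key is derived from something stored only in your head, not on any hardware the attacker holds. A toy Python sketch of passphrase key derivation using the stdlib scrypt function (a conceptual illustration, not Unraid's actual LUKS setup):

```python
import hashlib

# Derive an encryption key from a passphrase. Without the exact
# passphrase there is no way to recover the key from the device alone,
# so wiping password fields on the flash drive gains the attacker nothing.
def derive_key(passphrase: str, salt: bytes) -> bytes:
    return hashlib.scrypt(passphrase.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = b"demo-salt"  # the salt is stored on disk and need not be secret
right = derive_key("only-you-know-this", salt)
wrong = derive_key("brute-force-guess", salt)
print(right == wrong)  # False: a wrong passphrase yields a useless key
```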
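To make the parity trade-off in post 18 concrete: one parity drive can rebuild exactly one failed drive no matter how many data drives it covers, so widening a single parity set increases risk rather than protection. A toy single-parity (XOR) sketch in Python (Unraid's real P/Q dual parity math is more involved):

```python
from functools import reduce
from operator import xor

# Toy single-parity: parity is the XOR of every data block. Any ONE
# missing block can be rebuilt by XOR-ing parity with the survivors,
# but two simultaneous failures are unrecoverable with a single parity.
data = [0b1011_0010, 0b0110_1100, 0b1111_0000, 0b0001_0111]
parity = reduce(xor, data)

lost_index = 2                              # pretend disk 2 died
survivors = [d for i, d in enumerate(data) if i != lost_index]
rebuilt = reduce(xor, survivors, parity)    # parity cancels the survivors
print(rebuilt == data[lost_index])  # True
```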
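For the full-/var/log situations in posts 19 and 25, the symptom is easy to check programmatically. A small Python sketch (demoed on the temp directory so it runs anywhere; on an Unraid box you would point it at /var/log, the 128M tmpfs):

```python
import shutil
import tempfile

# Report how full a filesystem is; 100% used is the failure mode seen
# in the diagnostics (tmpfs 128M 128M 0 100% /var/log).
def usage_percent(path: str) -> float:
    u = shutil.disk_usage(path)
    return u.used / u.total * 100

# Substitute "/var/log" here when running on the actual server.
print(f"{usage_percent(tempfile.gettempdir()):.0f}% full")
```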