Gnomuz

Members
  • Content Count

    79
  • Joined

  • Last visited

Community Reputation

25 Good

About Gnomuz

  • Rank
    Newbie
  • Birthday 10/21/1964

  • Location
    France


  1. Nothing new here; under Unraid I haven't found any way to adjust clock settings with nvidia-settings, as this utility requires at least a "minimal" X server, whatever that may mean (this is where my technical skills reach their limits). I have, though, been able to overclock an RTX 3060 Ti with nvidia-settings CLI commands under Ubuntu on another rig, but still with a graphical environment like GNOME (a rough sketch of the usual headless approach follows after this list). Perhaps a more skilled member of the community could try to identify which minimal setup / add-ons would be required for Unraid to let us use a utility like nvidia-settings, which for the moment is instal
  2. Well, this has been the normal behavior of the Nvidia drivers for a while. A "power limit" is enforced for the card by the vBIOS and drivers, and when the power drawn approaches this limit, the clocks are throttled. If you want to see what the power limits are and to what extent you can adjust them, just have a look at the output of 'nvidia-smi -q' (the relevant commands are sketched after this list). Mine looks like this on a P2000 (which is powered only by the PCIe slot, hence the 75W min & max):
     Power Readings
         Power Management : Supported
         Power Draw : 65.82 W
  3. No need for an incognito window on my side, but I rarely use it anyway. I prefer FAHControl, which gives you much more control and more features. Maybe you could try clearing your browser cache?
  4. Hello, during the night the container was automatically updated from 7.6.21-ls25 to 7.6.21-ls26. Since then, the existing GPU slot is disabled with the following message in the log: 08:22:15:WARNING:FS01:No CUDA or OpenCL 1.2+ support detected for GPU slot 01: gpu:43:0 GP106GL [Quadro P2000] [MED-XN71] 3935. Disabling. The server has been folding for at least 10 days, and nothing else has changed in the setup (Unraid 6.9.0-beta35 with the Nvidia driver); a few sanity checks are listed after this list. Output of nvidia-smi: Sat Jan 9 09:26:32 2021 +--------------------------------------------------------------------
  5. It's been a month now since I last posted on this thread, concluding that we definitely needed help from the developers to debug this critical built-in function of Unraid, and there has been no feedback since. The diagnostics and screenshots documenting the bug were provided on November 24th 2020, as requested by @limetech, and followed by a deafening silence. It's winter now in Europe, I've had a few power outages, and it was a mess to restart everything properly, especially because unplugging / replugging the USB cable is not that easy when you're away from home... It's just a crappy workaround, and
  6. Sorry, I upgraded the firmware when the UPS arrived, but I used a Windows laptop connected directly to the UPS to do so. I never tried to run the firmware upgrade tool from a Windows VM in Unraid with the UPS passed through. It may work, but you'll have to test it yourself. And I'm not aware of any Linux CLI option to upgrade the firmware, as the APC tool is Windows-only, IIRC. I'll take the opportunity of your APC UPS-related question to bump this thread, as the situation is still the same for me in 6.9-beta35. Who knows, a Christmas or @limetech miracle may happen, even if it's a bit
  7. Hi, well, the backup process has been running continuously for four and a half days, so I can step back a bit more now. The data to be backed up is 952 GB locally, 891 GB have been completed, and the remaining 61 GB should be uploaded within 11 hours from now. So the overall "performance" should be 952 GB in 127 hours, an average of 180 GB per day. Roughly, that is 2.1 MB/s or 17 Mbps (the arithmetic is reproduced after this list), which is consistent with the obvious throttling I can see in Grafana for the CrashPlan container's upload speed. Data is compressed before being uploaded, so translating the size of the data to bac
  8. I've just installed the container and activated the 30-day trial. First, not the faintest issue setting it up; it's very easy to install, and I just set "Maximum memory" to 4096M to avoid crashes due to low memory (see the container sketch after this list). As for the upload bandwidth, my feelings are mixed so far. I have 2.5 TB to back up and started the process on Sunday. Until Sunday 11pm (all times CET), the throughput was 16 Mbps (or 2 MB/s). Then it stayed between 32 and 40 Mbps all of Dec. 21st, which is the practical limit of my 4G internet connection. Great! I then added other shares to the backup set, a
  9. This error is due to the absence of the 'nvme-cli' package in the container, and thus of the 'nvme' command. You have to install the missing package through the "Post Arguments" parameter of the container (Edit / Advanced View). Here's the content of my "Post Arguments" parameter for reference, working properly with NVMe devices, to be adapted of course to your specific configuration if required (an illustrative minimal version follows after this list): /bin/sh -c 'apt-get update && apt-get -y upgrade && apt-get -y install ipmitool && apt-get -y install smartmontools && apt-get -y install lm-sensors && apt-get
  10. Well, no upgrades on Sundays, family first... As for write amplification, now that I can step back, I can confirm my preliminary findings about the evolution of the write load on the cache. I compared the average write load between 12/12 (BTRFS Raid1) and 12/19 (XFS) over a full 24h period, and the result is clear: 1.35 MB/s vs 454 kB/s, i.e. roughly a factor of 3 (a way to cross-check such figures is given after this list). As I can't believe the overhead of Raid1 metadata management could explain such a difference, it confirms a BTRFS weakness for me, whatever the partition alignment is...
  11. As for the (non-)spin-down issue, I understand it's brand new in RC1, due to the kernel upgrade to 5.9, when smartctl is used by e.g. telegraf to poll disk stats, and it should be fixed in the next RC. So I'm staying away for the moment (a workaround command is noted after this list). It would really be a pity to have all disks spun up in an Unraid array without getting the benefits of a file system which by nature keeps disks spun up but gives you performance and "minor" features such as snapshots or read caching in return, wouldn't it? 😉 As for the write amplification, I had carefully followed the steps to align both SSDs' partitions to 1M
  12. Thanks for your thoughts on my initial problem. I'm not on the latest beta, RC1, because of an issue which prevents disks from spinning down in configurations similar to mine (telegraf/smartctl). I'm waiting for RC2, and thus kernel 5.9+, to test the SSDs connected to the onboard SATA controller again, as it's a known issue with X470 boards. Btw, I only had disconnection issues with one of the SSDs, but btrfs never coped with it. For sure, a Raid 1 file system which turns unwritable when one of the pool members fails while the other is up and running, and which requires a reboot to start over, is
  13. Everything seems to be running fine now; just a little feedback on the I/O load when switching from a btrfs Raid1 to xfs, over the same 4-hour period comparing yesterday and today, with similar overall loads (2 active VMs and 3 active containers):
      XFS: Average Write = 285 kB/s, Average Read = 13 kB/s
      BTRFS: Average Write = 968 kB/s, Average Read = 7 kB/s
      (I/O load per SSD, of course.) So the write load, which is the one we all watch on SSDs, is 3.4 times higher with BTRFS Raid1, and of course it wears both SSDs equally. Both SSDs had their MBR 1MiB-aligned (a quick way to verify this is shown after this list), as recomme
  14. Well, the pressure is going down, thanks for the mental coaching 😉 VMs and containers restarted properly after the restoration of appdata and the move of both domains and system. Moving from the array to the SSD was of course way faster than the other way round, so I didn't have to wait too long. The only expected difference I can see is that the appdata, domains and system shares are now tagged with "Some or all files unprotected" in the Shares tab, which makes sense, as they are on a non-redundant XFS SSD. I checked the containers for the appdata assignment, and only found Plex to have /mnt/cache/appdata. Bu
  15. Sorry for not being clear; I must say I'm a bit worried, if not upset... I formatted the single cache device with XFS, deleted all data in appdata (on disk2 in my case), and restored the backup I had just made. So far it seems OK: all data of the appdata share is on the SSD, and only on the SSD! And I checked one of the hardlinks that wasn't moved, which I gave as an example earlier; it has been restored by CA Restore. Now I'm moving system and domains back to the cache with the mover, which hopefully should not raise any issues. Once Docker is running, I'll check all containers and revert them to /mnt/user/app
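A follow-up on item 1: on a full distribution, the usual headless approach is to generate a bare X configuration with Coolbits enabled and point nvidia-settings at that display. This is only a rough sketch; it assumes tools (nvidia-xconfig, a minimal X server) that stock Unraid does not ship, and the offsets and performance-level index are placeholder examples, not recommendations:

    nvidia-xconfig --allow-empty-initial-configuration --enable-all-gpus --cool-bits=28
    X :0 &                    # start a bare X server on display :0, no desktop needed
    export DISPLAY=:0
    nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=100' -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=500'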
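Regarding the power limits discussed in item 2, these are the nvidia-smi commands I had in mind. The 120 W value is just an illustration and is only accepted within the min/max enforceable range reported by the query (on a slot-powered P2000 that range is fixed at 75 W, so there is nothing to adjust):

    nvidia-smi -q -d POWER    # current draw plus default / min / max / enforced power limits
    nvidia-smi -pm 1          # persistence mode: keeps the driver state loaded between clients
    nvidia-smi -pl 120        # set the power limit in watts, within the reported range only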
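For the "No CUDA or OpenCL 1.2+ support detected" message in item 4, a quick sanity check is to confirm the GPU is still visible from inside the container after the image update. The container name below is an example, and the template variables are the usual ones for the Unraid Nvidia builds, so they may differ in your setup:

    nvidia-smi -L                            # list GPUs with their UUIDs on the host
    docker exec FoldingAtHome nvidia-smi     # the same GPU should be visible inside the container
    # template settings to re-check after an update (names may vary):
    #   Extra Parameters:            --runtime=nvidia
    #   NVIDIA_VISIBLE_DEVICES:      the GPU UUID from 'nvidia-smi -L'
    #   NVIDIA_DRIVER_CAPABILITIES:  all   (covers both CUDA and OpenCL)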
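The averages quoted in item 7 can be re-derived from the raw figures (952 GB over 127 hours); a one-liner such as this reproduces them:

    awk 'BEGIN { gb=952; h=127; printf "%.0f GB/day  %.1f MB/s  %.0f Mbps\n", gb/h*24, gb*1000/(h*3600), gb*8000/(h*3600) }'
    # prints: 180 GB/day  2.1 MB/s  17 Mbps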
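The "Maximum memory" setting mentioned in item 8 maps, if you use the jlesage/crashplan-pro image (an assumption on my part; check your own template and the image documentation), to an environment variable on the container, roughly like this:

    # CRASHPLAN_SRV_MAX_MEM sets the Java heap ceiling; too low a value leads to the low-memory crashes mentioned above
    docker run -d --name=crashplan-pro \
      -e CRASHPLAN_SRV_MAX_MEM=4096M \
      -v /mnt/user/appdata/crashplan-pro:/config \
      -v /mnt/user:/storage:ro \
      jlesage/crashplan-pro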
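The "Post Arguments" excerpt in item 9 is cut off above. As an illustration only, and assuming the container is the official telegraf image (whose normal start command is simply 'telegraf'), a minimal version installing just the packages needed for NVMe and SMART polling would look like:

    /bin/sh -c 'apt-get update && apt-get -y install nvme-cli smartmontools && telegraf'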
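For the write-load comparison in item 10, the daily averages can be cross-checked without Grafana by sampling the kernel's per-device counters; field 10 of /proc/diskstats is sectors written (512-byte units), so the delta between two readings taken 24 h apart gives the same kind of figure. The device name is a placeholder:

    awk '$3 == "sdb" { printf "%.1f GB written since boot\n", $10 * 512 / 1e9 }' /proc/diskstats
    # take one reading now and one 24 h later, then divide the difference by 86400 s for an average rate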
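On the spin-down issue in item 11, while waiting for the fix it helps that smartctl can be told not to wake a sleeping drive; telegraf's SMART input has a matching option, but that part is from memory, so double-check the plugin documentation for your version:

    smartctl -n standby -A /dev/sdb    # returns immediately instead of spinning the disk up if it is in standby
    # telegraf: in [[inputs.smart]], 'nocheck = "standby"' should pass the same flag (verify in your version)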
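To verify the 1MiB alignment mentioned in item 13, the partition start sector is enough; /dev/sdX below is a placeholder:

    fdisk -l /dev/sdX    # a first partition starting at sector 2048 (2048 x 512 B = 1 MiB) is 1MiB-aligned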