StevenD

Members
  • Content Count: 1546
  • Days Won: 1
Everything posted by StevenD

  1. This one is a bit cheaper and works great. https://www.amazon.com/gp/product/B083GLR3WL
  2. I don't believe I quoted you or addressed my comment to you.
  3. You really should use a much larger file for testing, at least bigger than your RAM size. I typically use 50-100GB files for testing.
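     A minimal sketch of what that could look like (the share path and exact size are just examples, not from the original post):

       # create a ~50GiB file of incompressible data, larger than typical RAM
       dd if=/dev/urandom of=/mnt/user/testshare/test50G.bin bs=1M count=51200 status=progress
       # then time a copy of it to measure sustained, uncached throughput
       time cp /mnt/user/testshare/test50G.bin /mnt/cache/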
  4. YOU started this mess by being overly dramatic, instead of properly asking for assistance.
  5. It won't work. I have two of those boards, and I could never get it to see more than one NVMe, even when using SuperMicro expansion cards. There only seem to be a couple of X9 boards that actually support bifurcation, despite the BIOS settings. I ended up buying this board. It works great and unRAID sees all the drives. https://www.amazon.com/gp/product/B083GLR3WL/
  6. Yes. It’s the IP of your Nextcloud host.
  7. This is working for me:

     ### nextcloud
     server {
         listen 443 ssl;
         server_name nextcloud.domainname.com;
         client_max_body_size 20G;
         add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";

         location / {
             proxy_pass https://10.1.1.20:4430;
             proxy_max_temp_file_size 2048M;
             proxy_buffering off;
             proxy_read_timeout 256;
         }

         location = /.well-known/carddav {
             return 301 $scheme://$host:$server_port/remote.php/dav;
         }

         location = /.well-known/caldav {
             return 301 $scheme://$host:$server_port/remote.php/dav;
         }
     }
  8. There is probably a better way, but I just have a script run at the top of every hour:

     date >> /mnt/cache/cache1.txt
     smartctl -a -d nvme /dev/nvme0n1 | grep "Units Written" >> /mnt/cache/cache1.txt
     date >> /mnt/cache/cache2.txt
     smartctl -a -d nvme /dev/nvme1n1 | grep "Units Written" >> /mnt/cache/cache2.txt
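     The same thing as a single loop, purely as an illustrative sketch (device names and log paths are assumptions carried over from the script above):

       #!/bin/bash
       # log the date and the "Data Units Written" line for each NVMe device
       for i in 0 1; do
         {
           date
           smartctl -a -d nvme /dev/nvme${i}n1 | grep "Units Written"
         } >> /mnt/cache/cache$((i+1)).txt
       done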
  9. Upgrading to 6.9.0-beta25 and wiping and rebuilding the cache seems to have fixed the excessive drive writes. I updated at 1PM yesterday. Thanks @limetech

     Sun Jul 19 00:00:01 CDT 2020 52,349,318 [26.8 TB]
     Sun Jul 19 01:00:01 CDT 2020 52,423,388 [26.8 TB]
     Sun Jul 19 02:00:01 CDT 2020 52,489,648 [26.8 TB]
     Sun Jul 19 03:00:01 CDT 2020 52,555,542 [26.9 TB]
     Sun Jul 19 04:00:01 CDT 2020 52,620,891 [26.9 TB]
     Sun Jul 19 05:00:02 CDT 2020 52,704,944 [26.9 TB]
     Sun Jul 19 06:00:02 CDT 2020 52,781,371 [27.0 TB]
     Sun Jul 19 07:00:01 CDT 2020 52,857,676 [27.0 TB]
     Sun Jul 19 08:00:01 CDT 2020 52,969,998 [27.1 TB]
     Sun Jul 19 09:00:01 CDT 2020 53,060,428 [27.1 TB]
     Sun Jul 19 10:00:02 CDT 2020 53,143,267 [27.2 TB]
     Sun Jul 19 11:00:01 CDT 2020 53,226,597 [27.2 TB]
     Sun Jul 19 12:00:01 CDT 2020 53,302,735 [27.2 TB]
     Sun Jul 19 13:00:02 CDT 2020 53,370,136 [27.3 TB]
     Sun Jul 19 14:00:01 CDT 2020 53,497,045 [27.3 TB]
     Sun Jul 19 15:00:01 CDT 2020 53,570,280 [27.4 TB]
     Sun Jul 19 16:00:02 CDT 2020 53,660,287 [27.4 TB]
     Sun Jul 19 17:00:01 CDT 2020 53,757,767 [27.5 TB]
     Sun Jul 19 18:00:01 CDT 2020 53,843,113 [27.5 TB]
     Sun Jul 19 19:00:01 CDT 2020 54,494,403 [27.9 TB]
     Sun Jul 19 20:00:01 CDT 2020 54,591,716 [27.9 TB]
     Sun Jul 19 21:00:01 CDT 2020 54,684,939 [27.9 TB]
     Sun Jul 19 22:00:01 CDT 2020 54,769,497 [28.0 TB]
     Sun Jul 19 23:00:01 CDT 2020 54,881,700 [28.0 TB]
     Mon Jul 20 00:00:01 CDT 2020 54,962,156 [28.1 TB]
     Mon Jul 20 01:00:01 CDT 2020 55,012,101 [28.1 TB]
     Mon Jul 20 02:00:01 CDT 2020 55,114,507 [28.2 TB]
     Mon Jul 20 03:00:01 CDT 2020 55,199,643 [28.2 TB]
     Mon Jul 20 04:00:01 CDT 2020 55,285,523 [28.3 TB]
     Mon Jul 20 05:00:01 CDT 2020 55,390,072 [28.3 TB]
     Mon Jul 20 06:00:01 CDT 2020 55,492,177 [28.4 TB]
     Mon Jul 20 07:00:01 CDT 2020 55,562,868 [28.4 TB]
     Mon Jul 20 08:00:01 CDT 2020 55,641,502 [28.4 TB]
     Mon Jul 20 09:00:01 CDT 2020 55,709,571 [28.5 TB]
     Mon Jul 20 10:00:01 CDT 2020 55,778,340 [28.5 TB]
     Mon Jul 20 11:00:01 CDT 2020 55,855,175 [28.5 TB]
     Mon Jul 20 12:00:01 CDT 2020 55,937,448 [28.6 TB]
     Mon Jul 20 13:00:01 CDT 2020 56,014,597 [28.6 TB]
     Mon Jul 20 14:00:01 CDT 2020 56,092,328 [28.7 TB]
     Mon Jul 20 15:00:01 CDT 2020 56,156,565 [28.7 TB]
     Mon Jul 20 17:00:01 CDT 2020 56,273,142 [28.8 TB]
     Mon Jul 20 18:00:01 CDT 2020 56,344,795 [28.8 TB]
     Mon Jul 20 19:00:01 CDT 2020 56,364,160 [28.8 TB]
     Mon Jul 20 20:00:01 CDT 2020 56,407,275 [28.8 TB]
     Mon Jul 20 21:00:01 CDT 2020 56,447,405 [28.9 TB]
     Mon Jul 20 22:00:01 CDT 2020 56,471,394 [28.9 TB]
     Mon Jul 20 23:00:02 CDT 2020 56,544,547 [28.9 TB]
     Tue Jul 21 00:00:01 CDT 2020 56,558,841 [28.9 TB]
     Tue Jul 21 01:00:01 CDT 2020 56,572,818 [28.9 TB]
     Tue Jul 21 02:00:01 CDT 2020 56,588,893 [28.9 TB]
     Tue Jul 21 03:00:01 CDT 2020 56,619,137 [28.9 TB]
     Tue Jul 21 04:00:01 CDT 2020 56,649,114 [29.0 TB]
     Tue Jul 21 05:00:01 CDT 2020 56,694,088 [29.0 TB]
     Tue Jul 21 06:00:01 CDT 2020 56,734,883 [29.0 TB]
     Tue Jul 21 07:00:01 CDT 2020 56,740,772 [29.0 TB]
     Tue Jul 21 08:00:01 CDT 2020 56,764,329 [29.0 TB]
     Tue Jul 21 09:00:01 CDT 2020 56,791,261 [29.0 TB]
     Tue Jul 21 10:00:01 CDT 2020 57,390,492 [29.3 TB]
     Tue Jul 21 11:00:02 CDT 2020 57,481,471 [29.4 TB]
     Tue Jul 21 12:00:01 CDT 2020 57,522,137 [29.4 TB]
     Tue Jul 21 14:00:01 CDT 2020 58,216,955 [29.8 TB]
     Tue Jul 21 15:00:01 CDT 2020 58,222,173 [29.8 TB]
     Tue Jul 21 16:00:01 CDT 2020 58,235,354 [29.8 TB]
     Tue Jul 21 17:00:01 CDT 2020 58,270,523 [29.8 TB]
     Tue Jul 21 18:00:01 CDT 2020 58,300,798 [29.8 TB]
     Tue Jul 21 19:00:01 CDT 2020 58,346,858 [29.8 TB]
     Tue Jul 21 20:00:01 CDT 2020 58,382,861 [29.8 TB]
     Tue Jul 21 21:00:01 CDT 2020 58,403,922 [29.9 TB]
     Tue Jul 21 22:00:01 CDT 2020 58,420,439 [29.9 TB]
     Tue Jul 21 23:00:01 CDT 2020 58,493,227 [29.9 TB]
     Wed Jul 22 00:00:02 CDT 2020 58,494,926 [29.9 TB]
     Wed Jul 22 01:00:01 CDT 2020 58,529,097 [29.9 TB]
     Wed Jul 22 02:00:01 CDT 2020 58,556,746 [29.9 TB]
     Wed Jul 22 03:00:01 CDT 2020 58,574,415 [29.9 TB]
     Wed Jul 22 04:00:01 CDT 2020 58,605,297 [30.0 TB]
     Wed Jul 22 05:00:01 CDT 2020 58,632,079 [30.0 TB]
     Wed Jul 22 06:00:01 CDT 2020 58,655,069 [30.0 TB]
     Wed Jul 22 07:00:01 CDT 2020 58,672,137 [30.0 TB]
     Wed Jul 22 08:00:01 CDT 2020 58,689,196 [30.0 TB]
     Wed Jul 22 09:00:01 CDT 2020 58,712,601 [30.0 TB]
     Wed Jul 22 10:00:01 CDT 2020 58,731,743 [30.0 TB]
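     A rough way to turn a log like the one above into hourly deltas, assuming one entry per line in exactly the format shown (the log file name is a placeholder):

       # field 7 is the data-unit count; one NVMe data unit is 512,000 bytes
       awk '{ gsub(/,/, "", $7);
              if (prev) printf "%s %s %s %s  +%d units (~%.1f GB)\n",
                               $1, $2, $3, $4, $7 - prev, ($7 - prev) * 512000 / 1e9;
              prev = $7 }' cache1.txt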
  10. That is exactly my intent. I would prefer those don't even show up when I hit "New Posts".
  11. Is it possible to hide forum sections?
  12. Updated for 6.9.0-beta25. Also updated the plugin to use libffi-3.3-x86_64-1.txz. I discovered that the package will not successfully build without libffi-3.3-x86_64-1.txz installed. I'm afraid this plugin may not survive much longer.
  13. Sorry...somehow I never saw this. I will test it out this week and replace it in the plugin if it works ok.
  14. @SCSI I have put quite a bit of time into this, and I just can't seem to make the message disappear using open-vm-tools 11.x.x. I reverted to 10.3.10, and the ioctl message only shows up four times upon initial startup or plugin install. I am going to see if the open-vm-tools maintainers will help with this, but I doubt they will. Someone else posted this error and they closed it, saying you need vsock installed. We don't need it, so there is no point in trying to install it. I am pretty sure it would need a custom kernel, which is way above my head and not something I really want to maintain. This is apparently caused by a change in the 5.x kernel. The current version of the plugin (2020.07.11) will install open_vm_tools-10.3.10-5.7.7-Unraid-x86_64-202007111402.tgz on unRAID 6.9-beta24. I did not compile a new one for -beta22.
  15. I did a bunch of testing, including removing the settings page altogether, and the messages still appear in the logs. I have tested it on both vSphere 6.7 and 7.0. I should have more time to play with it later in the week.
  16. Yes...it does. It’s really all I care about. Limetech already includes all the appropriate drivers.
  17. I tried re-compiling it with v11.0.5, but the "error" is still there.
  18. All I do is compile the open-vm-tools from GitHub. I honestly don't know much about it. However, there appears to be an issue with open-vm-tools and the 5.6+ kernel. https://github.com/vmware/open-vm-tools/issues/425 https://bugzilla.redhat.com/show_bug.cgi?id=1821892 That issue was closed on GitHub, but I'm not seeing a fix applied. The Red Hat link mentions modifying a file. I will look into that when I compile the next beta.
  19. Either put it in your go file (in the config folder on your flash drive), or add it to the User Scripts plugin.
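     For reference, a sketch of what the go-file route could look like (the script name and flash folder are hypothetical; only the emhttp line comes from the stock go file):

       #!/bin/bash
       # /boot/config/go runs once at boot
       # start the Management Utility (present in the stock go file)
       /usr/local/sbin/emhttp &
       # hypothetical: install an hourly script from the flash drive into cron
       cp /boot/config/scripts/log_nvme_writes.sh /etc/cron.hourly/
       chmod +x /etc/cron.hourly/log_nvme_writes.sh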
  20. I have another 2TB NVMe installed, so I can easily back up, wipe, and restore the cache pool.
  21. Looks like about 400GB was written yesterday. Nothing was written except normal Docker appdata stuff.

      Cache1: 22,685,135 [11.6 TB]
      Cache2: 22,687,899 [11.6 TB]
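      For anyone checking the math: smartctl's "Data Units Written" counter is in units of 512,000 bytes (1000 × 512-byte sectors), so the absolute counter converts to the bracketed figure, and a day-over-day delta of the counters gives the bytes written that day. A quick shell check:

        echo $(( 22685135 * 512000 )) bytes   # = 11,614,789,120,000, i.e. ~11.6 TB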
  22. Looks like that worked. I will check it again tomorrow. I would expect to see it over 12TB tomorrow.

      Cache1: 22,040,574 [11.2 TB]
      Cache2: 22,039,620 [11.2 TB]