Everything posted by StevenD
-
Updated for 6.9.0-beta25. Also updated the plugin to use libffi-3.3-x86_64-1.txz. I discovered that the package will not build successfully without libffi-3.3-x86_64-1.txz installed. I'm afraid this plugin may not survive much longer.
-
Sorry...somehow I never saw this. I will test it out this week and replace it in the plugin if it works ok.
-
@SCSI I have put quite a bit of time into this, and I just can't seem to make the message disappear using open-vm-tools 11.x.x. I reverted to 10.3.10, and the ioctl message only shows up four times upon initial startup or plugin install. I am going to see if the open-vm-tools maintainers will help with this, but I doubt they will. Someone else posted this error, and they closed it saying you need vsock installed. We don't need it, so there is no point in trying to install it. I am pretty sure it would need a custom kernel, which is way above my head and not something I really want to maintain. This is apparently caused by a change in the 5.x kernel. The current version of the plugin (2020.07.11) will install open_vm_tools-10.3.10-5.7.7-Unraid-x86_64-202007111402.tgz on unRAID 6.9-beta24. I did not compile a new one for -beta22.
-
I did a bunch of testing, including removing the settings page altogether, and the messages still appear in the logs. I have tested it on both vSphere 6.7 and 7.0. I should have more time to play with it later in the week.
-
Yes...it does. It’s really all I care about. Limetech already includes all the appropriate drivers.
-
I tried re-compiling it with v11.0.5, but the "error" is still there.
-
All I do is compile open-vm-tools from GitHub; I honestly don't know much about it. However, there appears to be an issue with open-vm-tools and the 5.6+ kernel: https://github.com/vmware/open-vm-tools/issues/425 https://bugzilla.redhat.com/show_bug.cgi?id=1821892 That issue was closed on GitHub, but I'm not seeing a fix applied. The Red Hat bug mentions modifying a file. I will look into that when I compile the next beta.
-
[6.8.3] docker image huge amount of unnecessary writes on cache
StevenD commented on S1dney's report in Stable Releases
Either put it in your go file (in the config folder on your flash drive), or add it to the User Scripts plugin. -
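For anyone unsure what "put it in your go file" means, here is a minimal sketch of appending a command to the Unraid go file so it runs at every boot. The helper and the fix-script path are placeholders I made up for illustration; substitute the actual command from this thread:

```shell
# Append a line to a file only if it is not already present (idempotent),
# so repeated runs or plugin reinstalls don't duplicate the entry.
append_once() {
    grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

# On Unraid, the go file lives on the flash drive at /boot/config/go and
# runs at boot. The script path below is a placeholder, not the real fix:
#   append_once /boot/config/go '/boot/config/docker-write-fix.sh'
```

The duplicate guard matters because the go file is executed verbatim; a command added twice runs twice.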
I have another 2TB NVMe installed, so I can easily back up, wipe, and restore the cache pool. -
Correct. Certainly better. -
Looks like about 400GB was written yesterday. Nothing was written, except normal docker appdata stuff.
22,685,135 [11.6 TB]
22,687,899 [11.6 TB] -
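As a sanity check on those counters: per the NVMe spec, one "Data Unit" is 1000 * 512 bytes, so the raw number converts to terabytes like this (using the first counter above):

```shell
# Convert an NVMe "Data Units Written" counter to decimal terabytes.
units=22685135                      # raw counter from the post above
bytes=$((units * 512000))           # 1 data unit = 1000 * 512 bytes
tb_tenths=$((bytes / 100000000000)) # tenths of a TB, to keep one decimal
echo "${units} units = $((tb_tenths / 10)).$((tb_tenths % 10)) TB"
# prints: 22685135 units = 11.6 TB
```

That matches the [11.6 TB] figure smartctl reports alongside the raw counter.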
Looks like that worked. I will check it again tomorrow; I would expect to see it over 12TB by then.
Cache 1: 22,040,574 [11.2 TB]
Cache 2: 22,039,620 [11.2 TB] -
I suppose I can run that command and see where it sits 24 hours from now. -
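For anyone following along, a sketch of pulling that counter out of smartctl's output. The device name is an assumption (NVMe drives usually show up as /dev/nvme0n1, /dev/nvme1n1; check yours first):

```shell
# Extract the raw "Data Units Written" counter from smartctl NVMe output.
# smartctl prints:  Data Units Written: 21,981,884 [11.2 TB]
# so we strip commas and take the fourth field.
data_units_written() {
    grep 'Data Units Written' | tr -d ',' | awk '{print $4}'
}

# On a live system (device name is an assumption; run as root):
#   smartctl -A /dev/nvme0n1 | data_units_written
```

Running it a day apart and subtracting the two numbers gives the day's writes in data units (each unit is 512,000 bytes).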
How would I see what is being written? Eleven days ago, I replaced my cache with 2 x 1TB NVMe in a BTRFS RAID1. Since then, more than 1TB per day is being written to the drives. That seems excessive, since I am only using it for Docker, appdata, and a couple of shares (which have only had ~400GB written in that time).

Cache 1:

smartctl 7.1 2019-12-30 r5022 [x86_64-linux-4.19.107-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number: Samsung SSD 970 PRO 1TB
Serial Number:
Firmware Version: 1B2QEXP7
PCI Vendor/Subsystem ID: 0x144d
IEEE OUI Identifier: 0x002538
Total NVM Capacity: 1,024,209,543,168 [1.02 TB]
Unallocated NVM Capacity: 0
Controller ID: 4
Number of Namespaces: 1
Namespace 1 Size/Capacity: 1,024,209,543,168 [1.02 TB]
Namespace 1 Utilization: 719,635,980,288 [719 GB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 002538 540150134f
Local Time is: Thu Jun 25 11:50:58 2020 CDT
Firmware Updates (0x16): 3 Slots, no Reset required
Optional Admin Commands (0x0037): Security Format Frmw_DL Self_Test Directvs
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Maximum Data Transfer Size: 512 Pages
Warning Comp. Temp. Threshold: 81 Celsius
Critical Comp. Temp. Threshold: 81 Celsius

Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 6.20W - - 0 0 0 0 0 0
1 + 4.30W - - 1 1 1 1 0 0
2 + 2.10W - - 2 2 2 2 0 0
3 - 0.0400W - - 3 3 3 3 210 1200
4 - 0.0050W - - 4 4 4 4 2000 8000

Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 47 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 0%
Data Units Read: 2,930,203 [1.50 TB]
Data Units Written: 21,981,884 [11.2 TB]
Host Read Commands: 25,097,587
Host Write Commands: 411,986,950
Controller Busy Time: 4,473
Power Cycles: 14
Power On Hours: 281
Unsafe Shutdowns: 6
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 47 Celsius
Temperature Sensor 2: 57 Celsius

Error Information (NVMe Log 0x01, max 64 entries)
No Errors Logged

Cache 2:

smartctl 7.1 2019-12-30 r5022 [x86_64-linux-4.19.107-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number: Samsung SSD 970 PRO 1TB
Serial Number:
Firmware Version: 1B2QEXP7
PCI Vendor/Subsystem ID: 0x144d
IEEE OUI Identifier: 0x002538
Total NVM Capacity: 1,024,209,543,168 [1.02 TB]
Unallocated NVM Capacity: 0
Controller ID: 4
Number of Namespaces: 1
Namespace 1 Size/Capacity: 1,024,209,543,168 [1.02 TB]
Namespace 1 Utilization: 719,635,988,480 [719 GB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 002538 510150a811
Local Time is: Thu Jun 25 11:53:44 2020 CDT
Firmware Updates (0x16): 3 Slots, no Reset required
Optional Admin Commands (0x0037): Security Format Frmw_DL Self_Test Directvs
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Maximum Data Transfer Size: 512 Pages
Warning Comp. Temp. Threshold: 81 Celsius
Critical Comp. Temp. Threshold: 81 Celsius

Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 6.20W - - 0 0 0 0 0 0
1 + 4.30W - - 1 1 1 1 0 0
2 + 2.10W - - 2 2 2 2 0 0
3 - 0.0400W - - 3 3 3 3 210 1200
4 - 0.0050W - - 4 4 4 4 2000 8000

Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 45 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 0%
Data Units Read: 4,319,316 [2.21 TB]
Data Units Written: 21,981,076 [11.2 TB]
Host Read Commands: 38,573,640
Host Write Commands: 412,195,982
Controller Busy Time: 4,469
Power Cycles: 27
Power On Hours: 278
Unsafe Shutdowns: 13
Media and Data Integrity Errors: 0
Error Information Log Entries: 5
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 45 Celsius
Temperature Sensor 2: 47 Celsius

Error Information (NVMe Log 0x01, max 64 entries)
No Errors Logged -
I have passed through my USB drive and boot directly from it. You do need to enable EFI boot to do that.
-
If your motherboard doesn’t support bifurcation, this one works great. https://smile.amazon.com/gp/product/B083GLR3WL/
-
9400-16i works great.
-
Time to go 16TB drives. Hardware question
StevenD replied to k0d3g3ar's topic in Storage Devices and Controllers
I am using the Toshiba 16TB drives with a 9400-16i and they work perfectly. Just be prepared for VERY long parity checks/rebuilds. -
Unraid on ESXi 7.0 - Confirmed Working
StevenD replied to BruceRobertson's topic in Virtualizing Unraid
And the best part....you can boot directly off of the USB now! You no longer need to use a VMDK (my preferred method) or PLOP. -
One million "Thank you"s @BruceRobertson. That worked perfectly! My openVMTools_compiled plugin works just fine as well under vSphere 7.0 and unRAID 6.9.0-beta1. -
Thanks! I was actually playing around with this today. I bought a new Ryzen motherboard and processor for my gaming rig, and I figured I would play around with vSphere 7.0 and unRAID before installing it. I have been unable to pass through the USB controllers to my unRAID VM. I will try out your esxcli command. -
Nope, none at all. It just needs the plugin and the three packages that the plugin installs. One of these days I may try to figure out how to do an options page, like for NTP, but that's somewhat beyond my knowledge.
-
I bought an RTX4000 specifically because it fit in a single slot.