Darksurf

Members
  • Posts: 164
  • Joined
  • Last visited

Everything posted by Darksurf

  1. Interesting, I'm not sure, other than that the docker also has a webUI? The scripts have similar lines (they're effectively doing the same thing), but that docker breaks it up into a few scripts rather than one. If you test it, let us know how it works. My only issue with the rix docker is that if I try to use a full key, it doesn't work; it only uses the beta key even though I've entered the purchased key. Not really an issue, as re-installing/updating the docker fixes the beta-key problem.
  2. Yes. If you check my post above you'll see ls -al of disks 5-10 looks like this: I ran unBALANCE on these drives 3 times to be sure, checked there were no files left, then did a full rm -rf /mnt/disk#/* on each drive I planned to wipe, and then mkdir -p /mnt/disk#/clear-me on every disk I planned to wipe. I'm 100% positive the drives were empty besides the clear-me folder. I doubt it's a problem, but these drives are all formatted BTRFS, not XFS. It could also be some incompatibility with 6.10.3, not sure. I ended up just removing them from the machine, creating a new config for the array, and rebuilding the parity drives.
  3. What is up with the zero drive script? It immediately gives up.

     *** Clear an unRAID array data drive *** v1.4
     Checking all array data drives (may need to spin them up) ...
     Checked 10 drives, did not find an empty drive ready and marked for clearing!
     To use this script, the drive must be completely empty first, no files or folders left on it.
     Then a single folder should be created on it with the name 'clear-me', exactly 8 characters, 7 lowercase and 1 hyphen.
     This script is only for clearing unRAID data drives, in preparation for removing them from the array.
     It does not add a Preclear signature.
     Script Finished Jul 10, 2022 19:21.37
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/ZeroDisks_ShrinkArray/log.txt

     ^C
     root@Oceans:~# ls -al /mnt/disk*
     /mnt/disk1:
     total 16
     drwxrwxrwx   1 nobody users   84 Jul 10 04:30 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
     drwxrwxrwx+  1 nobody users  190 Jul 10 18:28 Docker/
     drwxrwxrwx   1 nobody users   14 Jul  4 22:49 Downloads/
     drwxrwxrwx   1 nobody users   60 Jul  6 03:26 ZDRIVE/
     drwxrwxrwx   1 nobody users    0 Jul 20  2021 appdata/
     drwxrwxrwx   1 nobody users   16 Apr 16  2021 home/
     drwxrwxrwx   1 nobody users 1884 Jul  9 04:40 system/
     drwxrwxrwx   1 nobody users  138 Dec 31  2017 tftp/

     /mnt/disk10:
     total 16
     drwxrwxrwx   1 nobody users   16 Jul 10 18:33 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
     drwxrwxrwx   1 nobody users    0 Jul 10 18:33 clear-me/

     /mnt/disk2:
     total 16
     drwxrwxrwx   1 nobody users   12 Jul 10 04:30 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
     drwxrwxrwx+  1 nobody users  260 Jul  9 23:38 Docker/

     /mnt/disk3:
     total 16
     drwxrwxrwx   1 nobody users   84 Jul 10 04:30 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
     drwxrwxrwx+  1 nobody users  188 Jul  9 23:38 Docker/
     drwxrwxrwx   1 nobody users    0 Jul  6 22:31 Downloads/
     drwxr-xr-x   1 nobody users    0 May  9 09:08 ISOs/
     drwxrwxrwx   1 nobody users   32 Jul  6 22:28 ZDRIVE/
     drwxrwxrwx   1 nobody users    0 Jul 20  2021 appdata/
     drwxrwxrwx   1 nobody users   16 Jul  6 21:50 home/
     drwxrwxrwx   1 nobody users  394 Jul  6 22:31 system/

     /mnt/disk4:
     total 16
     drwxrwxrwx   1 nobody users   66 Jul 10 04:30 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
     drwxrwxrwx+  1 nobody users  170 Jul  6 12:48 Docker/
     drwxrwxrwx   1 nobody users    8 Jun  5  2021 ZDRIVE/
     drwxrwxrwx   1 nobody users    0 Jul 20  2021 appdata/
     drwxrwxrwx   1 nobody users   38 Jul  6 12:47 home/
     drwxrwxrwx   1 nobody users   96 Jul  6 12:48 system/
     drwxrwxrwx   1 nobody users    0 Dec 31  2017 tftp/

     /mnt/disk5:
     total 16
     drwxrwxrwx   1 nobody users   16 Jul 10 18:35 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
     drwxrwxrwx   1 nobody users    0 Jul 10 18:35 clear-me/

     /mnt/disk6:
     total 16
     drwxrwxrwx   1 nobody users   16 Jul 10 18:35 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
     drwxrwxrwx   1 nobody users    0 Jul 10 18:35 clear-me/

     /mnt/disk7:
     total 16
     drwxrwxrwx   1 nobody users   16 Jul 10 18:34 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
     drwxrwxrwx   1 nobody users    0 Jul 10 18:34 clear-me/

     /mnt/disk8:
     total 16
     drwxrwxrwx   1 nobody users   16 Jul 10 18:34 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
     drwxrwxrwx   1 nobody users    0 Jul 10 18:34 clear-me/

     /mnt/disk9:
     total 16
     drwxrwxrwx   1 nobody users   16 Jul 10 18:33 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
     drwxrwxrwx   1 nobody users    0 Jul 10 18:33 clear-me/

     /mnt/disks:
     total 0
     drwxrwxrwt   2 nobody users   40 Jul 10 00:11 ./
     drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
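The prep the clearing script expects can be sketched as a quick shell check. This runs against a scratch directory as a safe stand-in for /mnt/diskN (never point rm -rf at a live disk without double-checking):

```shell
# Scratch directory standing in for /mnt/diskN
disk="$(mktemp -d)"
touch "$disk/leftover.txt"

# 1. the drive must be completely empty, no files or folders left on it
rm -rf "${disk:?}"/*

# 2. create the single marker folder: 'clear-me', exactly 8 characters
mkdir -p "$disk/clear-me"

# verify only the marker remains before letting the script touch the drive
ls -A "$disk"
```

If `ls -A` prints anything besides `clear-me` (including dotfiles, which `ls -A` shows but a plain `ls` hides), the script will refuse the drive.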
  4. Lemme know the name of that docker with a link. This could be my solution!
  5. I'm running a Ryzen Threadripper 3970X with 128G of unbuffered ECC memory in an ASRock Creator TRX40 board on the latest beta BIOS, with no stability issues. This could be various issues:
     1. Are you updated to the latest BIOS version?
     2. Do you have fTPM disabled or enabled? If enabled, you'll want the latest BIOS update that fixes an fTPM stuttering issue: https://www.amd.com/en/support/kb/faq/pa-410
     3. What speed are you running your unbuffered ECC memory at? Don't expect greater than 2933 MT/s for ECC memory on Ryzen 3XXX or lower. Some only work at 2666 MT/s.
     4. If your memory speed isn't the problem, check your memory timings. There can be multiple JEDEC profiles for timings, or none, requiring you to enter them manually to spec.
     5. In the BIOS, have you disabled all the power-saving nonsense such as suspend-to-RAM, aggressive ASPM, ALPM, etc.? (I've found the aggressive power-management implementation in my old Supermicro server board was a problem for my HDDs.)
     6. If you've done all the above, is your motherboard auto-overclocking the CPU or RAM? Disable auto-overclocking.

     As for specifics, I need to know the exact hardware in the build, including the memory being used, what clock speeds and timings it's rated for, and what you have configured. Your logs here show normal G.Skill memory (non-ECC), and it's running at the wrong speed and voltage (F4-3600C16-8GVKC running at 2133 MT/s and 1.2 V). I also hope you're using UDIMMs and not RDIMM ECC, as RDIMMs shouldn't work at all.

     Getting SMBIOS data from sysfs.
     SMBIOS 3.3.0 present.
     Handle 0x0018, DMI type 17, 92 bytes
     Memory Device
             Array Handle: 0x0010
             Error Information Handle: 0x0017
             Total Width: Unknown
             Data Width: Unknown
             Size: No Module Installed
             Form Factor: Unknown
             Set: None
             Locator: DIMM 0
             Bank Locator: P0 CHANNEL A
             Type: Unknown
             Type Detail: Unknown
             Speed: Unknown
             Manufacturer: Unknown
             Serial Number: Unknown
             Asset Tag: Not Specified
             Part Number: Unknown
             Rank: Unknown
             Configured Memory Speed: Unknown
             Minimum Voltage: Unknown
             Maximum Voltage: Unknown
             Configured Voltage: Unknown
             Memory Technology: Unknown
             Memory Operating Mode Capability: Unknown
             Firmware Version: Unknown
             Module Manufacturer ID: Unknown
             Module Product ID: Unknown
             Memory Subsystem Controller Manufacturer ID: Unknown
             Memory Subsystem Controller Product ID: Unknown
             Non-Volatile Size: None
             Volatile Size: None
             Cache Size: None
             Logical Size: None

     Handle 0x001A, DMI type 17, 92 bytes
     Memory Device
             Array Handle: 0x0010
             Error Information Handle: 0x0019
             Total Width: 64 bits
             Data Width: 64 bits
             Size: 8 GB
             Form Factor: DIMM
             Set: None
             Locator: DIMM 1
             Bank Locator: P0 CHANNEL A
             Type: DDR4
             Type Detail: Synchronous Unbuffered (Unregistered)
             Speed: 2133 MT/s
             Manufacturer: Unknown
             Serial Number: 00000000
             Asset Tag: Not Specified
             Part Number: F4-3600C16-8GVKC
             Rank: 1
             Configured Memory Speed: 2133 MT/s
             Minimum Voltage: 1.2 V
             Maximum Voltage: 1.2 V
             Configured Voltage: 1.2 V
             Memory Technology: DRAM
             Memory Operating Mode Capability: Volatile memory
             Firmware Version: Unknown
             Module Manufacturer ID: Bank 5, Hex 0xCD
             Module Product ID: Unknown
             Memory Subsystem Controller Manufacturer ID: Unknown
             Memory Subsystem Controller Product ID: Unknown
             Non-Volatile Size: None
             Volatile Size: 8 GB
             Cache Size: None
             Logical Size: None

     Handle 0x001D, DMI type 17, 92 bytes
     Memory Device
             Array Handle: 0x0010
             Error Information Handle: 0x001C
             Total Width: Unknown
             Data Width: Unknown
             Size: No Module Installed
             Form Factor: Unknown
             Set: None
             Locator: DIMM 0
             Bank Locator: P0 CHANNEL B
             Type: Unknown
             Type Detail: Unknown
             Speed: Unknown
             Manufacturer: Unknown
             Serial Number: Unknown
             Asset Tag: Not Specified
             Part Number: Unknown
             Rank: Unknown
             Configured Memory Speed: Unknown
             Minimum Voltage: Unknown
             Maximum Voltage: Unknown
             Configured Voltage: Unknown
             Memory Technology: Unknown
             Memory Operating Mode Capability: Unknown
             Firmware Version: Unknown
             Module Manufacturer ID: Unknown
             Module Product ID: Unknown
             Memory Subsystem Controller Manufacturer ID: Unknown
             Memory Subsystem Controller Product ID: Unknown
             Non-Volatile Size: None
             Volatile Size: None
             Cache Size: None
             Logical Size: None

     Handle 0x001F, DMI type 17, 92 bytes
     Memory Device
             Array Handle: 0x0010
             Error Information Handle: 0x001E
             Total Width: 64 bits
             Data Width: 64 bits
             Size: 8 GB
             Form Factor: DIMM
             Set: None
             Locator: DIMM 1
             Bank Locator: P0 CHANNEL B
             Type: DDR4
             Type Detail: Synchronous Unbuffered (Unregistered)
             Speed: 2133 MT/s
             Manufacturer: Unknown
             Serial Number: 00000000
             Asset Tag: Not Specified
             Part Number: F4-3600C16-8GVKC
             Rank: 1
             Configured Memory Speed: 2133 MT/s
             Minimum Voltage: 1.2 V
             Maximum Voltage: 1.2 V
             Configured Voltage: 1.2 V
             Memory Technology: DRAM
             Memory Operating Mode Capability: Volatile memory
             Firmware Version: Unknown
             Module Manufacturer ID: Bank 5, Hex 0xCD
             Module Product ID: Unknown
             Memory Subsystem Controller Manufacturer ID: Unknown
             Memory Subsystem Controller Product ID: Unknown
             Non-Volatile Size: None
             Volatile Size: 8 GB
             Cache Size: None
             Logical Size: None
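The "configured vs rated speed" check can be scripted. Since `dmidecode -t 17` needs root and real hardware, this sketch parses a captured line like the one in the log above; the rated speed is inferred from the DIMM part number (F4-3600C16-8GVKC is a DDR4-3600 part):

```shell
# One line of `dmidecode -t 17` output, captured from the log above
line='Configured Memory Speed: 2133 MT/s'
configured=$(printf '%s\n' "$line" | grep -o '[0-9]\+')
rated=3600   # from the part number F4-3600C16-8GVKC (DDR4-3600)

if [ "$configured" -lt "$rated" ]; then
  echo "RAM at ${configured} MT/s, below its rated ${rated} MT/s -- enable the XMP/DOCP profile or enter timings manually"
fi
```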
  6. That's a good point! I never really considered RAID0 as an option, but if you think about it, I have an array with dual parity. The likelihood that I'll have an issue there AND with a backup RAID0 pool isn't high, and only one week's worth of revert/recovery if the RAID0 fails isn't that big of a deal. This isn't mission-critical business data. It's just my personal tinker toys: a Plex server, dockers for wikis and other webservices, etc., and a couple of VMs that are actually self-configured/deployable via yaml scripts using yip to act as a Kubernetes cluster for Linux package building. My cache pool is a RAID0 and the mover runs daily with zero issues. I can see this being a valid option as well for personal use.
  7. Wow, I've not looked at Backblaze pricing, but $70/year for unlimited personal seems pretty amazing, and $5/TB/month for Backblaze B2 is pretty good too when looking from a business perspective! That's definitely one option, considering I'd have to buy multiple (5-6) drives at $250 each minimum to have a local backup, and I'd most likely go with the personal backup option if that were allowed. Thanks for the input!
  8. So it's been a dream of mine to get an LTO tape drive one day and run backups. In reality, my wife will never let me spend that kind of money on a drive. HDDs, on the other hand, are far cheaper for the same amount of storage (~30T). So I've been upgrading my cute original WD Red 3T drives to Seagate EXOS 3x14T and 3x8T drives. But newer, larger drives don't exactly have the same level of reliability, in my experience in datacenters, so I'd like to make use of the extra drive bays I'm freeing up. I have 12 bays. I plan to use 6 for my array with dual parity. I'd like to use the other 4-6 bays for a weekly backup. Evidently you cannot create 2 arrays in Unraid, so my second-array-for-weekly-backup idea isn't going to work. What do you recommend here? Should I just create a "pool" for backups with no parity? Should I risk a BTRFS RAID6 pool as a backup solution, or just go the more expensive route of a BTRFS RAID10 pool? Something else? The server is on a 1500 W UPS, so the risk of unclean shutdown is low. Writes would only be weekly and incremental; there's no need to completely rewrite the entire backup.
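For comparing the pool options, the rough usable-capacity math can be sketched in shell. This assumes six 14 TB drives in the freed bays; the /dev/sd[b-g] device names and mkfs invocations are illustrative, not a recommendation:

```shell
# Hypothetical pool creation commands for the options discussed:
#   mkfs.btrfs -d raid6  -m raid1c3 /dev/sd[b-g]   # roughly (n-2) drives usable
#   mkfs.btrfs -d raid10 -m raid10  /dev/sd[b-g]   # roughly n/2 drives usable
drives=6
size_tb=14
echo "raid6 usable:  $(( (drives - 2) * size_tb )) TB"
echo "raid10 usable: $(( drives / 2 * size_tb )) TB"
```

So RAID6 trades rebuild-time risk (btrfs parity RAID has a checkered history) for more usable space, while RAID10 gives up capacity for simpler, better-tested redundancy.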
  9. Unfortunately, no. Dependency hell threw a wrench in it, and I've not tried since. Dealing with such deps on a static system is risky. This kind of power is really needed on the hypervisor side. You would also have to script/add/install all the deps and the tool itself on every install due to the static nature of Unraid (which is fair). That being said, if all the deps and the tool were installed from the beginning, this would be far less of a problem.
  10. @limetech I've found the exact issue! I've been experiencing the exact same issues on my desktop. While researching, I found someone else experiencing this in other distros too. The problem is in the persistent-storage udev rules covering how optical drives are handled. I've tweaked the udev rules as recommended here: https://forum.makemkv.com/forum/viewtopic.php?t=25357 My desktop is back to working as it should and no longer freezes/hangs on the external Blu-ray drives. This should fix Unraid too.

      TLDR: the sr* lines in /lib/udev/60-persistent-storage.rules

      # probe filesystem metadata of optical drives which have a media inserted
      KERNEL=="sr*", ENV{DISK_EJECT_REQUEST}!="?*", ENV{ID_CDROM_MEDIA_TRACK_COUNT_DATA}=="?*", ENV{ID_CDROM_MEDIA_SESSION_LAST_OFFSET}=="?*", \
        IMPORT{builtin}="blkid --offset=$env{ID_CDROM_MEDIA_SESSION_LAST_OFFSET}"
      # single-session CDs do not have ID_CDROM_MEDIA_SESSION_LAST_OFFSET
      KERNEL=="sr*", ENV{DISK_EJECT_REQUEST}!="?*", ENV{ID_CDROM_MEDIA_TRACK_COUNT_DATA}=="?*", ENV{ID_CDROM_MEDIA_SESSION_LAST_OFFSET}=="", \
        IMPORT{builtin}="blkid --noraid"

      become

      # probe filesystem metadata of optical drives which have a media inserted
      KERNEL=="sr*", ENV{DISK_EJECT_REQUEST}!="?*", ENV{ID_CDROM_MEDIA_TRACK_COUNT_DATA}=="?*", ENV{ID_CDROM_MEDIA_SESSION_LAST_OFFSET}=="?*", \
        GOTO="persistent_storage_end"
      # single-session CDs do not have ID_CDROM_MEDIA_SESSION_LAST_OFFSET
      KERNEL=="sr*", ENV{DISK_EJECT_REQUEST}!="?*", ENV{ID_CDROM_MEDIA_TRACK_COUNT_DATA}=="?*", ENV{ID_CDROM_MEDIA_SESSION_LAST_OFFSET}=="", \
        GOTO="persistent_storage_end"

      My desktop is now able to view/play/rip Blu-rays and DVDs again. Evidently the IMPORT lines can cause hanging when certain video discs are placed in the drive. This explains why udevd hangs and begins killing /dev/sr0 when certain discs are inserted.
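The rule change can be applied as a one-line sed. This sketch runs against a scratch copy of the two sr* rules so it is safe to try anywhere; note that Unraid rebuilds its rootfs from flash at every boot, so an edit to /lib/udev/60-persistent-storage.rules would have to be re-applied on each boot (e.g. from a startup script — that mechanism is an assumption, not an official feature):

```shell
# Scratch copy of the two sr* rules (content taken from the post above)
cat > /tmp/sr-rules <<'EOF'
KERNEL=="sr*", ENV{DISK_EJECT_REQUEST}!="?*", ENV{ID_CDROM_MEDIA_TRACK_COUNT_DATA}=="?*", ENV{ID_CDROM_MEDIA_SESSION_LAST_OFFSET}=="?*", \
  IMPORT{builtin}="blkid --offset=$env{ID_CDROM_MEDIA_SESSION_LAST_OFFSET}"
KERNEL=="sr*", ENV{DISK_EJECT_REQUEST}!="?*", ENV{ID_CDROM_MEDIA_TRACK_COUNT_DATA}=="?*", ENV{ID_CDROM_MEDIA_SESSION_LAST_OFFSET}=="", \
  IMPORT{builtin}="blkid --noraid"
EOF

# Swap the filesystem probe for a jump past the persistent-storage rules
sed -i 's|IMPORT{builtin}="blkid[^"]*"|GOTO="persistent_storage_end"|' /tmp/sr-rules

grep -c 'GOTO="persistent_storage_end"' /tmp/sr-rules
```

On a live system you would point sed at the real rules file and then run `udevadm control --reload-rules` to pick up the change without rebooting.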
  11. OK, I've troubleshot this down to being mostly a problem with MakeMKV. Evidently it freaks out on certain discs it cannot read, which can cause MakeMKV to hang and/or become an unkillable zombie process. I thought being in an unprivileged Docker would at least allow me to kill off that docker to remove the process, but it appears that doesn't work. I do not get udev killing off my /dev/sr0 drive on my Manjaro desktop, but MakeMKV failing on specific unreadable discs still causes it to hang and turn into a zombie process just the same, unkillable by SIGKILL. Figured that piece of info could be valuable to devs. UPDATE: READ MY POST BELOW. A SOLUTION HAS BEEN FOUND. IT'S A UDEV RULE ISSUE!
  12. OK, I think I've found what's been causing me grief since my upgrade from 6.9.2. I use a docker that uses /dev/sr0 to rip Blu-rays and DVDs for me. Sometimes I put in a disc and that docker hangs and locks up all other dockers; I get zombie processes I cannot SIGKILL, and I'm forced to reboot to clean it up and get the docker working again. What I've noticed is this:

      [  324.178091] usb 4-1: reset SuperSpeed USB device number 2 using xhci_hcd
      [  324.190405] sr 0:0:0:0: [sr0] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x03 driverbyte=DRIVER_OK cmd_age=30s
      [  324.190410] sr 0:0:0:0: [sr0] tag#0 CDB: opcode=0x28 28 00 00 00 04 00 00 00 02 00
      [  324.190412] blk_update_request: I/O error, dev sr0, sector 4096 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
      [  354.383938] usb 4-1: reset SuperSpeed USB device number 2 using xhci_hcd
      [  473.812944] udevd[1688]: worker [27853] /devices/pci0000:00/0000:00:08.1/0000:03:00.3/usb4/4-1/4-1:1.0/host0/target0:0:0/0:0:0:0/block/sr0 timeout; kill it
      [  473.812954] udevd[1688]: seq 11256 '/devices/pci0000:00/0000:00:08.1/0000:03:00.3/usb4/4-1/4-1:1.0/host0/target0:0:0/0:0:0:0/block/sr0' killed

      I've never seen udev do this before in 6.9.2. Is this due to my new Blu-ray drive, or is this due to 6.10.0-rc4? This is entirely new behavior to me. To generate diagnostics, I'm required to power off the Blu-ray drive (/dev/sr0) to continue, otherwise it hangs there. Anyone got any ideas? oceans-diagnostics-20220408-2342.zip
  13. I've been having issues with dockers locking up lately, specifically a docker that I pass /dev/sr0 through to for video ripping. While trying to pull diagnostics, it just hangs when it reaches /dev/sr0:

      Downloading... /boot/logs/oceans-diagnostics-20220404-1902.zip
      smartctl -x '/dev/sr0' 2>/dev/null|todos >'/oceans-diagnostics-20220404-1902/smart/HL-DT-ST_BD-RE_WH16NS60_210524880677-0-0-20220404-1902 (sr0).txt'

      After it hung here for quite some time, I was forced to turn the power off to /dev/sr0 so the diag collection would complete. After turning off the USB Blu-ray drive, the docker was finally able to stop. So something about passing a USB3 Blu-ray drive to a docker while the drive throws errors was causing all of the dockers to lock up and hanging the diag log generation process.
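A hedged workaround sketch for the hang (not part of Unraid's own diagnostics script): bound any query against the optical drive with a timeout so a wedged /dev/sr0 cannot stall log collection indefinitely.

```shell
# Wrap the optical-drive SMART query in a 30-second timeout; fall back to a
# placeholder message instead of hanging if the drive (or smartctl) is wedged
# or unavailable.
out=$(timeout 30 smartctl -x /dev/sr0 2>/dev/null) || out="sr0 query timed out or failed, skipping"
echo "$out"
```

The same `timeout` wrapper pattern works for any per-device command a collection script runs in a loop.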
  14. I’ve personally noticed that touches do not perfectly align with the webpage. Tapping the notification counts on the far right results in the touch event registering a little to the left of the tap, missing the button you’re trying to touch. This is at least what’s happening on my iPhone 13 Pro.
  15. Thanks for the RC4 update! I’ve noticed lower RAM usage than on RC3. I went from 6.9.2 to 6.10-RC3 and immediately noticed issues. Sadly, I did this as I was heading out the door for a week-long trip, so I didn’t have time to get diagnostics and troubleshoot. My machine runs a TR 3970X w/ 64G of ECC RAM; it consistently floated around 80% RAM usage and was stable. After the RC3 update it immediately started shutting off one of my VMs due to being OOM. I run 2 VMs, each with 20G of RAM and 10 cores/20 threads, and the rest of the server ran on 12 cores/24 threads with 24G of RAM. I’m going to retest this memory problem with RC4 and get diags if it occurs again, but so far so good. Fresh boot, all VMs are running, which is a good sign, and the RAM usage is lower this time. Wondering if the Nchan issue is related.
  16. I can't get the OnlyOffice docker to work; it's just a permanently loading bar. I only wanted to try this locally via local IP, but it's stuck.
  17. I have Linux VMs. They are actually build nodes used to build packages for a Linux distribution, so my VMs accept jobs to compile and package inside containers, upload said packages, then delete everything and start another job. I'm not sure your suggestion works in this scenario, does it?
  18. You might be able to submit a pull request to update the main project if you have thorough testing and proof of stability.
  19. It would be nice to have virt-sparsify to reduce VM disk size when the disks contain zeros/unused space. https://libguestfs.org/virt-sparsify.1.html
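For context, a typical virt-sparsify invocation looks like the commented line below (it requires libguestfs, and the vdisk path is a made-up example). The runnable part just demonstrates what sparseness means on disk using a plain file:

```shell
# Typical usage (requires libguestfs; path is hypothetical):
#   virt-sparsify --in-place /mnt/user/domains/myvm/vdisk1.img

# Demonstration of a sparse file: a 1 GiB apparent size with (almost) no
# blocks actually allocated, which is the state virt-sparsify restores for
# zeroed regions of a VM image.
truncate -s 1G /tmp/sparse-demo.img
du -B1 --apparent-size /tmp/sparse-demo.img | cut -f1   # apparent size in bytes
du -B1 /tmp/sparse-demo.img | cut -f1                   # far smaller: allocated bytes
```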
  20. Is anyone else having issues with memory ballooning not working in VMs? I check my linux VMs and they have virtio_ballooning loaded, but their memory won't increase past initial size. I'm using an ASROCK Creator TRX40 w/ Ryzen Threadripper 3970X 64G DDR4. I'm using the rule initial memory is 1core=1G and Max is 1core=2G. I'm doing this on 3 VMs 8core, 8core, and 4core. None of which see their memory balloon while compiling software and they end up crashing with OOM errors. oceans-diagnostics-20210528-1427.zip
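The sizing rule from the post (initial = 1 GiB per core, max = 2 GiB per core) and the virsh commands useful for poking at the balloon can be sketched as follows; the domain name "buildnode1" is hypothetical:

```shell
# Sizing rule: initial = 1 GiB/core, max = 2 GiB/core (values in MiB)
cores=8
initial_mib=$((cores * 1024))
max_mib=$((cores * 2048))
echo "initial=${initial_mib}MiB max=${max_mib}MiB"

# Manually exercising the balloon on a running domain (hypothetical name):
#   virsh setmem buildnode1 ${max_mib}M --live   # ask the balloon to deflate (grant more RAM)
#   virsh dommemstat buildnode1                  # compare 'actual' vs 'available'
```

One thing worth noting: libvirt's balloon does not inflate guests automatically on memory pressure; something (a management daemon or a manual `virsh setmem`) has to drive it, which may explain VMs never growing past their initial allocation.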
  21. That's awesome! It would be nice if we could get a lifespan meter somewhere in the open (it seems my method may be inaccurate and yours would be better). I want to make sure my server uptime doesn't take a bad turn when I need to order an SSD and it takes a week to get here. I'd like some pre-emptive warning/monitoring so I can plan accordingly rather than have spares live on a shelf for years. Thanks for the correction! I'm learning something new every day.
  22. I'm curious whether it would be possible to store a max TBW for SSDs in the warranty information in the drive's Identity info, then have a running comparison against what smartctl shows for NVMe/SSDs, showing how close you are to reaching that maximum so someone would know to prepare for a replacement. You'll see after doing a smartctl -a /dev/nvme0n1 that I have a "Data Units Written" of 9.67 TB. This unit has a max TBW of 1800. Now, this isn't my cache drive, this is my desktop. But if you're using an SSD as a cache drive, I'm sure you can see how the SSD would quickly deteriorate and fail. My cache SSD on my server is currently at 169 TBW with a maximum of 530 TBW before failure. Having this SSD lifespan viewable from the dashboard would be very helpful. The SSD in my server is only 1 year old, but it's used heavily for an open source project.

      jcfrosty@Zero ~ $ sudo smartctl -a /dev/nvme0n1
      Password:
      smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.11.0-sabayon] (local build)
      Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

      === START OF INFORMATION SECTION ===
      Model Number:                       Sabrent Rocket 4.0 1TB
      Serial Number:                      03F10797054463199045
      Firmware Version:                   EGFM11.1
      PCI Vendor/Subsystem ID:            0x1987
      IEEE OUI Identifier:                0x6479a7
      Total NVM Capacity:                 1,000,204,886,016 [1.00 TB]
      Unallocated NVM Capacity:           0
      Controller ID:                      1
      Number of Namespaces:               1
      Namespace 1 Size/Capacity:          1,000,204,886,016 [1.00 TB]
      Namespace 1 Formatted LBA Size:     512
      Namespace 1 IEEE EUI-64:            6479a7 2220653435
      Local Time is:                      Sat Apr 17 11:32:39 2021 CDT
      Firmware Updates (0x12):            1 Slot, no Reset required
      Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
      Optional NVM Commands (0x005d):     Comp DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
      Maximum Data Transfer Size:         512 Pages
      Warning  Comp. Temp. Threshold:     70 Celsius
      Critical Comp. Temp. Threshold:     90 Celsius

      Supported Power States
      St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
       0 +    10.73W       -        -    0  0  0  0        0       0
       1 +     7.69W       -        -    1  1  1  1        0       0
       2 +     6.18W       -        -    2  2  2  2        0       0
       3 -   0.0490W       -        -    3  3  3  3     2000    2000
       4 -   0.0018W       -        -    4  4  4  4    25000   25000

      Supported LBA Sizes (NSID 0x1)
      Id Fmt  Data  Metadt  Rel_Perf
       0 +     512       0         2
       1 -    4096       0         1

      === START OF SMART DATA SECTION ===
      SMART overall-health self-assessment test result: PASSED

      SMART/Health Information (NVMe Log 0x02)
      Critical Warning:                   0x00
      Temperature:                        45 Celsius
      Available Spare:                    100%
      Available Spare Threshold:          5%
      Percentage Used:                    1%
      Data Units Read:                    7,506,169 [3.84 TB]
      Data Units Written:                 18,893,007 [9.67 TB]
      Host Read Commands:                 56,347,067
      Host Write Commands:                289,751,028
      Controller Busy Time:               583
      Power Cycles:                       118
      Power On Hours:                     14,438
      Unsafe Shutdowns:                   55
      Media and Data Integrity Errors:    0
      Error Information Log Entries:      271
      Warning  Comp. Temperature Time:    0
      Critical Comp. Temperature Time:    0

      Error Information (NVMe Log 0x01, max 63 entries)
      No Errors Logged
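The proposed lifespan comparison is simple arithmetic. A hedged sketch, using the figures quoted in the post (18,893,007 data units written, a 1800 TBW rating from the vendor spec sheet) and the NVMe convention that one "Data Unit" is 512,000 bytes:

```shell
units_written=18893007   # "Data Units Written" from the smartctl output above
rated_tbw=1800           # vendor-rated terabytes written (user-supplied)

bytes=$((units_written * 512000))            # NVMe: 1 data unit = 512,000 bytes
tb_written=$((bytes / 1000000000000))        # whole terabytes written
pct_tenths=$((bytes / rated_tbw / 1000000000))   # percent used, in tenths

echo "~${tb_written} TB written of ${rated_tbw} TBW rated (~$((pct_tenths / 10)).$((pct_tenths % 10))% used)"
```

A dashboard widget would only need to run this against `smartctl -a` output for each SSD and warn once the percentage crosses a threshold.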
  23. I really could use this. I use my personal server for an open-source Linux project, so giving my team members access would be really handy. I'd like to see:
      1. Multiple users enabled for the WebUI (a simple checkbox within the user profile would be nice)
      2. Different levels of access (example: restart VMs and VM access, but not creation/deletion or root host shell access)
      3. Logging of user logins and change actions (VM/docker reboot, deletion, creation, etc.)
      Just some SMB features could be handy.
  24. @rix Something has broken the docker here very recently: makemkvcon is missing from the container, which breaks the docker entirely.