Darksurf


Posts posted by Darksurf

  1. On 8/9/2023 at 8:08 AM, JonathanM said:

    Either search in the binhex-plexpass support thread to see if the issue is addressed there, or install the official container instead.

    Having those variables by themselves won't enable passthrough for Nvidia cards. You have to enable "Advanced" view in dockerman, go to "Extra Parameters", and add "--runtime=nvidia"; otherwise you'll get no HW transcoding even though you have all the Nvidia variables set for the docker.
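    As a rough sketch, the docker run command dockerman ends up building looks something like the following; the image name and variable values are only examples, the key piece is --runtime=nvidia:

    # Illustrative only -- image name and env values are placeholders
    docker run -d --name=plex \
      --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      lscr.io/linuxserver/plex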

  2. For the life of me, I cannot get the onlyoffice-documentserver docker working. I don't understand how we have dockers that should make this a very turnkey process, and yet it becomes a very NOT turnkey process. I've followed the instructions, but there just isn't a comprehensive guide to getting this nightmare working. I've tried getting both OnlyOffice and Collabora working and couldn't get either one going, and neither seems to have good documentation explaining how to get it up and running as a docker.

    If anyone can break this down for me, I'd appreciate it, because I've fought this for hours and am giving up until someone can explain it to me. I've generated the keys/certs, etc., but cannot connect over HTTPS, and Nextcloud now requires HTTPS for some reason. It feels like someone delivered a product with some assembly required and no instructions on how to assemble it.
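    For anyone willing to point me in the right direction, below is roughly where I think the Nextcloud side of the hookup lives. I haven't confirmed these are the right settings, and the URL and secret are placeholders:

    # Run with occ inside the Nextcloud container; values below are placeholders
    occ config:app:set onlyoffice DocumentServerUrl --value="https://office.example.com/"
    occ config:app:set onlyoffice jwt_secret --value="some-shared-secret"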

  3. On 3/18/2023 at 6:05 PM, Dustin said:

    I attached my log shortly after startup.  I see several errors that read: 

     

    "Mar 18 17:37:04 UNRAID kernel: Buffer I/O error on dev sr0, logical block #####, async page read."

     

    With various different numbers for the logical block.  I attached the log.  Any thoughts on what is causing these errors and how to fix it?

     

    Thanks!

    syslog.txt


    This is due to missing/broken UDEV rules. You can add a tweaked persistent-storage udev rule to /etc/udev/rules.d/ and reload udev to make the issue stop.

    To fix this, just download the rules file, copy it to the server, then in an Unraid root terminal go to the folder where it was placed and run:

    # mv 60-persistent-storage.rules /etc/udev/rules.d/
    # udevadm control --reload-rules && udevadm trigger


    Now, this fix isn't permanent. If you want it to be permanent, you need the User Scripts plugin: set a script to copy the contents of that file into /etc/udev/rules.d/60-persistent-storage.rules and then run the reload commands, as in the sketch below.
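    Something like this should work as a User Scripts entry set to run at the start of the array; the /boot path is just an assumption about where you keep the tweaked rules file:

    #!/bin/bash
    # Assumed location of the tweaked rules file on the flash drive -- adjust to where you actually keep it
    cp /boot/config/custom/60-persistent-storage.rules /etc/udev/rules.d/
    # Reload udev so the rule takes effect without a reboot
    udevadm control --reload-rules && udevadm trigger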

    https://forum.makemkv.com/forum/viewtopic.php?t=25357
    https://forum.manjaro.org/t/udev-persistent-persistent-storage-rules-for-dev-sr-hangs-udev/108411/2
    https://github.com/systemd/systemd/pull/23127 (which was, ridiculously, ignored, with the kernel/media stack blamed rather than udev for not handling the exception)

    I actually use a docker that monitors an external Blu-ray drive, rips discs the moment they're inserted, and texts my phone when it's done so I can move on to the next one. My family loves movies, and we're always digging in the bargain bin or buying movies when rental stores sell them off or go out of business. I put the media away after ripping it, because my children liked to use the discs like Frisbees, so now they're only allowed to watch movies via the Plex server on their Roku. Optical Storage isn't dead!
     

    60-persistent-storage.rules

  4. 3 hours ago, tuxbass said:

    Does anyone know how the ripper image compares to https://github.com/jlesage/docker-makemkv?

    Interesting. I'm not sure how they compare, other than that docker also having a webUI. The scripts have similar lines (they are effectively doing the same thing), but that docker breaks the work up into a few scripts rather than one. If you test it, let us know how it works. My only issue with the rix docker is that if I enter a purchased full key it doesn't get used; it keeps using the beta key instead. Not really an issue, as reinstalling/updating the docker fixes the beta-key problem.

  5. 2 hours ago, kizer said:

    @Darksurf

     

    Did you make sure the target drive was completely empty, with the exception of a single folder, clear-me?

     

    Yes. If you check my post above, you'll see the ls -al output for disks 5-10 looks like this:

     

    Quote

    /mnt/disk5:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:35 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:35 clear-me/
     

    /mnt/disk6:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:35 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:35 clear-me/
     

    /mnt/disk7:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:34 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:34 clear-me/
     

    /mnt/disk8:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:34 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:34 clear-me/
     

    /mnt/disk9:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:33 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:33 clear-me/

    /mnt/disk10:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:33 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:33 clear-me/

     

    I ran unBALANCE on these drives 3 times to be sure, checked that there were no files left, then did a full rm -rf /mnt/disk#/* followed by mkdir -p /mnt/disk#/clear-me on every disk I planned to wipe. I'm 100% positive the drives were empty besides the clear-me folder.
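    Spelled out for a single disk, the prep was just this (the disk number changes per disk, and the rm is obviously destructive, so triple-check the path):

    # Remove everything left on the disk, then create the marker folder the clear script looks for
    rm -rf /mnt/disk5/*
    mkdir -p /mnt/disk5/clear-me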

    I doubt it's a problem, but these drives are all formatted BTRFS, not XFS. It could also be some incompatibility with 6.10.3; I'm not sure. I ended up just removing them from the machine, creating a new config for the array, and rebuilding the parity drives.

  6. What is up with the zero drive script? It immediately gives up.

     

    *** Clear an unRAID array data drive ***  v1.4
    
    Checking all array data drives (may need to spin them up) ... 
    
    Checked 10 drives, did not find an empty drive ready and marked for clearing!
    
    To use this script, the drive must be completely empty first, no files
    or folders left on it.  Then a single folder should be created on it
    with the name 'clear-me', exactly 8 characters, 7 lowercase and 1 hyphen.
    This script is only for clearing unRAID data drives, in preparation for
    removing them from the array.  It does not add a Preclear signature.
    Script Finished Jul 10, 2022  19:21.37
    
    Full logs for this script are available at /tmp/user.scripts/tmpScripts/ZeroDisks_ShrinkArray/log.txt
    
    ^C
    root@Oceans:~# ls -al /mnt/disk*
    /mnt/disk1:
    total 16
    drwxrwxrwx   1 nobody users   84 Jul 10 04:30 ./
    drwxr-xr-x  18 root   root   360 Jul 10 00:12 ../
    drwxrwxrwx+  1 nobody users  190 Jul 10 18:28 Docker/
    drwxrwxrwx   1 nobody users   14 Jul  4 22:49 Downloads/
    drwxrwxrwx   1 nobody users   60 Jul  6 03:26 ZDRIVE/
    drwxrwxrwx   1 nobody users    0 Jul 20  2021 appdata/
    drwxrwxrwx   1 nobody users   16 Apr 16  2021 home/
    drwxrwxrwx   1 nobody users 1884 Jul  9 04:40 system/
    drwxrwxrwx   1 nobody users  138 Dec 31  2017 tftp/
    
    /mnt/disk10:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:33 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:33 clear-me/
    
    /mnt/disk2:
    total 16
    drwxrwxrwx   1 nobody users  12 Jul 10 04:30 ./
    drwxr-xr-x  18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx+  1 nobody users 260 Jul  9 23:38 Docker/
    
    /mnt/disk3:
    total 16
    drwxrwxrwx   1 nobody users  84 Jul 10 04:30 ./
    drwxr-xr-x  18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx+  1 nobody users 188 Jul  9 23:38 Docker/
    drwxrwxrwx   1 nobody users   0 Jul  6 22:31 Downloads/
    drwxr-xr-x   1 nobody users   0 May  9 09:08 ISOs/
    drwxrwxrwx   1 nobody users  32 Jul  6 22:28 ZDRIVE/
    drwxrwxrwx   1 nobody users   0 Jul 20  2021 appdata/
    drwxrwxrwx   1 nobody users  16 Jul  6 21:50 home/
    drwxrwxrwx   1 nobody users 394 Jul  6 22:31 system/
    
    /mnt/disk4:
    total 16
    drwxrwxrwx   1 nobody users  66 Jul 10 04:30 ./
    drwxr-xr-x  18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx+  1 nobody users 170 Jul  6 12:48 Docker/
    drwxrwxrwx   1 nobody users   8 Jun  5  2021 ZDRIVE/
    drwxrwxrwx   1 nobody users   0 Jul 20  2021 appdata/
    drwxrwxrwx   1 nobody users  38 Jul  6 12:47 home/
    drwxrwxrwx   1 nobody users  96 Jul  6 12:48 system/
    drwxrwxrwx   1 nobody users   0 Dec 31  2017 tftp/
    
    /mnt/disk5:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:35 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:35 clear-me/
    
    /mnt/disk6:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:35 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:35 clear-me/
    
    /mnt/disk7:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:34 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:34 clear-me/
    
    /mnt/disk8:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:34 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:34 clear-me/
    
    /mnt/disk9:
    total 16
    drwxrwxrwx  1 nobody users  16 Jul 10 18:33 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../
    drwxrwxrwx  1 nobody users   0 Jul 10 18:33 clear-me/
    
    /mnt/disks:
    total 0
    drwxrwxrwt  2 nobody users  40 Jul 10 00:11 ./
    drwxr-xr-x 18 root   root  360 Jul 10 00:12 ../

     

  7. On 7/7/2022 at 10:18 AM, Stan464 said:

    Backblaze does have a Personal plan with a Docker Container to work around the "Restrictions", which does look pretty good. I used an alternative method which wasn't ideal and moved away from Backblaze.

    But I may revisit it due to the Docker Container now being available.

    Lemme know the name of that docker with a link. This could be my solution!

  8. I'm running a Ryzen Threadripper 3970X with 128G of unbuffered ECC memory in an ASRock Creator TRX40 board on the latest beta BIOS, with no stability issues. Your problem could come from several places:

    1. Are you updated to the latest BIOS version?

    2. Do you have fTPM disabled or enabled? If enabled, you'll want the latest BIOS update that fixes the fTPM stuttering issue. https://www.amd.com/en/support/kb/faq/pa-410

    3. What speed are you running your unbuffered ECC memory at? Don't expect greater than 2933 MHz for ECC memory on Ryzen 3XXX or lower; some only run at 2666 MHz.

    4. If your memory speed isn't the problem, check your memory timings. There can be multiple JEDEC timing profiles, or none at all, in which case you'll have to enter the timings manually to spec.

    5. In the BIOS, have you disabled all the power-saving nonsense such as suspend-to-RAM, aggressive ASPM, ALPM, etc.? (I've found the aggressive power-management implementation on my old Supermicro server board was a problem for my HDDs.)

    6. If you've done all of the above, is your motherboard auto-overclocking the CPU or RAM? Disable auto-overclocking.

     

     

    As for specifics, I need to know the exact hardware in the build, including the memory being used, what clock speeds and timings it's rated for, and what you have configured.

    Your logs here show normal G.SKILL memory (non-ECC), and it's running at the wrong speed and voltage (F4-3600C16-8GVKC running at 2133 MT/s and 1.2 V). I also hope you're using UDIMM and not RDIMM ECC, as RDIMM shouldn't work at all.
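    For reference, the memory-device info below comes from the SMBIOS tables; it looks like dmidecode output, so something like this should reproduce it on the box:

    # Dump only the Memory Device (type 17) entries from the SMBIOS tables
    dmidecode --type 17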

    Getting SMBIOS data from sysfs.
    SMBIOS 3.3.0 present.
    
    Handle 0x0018, DMI type 17, 92 bytes
    Memory Device
        Array Handle: 0x0010
        Error Information Handle: 0x0017
        Total Width: Unknown
        Data Width: Unknown
        Size: No Module Installed
        Form Factor: Unknown
        Set: None
        Locator: DIMM 0
        Bank Locator: P0 CHANNEL A
        Type: Unknown
        Type Detail: Unknown
        Speed: Unknown
        Manufacturer: Unknown
        Serial Number: Unknown
        Asset Tag: Not Specified
        Part Number: Unknown
        Rank: Unknown
        Configured Memory Speed: Unknown
        Minimum Voltage: Unknown
        Maximum Voltage: Unknown
        Configured Voltage: Unknown
        Memory Technology: Unknown
        Memory Operating Mode Capability: Unknown
        Firmware Version: Unknown
        Module Manufacturer ID: Unknown
        Module Product ID: Unknown
        Memory Subsystem Controller Manufacturer ID: Unknown
        Memory Subsystem Controller Product ID: Unknown
        Non-Volatile Size: None
        Volatile Size: None
        Cache Size: None
        Logical Size: None
    
    Handle 0x001A, DMI type 17, 92 bytes
    Memory Device
        Array Handle: 0x0010
        Error Information Handle: 0x0019
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 8 GB
        Form Factor: DIMM
        Set: None
        Locator: DIMM 1
        Bank Locator: P0 CHANNEL A
        Type: DDR4
        Type Detail: Synchronous Unbuffered (Unregistered)
        Speed: 2133 MT/s
        Manufacturer: Unknown
        Serial Number: 00000000
        Asset Tag: Not Specified
        Part Number: F4-3600C16-8GVKC
        Rank: 1
        Configured Memory Speed: 2133 MT/s
        Minimum Voltage: 1.2 V
        Maximum Voltage: 1.2 V
        Configured Voltage: 1.2 V
        Memory Technology: DRAM
        Memory Operating Mode Capability: Volatile memory
        Firmware Version: Unknown
        Module Manufacturer ID: Bank 5, Hex 0xCD
        Module Product ID: Unknown
        Memory Subsystem Controller Manufacturer ID: Unknown
        Memory Subsystem Controller Product ID: Unknown
        Non-Volatile Size: None
        Volatile Size: 8 GB
        Cache Size: None
        Logical Size: None
    
    Handle 0x001D, DMI type 17, 92 bytes
    Memory Device
        Array Handle: 0x0010
        Error Information Handle: 0x001C
        Total Width: Unknown
        Data Width: Unknown
        Size: No Module Installed
        Form Factor: Unknown
        Set: None
        Locator: DIMM 0
        Bank Locator: P0 CHANNEL B
        Type: Unknown
        Type Detail: Unknown
        Speed: Unknown
        Manufacturer: Unknown
        Serial Number: Unknown
        Asset Tag: Not Specified
        Part Number: Unknown
        Rank: Unknown
        Configured Memory Speed: Unknown
        Minimum Voltage: Unknown
        Maximum Voltage: Unknown
        Configured Voltage: Unknown
        Memory Technology: Unknown
        Memory Operating Mode Capability: Unknown
        Firmware Version: Unknown
        Module Manufacturer ID: Unknown
        Module Product ID: Unknown
        Memory Subsystem Controller Manufacturer ID: Unknown
        Memory Subsystem Controller Product ID: Unknown
        Non-Volatile Size: None
        Volatile Size: None
        Cache Size: None
        Logical Size: None
    
    Handle 0x001F, DMI type 17, 92 bytes
    Memory Device
        Array Handle: 0x0010
        Error Information Handle: 0x001E
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 8 GB
        Form Factor: DIMM
        Set: None
        Locator: DIMM 1
        Bank Locator: P0 CHANNEL B
        Type: DDR4
        Type Detail: Synchronous Unbuffered (Unregistered)
        Speed: 2133 MT/s
        Manufacturer: Unknown
        Serial Number: 00000000
        Asset Tag: Not Specified
        Part Number: F4-3600C16-8GVKC
        Rank: 1
        Configured Memory Speed: 2133 MT/s
        Minimum Voltage: 1.2 V
        Maximum Voltage: 1.2 V
        Configured Voltage: 1.2 V
        Memory Technology: DRAM
        Memory Operating Mode Capability: Volatile memory
        Firmware Version: Unknown
        Module Manufacturer ID: Bank 5, Hex 0xCD
        Module Product ID: Unknown
        Memory Subsystem Controller Manufacturer ID: Unknown
        Memory Subsystem Controller Product ID: Unknown
        Non-Volatile Size: None
        Volatile Size: 8 GB
        Cache Size: None
        Logical Size: None

     

  9. 19 hours ago, Vr2Io said:

    I divide my data into two groups: one changes frequently and the other is rather static.

     

    Each group gets 8 bays, 16 bays in total in the same enclosure, so 8 bays in a normal parity-protected array and 8 bays in a RAID0 pool. That's all live data.

     

    The aim was that I can copy/regenerate data quickly, RAID0 to RAID0, most of the time. And I can easily free up 20 bays for temporary use.

    That's a good point! I never really considered RAID0 as an option, but if you think about it, I have an array with dual parity. The likelihood that I'll have an issue there AND with a backup RAID0 pool isn't high, and losing only a week's worth of changes if the RAID0 fails isn't that big of a deal. This isn't mission-critical business data. It's just my personal tinker toys: a Plex server, dockers for wikis and other web services, and a couple of VMs that are self-configured/deployable via YAML scripts using yip to act as a Kubernetes cluster for Linux package building.

     

    My cache pool is a RAID0 and the mover runs daily with zero issues. I can see this being a valid option as well for personal use.

  10. 7 hours ago, Stan464 said:

    I use Duplicacy with a storage backend, for example Backblaze B2.

    Wow, I hadn't looked at Backblaze pricing, but $70/year for unlimited personal backup seems pretty amazing, and $5/TB/month for Backblaze B2 is pretty good too from a business perspective! That's definitely one option, considering I'd have to buy multiple (5-6) drives at $250 each minimum to have a local backup. I'd most likely go with the personal backup option if that were allowed.

     

    Thanks for the input!

  11. So it's been a dream of mine to get an LTO tape drive one day and run backups. In reality, my wife will never let me spend that kind of money on a drive. HDDs, on the other hand, are far cheaper for the same amount of storage (~30T).

     

    So I've been upgrading my cute original WD Red 3T drives to Seagate EXOS drives (3x14T and 3x8T). But in my datacenter experience, newer, larger drives don't have the same level of reliability, so I'd like to make use of the extra drive bays I'm freeing up. I have 12 bays: I plan to use 6 for my array with dual parity, and I'd like to use the other 4-6 bays for a weekly backup.

     

    Evidently you cannot create 2 arrays in Unraid, so my second-array-for-weekly-backups idea isn't going to work. What do you recommend here? Should I just create a "pool" for backups with no parity? Should I risk a BTRFS RAID6 pool as a backup solution, or just go the more expensive route of a BTRFS RAID10 pool? Something else?

     

    The server is on a 1500 W UPS, so the risk of an unclean shutdown is low. Writes would only be weekly and incremental; there's no need to completely rewrite the entire backup.
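    For what it's worth, I'm picturing the weekly job as something as simple as the following; the paths are placeholders and rsync is just one way to do it:

    # Weekly incremental copy from an array share to the backup pool; --delete keeps the copy in sync
    rsync -a --delete /mnt/user/media/ /mnt/backuppool/media/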

  12. On 5/23/2022 at 3:18 AM, ThatDude said:

    Hey, did you ever get anywhere with installing virt-sparsify?

     

    Unfortunately, no. Dependency hell threw a wrench into it, and I've not tried since. Dealing with that many dependencies on a static system is risky, and this kind of power is really needed on the hypervisor side. You would also have to script/install all the dependencies and the tool itself on every install due to the static nature of Unraid (which is fair). That being said, if all the dependencies and the tool were installed from the beginning, this would be far less of a problem.
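    For context, if the tool and its libguestfs dependencies were present, the usage I was after is about this simple (the path is a placeholder):

    # Reclaim free space inside a VM disk image in place; the VM must be shut down first
    virt-sparsify --in-place /mnt/user/domains/myvm/vdisk1.img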

  13. 15 hours ago, Squid said:

    I have Linux VMs. They are actually build nodes used to build packages for a Linux distribution. My VMs accept jobs to compile packages inside containers, upload said packages, then delete everything and start another job. I'm not sure if your suggestion works in this scenario; does it?

  14. 32 minutes ago, pk1057 said:

    I dug around, made a fork of the project, and switched from ripit to abcde, which is more versatile and mature than ripit.

     

    With abcde there are no encoding problems!

    You might be able to submit a pull request to update the main project if you have thorough testing and proof of stability.

  15. Is anyone else having issues with memory ballooning not working in VMs? I check my Linux VMs and they have virtio_balloon loaded, but their memory won't increase past the initial size.

     

    I'm using an ASRock Creator TRX40 with a Ryzen Threadripper 3970X and 64G of DDR4. I'm using the rule that initial memory is 1 core = 1G and max is 1 core = 2G, and I'm doing this on 3 VMs: 8-core, 8-core, and 4-core. None of them see their memory balloon up while compiling software, and they end up crashing with OOM errors.
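    In case it helps with diagnosis, this is how I've been poking at the balloon from the host; the VM name is a placeholder:

    # Show the balloon stats libvirt reports for the guest (actual vs. assigned memory)
    virsh dommemstat MyVM
    # Manually resize the balloon to test it; the value is in KiB (8388608 KiB = 8 GiB)
    virsh setmem MyVM 8388608 --live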

     

     

    oceans-diagnostics-20210528-1427.zip

  16. That's awesome! It would be nice if we could get a lifespan meter somewhere in the open (it seems my method may be inaccurate and yours would be better). I want to make sure my server uptime doesn't take a bad turn when I need to order an SSD and it takes a week to get here. I'd like some pre-emptive warning/monitoring so I can plan accordingly rather than have items live on a shelf for years.

     

    Thanks for the correction! I'm learning something new everyday.

  17. I'm curious whether it would be possible to store a max TBW rating for SSDs alongside the warranty information in the drive's Identity info, then keep a running comparison against what smartctl shows for NVMe/SSDs, so you can see how close you are to that maximum and know when to prepare a replacement. You'll see that after running smartctl -a /dev/nvme0n1 (output below) I have a "Data Units Written" of 9.67 TB, and this unit has a max rating of 1800 TBW. Now, this isn't my cache drive, this is my desktop, but if you're using an SSD as a cache drive, you can see how the SSD could quickly deteriorate and fail. The cache SSD in my server is currently at 169 TBW with a maximum of 530 TBW before failure. Having this SSD lifespan viewable from the dashboard would be very helpful. The SSD in my server is only 1 year old, but it's used heavily for an open source project.
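    As a rough sketch of the comparison I mean (the rated TBW has to be entered by hand, and I'm assuming the NVMe convention that one "Data Unit" is 512,000 bytes):

    # Percent of the drive's rated TBW consumed so far; device path and rating are examples
    RATED_TBW=1800
    UNITS=$(smartctl -a /dev/nvme0n1 | awk '/Data Units Written/ {gsub(",","",$4); print $4}')
    echo "scale=2; $UNITS * 512000 * 100 / ($RATED_TBW * 10^12)" | bc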

     

     

    jcfrosty@Zero ~ $ sudo smartctl -a /dev/nvme0n1
    Password: 
    smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.11.0-sabayon] (local build)
    Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
    
    === START OF INFORMATION SECTION ===
    Model Number:                       Sabrent Rocket 4.0 1TB
    Serial Number:                      03F10797054463199045
    Firmware Version:                   EGFM11.1
    PCI Vendor/Subsystem ID:            0x1987
    IEEE OUI Identifier:                0x6479a7
    Total NVM Capacity:                 1,000,204,886,016 [1.00 TB]
    Unallocated NVM Capacity:           0
    Controller ID:                      1
    Number of Namespaces:               1
    Namespace 1 Size/Capacity:          1,000,204,886,016 [1.00 TB]
    Namespace 1 Formatted LBA Size:     512
    Namespace 1 IEEE EUI-64:            6479a7 2220653435
    Local Time is:                      Sat Apr 17 11:32:39 2021 CDT
    Firmware Updates (0x12):            1 Slot, no Reset required
    Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
    Optional NVM Commands (0x005d):     Comp DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
    Maximum Data Transfer Size:         512 Pages
    Warning  Comp. Temp. Threshold:     70 Celsius
    Critical Comp. Temp. Threshold:     90 Celsius
    
    Supported Power States
    St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
     0 +    10.73W       -        -    0  0  0  0        0       0
     1 +     7.69W       -        -    1  1  1  1        0       0
     2 +     6.18W       -        -    2  2  2  2        0       0
     3 -   0.0490W       -        -    3  3  3  3     2000    2000
     4 -   0.0018W       -        -    4  4  4  4    25000   25000
    
    Supported LBA Sizes (NSID 0x1)
    Id Fmt  Data  Metadt  Rel_Perf
     0 +     512       0         2
     1 -    4096       0         1
    
    === START OF SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    
    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        45 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          5%
    Percentage Used:                    1%
    Data Units Read:                    7,506,169 [3.84 TB]
    Data Units Written:                 18,893,007 [9.67 TB]
    Host Read Commands:                 56,347,067
    Host Write Commands:                289,751,028
    Controller Busy Time:               583
    Power Cycles:                       118
    Power On Hours:                     14,438
    Unsafe Shutdowns:                   55
    Media and Data Integrity Errors:    0
    Error Information Log Entries:      271
    Warning  Comp. Temperature Time:    0
    Critical Comp. Temperature Time:    0
    
    Error Information (NVMe Log 0x01, max 63 entries)
    No Errors Logged
    

     

     


  18. I really could use this. I use my personal server for an open-source Linux project, so giving my team members access would be really handy.

    I'd like to see

    1. Multiple users enabled for WebUI (simple checkbox within the user profile would be nice)
    2. Different levels of access. (Example: Restart VMs, VM Access, but not Creation/Deletion or root host shell access)
    3. Logging of user logins and change actions (VM/docker reboot, deletion, creation, etc.)

    Just some SMB features could be handy.