dgaschk


Posts posted by dgaschk

  1. root@Othello:~# fstrim -v /mnt/cache
    fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
    root@Othello:~# hdparm -I /dev/sd[k-m] | grep TRIM
               *    Data Set Management TRIM supported (limit 8 blocks)
               *    Deterministic read ZEROs after TRIM
               *    Data Set Management TRIM supported (limit 8 blocks)
               *    Deterministic read ZEROs after TRIM
               *    Data Set Management TRIM supported (limit 8 blocks)
               *    Deterministic read ZEROs after TRIM
    root@Othello:~# fstrim -av
    fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error

    sd[k-m] are three 860 EVOs connected to a 9300-8i and configured as the cache pool. The motherboard has too few SATA 3 ports to connect them directly.
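
    For reference, the kernel's own view of discard support can also be checked per device (just a sanity check, not taken from the diagnostics); if these report zero, the HBA/driver is not passing TRIM through regardless of what hdparm shows:

    for d in sdk sdl sdm; do
        echo -n "$d discard_max_bytes: "
        cat /sys/block/$d/queue/discard_max_bytes
    done
    lsblk --discard /dev/sd[k-m]    # DISC-GRAN/DISC-MAX of 0 also means no usable TRIM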

     

    Does anyone have this combination working? Can you recommend a combination of HBA and SSD that does support TRIM?

     

    Thanks,

    David

    othello-diagnostics-20200108-1820.zip

  2. On 1/5/2020 at 1:04 AM, johnnie.black said:

    The 9340-8i is a MegaRAID controller; we recommend true LSI HBAs, like the 9300-8i. It should work with any SSD, though LSI HBAs can only trim SSDs with deterministic read zeros after TRIM support, e.g. the 860 EVO is OK, the 850 EVO won't trim.

    Have you confirmed this? I have a 9300-8i and the 860 EVO does not TRIM.

     

    root@Othello:~# fstrim -v /mnt/cache
    fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
    root@Othello:~# hdparm -I /dev/sd[k-m] | grep TRIM
               *    Data Set Management TRIM supported (limit 8 blocks)
               *    Deterministic read ZEROs after TRIM
               *    Data Set Management TRIM supported (limit 8 blocks)
               *    Deterministic read ZEROs after TRIM
               *    Data Set Management TRIM supported (limit 8 blocks)
               *    Deterministic read ZEROs after TRIM
    root@Othello:~# fstrim -av
    fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error

    sd[k-m] are three 860 EVOs connected to a 9300-8i. Maybe I should start another thread.
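
    If sg3_utils is available, another check worth trying (hedged, since these drives sit behind the HBA's SATA translation layer) is whether UNMAP is being advertised to the kernel, which is what discard support keys off of on an LSI HBA:

    sg_vpd --page=bl /dev/sdk     # Block limits VPD page: look for "Maximum unmap LBA count"
    sg_vpd --page=lbpv /dev/sdk   # Logical block provisioning VPD page: look for "Unmap command supported"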

     

    After further testing, weirdness ensues. I rebooted into safe mode and back to normal mode. The BIND from vfio-pci.cfg is working correctly and I am successfully using the stock Syslinux configuration. I still needed the following three lines in the go file for the VM to start up without filling the syslog:

    #fix video for VM
    echo 0 > /sys/class/vtconsole/vtcon0/bind
    echo 0 > /sys/class/vtconsole/vtcon1/bind
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
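    # Hedged note, not part of my go file: if the local console is needed back
    # after the VM shuts down, the reverse of the above should work:
    #   echo 1 > /sys/class/vtconsole/vtcon0/bind
    #   echo 1 > /sys/class/vtconsole/vtcon1/bind
    #   echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/bind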
    

     

     

     

  4. This seems to fix the problem. VNC did not connect at first, but RDP and Splashtop did. VNC worked after I logged in once.

     

    https://www.redhat.com/archives/vfio-users/2016-March/msg00088.html

     

    Windows 10 VM on a Supermicro X11SPM-F with a GeForce RTX 2080 SUPER appears to be working. VNC is not usable, but Splashtop works well. I passed the GeForce through by adding "vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9" to the "append initrd=/bzroot" line in the Syslinux configuration. I installed newer NVIDIA drivers and chose the NVIDIA card as the primary GPU. No ROM BIOS file is needed.
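
    For reference, the resulting stanza in syslinux.cfg looks roughly like this (the label and menu lines may differ on other installs):

    label Unraid OS
      menu default
      kernel /bzimage
      append vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9 initrd=/bzroot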

  5.  

    I created a vfio-pci.cfg but the binding doesn't seem to work. I get the same result as not having a vfio-pci.cfg file:

    2019-09-21T05:40:54.449416Z qemu-system-x86_64: -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0: vfio 0000:65:00.0: group 28 is not viable
    Please ensure all devices within the iommu_group are bound to their vfio bus driver.
    

     

    VM log:

    
    
    
    -boot strict=on \
    -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
    -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
    -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
    -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
    -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
    -device pcie-root-port,port=0x8,chassis=6,id=pci.6,bus=pcie.0,multifunction=on,addr=0x1 \
    -device pcie-pci-bridge,id=pci.7,bus=pci.1,addr=0x0 \
    -device pcie-root-port,port=0x9,chassis=8,id=pci.8,bus=pcie.0,addr=0x1.0x1 \
    -device pcie-root-port,port=0xa,chassis=9,id=pci.9,bus=pcie.0,addr=0x1.0x2 \
    -device qemu-xhci,p2=15,p3=15,id=usb,bus=pcie.0,addr=0x7 \
    -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
    -drive 'file=/mnt/disks/Scorch/VM/Windows 10/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback' \
    -device virtio-blk-pci,scsi=off,bus=pci.4,addr=0x0,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1,write-cache=on \
    -drive file=/mnt/user/backup/Win10_1903_V1_English_x64.iso,format=raw,if=none,id=drive-sata0-0-0,readonly=on \
    -device ide-cd,bus=ide.0,drive=drive-sata0-0-0,id=sata0-0-0,bootindex=2 \
    -drive file=/mnt/user/backup/virtio-win-0.1.160-1.iso,format=raw,if=none,id=drive-sata0-0-1,readonly=on \
    -device ide-cd,bus=ide.1,drive=drive-sata0-0-1,id=sata0-0-1 \
    -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 \
    -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:da:47:b1,bus=pci.3,addr=0x0 \
    -chardev pty,id=charserial0 \
    -device isa-serial,chardev=charserial0,id=serial0 \
    -chardev socket,id=charchannel0,fd=31,server,nowait \
    -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
    -device usb-tablet,id=input0,bus=usb.0,port=3 \
    -vnc 0.0.0.0:0,websocket=5700 \
    -k en-us \
    -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.7,addr=0x1 \
    -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0 \
    -device vfio-pci,host=65:00.1,id=hostdev1,bus=pci.6,addr=0x0 \
    -device usb-host,hostbus=1,hostaddr=5,id=hostdev2,bus=usb.0,port=1 \
    -device usb-host,hostbus=1,hostaddr=7,id=hostdev3,bus=usb.0,port=2 \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
    2019-09-21 05:40:54.258+0000: Domain id=1 is tainted: high-privileges
    2019-09-21 05:40:54.258+0000: Domain id=1 is tainted: host-cpu
    char device redirected to /dev/pts/0 (label charserial0)
    2019-09-21T05:40:54.449416Z qemu-system-x86_64: -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0: vfio 0000:65:00.0: group 28 is not viable
    Please ensure all devices within the iommu_group are bound to their vfio bus driver.
    2019-09-21 05:40:54.488+0000: shutting down, reason=failed

    I am able to start the VM if I add "vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9" to the append line in the Syslinux configuration, but that is another post.
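
    The "group 28 is not viable" error means at least one device in IOMMU group 28 is still bound to its normal driver instead of vfio-pci. A quick way to check which devices share the group and what currently owns them:

    ls /sys/kernel/iommu_groups/28/devices/
    for d in /sys/kernel/iommu_groups/28/devices/*; do
        drv=$(readlink -f "$d/driver" 2>/dev/null)
        echo "$(basename "$d") -> ${drv##*/}"
    done

    Every device listed has to end up on vfio-pci before the VM will start.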

    vfio-pci.cfg rack-diagnostics-20190921-0547.zip

  6. I added an NVIDIA card to a Supermicro board, and the console using the built-in ASPEED VGA worked at first. I added the vfio-pci.cfg to the config directory and rebooted. The console froze after "Loading /bzroot . . . ok". I deleted the vfio-pci.cfg file and rebooted; the console still froze. I rebooted with the PCIe slot holding the Nvidia card disabled. The console worked and I saved the first diagnostic attached. I rebooted with the PCIe slot holding the Nvidia card enabled, and the console video stopped after loading bzroot. I saved the second diagnostic attached. Other than the console not working, basic server function seems OK. I have not yet restarted the VM. Why is the console frozen?
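
    One thing worth comparing between the working and frozen boots (a suggestion, not something taken from the diagnostics) is which driver ends up attached to each VGA device and which framebuffer is active; if the ASPEED controller loses its framebuffer or gets claimed by vfio-pci, the local console goes dark:

    lspci -nnk | grep -A 3 -i 'vga\|3d controller'
    cat /proc/fb    # active framebuffers; the ASPEED console normally shows up here (e.g. astdrmfb)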

    fozen-console.jpeg

    rack-diagnostics-20190920-0111.zip rack-diagnostics-20190920-0118.zip vfio-pci.cfg.txt

  7. On 12/14/2018 at 12:19 PM, pro9c3 said:

    Is there any way to adjust the warning/critical temperature for unassigned devices? My SSDs can operate up to 70C and I'm getting too many warnings at 45C.

     

    On 12/14/2018 at 3:11 PM, johnnie.black said:

    Not currently, but you can set all the other disks individually to lower values and set the global settings to the higher values.

    I also have this issue. Thanks.

  8. 6 hours ago, wgstarks said:

    Short answer: clear, preclear, and format will "erase" the drive. With format there are processes that can possibly recover the data, since formatting just changes the indexing. This is usually rather expensive, though.

     

    With clear and preclear your only choice would probably be to restore from a backup, since these processes actually overwrite every sector.

     

    7 hours ago, wgstarks said:

    Clear or preclear will write zeros to the entire drive, overwriting all existing data on the drive. This is done so that parity knows exactly what is on the drive when it is added (zeros) and can maintain an accurate computation of parity. You can't add data to a parity-protected share by adding a drive that already contains the data. You'll probably need to copy the data to the share first.

    If you have a disk formatted by unRAID (or possibly otherwise formatted correctly) that has data, it can be added to the array by resetting the array, adding the disk, and rebuilding parity.

     

    A replacement disk can be swapped in without preparation because the disk will be written with the data and formatting existing on the previous disk. (Many would preclear as a test, but it's not required in any case.)

     

    Adding an additional disk to the array will cause the disk to be zeroed (cleared), added to the array, and then formatted. The new disk will be empty and any data it previously contained will be gone. (This is what preclear was originally for.)

     

    If you have a random disk with data that you want to add to the array, you must first copy the data to another location, perhaps in the array, and then add the disk as an additional disk. If the data is not in the array it should be copied to the array at this point.

     

    So preclear is not needed and no longer appears to be supported. Does anyone know of a docker for disk test and "burn in"? Should we start removing references to preclear? Should the OP be modified to reflect the current non-working status of preclear? If gfjardim returns to update the plugin they can update the OP to say that the plugin works now.
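
    In the meantime, a minimal manual burn-in from the console might look like the following (hedged: badblocks -w is destructive, so only run it on a disk with no data you care about, and /dev/sdX is a placeholder):

    smartctl -t long /dev/sdX          # extended SMART self-test; review later with smartctl -a /dev/sdX
    badblocks -wsv -b 4096 /dev/sdX    # destructive write/read test of every block, four patterns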

  9. I haven’t tried but they should work. Someone try it and report back. I will test it eventually. I’ve never seen a shell script lose compatibility. The issue here is with the GUI integration. 

  10. The preclear script does not appear to terminate:

     

    Version: 6.7.0 | Server: rack (192.168.10.198) | Description: backup and test | Unraid OS Pro | Uptime: 30 days, 21 hours, 7 minutes
    Processes
    UID        PID  PPID  C STIME TTY          TIME CMD
    root         1     0  0 May18 ?        00:00:32 init 
    root         2     0  0 May18 ?        00:00:00 [kthreadd]
    root         3     2  0 May18 ?        00:00:00 [rcu_gp]
    root         4     2  0 May18 ?        00:00:00 [rcu_par_gp]
    root         6     2  0 May18 ?        00:00:00 [kworker/0:0H-events_highpri]
    root         8     2  0 May18 ?        00:00:00 [mm_percpu_wq]
    root         9     2  0 May18 ?        00:01:36 [ksoftirqd/0]
    root        10     2  0 May18 ?        00:56:15 [rcu_sched]
    root        11     2  0 May18 ?        00:00:00 [rcu_bh]
    root        12     2  0 May18 ?        00:00:44 [migration/0]
    root        13     2  0 May18 ?        00:00:00 [cpuhp/0]
    root        14     2  0 May18 ?        00:00:00 [cpuhp/1]
    root        15     2  0 May18 ?        00:00:40 [migration/1]
    root        16     2  0 May18 ?        00:01:33 [ksoftirqd/1]
    root        18     2  0 May18 ?        00:00:00 [kworker/1:0H-kblockd]
    root        19     2  0 May18 ?        00:00:00 [cpuhp/2]
    root        20     2  0 May18 ?        00:00:37 [migration/2]
    root        21     2  0 May18 ?        00:01:28 [ksoftirqd/2]
    root        23     2  0 May18 ?        00:00:00 [kworker/2:0H-kblockd]
    root        24     2  0 May18 ?        00:00:00 [cpuhp/3]
    root        25     2  0 May18 ?        00:00:27 [migration/3]
    root        26     2  0 May18 ?        00:01:21 [ksoftirqd/3]
    root        28     2  0 May18 ?        00:00:00 [kworker/3:0H-kblockd]
    root        29     2  0 May18 ?        00:00:00 [cpuhp/4]
    root        30     2  0 May18 ?        00:00:52 [migration/4]
    root        31     2  0 May18 ?        00:01:42 [ksoftirqd/4]
    root        33     2  0 May18 ?        00:00:00 [kworker/4:0H-kblockd]
    root        34     2  0 May18 ?        00:00:00 [cpuhp/5]
    root        35     2  0 May18 ?        00:00:45 [migration/5]
    root        36     2  0 May18 ?        00:01:37 [ksoftirqd/5]
    root        38     2  0 May18 ?        00:00:00 [kworker/5:0H-kblockd]
    root        39     2  0 May18 ?        00:00:00 [cpuhp/6]
    root        40     2  0 May18 ?        00:00:40 [migration/6]
    root        41     2  0 May18 ?        00:01:37 [ksoftirqd/6]
    root        43     2  0 May18 ?        00:00:00 [kworker/6:0H-kblockd]
    root        44     2  0 May18 ?        00:00:00 [cpuhp/7]
    root        45     2  0 May18 ?        00:00:40 [migration/7]
    root        46     2  0 May18 ?        00:01:40 [ksoftirqd/7]
    root        48     2  0 May18 ?        00:00:00 [kworker/7:0H-kblockd]
    root        49     2  0 May18 ?        00:00:00 [cpuhp/8]
    root        50     2  0 May18 ?        00:00:39 [migration/8]
    root        51     2  0 May18 ?        00:01:35 [ksoftirqd/8]
    root        53     2  0 May18 ?        00:00:00 [kworker/8:0H-kblockd]
    root        54     2  0 May18 ?        00:00:00 [cpuhp/9]
    root        55     2  0 May18 ?        00:00:27 [migration/9]
    root        56     2  0 May18 ?        00:32:31 [ksoftirqd/9]
    root        58     2  0 May18 ?        00:00:00 [kworker/9:0H-kblockd]
    root        59     2  0 May18 ?        00:00:00 [cpuhp/10]
    root        60     2  0 May18 ?        00:00:09 [migration/10]
    root        61     2  0 May18 ?        00:00:43 [ksoftirqd/10]
    root        63     2  0 May18 ?        00:00:00 [kworker/10:0H-kblockd]
    root        64     2  0 May18 ?        00:00:00 [cpuhp/11]
    root        65     2  0 May18 ?        00:00:09 [migration/11]
    root        66     2  0 May18 ?        00:00:35 [ksoftirqd/11]
    root        68     2  0 May18 ?        00:00:00 [kworker/11:0H]
    root        69     2  0 May18 ?        00:00:00 [cpuhp/12]
    root        70     2  0 May18 ?        00:00:08 [migration/12]
    root        71     2  0 May18 ?        00:00:34 [ksoftirqd/12]
    root        73     2  0 May18 ?        00:00:00 [kworker/12:0H-kblockd]
    root        74     2  0 May18 ?        00:00:00 [cpuhp/13]
    root        75     2  0 May18 ?        00:00:24 [migration/13]
    root        76     2  0 May18 ?        00:46:16 [ksoftirqd/13]
    root        78     2  0 May18 ?        00:00:00 [kworker/13:0H-kblockd]
    root        79     2  0 May18 ?        00:00:00 [cpuhp/14]
    root        80     2  0 May18 ?        00:00:11 [migration/14]
    root        81     2  0 May18 ?        00:00:38 [ksoftirqd/14]
    root        83     2  0 May18 ?        00:00:00 [kworker/14:0H-kblockd]
    root        84     2  0 May18 ?        00:00:00 [cpuhp/15]
    root        85     2  0 May18 ?        00:00:10 [migration/15]
    root        86     2  0 May18 ?        00:00:36 [ksoftirqd/15]
    root        88     2  0 May18 ?        00:00:00 [kworker/15:0H-kblockd]
    root        89     2  0 May18 ?        00:00:00 [cpuhp/16]
    root        90     2  0 May18 ?        00:00:09 [migration/16]
    root        91     2  0 May18 ?        00:00:36 [ksoftirqd/16]
    root        93     2  0 May18 ?        00:00:00 [kworker/16:0H-kblockd]
    root        94     2  0 May18 ?        00:00:00 [cpuhp/17]
    root        95     2  0 May18 ?        00:00:09 [migration/17]
    root        96     2  0 May18 ?        00:00:35 [ksoftirqd/17]
    root        98     2  0 May18 ?        00:00:00 [kworker/17:0H-kblockd]
    root        99     2  0 May18 ?        00:00:00 [cpuhp/18]
    root       100     2  0 May18 ?        00:00:08 [migration/18]
    root       101     2  0 May18 ?        00:00:32 [ksoftirqd/18]
    root       103     2  0 May18 ?        00:00:00 [kworker/18:0H-kblockd]
    root       104     2  0 May18 ?        00:00:00 [cpuhp/19]
    root       105     2  0 May18 ?        00:00:16 [migration/19]
    root       106     2  0 May18 ?        00:00:43 [ksoftirqd/19]
    root       108     2  0 May18 ?        00:00:00 [kworker/19:0H-kblockd]
    root       109     2  0 May18 ?        00:00:00 [kdevtmpfs]
    root       110     2  0 May18 ?        00:00:00 [netns]
    root       401     2  0 May18 ?        00:00:00 [oom_reaper]
    root       402     2  0 May18 ?        00:00:00 [writeback]
    root       404     2  0 May18 ?        00:01:32 [kcompactd0]
    root       405     2  0 May18 ?        00:00:00 [ksmd]
    root       406     2  0 May18 ?        00:03:24 [khugepaged]
    root       407     2  0 May18 ?        00:00:00 [crypto]
    root       408     2  0 May18 ?        00:00:00 [kintegrityd]
    root       410     2  0 May18 ?        00:00:00 [kblockd]
    root       579  5318  0 Jun13 pts/11   00:00:00 -bash
    root       597     1  0 Jun13 pts/11   00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdi
    root       618     1  0 Jun13 pts/11   00:00:01 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdi
    root       631   618  0 Jun13 pts/11   00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdi
    root       632   631  0 Jun13 pts/11   00:00:00 tail -f -n0 /var/log/syslog
    root      1060     2  0 Jun15 ?        00:00:00 [xfs-buf/md8]
    root      1061     2  0 Jun15 ?        00:00:00 [xfs-data/md8]
    root      1065     2  0 Jun15 ?        00:00:00 [xfs-conv/md8]
    root      1067     2  0 Jun15 ?        00:00:00 [xfs-cil/md8]
    root      1068     2  0 Jun15 ?        00:00:00 [xfs-reclaim/md8]
    root      1071     2  0 Jun15 ?        00:00:00 [xfs-log/md8]
    root      1074     2  0 Jun15 ?        00:00:00 [xfs-eofblocks/m]
    root      1075     2  0 Jun15 ?        00:00:00 [xfsaild/md8]
    root      1110     2  0 Jun15 ?        00:00:03 [kworker/16:1-mm_percpu_wq]
    root      1177  7651  0 16:54 ?        00:00:00 php-fpm: pool www
    root      1178  7651  0 16:54 ?        00:00:00 php-fpm: pool www
    root      1272     2  0 May18 ?        00:00:00 [ata_sff]
    root      1297     2  0 May18 ?        00:00:00 [edac-poller]
    root      1298     2  0 May18 ?        00:00:00 [devfreq_wq]
    root      1482     2  0 May18 ?        02:03:52 [kswapd0]
    root      1610     2  0 May18 ?        00:00:00 [kthrotld]
    root      1751     2  0 May18 ?        00:00:00 [vfio-irqfd-clea]
    root      1801     2  0 16:16 ?        00:00:00 [kworker/u40:3-events_power_efficient]
    root      1815     2  0 Jun15 ?        00:00:00 [xfs-buf/md22]
    root      1818     2  0 Jun15 ?        00:00:00 [xfs-data/md22]
    root      1819     2  0 Jun15 ?        00:00:00 [xfs-conv/md22]
    root      1820     2  0 Jun15 ?        00:00:00 [xfs-cil/md22]
    root      1821     2  0 Jun15 ?        00:00:00 [xfs-reclaim/md2]
    root      1822     2  0 Jun15 ?        00:00:00 [xfs-log/md22]
    root      1823     2  0 Jun15 ?        00:00:00 [xfs-eofblocks/m]
    root      1824     2  0 Jun15 ?        00:00:00 [xfsaild/md22]
    root      1842     2  0 May18 ?        00:00:00 [ipv6_addrconf]
    root      1928     2  0 May18 ?        00:00:00 [scsi_eh_0]
    root      1929     2  0 May18 ?        00:00:00 [scsi_tmf_0]
    root      1930     2  0 May18 ?        00:01:42 [usb-storage]
    root      2098     1  0 May18 ?        00:00:15 /sbin/udevd --daemon
    root      2196     2  0 May18 ?        00:09:18 [kworker/9:1H-kblockd]
    root      2198     2  0 May18 ?        00:05:27 [kworker/7:1H-xfs-log/md1]
    root      2199     2  0 May18 ?        00:02:51 [kworker/10:1H-kblockd]
    root      2206     2  0 May18 ?        00:00:00 [loop0]
    root      2207     2  0 May18 ?        00:02:33 [kworker/17:1H-kblockd]
    root      2208     2  0 May18 ?        00:16:18 [kworker/13:1H-xfs-log/md1]
    root      2209     2  0 May18 ?        00:04:54 [kworker/2:1H-kblockd]
    root      2210     2  0 May18 ?        00:00:00 [loop1]
    root      2232     2  0 May18 ?        00:02:16 [kworker/18:1H-kblockd]
    root      2247     2  0 May18 ?        00:03:09 [kworker/19:1H-kblockd]
    root      2261     2  0 May18 ?        00:02:44 [kworker/16:1H-xfs-log/md1]
    root      2272     2  0 May18 ?        00:05:49 [kworker/6:1H-kblockd]
    root      2292     2  0 May18 ?        00:06:02 [kworker/5:1H-kblockd]
    root      2293     2  0 May18 ?        00:04:46 [kworker/3:1H-kblockd]
    root      2294     2  0 May18 ?        00:05:20 [kworker/8:1H-xfs-log/md1]
    root      2295     2  0 May18 ?        00:02:28 [kworker/11:1H-kblockd]
    root      2296     2  0 May18 ?        00:02:17 [kworker/12:1H-kblockd]
    root      2297     2  0 May18 ?        00:03:13 [kworker/14:1H-xfs-log/nvme0n1p1]
    root      2298     2  0 May18 ?        00:00:00 [scsi_eh_1]
    root      2299     2  0 May18 ?        00:00:00 [scsi_tmf_1]
    root      2300     2  0 May18 ?        00:00:00 [scsi_eh_2]
    root      2301     2  0 May18 ?        00:00:00 [scsi_tmf_2]
    root      2302     2  0 May18 ?        00:00:00 [scsi_eh_3]
    root      2303     2  0 May18 ?        00:00:00 [scsi_tmf_3]
    root      2304     2  0 May18 ?        00:00:00 [scsi_eh_4]
    root      2305     2  0 May18 ?        00:03:00 [kworker/15:1H-kblockd]
    root      2306     2  0 May18 ?        00:00:00 [scsi_tmf_4]
    root      2307     2  0 May18 ?        00:00:00 [kipmi0]
    root      2308     2  0 May18 ?        00:00:00 [scsi_eh_5]
    root      2309     2  0 May18 ?        00:00:00 [scsi_tmf_5]
    root      2310     2  0 May18 ?        00:00:00 [scsi_eh_6]
    root      2311     2  0 May18 ?        00:00:00 [scsi_tmf_6]
    root      2313     2  0 May18 ?        00:00:00 [nvme-wq]
    root      2314     2  0 May18 ?        00:00:00 [nvme-reset-wq]
    root      2315     2  0 May18 ?        00:00:00 [nvme-delete-wq]
    root      2319     2  0 May18 ?        00:00:00 [scsi_eh_7]
    root      2320     2  0 May18 ?        00:00:00 [scsi_tmf_7]
    root      2321     2  0 May18 ?        00:00:00 [scsi_eh_8]
    root      2322     2  0 May18 ?        00:00:00 [scsi_tmf_8]
    root      2323     2  0 May18 ?        00:00:00 [scsi_eh_9]
    root      2324     2  0 May18 ?        00:00:00 [scsi_tmf_9]
    root      2325     2  0 May18 ?        00:00:00 [scsi_eh_10]
    root      2326     2  0 May18 ?        00:00:00 [scsi_tmf_10]
    root      2327     2  0 May18 ?        00:00:00 [scsi_eh_11]
    root      2328     2  0 May18 ?        00:00:00 [scsi_tmf_11]
    root      2329     2  0 May18 ?        00:00:00 [scsi_eh_12]
    root      2330     2  0 May18 ?        00:00:00 [scsi_tmf_12]
    root      2331     2  0 May18 ?        00:00:00 [scsi_eh_13]
    root      2332     2  0 May18 ?        00:00:00 [scsi_tmf_13]
    root      2333     2  0 May18 ?        00:00:00 [scsi_eh_14]
    root      2334     2  0 May18 ?        00:00:00 [scsi_tmf_14]
    root      2344     2  0 May18 ?        00:00:00 [scsi_eh_15]
    root      2345     2  0 May18 ?        00:00:00 [scsi_tmf_15]
    root      2390     2  0 May18 ?        00:06:22 [kworker/4:1H-kblockd]
    root      2409     2  0 May18 ?        00:05:02 [kworker/1:1H-xfs-log/md1]
    root      2411     2  0 May18 ?        00:00:00 [poll_mpt2sas0_s]
    root      2518     1  0 May18 ?        00:00:01 /usr/sbin/rsyslogd -i /var/run/rsyslogd.pid
    root      2541     2  0 May18 ?        00:05:15 [kworker/0:1H-xfs-log/md1]
    root      2548     2  0 May18 ?        00:00:00 [i40e]
    root      2615     1  0 May18 ?        01:48:45 /sbin/haveged -w 1024 -v 1 -p /var/run/haveged.pid
    message+  2691     1  0 May18 ?        00:00:00 /usr/bin/dbus-daemon --system
    rpc       2700     1  0 May18 ?        00:00:02 /sbin/rpcbind -l -w
    rpc       2705     1  0 May18 ?        00:00:00 /sbin/rpc.statd
    ntp       2735     1  0 May18 ?        00:01:52 /usr/sbin/ntpd -g -u ntp:ntp
    root      2741     1  0 May18 ?        00:00:00 /usr/sbin/acpid
    root      2756     1  0 May18 ?        00:00:21 /usr/sbin/crond
    daemon    2760     1  0 May18 ?        00:00:00 /usr/sbin/atd -b 15 -l 1
    root      2821  5318  0 Jun04 pts/4    00:00:00 -bash
    root      2839     1  0 Jun04 pts/4    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdu
    root      2860     1  0 Jun04 pts/4    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdu
    root      2873  2860  0 Jun04 pts/4    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdu
    root      2874  2873  0 Jun04 pts/4    00:00:00 tail -f -n0 /var/log/syslog
    root      3229     2  0 Jun15 ?        00:00:00 [btrfs-worker]
    root      3230     2  0 Jun15 ?        00:00:00 [kworker/u41:0]
    root      3231     2  0 Jun15 ?        00:00:00 [btrfs-worker-hi]
    root      3232     2  0 Jun15 ?        00:00:00 [btrfs-delalloc]
    root      3233     2  0 Jun15 ?        00:00:00 [btrfs-flush_del]
    root      3234     2  0 Jun15 ?        00:00:00 [btrfs-cache]
    root      3235     2  0 Jun15 ?        00:00:00 [btrfs-submit]
    root      3236     2  0 Jun15 ?        00:00:00 [btrfs-fixup]
    root      3237     2  0 Jun15 ?        00:00:00 [btrfs-endio]
    root      3238     2  0 Jun15 ?        00:00:00 [btrfs-endio-met]
    root      3239     2  0 Jun15 ?        00:00:00 [btrfs-endio-met]
    root      3240     2  0 Jun15 ?        00:00:00 [btrfs-endio-rai]
    root      3241     2  0 Jun15 ?        00:00:00 [btrfs-endio-rep]
    root      3242     2  0 Jun15 ?        00:00:00 [btrfs-rmw]
    root      3243     2  0 Jun15 ?        00:00:00 [btrfs-endio-wri]
    root      3244     2  0 Jun15 ?        00:00:00 [btrfs-freespace]
    root      3245     2  0 Jun15 ?        00:00:00 [btrfs-delayed-m]
    root      3246     2  0 Jun15 ?        00:00:00 [btrfs-readahead]
    root      3247     2  0 Jun15 ?        00:00:00 [btrfs-qgroup-re]
    root      3248     2  0 Jun15 ?        00:00:00 [btrfs-extent-re]
    root      3340     2  0 Jun15 ?        00:00:00 [btrfs-cleaner]
    root      3341     2  0 Jun15 ?        00:00:10 [btrfs-transacti]
    root      3979     1  0 Jun16 ?        00:00:17 /sbin/apcupsd
    root      4790     1  0 Jun06 ?        00:00:00 /usr/sbin/sshd
    root      4900     1  0 Jun06 ?        00:00:00 /usr/sbin/inetd
    root      4997  5318  0 Jun17 pts/13   00:00:00 -bash
    root      5029     1  0 Jun17 pts/13   00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdg
    root      5063     1  0 Jun17 pts/13   00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdg
    root      5082  5063  0 Jun17 pts/13   00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdg
    root      5084  5082  0 Jun17 pts/13   00:00:00 tail -f -n0 /var/log/syslog
    root      5095     1  0 May18 ?        01:47:28 php /etc/rc.d/rc.diskinfo
    root      5318     1  0 May23 ?        00:31:50 /usr/bin/tmux new-session -d -x 140 -y 200 -s preclear_disk_WD-WCC7K0JC7JJ0
    root      5319  5318  0 May23 pts/0    00:00:00 -bash
    root      5337     1  0 May23 pts/0    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdn
    root      5358     1  0 May23 pts/0    00:00:03 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdn
    root      5371  5358  0 May23 pts/0    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdn
    root      5372  5371  0 May23 pts/0    00:00:00 tail -f -n0 /var/log/syslog
    root      5516  7651  0 16:55 ?        00:00:00 php-fpm: pool www
    dga       5977  9690  5 12:01 ?        00:16:15 /usr/sbin/smbd -D
    root      6085     1  0 May18 tty1     00:00:00 -bash
    root      6086     1  0 May18 tty2     00:00:00 /sbin/agetty 38400 tty2 linux
    root      6087     1  0 May18 tty3     00:00:00 /sbin/agetty 38400 tty3 linux
    root      6088     1  0 May18 tty4     00:00:00 /sbin/agetty 38400 tty4 linux
    root      6089     1  0 May18 tty5     00:00:00 /sbin/agetty 38400 tty5 linux
    root      6090     1  0 May18 tty6     00:00:00 /sbin/agetty 38400 tty6 linux
    root      6310     2  0 16:46 ?        00:00:00 [kworker/10:0-xfs-cil/md1]
    root      6874     2  0 Jun15 ?        00:00:00 [kworker/8:2-events]
    avahi     7618     1  0 May18 ?        00:10:01 avahi-daemon: running [rack.local]
    avahi     7622  7618  0 May18 ?        00:00:00 avahi-daemon: chroot helper
    root      7631     1  0 May18 ?        00:00:00 /usr/sbin/avahi-dnsconfd -D
    root      7646     1  0 May18 ?        02:34:59 /usr/local/sbin/emhttpd
    root      7651     1  0 May18 ?        00:02:10 php-fpm: master process (/etc/php-fpm/php-fpm.conf)
    root      7673     1  0 May18 ?        01:36:33 ttyd -d 0 -i /var/run/ttyd.sock login -f root
    root      7676     1  0 May18 ?        00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
    root      8056     2  0 May18 ?        00:00:00 [xfsalloc]
    root      8057     2  0 May18 ?        00:00:00 [xfs_mru_cache]
    root      8232  5318  0 Jun12 pts/8    00:00:00 -bash
    root      8250     1  0 Jun12 pts/8    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdm
    root      8271     1  0 Jun12 pts/8    00:00:02 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdm
    root      8281  8271  0 Jun12 pts/8    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdm
    root      8283  8281  0 Jun12 pts/8    00:00:00 tail -f -n0 /var/log/syslog
    root      8293     1  0 Jun15 ?        00:00:13 /usr/local/sbin/shfs /mnt/user0 -disks 4194814 -o noatime,big_writes,allow_other
    root      8362     1  6 Jun15 ?        04:29:54 /usr/local/sbin/shfs /mnt/user -disks 4194815 2048000000 -o noatime,big_writes,allow_other -o remember=0
    root      8541  5318  0 May23 pts/1    00:00:00 -bash
    root      8559     1  0 May23 pts/1    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdo
    root      8580     1  0 May23 pts/1    00:00:03 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdo
    root      8593  8580  0 May23 pts/1    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdo
    root      8594  8593  0 May23 pts/1    00:00:00 tail -f -n0 /var/log/syslog
    root      8887     2  0 Jun11 ?        00:00:13 [kworker/19:1-mm_percpu_wq]
    root      9021     2  0 Jun15 ?        00:00:00 [xfs-buf/nvme0n1]
    root      9022     2  0 Jun15 ?        00:00:00 [xfs-data/nvme0n]
    root      9023     2  0 Jun15 ?        00:00:00 [xfs-conv/nvme0n]
    root      9024     2  0 Jun15 ?        00:00:00 [xfs-cil/nvme0n1]
    root      9025     2  0 Jun15 ?        00:00:00 [xfs-reclaim/nvm]
    root      9026     2  0 Jun15 ?        00:00:00 [xfs-log/nvme0n1]
    root      9027     2  0 Jun15 ?        00:00:00 [xfs-eofblocks/n]
    root      9076     2  0 Jun15 ?        00:00:19 [xfsaild/nvme0n1]
    root      9640     1  0 Jun15 ?        00:00:04 /usr/sbin/nmbd -D
    root      9690     1  0 Jun15 ?        00:00:01 /usr/sbin/smbd -D
    root      9717  9690  0 Jun15 ?        00:00:04 /usr/sbin/smbd -D
    root      9718  9690  0 Jun15 ?        00:00:00 /usr/sbin/smbd -D
    root      9744     1  0 Jun15 ?        00:00:06 /usr/sbin/winbindd -D
    root      9746  9744  0 Jun15 ?        00:00:03 /usr/sbin/winbindd -D
    root      9842     2  0 Jun15 ?        00:00:05 [loop2]
    root      9844     2  0 Jun15 ?        00:00:00 [btrfs-worker]
    root      9845     2  0 Jun15 ?        00:00:00 [btrfs-worker-hi]
    root      9846     2  0 Jun15 ?        00:00:00 [btrfs-delalloc]
    root      9847     2  0 Jun15 ?        00:00:00 [btrfs-flush_del]
    root      9856     2  0 Jun15 ?        00:00:00 [btrfs-cache]
    root      9857     2  0 Jun15 ?        00:00:00 [btrfs-submit]
    root      9858     2  0 Jun15 ?        00:00:00 [btrfs-fixup]
    root      9859     2  0 Jun15 ?        00:00:00 [btrfs-endio]
    root      9860     2  0 Jun15 ?        00:00:00 [btrfs-endio-met]
    root      9861     2  0 Jun15 ?        00:00:00 [btrfs-endio-met]
    root      9862     2  0 Jun15 ?        00:00:00 [btrfs-endio-rai]
    root      9863     2  0 Jun15 ?        00:00:00 [btrfs-endio-rep]
    root      9865     2  0 Jun15 ?        00:00:00 [btrfs-rmw]
    root      9869     2  0 Jun15 ?        00:00:00 [btrfs-endio-wri]
    root      9873     2  0 Jun15 ?        00:00:00 [btrfs-freespace]
    root      9875     2  0 Jun15 ?        00:00:00 [btrfs-delayed-m]
    root      9877     2  0 Jun15 ?        00:00:00 [btrfs-readahead]
    root      9885     2  0 Jun15 ?        00:00:00 [btrfs-qgroup-re]
    root      9886     2  0 Jun15 ?        00:00:00 [btrfs-extent-re]
    root      9896     2  0 Jun15 ?        00:00:00 [btrfs-cleaner]
    root      9897     2  0 Jun15 ?        00:00:08 [btrfs-transacti]
    root     10064     1  0 Jun15 ?        00:04:09 /usr/bin/dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --storage-driver=btrfs
    root     10195 10064  0 Jun15 ?        00:03:46 containerd --config /var/run/docker/containerd/containerd.toml --log-level info
    root     10599 27215  0 16:57 ?        00:00:00 sleep 6
    root     10804  5516  0 16:57 ?        00:00:00 ps -ef
    root     10897     2  0 Jun15 ?        00:00:00 [kworker/11:0-events]
    root     10964     2  0 16:47 ?        00:00:00 [kworker/4:2-xfs-cil/nvme0n1p1]
    root     11163 10195  0 Jun15 ?        00:00:07 containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1ccd08c42c8bf9a41cb9119d3adbb40f3c8a6aa11e7cb255bdd1080453cd1831 -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
    root     11180 11163  0 Jun15 ?        00:00:00 s6-svscan -t0 /var/run/s6/services
    root     11295 11180  0 Jun15 ?        00:00:00 s6-supervise s6-fdholderd
    root     11487 11180  0 Jun15 ?        00:00:00 s6-supervise emby-server
    root     11491 11487  0 Jun15 ?        00:00:00 sh ./run
    root     11775     1  0 May18 ?        00:00:03 dhcpcd -w -q -t 10 -h rack -4 eth0
    root     11953 10064  0 Jun15 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 36330 -container-ip 172.17.0.2 -container-port 36330
    root     11966 10064  0 Jun15 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 7396 -container-ip 172.17.0.2 -container-port 7396
    root     11973 10195  0 Jun15 ?        00:00:07 containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/3bc68dd556bf286cc14e0ba2cabcb401954300aab1abb3e3a877c3b03b6b891d -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
    root     11990 11973  0 Jun15 ?        00:00:00 /usr/bin/python3 -u /sbin/my_init
    root     12138     2  0 Jun10 ?        00:00:13 [kworker/17:3-mm_percpu_wq]
    root     12358 11990  0 Jun15 ?        00:00:00 /usr/sbin/syslog-ng --pidfile /var/run/syslog-ng.pid -F --no-caps
    daemon   12366 11491  0 Jun15 ?        00:15:41 /system/EmbyServer -programdata /config -ffdetect /bin/ffdetect -ffmpeg /bin/ffmpeg -ffprobe /bin/ffprobe -restartexitcode 3
    root     13461 11990  0 Jun15 ?        00:00:01 /usr/bin/runsvdir -P /etc/service
    root     13462 13461  0 Jun15 ?        00:00:00 runsv cron
    root     13463 13461  0 Jun15 ?        00:00:00 runsv sshd
    root     13464 13461  0 Jun15 ?        00:00:00 runsv fahclient
    nobody   13465 13464  0 Jun15 ?        00:09:40 /opt/fah/usr/bin/FAHClient --config /config/config.xml
    root     13466 13462  0 Jun15 ?        00:00:00 /usr/sbin/cron -f
    root     13550  5318  0 May25 pts/2    00:00:00 -bash
    root     13588     1  0 May25 pts/2    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdt
    root     13610     1  0 May25 pts/2    00:00:01 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdt
    root     13623 13610  0 May25 pts/2    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdt
    root     13624 13623  0 May25 pts/2    00:00:00 tail -f -n0 /var/log/syslog
    root     13868     2  0 Jun17 ?        00:00:00 [kworker/7:1]
    root     15211     2  0 Jun10 ?        00:00:32 [kworker/8:1-mm_percpu_wq]
    root     15303     2  0 Jun14 ?        00:00:00 [kworker/12:1-cgroup_destroy]
    root     15394  9744  0 Jun15 ?        00:00:00 /usr/sbin/winbindd -D
    nobody   15581 13465  0 15:17 ?        00:00:02 /opt/fah/usr/bin/FAHCoreWrapper /config/cores/cores.foldingathome.org/Linux/AMD64/AVX/Core_a7.fah/FahCore_a7 -dir 01 -suffix 01 -version 704 -lifeline 29 -checkpoint 15 -np 19
    nobody   15585 15581 99 15:17 ?        1-06:03:11 /config/cores/cores.foldingathome.org/Linux/AMD64/AVX/Core_a7.fah/FahCore_a7 -dir 01 -suffix 01 -version 704 -lifeline 1305 -checkpoint 15 -np 19
    root     15817     2  0 Jun15 ?        00:00:38 [kworker/0:2-events]
    root     16044     2  0 16:39 ?        00:00:00 [kworker/10:1-xfs-cil/md1]
    root     16948     2  0 Jun17 ?        00:00:00 [kworker/3:3-events]
    root     17399     2  0 Jun14 ?        00:00:00 [kworker/19:0]
    root     17741  5318  0 Jun12 pts/9    00:00:00 -bash
    root     17759     1  0 Jun12 pts/9    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdp
    root     17780     1  0 Jun12 pts/9    00:00:02 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdp
    root     17796 17780  0 Jun12 pts/9    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdp
    root     17797 17796  0 Jun12 pts/9    00:00:00 tail -f -n0 /var/log/syslog
    root     18444  5318  0 Jun10 pts/6    00:00:00 -bash
    root     18462     1  0 Jun10 pts/6    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdv
    root     18483     1  0 Jun10 pts/6    00:00:02 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdv
    root     18495 18483  0 Jun10 pts/6    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdv
    root     18496 18495  0 Jun10 pts/6    00:00:00 tail -f -n0 /var/log/syslog
    root     18914     2  0 Jun14 ?        00:00:04 [kworker/6:0-xfs-cil/md3]
    root     19337     2  0 Jun14 ?        00:00:05 [kworker/12:0-mm_percpu_wq]
    root     19802     2  0 16:00 ?        00:00:00 [kworker/u40:0-btrfs-submit]
    root     19908     2  0 Jun14 ?        00:00:01 [kworker/14:1-xfs-cil/md7]
    root     20234     2  0 14:13 ?        00:00:00 [kworker/13:0-events]
    root     20468     2  0 Jun15 ?        00:00:00 [md]
    root     20469     2  4 Jun15 ?        03:03:19 [mdrecoveryd]
    root     21099     2  0 Jun15 ?        00:00:00 [spinupd]
    root     21195     2  0 Jun15 ?        00:00:00 [spinupd]
    root     21282     2  0 Jun15 ?        00:00:00 [spinupd]
    root     21424     2  0 Jun15 ?        00:00:00 [spinupd]
    root     21514     2  0 Jun15 ?        00:00:04 [kworker/9:0-xfs-cil/nvme0n1p1]
    root     21946     2  0 Jun15 ?        00:00:00 [spinupd]
    root     22077     2  0 Jun15 ?        00:00:00 [spinupd]
    root     22088     2  0 Jun15 ?        00:00:00 [spinupd]
    root     22165     2  0 Jun15 ?        00:00:00 [spinupd]
    root     22466     2  0 Jun15 ?        00:00:00 [spinupd]
    root     22467     2  0 Jun15 ?        00:00:00 [spinupd]
    root     22472     2  0 Jun15 ?        00:00:00 [spinupd]
    root     23195  9690  0 Jun16 ?        00:00:00 /usr/sbin/smbd -D
    root     23240     2  0 Jun16 ?        00:00:05 [kworker/3:2-mm_percpu_wq]
    root     23455     2  0 Jun16 ?        00:00:06 [kworker/1:2-events]
    root     24961     2 16 Jun15 ?        11:20:28 [unraidd]
    root     25181  5318  0 Jun15 pts/12   00:00:00 -bash
    root     25206     1  0 Jun15 pts/12   00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdf
    root     25227     1  0 Jun15 pts/12   00:00:01 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdf
    root     25240 25227  0 Jun15 pts/12   00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdf
    root     25241 25240  0 Jun15 pts/12   00:00:00 tail -f -n0 /var/log/syslog
    root     25463  7651  0 16:52 ?        00:00:00 php-fpm: pool www
    root     25886     2  0 16:52 ?        00:00:00 [kworker/10:2-events]
    root     27112     2  0 16:52 ?        00:00:00 [kworker/u40:1-events_power_efficient]
    root     27212     2  0 Jun16 ?        00:00:05 [kworker/5:1-events]
    root     27213     2  0 Jun16 ?        00:00:01 [kworker/18:0-mm_percpu_wq]
    root     27215     1  0 Jun15 ?        00:01:54 /bin/bash /usr/local/emhttp/webGui/scripts/diskload
    root     27455     2  0 Jun16 ?        00:00:06 [kworker/2:2-events]
    root     27880     2  0 Jun17 ?        00:00:02 [kworker/13:5-md]
    root     28513     2  0 Jun15 ?        00:00:00 [xfs-buf/md1]
    root     28514     2  0 Jun15 ?        00:00:00 [xfs-data/md1]
    root     28515     2  0 Jun15 ?        00:00:00 [xfs-conv/md1]
    root     28516     2  0 Jun15 ?        00:00:00 [xfs-cil/md1]
    root     28517     2  0 Jun15 ?        00:00:00 [xfs-reclaim/md1]
    root     28518     2  0 Jun15 ?        00:00:00 [xfs-log/md1]
    root     28519     2  0 Jun15 ?        00:00:00 [xfs-eofblocks/m]
    root     28521     2  0 Jun15 ?        00:00:08 [xfsaild/md1]
    root     28644     2  0 Jun12 ?        00:00:00 [kworker/17:0-xfs-buf/nvme0n1p1]
    root     28648  5318  0 Jun13 pts/10   00:00:00 -bash
    root     28668     1  0 Jun13 pts/10   00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sde
    root     28692     1  0 Jun13 pts/10   00:00:01 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sde
    root     28707 28692  0 Jun13 pts/10   00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sde
    root     28708 28707  0 Jun13 pts/10   00:00:00 tail -f -n0 /var/log/syslog
    root     28719     2  0 Jun15 ?        00:00:01 [kworker/5:0-xfs-cil/md5]
    root     28720     2  0 Jun15 ?        00:00:01 [kworker/18:1-xfs-cil/md6]
    root     28721     2  0 Jun15 ?        00:00:01 [kworker/15:1-xfs-cil/md8]
    root     28722     2  0 Jun15 ?        00:00:01 [kworker/2:1-xfs-cil/md7]
    root     28858     2  0 Jun15 ?        00:00:00 [xfs-buf/md2]
    root     28860     2  0 Jun15 ?        00:00:00 [xfs-data/md2]
    root     28861     2  0 Jun15 ?        00:00:00 [xfs-conv/md2]
    root     28863     2  0 Jun15 ?        00:00:00 [xfs-cil/md2]
    root     28864     2  0 Jun15 ?        00:00:00 [xfs-reclaim/md2]
    root     28866     2  0 Jun15 ?        00:00:00 [xfs-log/md2]
    root     28868     2  0 Jun15 ?        00:00:00 [xfs-eofblocks/m]
    root     28870     2  0 Jun15 ?        00:00:00 [xfsaild/md2]
    root     29026     2  0 Jun10 ?        00:00:32 [kworker/7:0-mm_percpu_wq]
    root     29364     2  0 Jun15 ?        00:00:04 [kworker/14:0-mm_percpu_wq]
    root     29498     2  0 Jun15 ?        00:00:12 [kworker/6:1-events]
    root     29604     2  0 Jun15 ?        00:00:04 [kworker/11:2-mm_percpu_wq]
    root     29606     2  0 Jun15 ?        00:00:00 [xfs-buf/md3]
    root     29607     2  0 Jun15 ?        00:00:00 [xfs-data/md3]
    root     29608     2  0 Jun15 ?        00:00:00 [xfs-conv/md3]
    root     29610     2  0 Jun15 ?        00:00:00 [xfs-cil/md3]
    root     29611     2  0 Jun15 ?        00:00:00 [xfs-reclaim/md3]
    root     29613     2  0 Jun15 ?        00:00:00 [xfs-log/md3]
    root     29614     2  0 Jun15 ?        00:00:00 [xfs-eofblocks/m]
    root     29615     2  0 Jun15 ?        00:00:00 [xfsaild/md3]
    root     29625     2  0 16:53 ?        00:00:00 [kworker/4:1-mm_percpu_wq]
    root     29971     2  0 Jun15 ?        00:00:12 [kworker/9:1-mm_percpu_wq]
    root     30121     2  0 Jun16 ?        00:00:01 [kworker/15:2-mm_percpu_wq]
    root     30331     2  0 Jun15 ?        00:00:00 [xfs-buf/md4]
    root     30332     2  0 Jun15 ?        00:00:00 [xfs-data/md4]
    root     30335     2  0 Jun15 ?        00:00:00 [xfs-conv/md4]
    root     30336     2  0 Jun15 ?        00:00:00 [xfs-cil/md4]
    root     30337     2  0 Jun15 ?        00:00:00 [xfs-reclaim/md4]
    root     30340     2  0 Jun15 ?        00:00:00 [xfs-log/md4]
    root     30345     2  0 Jun15 ?        00:00:00 [xfs-eofblocks/m]
    root     30350     2  0 Jun15 ?        00:00:00 [xfsaild/md4]
    root     30408     2  0 Jun17 ?        00:00:00 [kworker/1:3-events]
    nobody   30670  7676  0 May18 ?        03:34:05 nginx: worker process
    root     30904     2  0 16:25 ?        00:00:00 [kworker/4:0-xfs-cil/nvme0n1p1]
    root     31077     2  0 Jun15 ?        00:00:00 [xfs-buf/md5]
    root     31079     2  0 Jun15 ?        00:00:00 [xfs-data/md5]
    root     31081     2  0 Jun15 ?        00:00:00 [xfs-conv/md5]
    root     31082     2  0 Jun15 ?        00:00:00 [xfs-cil/md5]
    root     31084     2  0 Jun15 ?        00:00:00 [xfs-reclaim/md5]
    root     31086     2  0 Jun15 ?        00:00:00 [xfs-log/md5]
    root     31088     2  0 Jun15 ?        00:00:00 [xfs-eofblocks/m]
    root     31094     2  0 Jun15 ?        00:00:00 [xfsaild/md5]
    root     31144  7651  0 16:53 ?        00:00:00 php-fpm: pool www
    root     31321     2  0 Jun17 ?        00:00:00 [kworker/0:0-events]
    root     31355     1  0 Jun12 ?        00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdi
    root     31376     1  0 Jun12 ?        00:00:02 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdi
    root     31389 31376  0 Jun12 ?        00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdi
    root     31390 31389  0 Jun12 ?        00:00:00 tail -f -n0 /var/log/syslog
    root     31892     2  0 Jun15 ?        00:00:00 [xfs-buf/md6]
    root     31893     2  0 Jun15 ?        00:00:00 [xfs-data/md6]
    root     31894     2  0 Jun15 ?        00:00:00 [xfs-conv/md6]
    root     31895     2  0 Jun15 ?        00:00:00 [xfs-cil/md6]
    root     31896     2  0 Jun15 ?        00:00:00 [xfs-reclaim/md6]
    root     31897     2  0 Jun15 ?        00:00:00 [xfs-log/md6]
    root     31898     2  0 Jun15 ?        00:00:00 [xfs-eofblocks/m]
    root     32006     2  0 Jun15 ?        00:00:00 [xfsaild/md6]
    root     32060  5318  0 Jun04 pts/3    00:00:00 -bash
    root     32085     1  0 Jun04 pts/3    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdo
    root     32145     1  0 Jun04 pts/3    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdo
    root     32165 32145  0 Jun04 pts/3    00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdo
    root     32166 32165  0 Jun04 pts/3    00:00:00 tail -f -n0 /var/log/syslog
    root     32387     2  0 Jun15 ?        00:00:00 [kworker/16:0-xfs-sync/md1]
    root     32443     1  0 Jun09 ?        00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdv
    root     32464     1  0 Jun09 ?        00:00:03 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdv
    root     32477 32464  0 Jun09 ?        00:00:00 /bin/bash /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdv
    root     32478 32477  0 Jun09 ?        00:00:00 tail -f -n0 /var/log/syslog
    root     32643     2  0 Jun15 ?        00:00:00 [xfs-buf/md7]
    root     32644     2  0 Jun15 ?        00:00:00 [xfs-data/md7]
    root     32645     2  0 Jun15 ?        00:00:00 [xfs-conv/md7]
    root     32646     2  0 Jun15 ?        00:00:00 [xfs-cil/md7]
    root     32647     2  0 Jun15 ?        00:00:00 [xfs-reclaim/md7]
    root     32649     2  0 Jun15 ?        00:00:00 [xfs-log/md7]
    root     32650     2  0 Jun15 ?        00:00:00 [xfs-eofblocks/m]
    root     32736     2  0 Jun15 ?        00:00:00 [xfsaild/md7]
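
    The stuck runs can be listed and, if needed, cleaned up by hand; each one appears to run inside a tmux session named preclear_disk_<serial> (the WD-WCC7K0JC7JJ0 session visible above is one example):

    ps -ef | grep '[p]reclear_disk.sh'
    tmux ls
    tmux kill-session -t preclear_disk_WD-WCC7K0JC7JJ0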

     

  11.  	
    ############################################################################################################################
    #                                                                                                                          #
    #                                        unRAID Server Preclear of disk 5XW1P0KT                                           #
    #                                       Cycle 1 of 1, partition start on sector 64.                                        #
    #                                                                                                                          #
    #                                                                                                                          #
    #   Step 1 of 5 - Post-Read verification:                                                   [7:05:50 @ 78 MB/s] SUCCESS    #
    #   Step 2 of 5 - Verifying unRAID's Preclear signature:                                                        SUCCESS    #
    #   Step 3 of 5 - Writing unRAID's Preclear signature:                                                          SUCCESS    #
    #   Step 4 of 5 - Zeroing the disk:                                                         [6:15:51 @ 88 MB/s] SUCCESS    #
    #   Step 5 of 5 - Pre-read verification:                                                    [6:21:50 @ 87 MB/s] SUCCESS    #
    #                                                                                                                          #
    #                                                                                                                          #
    #                                                                                                                          #
    #                                                                                                                          #
    #                                                                                                                          #
    #                                                                                                                          #
    #                                                                                                                          #
    ############################################################################################################################
    #                              Cycle elapsed time: 19:43:40 | Total elapsed time: 19:43:41                                 #
    ############################################################################################################################
    
    

    Does this look backwards? The steps are ordered in reverse in this output. 

     

     	
    ############################################################################################################################
    #                                                                                                                          #
    #                                        unRAID Server Preclear of disk 6YD0ZHYA                                           #
    #                                       Cycle 1 of 1, partition start on sector 64.                                        #
    #                                                                                                                          #
    #                                                                                                                          #
    #   Step 1 of 5 - Verifying unRAID's Preclear signature:                                                        SUCCESS    #
    #   Step 2 of 5 - Writing unRAID's Preclear signature:                                                          SUCCESS    #
    #   Step 3 of 5 - Zeroing the disk:                                                        [5:07:52 @ 108 MB/s] SUCCESS    #
    #   Step 4 of 5 - Pre-read verification:                                                   [5:10:00 @ 107 MB/s] SUCCESS    #
    #   Step 5 of 5 - Post-Read in progress:                                                                     (93% Done)    #
    #                                                                                                                          #
    #                                                                                                                          #
    #                                                                                                                          #
    #                                                                                                                          #
    #                                                                                                                          #
    #   ** Time elapsed: 4:38:15 | Current speed: 112 MB/s | Average speed: 111 MB/s                                           #
    #                                                                                                                          #
    ############################################################################################################################
    #                              Cycle elapsed time: 14:56:13 | Total elapsed time: 14:56:14                                 #
    ############################################################################################################################
    

    Note the weird ordering of steps in the preview.