Xaero

Members
Everything posted by Xaero

  1. The reason nohup doesn't work for this is that when you disconnect or log out of that terminal, the terminal and any child processes of the terminal are killed. This is just Linux process management doing its job. To prevent this you can simply disown the process; there's no need to nohup it. For example:

$ processname &
$ disown

and "processname" will continue running after the terminal is killed. This is good because it means that "processname" will still respond to hangup, which may be needed. Of course, you could also call disown after nohup:

$ nohup processname &
$ disown

You can also disown processes by their PID, but calling disown immediately after spawning a process will automatically disown the last created background job.
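A minimal sketch of the disown idiom, using sleep as a stand-in for a real long-running process:

```shell
# Start a stand-in long-running process in the background...
sleep 30 &
pid=$!          # $! holds the PID of the most recent background job

# ...then remove it from the shell's job table so it is not killed
# when this shell exits. Unlike with nohup, it can still receive
# SIGHUP if one is sent to it directly.
disown "$pid"

kill -0 "$pid" && echo "PID $pid is detached but still running"
kill "$pid"     # cleanup for this demo; normally you'd leave it running
```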
  2. SCSI Host Controllers and Connected Drives
--------------------------------------------------
[0] scsi0 usb-storage
  [0:0:0:0] flash    sda 62.7GB Extreme
[1] scsi1 megaraid_sas MegaRAID SAS 2008 [Falcon]
  [1:0:11:0] disk13  sdb 8.00TB WDC WD80EFAX-68L
  [1:0:12:0] disk5   sdc 8.00TB WDC WD80EFAX-68L
  [1:0:13:0] disk7   sdd 8.00TB WDC WD80EFAX-68L
  [1:0:14:0] disk2   sde 8.00TB WDC WD80EFAX-68L
  [1:0:15:0] disk3   sdf 8.00TB WDC WD80EFAX-68L
  [1:0:16:0] disk4   sdg 8.00TB WDC WD80EFAX-68L
  [1:0:17:0] disk10  sdh 8.00TB WDC WD80EFAX-68L
  [1:0:18:0] disk21  sdi 8.00TB WDC WD80EFAX-68L
  [1:0:19:0] disk8   sdj 8.00TB WDC WD80EFAX-68L
  [1:0:20:0] disk12  sdk 8.00TB WDC WD80EFAX-68L
  [1:0:21:0] disk11  sdl 8.00TB WDC WD80EFAX-68L
  [1:0:22:0] disk15  sdm 8.00TB WDC WD80EFAX-68L
  [1:0:23:0] disk16  sdn 8.00TB WDC WD80EFAX-68L
  [1:0:24:0] disk19  sdo 8.00TB WDC WD80EFAX-68L
  [1:0:25:0] disk22  sdp 8.00TB WDC WD80EMAZ-00W
  [1:0:26:0] disk17  sdq 8.00TB WDC WD80EFAX-68L
  [1:0:27:0] disk18  sdr 8.00TB WDC WD80EFAX-68L
  [1:0:28:0] disk20  sds 8.00TB WDC WD80EFAX-68L
  [1:0:29:0] disk6   sdt 8.00TB WDC WD80EFAX-68L
  [1:0:30:0] disk9   sdu 8.00TB WDC WD80EFAX-68L
  [1:0:31:0] disk14  sdv 8.00TB WDC WD80EFAX-68L
  [1:0:32:0] disk1   sdw 8.00TB WDC WD80EFAX-68L
  [1:0:33:0] parity2 sdx 8.00TB WDC WD80EMAZ-00W
  [1:0:34:0] parity  sdy 8.00TB WDC WD80EMAZ-00W
[N0] scsiN0 nvme0 NVMe
  [N:0:1:1] cache    nvme0n1 1.02TB INTEL SSDPEKNW01
[N1] scsiN1 nvme1 NVMe
  [N:1:1:1] cache2   nvme1n1 1.02TB INTEL SSDPEKNW01

Results from B3 look good!
  3. It looks like his NVMe drives have a 5th column. Not sure if that's the cause or not.
  4. I'm not sure if this will work:

awk '{ if ($3 == "") print "Column 3 is empty for " NR }'

This should print "Column 3 is empty for #", where # is the row number.

awk 'BEGIN { FS = OFS = "\t" } { for (i = 1; i <= NF; i++) if ($i ~ /^ *$/) $i = 0 }; 1'

This may also work to replace empty fields with "0", but it's a copy-paste from a different application.
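A quick sanity check of both snippets on throwaway tab-separated input (the file path is just for illustration):

```shell
# Two sample rows; the second row's third tab-delimited field is empty.
printf 'a\tb\tc\nx\ty\t\n' > /tmp/cols.tsv

# Report rows whose third field is empty
awk -F'\t' '$3 == "" { print "Column 3 is empty for " NR }' /tmp/cols.tsv
# -> Column 3 is empty for 2

# Replace any empty (or all-space) field with 0
awk 'BEGIN { FS = OFS = "\t" } { for (i = 1; i <= NF; i++) if ($i ~ /^ *$/) $i = 0 } 1' /tmp/cols.tsv
```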
  5. The megaraid disks still don't show up; not sure what's causing that.

SCSI Host Controllers and Connected Drives
--------------------------------------------------
[0] scsi0 usb-storage
  [0:0:0:0] flash sda 62.7GB Extreme
[1] scsi1 megaraid_sas MegaRAID SAS 2008 [Falcon]
[N0] scsiN0 nvme0 NVMe
  [N:0:1:1] cache  nvme0n1 1.02TB INTEL SSDPEKNW01
[N1] scsiN1 nvme1 NVMe
  [N:1:1:1] cache2 nvme1n1 1.02TB INTEL SSDPEKNW01
*** END OF REPORT ***

Everything else looks good.
  6. Don't worry, I'm sure some obnoxious niche thing with my server will cause a hiccup.
  7. I believe

sort -n -t: -k3

should handle this. You may need to strip the [ and ], so it'd be:

sed -e "s/[\[,\]]//g" | sort -n -t: -k3

Or something along those lines; I don't have a terminal accessible to test at the moment. To explain:

-n  sort numerically
-t: change the delimiter to ":"
-k3 sort by column 3

EDIT: forgot the -n flag above.
EDIT 2: It's also entirely possible that the disk order doesn't line up with the port numbers.
  8. rdevName.0=sdy
rdevName.1=sdw
rdevName.2=sde
rdevName.3=sdf
rdevName.4=sdg
rdevName.5=sdc
rdevName.6=sdt
rdevName.7=sdd
rdevName.8=sdj
rdevName.9=sdu
rdevName.10=sdh
rdevName.11=sdl
rdevName.12=sdk
rdevName.13=sdb
rdevName.14=sdv
rdevName.15=sdm
rdevName.16=sdn
rdevName.17=sdq
rdevName.18=sdr
rdevName.19=sdo
rdevName.20=sds
rdevName.21=sdi
rdevName.22=sdp

As you can see, my disks are actually numbered from 0 as far as the md array in unraid is concerned. I believe the host addresses start at 11, and that makes sense from a physical perspective: address 1 is the controller itself, addresses 2-9 are the links to the port expander, address 10 is the port expander itself (shown as "enclosu" in the report), and then address 11 is the first disk device. And yeah, I suggest multi-dimensional arrays specifically because they nullify issues like this: instead of relying on indices and array sizes, we rely on "for each object" logic, which returns items in the order they were input, regardless of whether or not everything is incremental.
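A hedged sketch of how those rdevName.N=sdX pairs could be loaded into an indexed bash array, so lookups go through the md slot number instead of positional guessing. The here-doc reuses a few entries from the output above; in practice you'd feed in "mdcmd status | grep -i rdevname":

```shell
#!/bin/bash
declare -a rdev
while IFS='=' read -r key val; do
    idx=${key#rdevName.}             # strip the prefix, leaving the slot number
    [ -n "$val" ] && rdev[$idx]=$val # skip empty (unassigned) slots
done <<'EOF'
rdevName.0=sdy
rdevName.1=sdw
rdevName.2=sde
EOF
echo "slot 0 -> ${rdev[0]}"          # slot 0 -> sdy
```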
  9. Unraid 6.x Tunables Tester v4.1 BETA 1 by Pauven

Tunables Report produced Sun Aug 11 20:25:03 MDT 2019
Run on server: BlackHole
Short Parity Sync Test

Current Values: md_num_stripes=5920, md_sync_window=2664, md_sync_thresh=2000
Global nr_requests=128
Disk Specific nr_requests Values:
sdy=128, sdw=128, sde=128, sdf=128, sdg=128, sdc=128,
sdt=128, sdd=128, sdj=128, sdu=128, sdh=128, sdl=128,
sdk=128, sdb=128, sdv=128, sdm=128, sdn=128, sdq=128,
sdr=128, sdo=128, sds=128, sdi=128, sdp=128, sdx=128,

--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10sec Duration) ---
Tst | RAM | stri | win  | req | thresh | MB/s
----------------------------------------------
  1 | 569 | 5920 | 2664 | 128 |   2000 | 53.4

--- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10sec Duration) ---
Tst | RAM | stri | win  | req | thresh | MB/s
----------------------------------------------
  1 | 123 | 1280 |  384 | 128 |    192 | 55.0

--- TEST PASS 1 (2 Min - 12 Sample Points @ 10sec Duration) ---
Tst | RAM | stri | win  | req | thresh | MB/s | thresh | MB/s | thresh | MB/s
--------------------------------------------------------------------------------
  1 |  73 |  768 |  384 | 128 |    376 | 58.4 |    320 | 50.9 |    192 | 53.9
  2 | 147 | 1536 |  768 | 128 |    760 | 61.3 |    704 | 61.8 |    384 | 57.8
  3 | 295 | 3072 | 1536 | 128 |   1528 | 65.1 |   1472 | 64.8 |    768 | 63.4
  4 | 591 | 6144 | 3072 | 128 |   3064 | 66.0 |   3008 | 66.0 |   1536 | 66.1

--- TEST PASS 1_HIGH (30 Sec - 3 Sample Points @ 10sec Duration) ---
Tst | RAM | stri | win  | req | thresh | MB/s | thresh | MB/s | thresh | MB/s
--------------------------------------------------------------------------------
  1 |1182 |12288 | 6144 | 128 |   6136 | 65.8 |   6080 | 65.6 |   3072 | 65.0

--- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE REAL TEST ---
If the speeds changed with different values you should run a NORMAL/LONG test.
If speeds didn't change then adjusting Tunables likely won't help your system.

Completed: 0 Hrs 3 Min 30 Sec.
NOTE: Use the smallest set of values that produce good results. Larger values increase server memory use, and may cause stability issues with Unraid, especially if you have any add-ons or plug-ins installed.

System Info: BlackHole
Unraid version 6.7.2
md_num_stripes=5920
md_sync_window=2664
md_sync_thresh=2000
nr_requests=128 (Global Setting)
sbNumDisks=24
CPU: Genuine Intel(R) CPU @ 2.00GHz
RAM: System Memory System Memory System Memory System Memory

Outputting free low memory information...
              total       used       free     shared  buff/cache  available
Mem:       49371152    9959400   37455020    1486356     1956732   37404184
Low:       49371152   11916132   37455020
High:             0          0          0
Swap:             0          0          0

SCSI Host Controllers and Connected Drives
--------------------------------------------------
[0] scsi0 usb-storage - parity sdy WDC WD80EMAZ-00W
[1] scsi1 megaraid_sas - MegaRAID SAS 2008 [Falcon]
[N0] scsiN0 nvme0 - NVMe parity sdy WDC WD80EMAZ-00W
[N1] scsiN1 nvme1 - NVMe parity sdy WDC WD80EMAZ-00W
*** END OF REPORT ***

lsscsi -st:

root@BlackHole:/tmp# lsscsi -st
[0:0:0:0]  disk    usb:3-9:1.0  /dev/sda  62.7GB
[1:0:10:0] enclosu -            -
[1:0:11:0] disk                 /dev/sdb  8.00TB
[1:0:12:0] disk                 /dev/sdc  8.00TB
[1:0:13:0] disk                 /dev/sdd  8.00TB
[1:0:14:0] disk                 /dev/sde  8.00TB
[1:0:15:0] disk                 /dev/sdf  8.00TB
[1:0:16:0] disk                 /dev/sdg  8.00TB
[1:0:17:0] disk                 /dev/sdh  8.00TB
[1:0:18:0] disk                 /dev/sdi  8.00TB
[1:0:19:0] disk                 /dev/sdj  8.00TB
[1:0:20:0] disk                 /dev/sdk  8.00TB
[1:0:21:0] disk                 /dev/sdl  8.00TB
[1:0:22:0] disk                 /dev/sdm  8.00TB
[1:0:23:0] disk                 /dev/sdn  8.00TB
[1:0:24:0] disk                 /dev/sdo  8.00TB
[1:0:25:0] disk                 /dev/sdp  8.00TB
[1:0:26:0] disk                 /dev/sdq  8.00TB
[1:0:27:0] disk                 /dev/sdr  8.00TB
[1:0:28:0] disk                 /dev/sds  8.00TB
[1:0:29:0] disk                 /dev/sdt  8.00TB
[1:0:30:0] disk                 /dev/sdu  8.00TB
[1:0:31:0] disk                 /dev/sdv  8.00TB
[1:0:32:0] disk                 /dev/sdw  8.00TB
[1:0:33:0] disk                 /dev/sdx  8.00TB
[1:0:34:0] disk                 /dev/sdy  8.00TB
[N:0:1:1]  disk    pcie 0x8086:0x390d  /dev/nvme0n1  1.02TB
[N:1:1:1]  disk    pcie 0x8086:0x390d  /dev/nvme1n1  1.02TB

lshw -C storage:
root@BlackHole:/tmp# lshw -c Storage
*-storage
    description: RAID bus controller
    product: MegaRAID SAS 2008 [Falcon]
    vendor: Broadcom / LSI
    physical id: 0
    bus info: pci@0000:01:00.0
    logical name: scsi1
    version: 03
    width: 64 bits
    clock: 33MHz
    capabilities: storage pm pciexpress vpd msi msix bus_master cap_list rom
    configuration: driver=megaraid_sas latency=0
    resources: irq:24 ioport:6000(size=256) memory:c7560000-c7563fff memory:c7500000-c753ffff memory:c7540000-c755ffff
*-storage
    description: Non-Volatile memory controller
    product: SSDPEKNW020T8 [660p, 2TB]
    vendor: Intel Corporation
    physical id: 0
    bus info: pci@0000:03:00.0
    version: 03
    width: 64 bits
    clock: 33MHz
    capabilities: storage pm msi pciexpress msix nvm_express bus_master cap_list
    configuration: driver=nvme latency=0
    resources: irq:36 memory:c7400000-c7403fff
*-storage
    description: Non-Volatile memory controller
    product: SSDPEKNW020T8 [660p, 2TB]
    vendor: Intel Corporation
    physical id: 0
    bus info: pci@0000:04:00.0
    version: 03
    width: 64 bits
    clock: 33MHz
    capabilities: storage pm msi pciexpress msix nvm_express bus_master cap_list
    configuration: driver=nvme latency=0
    resources: irq:26 memory:c7300000-c7303fff
*-scsi
    physical id: a1
    bus info: usb@3:9
    logical name: scsi0
    capabilities: emulated

Hope this is at least helpful. Should be able to get my computer set back up this week, finally. Also, I didn't expect my system to work out of the gate. The NVMe disks at least show up in the report under their own N# controllers. There's still that odd issue with the last parity device being stored as the USB device. And then none of my disks actually show up.

I did think of a different lookup and storage system, by the way: multi-dimensional arrays. Make an array for Controllers. For each controller make a new array named as that controller. For the first element of that array, place your desired product info string(s). Create a new array for disks. Add disks to that array. Place the array of disks as the second element of the controller array. Add that array to the array of Controllers. When you go to print the data you'd then do:

foreach Controller in $Controllers; do
    printf "$Controller[0];"
    foreach Disk in $Controller[1]; do
        printf "$Disk"
    done
done

This way it becomes less possible to transpose disks across the array structure. Not sure if this is a viable approach with the formatting you want to do, though.
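Since bash has no true nested arrays, one common workaround for the idea above is a pair of associative arrays keyed by controller name: one for the info string, one for a space-separated disk list. This is only a sketch; the controller and disk names are illustrative, not taken from the real script:

```shell
#!/bin/bash
declare -A ctrl_info ctrl_disks

ctrl_info[scsi1]="MegaRAID SAS 2008 [Falcon]"
ctrl_disks[scsi1]="sdb sdc sdd"

ctrl_info[scsiN0]="NVMe"
ctrl_disks[scsiN0]="nvme0n1"

# "for each object" printing: iterate controllers, then their disks,
# so a disk can never be attributed to the wrong controller.
for ctrl in "${!ctrl_info[@]}"; do
    printf '%s: %s\n' "$ctrl" "${ctrl_info[$ctrl]}"
    for disk in ${ctrl_disks[$ctrl]}; do   # word-split the stored list
        printf '  %s\n' "$disk"
    done
done
```

Note that bash does not guarantee iteration order over associative array keys, so this trades strict ordering for correctness of the grouping.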
  10. The drive is part of a btrfs RAID 1. The primary disk is mounted and the secondary disk gets identical data written to it, in this case. At least that's how I understand it. I see activity on both of them when I write data to the cache volume, so I assume it's working as intended, though I haven't bothered to read into it. I'm considering migrating from the RAID 1 setup to a RAID 0 setup when I get my 10GbE network going. I plan on having 10GbE inside the rack with a 10GbE uplink to the switch, using a dual-10GbE card, meaning the server could easily see 20Gb/s if I really hit it. Especially when migrating data from older server(s) and/or working with disk images while streaming. Oh, and to clarify: df only reports mounted filesystems.
  11. Interestingly, running grep "rdevStatus" didn't work, but grep -i "rdevstatus" did:

root@BlackHole:~# mdcmd status | grep -i "rdevstatus"
rdevStatus.0=DISK_OK
rdevStatus.1=DISK_OK
rdevStatus.2=DISK_OK
rdevStatus.3=DISK_OK
rdevStatus.4=DISK_OK
rdevStatus.5=DISK_OK
rdevStatus.6=DISK_OK
rdevStatus.7=DISK_OK
rdevStatus.8=DISK_OK
rdevStatus.9=DISK_OK
rdevStatus.10=DISK_OK
rdevStatus.11=DISK_OK
rdevStatus.12=DISK_OK
rdevStatus.13=DISK_OK
rdevStatus.14=DISK_OK
rdevStatus.15=DISK_OK
rdevStatus.16=DISK_OK
rdevStatus.17=DISK_OK
rdevStatus.18=DISK_OK
rdevStatus.19=DISK_OK
rdevStatus.20=DISK_OK
rdevStatus.21=DISK_OK
rdevStatus.22=DISK_OK
rdevStatus.23=DISK_NP
rdevStatus.24=DISK_NP
rdevStatus.25=DISK_NP
rdevStatus.26=DISK_NP
rdevStatus.27=DISK_NP
rdevStatus.28=DISK_NP
rdevStatus.29=DISK_OK
root@BlackHole:~# mdcmd status | grep "rdevStatus"
root@BlackHole:~#

A similar thing happened with rdevName - I think it may be a web terminal issue, not sure:

root@BlackHole:~# mdcmd status | grep "rdevName"
root@BlackHole:~# mdcmd status | grep -i "rdevname"
rdevName.0=sdy
rdevName.1=sdw
rdevName.2=sde
rdevName.3=sdf
rdevName.4=sdg
rdevName.5=sdc
rdevName.6=sdt
rdevName.7=sdd
rdevName.8=sdj
rdevName.9=sdu
rdevName.10=sdh
rdevName.11=sdl
rdevName.12=sdk
rdevName.13=sdb
rdevName.14=sdv
rdevName.15=sdm
rdevName.16=sdn
rdevName.17=sdq
rdevName.18=sdr
rdevName.19=sdo
rdevName.20=sds
rdevName.21=sdi
rdevName.22=sdp
rdevName.23=
rdevName.24=
rdevName.25=
rdevName.26=
rdevName.27=
rdevName.28=
rdevName.29=sdx
root@BlackHole:~#

And finally df -h:

root@BlackHole:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           24G  1.4G   23G   6% /
tmpfs            32M  1.3M   31M   5% /run
devtmpfs         24G     0   24G   0% /dev
tmpfs            24G     0   24G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  904K  128M   1% /var/log
/dev/sda1        59G  4.6G   54G   8% /boot
/dev/loop0       20M   20M     0 100% /lib/modules
/dev/loop1      5.9M  5.9M     0 100% /lib/firmware
/dev/md1        7.3T  3.3T  4.1T  45% /mnt/disk1
/dev/md2        7.3T  2.5T  4.9T  34% /mnt/disk2
/dev/md3        7.3T  728G  6.6T  10% /mnt/disk3
/dev/md4        7.3T  728G  6.6T  10% /mnt/disk4
/dev/md5        7.3T  728G  6.6T  10% /mnt/disk5
/dev/md6        7.3T  728G  6.6T  10% /mnt/disk6
/dev/md7        7.3T  728G  6.6T  10% /mnt/disk7
/dev/md8        7.3T  728G  6.6T  10% /mnt/disk8
/dev/md9        7.3T  844G  6.5T  12% /mnt/disk9
/dev/md10       7.3T  728G  6.6T  10% /mnt/disk10
/dev/md11       7.3T  1.4T  6.0T  19% /mnt/disk11
/dev/md12       7.3T  730G  6.6T  10% /mnt/disk12
/dev/md13       7.3T  728G  6.6T  10% /mnt/disk13
/dev/md14       7.3T  728G  6.6T  10% /mnt/disk14
/dev/md15       7.3T  730G  6.6T  10% /mnt/disk15
/dev/md16       7.3T  728G  6.6T  10% /mnt/disk16
/dev/md17       7.3T  730G  6.6T  10% /mnt/disk17
/dev/md18       7.3T  1.4T  6.0T  18% /mnt/disk18
/dev/md19       7.3T  728G  6.6T  10% /mnt/disk19
/dev/md20       7.3T  728G  6.6T  10% /mnt/disk20
/dev/md21       7.3T  734G  6.6T  10% /mnt/disk21
/dev/md22       7.3T  954G  6.4T  13% /mnt/disk22
/dev/nvme0n1p1  954G  100G  854G  11% /mnt/cache
shfs            161T   22T  139T  14% /mnt/user0
shfs            161T   22T  140T  14% /mnt/user
/dev/loop2       40G  7.9G   31G  21% /var/lib/docker
/dev/loop3      1.0G   17M  905M   2% /etc/libvirt
shm              64M     0   64M   0% /var/lib/docker/containers/ad97b37af764aa83b3276d7f03807a5486a8885f56fdb77f557e5b78f820e150/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/8d286807ba4757698d04b3160d399be1162d0b33dd8cfc6b86bde162bf95f1be/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/1eb3ea0e1e716beee08125eb1f4d65e421bf1182860515e1d0926a6f5f24500d/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/b6e07ad9a92216ffc1a5dd6ef6206852a466eb2aa4b9dfd5a38a990cc14f7d95/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/7935b46776f36856f516675b79cd89261734cea208e0ee25abe162293bde75a2/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/47458aa783d0ec7ca0f5bd3171dac3232d29e7f5bfea8058a8b422633afc486e/mounts/shm
shm              64M  368K   64M   1% /var/lib/docker/containers/05c5042e739fd6f3e2ac99a4e2f4193ae1fb8059d305587ceb2efe960f280cd8/mounts/shm
shm              64M  8.0K   64M   1% /var/lib/docker/containers/a38258d8e8231f6114f033bf8e5f4f36a99e4e0f6ed17948fec61ac54a7369d1/mounts/shm
root@BlackHole:~#

If you are wondering whether my NVMe is part of my array - no, it is not. Anyway, back to trying to find all the stuff for my actual computer setup so I can get off this tiny laptop, where I might actually be able to look at some code.
  12. Correct - you probably have an NVMe SSD reporting as a SCSI device in the kernel drivers. I'm not sure if this is a kernel change in the 6.7.x series, as Pauven (I believe) is running a 6.6.x build. But the change I posted above addresses this specific problem. There's some debugging that needs to be done with the disk reporting, but these errors are purely informational output in the report and should not affect the results of the tester.
  13. This should be possible, just don't expect it to be easy.
  14. So uh, this will be a weekend project. Turns out I have 3 pieces of hardware that completely break that entire section of the script. The first one was an easy fix; the NVMe SSDs break the array declaration because you can't have a ":" in the name of a variable. I reworked your sed on line 132 to:

< <( sed -e 's/://g' -e 's/\[/scsi/g' -e 's/]//g' < <( lsscsi -H ) )

That takes care of that, but I feel I should get a bit more "advanced" with it, since we could strip all invalid characters there. From there, it vomits on my megaraid controller at line 215. My resulting output is kind of comical:

SCSI Host Controllers and Connected Drives
--------------------------------------------------
[0] scsi0 usbstorage -
  [N:1:1:1] parity sdy /dev/nvme1n1 WDC WD80EMAZ-00W
[1] scsi1 megaraidsas - MegaRAID SAS 2008 [Falcon]
  disk1 sdw WDC WD80EFAX-68L
  [N:1:1:1] parity sdy /dev/nvme1n1 WDC WD80EMAZ-00W
[N0] scsiN0 devnvme0 -
[N1] scsiN1 devnvme1 -

I uh. I have 24 online 8TB Reds. It seems like the associative arrays are off by one, but that doesn't explain the duplicated output on the 0 and then the 1. I'll have to poke at it when I have time to sit down and really sink my teeth into it.
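For reference, a toy demonstration of that sanitizing sed on fabricated lsscsi -H style lines (real output varies by system):

```shell
printf '[0]    usb-storage\n[N:0]  /dev/nvme0\n' \
    | sed -e 's/://g' -e 's/\[/scsi/g' -e 's/]//g'
# -> scsi0    usb-storage
#    scsiN0  /dev/nvme0
```

The ":" is dropped first, then "[" becomes the "scsi" prefix and "]" is removed, leaving strings that are legal as bash variable names.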
  15. I'll be able to test when I get home; I'm going to change the declaration order to:

InitVars
Getlshw
GetDisks
ReportHeader
ReportFooter

Which *should* give me just the header and footer output so I can tinker with this particular problem. I have a feeling the error is in that general area; I just need to pinpoint it. One thought is that it could be assigning the value properly and then overwriting it because of erroneous logic, but I can't see any obvious logical errors.
  16. scsistring=`echo $scsistring | tr -d "[]"}`

Why is this brace here, on line 181? It seems a bit... out of place, even reading the code that defines the context of that line. Also, since you are doing a lot to re-arrange the data into a format you like, I don't think I'll mess with that logic. You've already got associative arrays and such; I think we have a small syntax error or some other tiny mistake causing the erroneous output. The logic looks "okay".
  17. I'll go ahead and warn that since I don't have a desktop setup at the moment and I'm limited to a 720p screen, editing this script is not easy. I see what you've done with the associative array - but I think that some systems may not output everything as you assume, so some of the array variables are null or 0 instead of being defined, which results in other logic failing and causing the erroneous output. I don't know if I can debug this with my limited free time at the moment, so I won't make any promises.

root@BlackHole:~# tr -d "]" < <( sed 's/\[/scsi/g' < <( lsscsi -H ) )
scsi0 usb-storage
scsi1 megaraid_sas
scsiN:0 /dev/nvme0 INTEL SSDPEKNW010T8 BTNH8435071Z1P0B 002C
scsiN:1 /dev/nvme1 INTEL SSDPEKNW010T8 BTNH843506KY1P0B 002C

root@BlackHole:~# lshw -quiet -short -c storage | grep storage
/0/100/1/0    scsi1  storage  MegaRAID SAS 2008 [Falcon]
/0/100/2.2/0         storage  SSDPEKNW020T8 [660p, 2TB]
/0/100/2.3/0         storage  SSDPEKNW020T8 [660p, 2TB]
/0/a1         scsi0  storage

root@BlackHole:~# lsscsi -st
[0:0:0:0]  disk    usb:3-9:1.0  /dev/sda  62.7GB
[1:0:10:0] enclosu -            -
[1:0:11:0] disk                 /dev/sdb  8.00TB
[1:0:12:0] disk                 /dev/sdc  8.00TB
[1:0:13:0] disk                 /dev/sdd  8.00TB
[1:0:14:0] disk                 /dev/sde  8.00TB
[1:0:15:0] disk                 /dev/sdf  8.00TB
[1:0:16:0] disk                 /dev/sdg  8.00TB
[1:0:17:0] disk                 /dev/sdh  8.00TB
[1:0:18:0] disk                 /dev/sdi  8.00TB
[1:0:19:0] disk                 /dev/sdj  8.00TB
[1:0:20:0] disk                 /dev/sdk  8.00TB
[1:0:21:0] disk                 /dev/sdl  8.00TB
[1:0:22:0] disk                 /dev/sdm  8.00TB
[1:0:23:0] disk                 /dev/sdn  8.00TB
[1:0:24:0] disk                 /dev/sdo  8.00TB
[1:0:25:0] disk                 /dev/sdp  8.00TB
[1:0:26:0] disk                 /dev/sdq  8.00TB
[1:0:27:0] disk                 /dev/sdr  8.00TB
[1:0:28:0] disk                 /dev/sds  8.00TB
[1:0:29:0] disk                 /dev/sdt  8.00TB
[1:0:30:0] disk                 /dev/sdu  8.00TB
[1:0:31:0] disk                 /dev/sdv  8.00TB
[1:0:32:0] disk                 /dev/sdw  8.00TB
[1:0:33:0] disk                 /dev/sdx  8.00TB
[1:0:34:0] disk                 /dev/sdy  8.00TB
[N:0:1:1]  disk    pcie 0x8086:0x390d  /dev/nvme0n1  1.02TB
[N:1:1:1]  disk    pcie 0x8086:0x390d  /dev/nvme1n1  1.02TB

For example, I think some of my hardware may break your script - note that my NVMe SSDs report an extra column (pcie, then the bus address) compared to regular disks.
  18. Unraid Nvidia is a custom build of Unraid made by the LinuxServer.io team. Note that while it is Unraid, it is customized and therefore not directly supported by LimeTech. This post covers how to install and use it. The next tidbit, about my script for the nvdec wrapper - it's actually extremely easy to use: you just add it as a userscript using the CA User Scripts plugin and set it to run after your normal updates: https://github.com/Xaero252/unraid-plex-nvdec And finally, as far as choices with VMs: VMs can get a little nasty with this setup. For one, if you decide to pass the nvidia GPU to your VM, docker will lose control of it until the VM is stopped, at which point any containers that want to use it will need to be restarted to see it again. If you wanted to use an nvidia graphics card for virtualization passthrough you'd typically want a Quadro card - although newer consumer cards are a bit easier to get working. I'd also wait until other users respond to this thread, as their opinions may differ from mine - and it is important to weigh your options before committing to a significant expenditure. For example, a capable intel CPU with 4k QuickSync support and a board may be similar in price in your area to a GPU. If this is the case, QuickSync is far superior to nvdec/nvenc, and you won't be passing that part of the CPU through to any containers. If you can cheaply obtain a GPU it is more affordable/effective to just add a GPU to your existing setup. So to somewhat retract my previous statement (I hastily posted without considering locale): if it's cheaper to buy a GPU and add it to your current system, I'd go that route. If it's cheaper or the same price to go intel and get a CPU that supports QuickSync 4k transcoding, I'd go that route. My system does not have QuickSync support, and I often wish it did, as that would free up a PCI-E slot or this GPU for passthrough.
  19. If you have a Plex Pass Option 2 (bold above) is the way to go. Add an nvidia graphics card that supports 4k h265 transcoding (see here: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix) Switch to unraid-nvidia and add the nvdec wrapper and you are off to the races. Soon the wrapper won't be needed, either.
  20. To solve this, you could discard "low" values if within a threshold, rather than replacing them?
  21. I'll take a look at the sorting and filtering for those drives tonight when I get off work. I have an idea for a good way to process and organize that, but I need to see how the information is pulled and whether or not my idea will work. Would you rather have them ordered by bus position or by devfs node (sdX)? Also, line 516 - you have "[TOWER]" hard-coded; /etc/hostname has the server's name stored in it. I could nitpick over giant blocks of echos but it really is negligible at the end of the day lol.
  22. You can make your script run inside screen automagically; replace the shebang and line 2 with:

#!/bin/sh
if [ -z "$STY" ]; then exec screen -dm -S screenName /bin/bash "$0"; fi

Tweak screenName as appropriate.
  23. To back this up: I note that the umask is never being set by this docker. I think there was an assumption made that Plex doesn't really "make" a lot of files not inherently used by itself. His use case is that the DVR functionality of Plex does create files. The standard Linux umask is 0022, which creates perms of 0644 on files and 0755 on directories. That allows the owner to read and write, but gives other users (such as SMB users, or anonymous SMB) read-only access. The "normal" umask on unraid is 0000, which results in 0666 and 0777 for files and directories, respectively. I provided him a script that adds "umask 0000" after line 3 of the /etc/services.d/plex/run script of this docker. I believe this should be sufficient to resolve the issue.
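A quick check of how umask shapes permissions on newly created files and directories (paths are throwaway; stat -c is the GNU coreutils form):

```shell
tmp=$(mktemp -d)
( umask 0022; touch "$tmp/f"; mkdir "$tmp/d" )
stat -c '%a' "$tmp/f"   # 644 with a 0022 umask
stat -c '%a' "$tmp/d"   # 755
( umask 0000; touch "$tmp/g" )
stat -c '%a' "$tmp/g"   # 666
rm -rf "$tmp"
```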
  24. There are a few ways to work around this. The first, and the one you should become most familiar with when using a Linux box of any kind, is to type only part of the name you need and then press "Tab". The shell will automatically fill in the remaining characters, and add quotes when needed. The second is to manually place quotes around anything that uses spaces or special characters:

"(This) will work"
(This) won't work

The final common way is to "escape" the characters the shell would otherwise try to interpret:

\(This\)\ will\ work\ too

The first method is the easiest and most common way to use the Linux terminal when working with special characters and spaces. Tab completion is your friend. It remembers commands when you don't. It remembers folder names and the locations of capital letters and special characters when you don't, too. It also never forgets to double quote things that need to be, or to escape characters that should be.
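Quoting and escaping produce the exact same single argument; printf '%s' makes it easy to see what the shell actually passed:

```shell
printf '%s\n' "(This) will work"
printf '%s\n' \(This\)\ will\ work
# Both print the identical string:
# -> (This) will work
#    (This) will work
```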
  25. This is still worth reporting to LinuxServer.io - they may be unaware that the DVR functionality of Plex is creating files without a proper umask for shares. To answer your questions regarding the UserScript: it needs to run to add the "umask" line to the docker, and the docker must be running in order to execute the commands that modify the container. After that, the container must be restarted to apply the actual command at startup. Running at array start only works for the version of the docker that is installed when the array is started. If the docker is updated (either by you manually, or automatically) the change made by the UserScript will be gone. This is why it's desirable to have these sorts of things fixed upstream. There is no event that fires when dockers are updated, so there isn't really a way to just "run the script when the docker is updated"; we have to run it once per day. This is kind of kludgy and undesirable. As far as what a UserScript could look like to do this:

#!/bin/bash
con="$(docker ps --format "{{.Names}}" | grep -i plex)"
run="/etc/services.d/plex/run"
exists=$(docker exec -i "$con" grep -q umask "$run" >/dev/null 2>&1; echo $?)
if [ "$exists" -eq 0 ]; then
    echo "umask line already present, exiting."
    exit
else
    docker exec -i "$con" sed -i '3a umask 0000' "$run"
    echo "Added umask line - Restarting Plex docker..."
    docker restart "$con" >/dev/null
fi

To explain this script line by line so you know what we are doing: the shebang just defines what shell should execute this script - in this case "/bin/bash". Next we define the name of the docker container (we could hardcode this as "plex" since we know this is only usable for the ls.io plex docker). We then define the location of the run file used as the entrypoint. Next, we check whether the file already contains "umask" or not. If it does, we simply exit the script, so we don't restart the docker unless we need to. If it doesn't, we add the line after the 3rd line and then restart the container. The result is that the "run" file ends up looking like this:

#!/usr/bin/with-contenv bash
echo "Starting Plex Media Server."
umask 0000
export PLEX_MEDIA_SERVER_INFO_MODEL=$(uname -m)
export PLEX_MEDIA_SERVER_INFO_PLATFORM_VERSION=$(uname -r)
exec \
  s6-setuidgid abc /bin/bash -c \
  'LD_LIBRARY_PATH=/usr/lib/plexmediaserver:/usr/lib/plexmediaserver/lib /usr/lib/plexmediaserver/Plex\ Media\ Server'

Meaning that the next time the docker starts, "umask 0000" will be run before Plex is started. This script can then safely be set to run once per day, preferably a few minutes after your automated updates, if you have them enabled. It will only make the change if it's required.

P.S. Let me know if this works - I have no way to test it.