Xaero

Members
  • Content Count

    162
  • Joined

  • Last visited

  • Days Won

    2

Xaero last won the day on July 19

Xaero had the most liked content!

Community Reputation

51 Good

About Xaero

  • Rank
    Advanced Member

  1. I am partially colorblind. I have difficulty with similar shades (such as greener blues next to greens, or bluer greens next to blues). The default web console color scheme makes reading the output of ls almost impossible for me: I either have to increase the font size to the point that productivity is hindered drastically, or strain my eyes to make out the letters against that background. A high-contrast option would be great. Or, even better, the option to select common themes like "Solarized" et al. Perhaps even the ability to add shell color profiles for the web console. For now I use KiTTY when I can, and I've added a color profile to ~/.bash_profile via my previously suggested "persistent root" modification.

     Also worth mentioning here: https://github.com/Mayccoll/Gogh - Gogh has a very extensive set of friendly, aesthetically pleasing, and well-contrasting color profiles ready to go.

     Edit: Also worth noting that the web terminal currently doesn't source ~/.bashrc or ~/.bash_profile, which results in the colors being "hardcoded" (source ~/.bashrc to the rescue).

     Edit 2: Additionally, the font is hard-coded. If we are making the web terminal a capable, customizable platform, this would also be high on the list of things to just do.
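     As a rough illustration of that ~/.bash_profile workaround (the ~/.dircolors file is hypothetical here, and this assumes the persistent-root change keeps ~/.bash_profile across reboots), the relevant snippet looks something like:

        # ~/.bash_profile - swap the default ls palette for a custom one
        if [ -r ~/.dircolors ]; then
            eval "$(dircolors -b ~/.dircolors)"   # load a custom LS_COLORS palette
        else
            eval "$(dircolors -b)"                # fall back to the dircolors defaults
        fi
        alias ls='ls --color=auto'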
  2. The reason nohup doesn't work for this is that when you disconnect or log out of that terminal, the terminal and any child processes of the terminal are hung up and killed. This is just normal Linux process management doing its job. To prevent this you can simply disown the process; there's no need to nohup it. For example:

        $ processname &
        $ disown

     and "processname" will continue running after the terminal is killed. This is good because it means that "processname" will still respond to hangup, which may be needed. Of course, you could also combine disown with nohup:

        $ nohup processname &
        $ disown

     You can also disown processes by their PID, but calling disown immediately after spawning a process will automatically disown the most recently created background job.
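     If you want the job to stay visible in the shell's job table but not be sent the shell's hangup on logout, bash's disown also has a -h flag. A small illustration (long_task.sh is just a stand-in command):

        $ long_task.sh &     # start the job in the background
        $ disown -h %+       # keep it listed in `jobs`, but don't send it SIGHUP when this shell exits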
  3. SCSI Host Controllers and Connected Drives
     --------------------------------------------------
     [0] scsi0 usb-storage
       [0:0:0:0] flash sda 62.7GB Extreme
     [1] scsi1 megaraid_sas MegaRAID SAS 2008 [Falcon]
       [1:0:11:0] disk13 sdb 8.00TB WDC WD80EFAX-68L
       [1:0:12:0] disk5 sdc 8.00TB WDC WD80EFAX-68L
       [1:0:13:0] disk7 sdd 8.00TB WDC WD80EFAX-68L
       [1:0:14:0] disk2 sde 8.00TB WDC WD80EFAX-68L
       [1:0:15:0] disk3 sdf 8.00TB WDC WD80EFAX-68L
       [1:0:16:0] disk4 sdg 8.00TB WDC WD80EFAX-68L
       [1:0:17:0] disk10 sdh 8.00TB WDC WD80EFAX-68L
       [1:0:18:0] disk21 sdi 8.00TB WDC WD80EFAX-68L
       [1:0:19:0] disk8 sdj 8.00TB WDC WD80EFAX-68L
       [1:0:20:0] disk12 sdk 8.00TB WDC WD80EFAX-68L
       [1:0:21:0] disk11 sdl 8.00TB WDC WD80EFAX-68L
       [1:0:22:0] disk15 sdm 8.00TB WDC WD80EFAX-68L
       [1:0:23:0] disk16 sdn 8.00TB WDC WD80EFAX-68L
       [1:0:24:0] disk19 sdo 8.00TB WDC WD80EFAX-68L
       [1:0:25:0] disk22 sdp 8.00TB WDC WD80EMAZ-00W
       [1:0:26:0] disk17 sdq 8.00TB WDC WD80EFAX-68L
       [1:0:27:0] disk18 sdr 8.00TB WDC WD80EFAX-68L
       [1:0:28:0] disk20 sds 8.00TB WDC WD80EFAX-68L
       [1:0:29:0] disk6 sdt 8.00TB WDC WD80EFAX-68L
       [1:0:30:0] disk9 sdu 8.00TB WDC WD80EFAX-68L
       [1:0:31:0] disk14 sdv 8.00TB WDC WD80EFAX-68L
       [1:0:32:0] disk1 sdw 8.00TB WDC WD80EFAX-68L
       [1:0:33:0] parity2 sdx 8.00TB WDC WD80EMAZ-00W
       [1:0:34:0] parity sdy 8.00TB WDC WD80EMAZ-00W
     [N0] scsiN0 nvme0 NVMe
       [N:0:1:1] cache nvme0n1 1.02TB INTEL SSDPEKNW01
     [N1] scsiN1 nvme1 NVMe
       [N:1:1:1] cache2 nvme1n1 1.02TB INTEL SSDPEKNW01

     Results from B3 look good!
  4. It looks like his NVMe drives have a 5th column. Not sure if that's the cause or not.
  5. I'm not sure if this will work:

        awk '{ if ($3 == "") print "Column 3 is empty for " NR }'

     This should print "Column 3 is empty for #" where # is the row number.

        awk 'BEGIN { FS = OFS = "\t" } { for(i=1; i<=NF; i++) if($i ~ /^ *$/) $i = 0 }; 1'

     This may also work to replace empty fields with "0", but it's a copy-paste from a different application.
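     As a quick sanity check of that first one-liner against made-up whitespace-separated input (not the real report data):

        $ printf 'a b c\nd e\n' | awk '{ if ($3 == "") print "Column 3 is empty for " NR }'
        Column 3 is empty for 2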
  6. The megaraid disks still don't show up; not sure what's causing that.

     SCSI Host Controllers and Connected Drives
     --------------------------------------------------
     [0] scsi0 usb-storage
       [0:0:0:0] flash sda 62.7GB Extreme
     [1] scsi1 megaraid_sas MegaRAID SAS 2008 [Falcon]
     [N0] scsiN0 nvme0 NVMe
       [N:0:1:1] cache nvme0n1 1.02TB INTEL SSDPEKNW01
     [N1] scsiN1 nvme1 NVMe
       [N:1:1:1] cache2 nvme1n1 1.02TB INTEL SSDPEKNW01
     *** END OF REPORT ***

     Everything else looks good.
  7. Don't worry, I'm sure some obnoxious niche thing with my server will cause a hiccup.
  8. sort -n -t: -k3 should handle this, I believe. You may need to strip the [ and ] first, so it'd be:

        sed -e 's/[][]//g' | sort -n -t: -k3

     Or something along those lines - I don't have a terminal accessible to test at the moment. To explain: -n sorts numerically, -t: changes the delimiter to ":", and -k3 sorts by column 3.

     EDIT: forgot the -n flag above.
     EDIT 2: It's also entirely possible that the disk order doesn't line up with the port numbers.
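     For illustration only, here's that pipeline run against a few fabricated lsscsi-style addresses - strip the brackets, then sort numerically on the third colon-separated field:

        $ printf '[1:0:13:0]\n[1:0:11:0]\n[1:0:12:0]\n' | sed -e 's/[][]//g' | sort -n -t: -k3
        1:0:11:0
        1:0:12:0
        1:0:13:0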
  9. rdevName.0=sdy
     rdevName.1=sdw
     rdevName.2=sde
     rdevName.3=sdf
     rdevName.4=sdg
     rdevName.5=sdc
     rdevName.6=sdt
     rdevName.7=sdd
     rdevName.8=sdj
     rdevName.9=sdu
     rdevName.10=sdh
     rdevName.11=sdl
     rdevName.12=sdk
     rdevName.13=sdb
     rdevName.14=sdv
     rdevName.15=sdm
     rdevName.16=sdn
     rdevName.17=sdq
     rdevName.18=sdr
     rdevName.19=sdo
     rdevName.20=sds
     rdevName.21=sdi
     rdevName.22=sdp

     As you can see, my disks are actually numbered from 0 as far as Unraid's md array is concerned. The host addresses, I believe, start at 11, and that makes sense from a physical perspective: address 1 is the controller itself, addresses 2-9 are the links to the port expander, address 10 is the port expander itself (shown as "enclosu" in the report), and address 11 is the first disk device.

     And yeah, I suggest multi-dimensional arrays specifically because they nullify issues like this: instead of relying on indices and array sizes, we rely on "for each object" logic, which returns entries in the order they were input, regardless of whether or not everything is incremental.
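     A rough sketch of that "for each object" idea against the mdcmd output, iterating entries in whatever order mdcmd emits them and skipping the empty slots (untested, just to show the shape of it):

        while IFS='=' read -r key dev; do
            [ -n "$dev" ] && printf '%s -> %s\n' "$key" "$dev"
        done < <(mdcmd status | grep -i '^rdevname')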
  10. Unraid 6.x Tunables Tester v4.1 BETA 1 by Pauven

     Tunables Report produced Sun Aug 11 20:25:03 MDT 2019
     Run on server: BlackHole
     Short Parity Sync Test

     Current Values: md_num_stripes=5920, md_sync_window=2664, md_sync_thresh=2000
     Global nr_requests=128
     Disk Specific nr_requests Values:
     sdy=128, sdw=128, sde=128, sdf=128, sdg=128, sdc=128, sdt=128, sdd=128,
     sdj=128, sdu=128, sdh=128, sdl=128, sdk=128, sdb=128, sdv=128, sdm=128,
     sdn=128, sdq=128, sdr=128, sdo=128, sds=128, sdi=128, sdp=128, sdx=128,

     --- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10sec Duration) ---
     Tst | RAM | stri |  win | req | thresh | MB/s
     ----------------------------------------------
       1 | 569 | 5920 | 2664 | 128 |   2000 | 53.4

     --- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10sec Duration) ---
     Tst | RAM | stri |  win | req | thresh | MB/s
     ----------------------------------------------
       1 | 123 | 1280 |  384 | 128 |    192 | 55.0

     --- TEST PASS 1 (2 Min - 12 Sample Points @ 10sec Duration) ---
     Tst | RAM | stri |  win | req | thresh | MB/s | thresh | MB/s | thresh | MB/s
     --------------------------------------------------------------------------------
       1 |  73 |  768 |  384 | 128 |    376 | 58.4 |    320 | 50.9 |    192 | 53.9
       2 | 147 | 1536 |  768 | 128 |    760 | 61.3 |    704 | 61.8 |    384 | 57.8
       3 | 295 | 3072 | 1536 | 128 |   1528 | 65.1 |   1472 | 64.8 |    768 | 63.4
       4 | 591 | 6144 | 3072 | 128 |   3064 | 66.0 |   3008 | 66.0 |   1536 | 66.1

     --- TEST PASS 1_HIGH (30 Sec - 3 Sample Points @ 10sec Duration) ---
     Tst | RAM | stri |  win | req | thresh | MB/s | thresh | MB/s | thresh | MB/s
     --------------------------------------------------------------------------------
       1 |1182 |12288 | 6144 | 128 |   6136 | 65.8 |   6080 | 65.6 |   3072 | 65.0

     --- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE REAL TEST ---
     If the speeds changed with different values you should run a NORMAL/LONG test.
     If speeds didn't change then adjusting Tunables likely won't help your system.

     Completed: 0 Hrs 3 Min 30 Sec.

     NOTE: Use the smallest set of values that produce good results. Larger values
     increase server memory use, and may cause stability issues with Unraid,
     especially if you have any add-ons or plug-ins installed.

     System Info: BlackHole
                  Unraid version 6.7.2
                  md_num_stripes=5920
                  md_sync_window=2664
                  md_sync_thresh=2000
                  nr_requests=128 (Global Setting)
                  sbNumDisks=24
     CPU: Genuine Intel(R) CPU @ 2.00GHz
     RAM: System Memory System Memory System Memory System Memory

     Outputting free low memory information...
                    total        used        free      shared  buff/cache   available
     Mem:        49371152     9959400    37455020     1486356     1956732    37404184
     Low:        49371152    11916132    37455020
     High:              0           0           0
     Swap:              0           0           0

     SCSI Host Controllers and Connected Drives
     --------------------------------------------------
     [0] scsi0 usb-storage -
       parity sdy WDC WD80EMAZ-00W
     [1] scsi1 megaraid_sas - MegaRAID SAS 2008 [Falcon]
     [N0] scsiN0 nvme0 - NVMe
       parity sdy WDC WD80EMAZ-00W
     [N1] scsiN1 nvme1 - NVMe
       parity sdy WDC WD80EMAZ-00W
     *** END OF REPORT ***

     lsscsi -st:

     root@BlackHole:/tmp# lsscsi -st
     [0:0:0:0]   disk    usb:3-9:1.0          /dev/sda       62.7GB
     [1:0:10:0]  enclosu -                    -
     [1:0:11:0]  disk                         /dev/sdb       8.00TB
     [1:0:12:0]  disk                         /dev/sdc       8.00TB
     [1:0:13:0]  disk                         /dev/sdd       8.00TB
     [1:0:14:0]  disk                         /dev/sde       8.00TB
     [1:0:15:0]  disk                         /dev/sdf       8.00TB
     [1:0:16:0]  disk                         /dev/sdg       8.00TB
     [1:0:17:0]  disk                         /dev/sdh       8.00TB
     [1:0:18:0]  disk                         /dev/sdi       8.00TB
     [1:0:19:0]  disk                         /dev/sdj       8.00TB
     [1:0:20:0]  disk                         /dev/sdk       8.00TB
     [1:0:21:0]  disk                         /dev/sdl       8.00TB
     [1:0:22:0]  disk                         /dev/sdm       8.00TB
     [1:0:23:0]  disk                         /dev/sdn       8.00TB
     [1:0:24:0]  disk                         /dev/sdo       8.00TB
     [1:0:25:0]  disk                         /dev/sdp       8.00TB
     [1:0:26:0]  disk                         /dev/sdq       8.00TB
     [1:0:27:0]  disk                         /dev/sdr       8.00TB
     [1:0:28:0]  disk                         /dev/sds       8.00TB
     [1:0:29:0]  disk                         /dev/sdt       8.00TB
     [1:0:30:0]  disk                         /dev/sdu       8.00TB
     [1:0:31:0]  disk                         /dev/sdv       8.00TB
     [1:0:32:0]  disk                         /dev/sdw       8.00TB
     [1:0:33:0]  disk                         /dev/sdx       8.00TB
     [1:0:34:0]  disk                         /dev/sdy       8.00TB
     [N:0:1:1]   disk    pcie 0x8086:0x390d   /dev/nvme0n1   1.02TB
     [N:1:1:1]   disk    pcie 0x8086:0x390d   /dev/nvme1n1   1.02TB

     lshw -C storage:

     root@BlackHole:/tmp# lshw -c Storage
       *-storage
            description: RAID bus controller
            product: MegaRAID SAS 2008 [Falcon]
            vendor: Broadcom / LSI
            physical id: 0
            bus info: pci@0000:01:00.0
            logical name: scsi1
            version: 03
            width: 64 bits
            clock: 33MHz
            capabilities: storage pm pciexpress vpd msi msix bus_master cap_list rom
            configuration: driver=megaraid_sas latency=0
            resources: irq:24 ioport:6000(size=256) memory:c7560000-c7563fff memory:c7500000-c753ffff memory:c7540000-c755ffff
       *-storage
            description: Non-Volatile memory controller
            product: SSDPEKNW020T8 [660p, 2TB]
            vendor: Intel Corporation
            physical id: 0
            bus info: pci@0000:03:00.0
            version: 03
            width: 64 bits
            clock: 33MHz
            capabilities: storage pm msi pciexpress msix nvm_express bus_master cap_list
            configuration: driver=nvme latency=0
            resources: irq:36 memory:c7400000-c7403fff
       *-storage
            description: Non-Volatile memory controller
            product: SSDPEKNW020T8 [660p, 2TB]
            vendor: Intel Corporation
            physical id: 0
            bus info: pci@0000:04:00.0
            version: 03
            width: 64 bits
            clock: 33MHz
            capabilities: storage pm msi pciexpress msix nvm_express bus_master cap_list
            configuration: driver=nvme latency=0
            resources: irq:26 memory:c7300000-c7303fff
       *-scsi
            physical id: a1
            bus info: usb@3:9
            logical name: scsi0
            capabilities: emulated

     Hope this is at least helpful. Should be able to get my computer set back up this week, finally. Also, I didn't expect my system to work out of the gate. The NVMe disks at least show up in the report under their own N# controllers. There's still that odd issue with the last parity device being stored as the USB device, and then none of my disks actually show up.

     I did think of a different lookup and storage system, by the way: multi-dimensional arrays. Make an array for Controllers. For each controller, make a new array named after that controller. For the first element of that array, place your desired product info string(s). Create a new array for disks and add the disks to it. Place the array of disks as the second element of the controller array. Add that array to the array of Controllers.
     When going to print the data, you'd then do something like:

        foreach Controller in $Controllers; do
            printf "$Controller[0];"
            foreach Disk in $Controller[1]; do
                printf "$Disk"
            done
        done

     This way it becomes less possible to transpose disks across the array structure. Not sure if this is a viable approach with the formatting you want to do, though.
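     Since bash has no true nested arrays, one way to emulate the structure described above is an ordered list of controller labels plus associative arrays keyed by those labels. A minimal, untested sketch (the controller and disk names are just placeholders lifted from the report):

        #!/bin/bash
        declare -a CONTROLLERS=()       # ordered list of controller labels
        declare -A CONTROLLER_INFO=()   # label -> product info string
        declare -A CONTROLLER_DISKS=()  # label -> space-separated disk list

        add_controller() {              # add_controller <label> <info...>
            CONTROLLERS+=("$1")
            CONTROLLER_INFO["$1"]="${*:2}"
        }

        add_disk() {                    # add_disk <label> <disk>
            CONTROLLER_DISKS["$1"]+="$2 "
        }

        add_controller scsi1 "megaraid_sas - MegaRAID SAS 2008 [Falcon]"
        add_disk scsi1 sdb
        add_disk scsi1 sdc
        add_controller scsiN0 "nvme0 - NVMe"
        add_disk scsiN0 nvme0n1

        # "for each object" printing: order follows insertion, never numeric indices
        for ctrl in "${CONTROLLERS[@]}"; do
            printf '%s %s\n' "$ctrl" "${CONTROLLER_INFO[$ctrl]}"
            for disk in ${CONTROLLER_DISKS[$ctrl]}; do
                printf '  %s\n' "$disk"
            done
        done

     Each controller's disks stay attached to that controller's label, so a missing or out-of-order index can't transpose a disk onto the wrong controller.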
  11. The drive is part of a btrfs RAID 1. The primary disk is mounted and the secondary disk gets identical data written to it, in this case. At least, that's how I understand it. I see activity on both of them when I write data to the cache volume, so I assume it's working as intended, though I haven't bothered to read into it. I'm considering migrating from the RAID 1 setup to a RAID 0 setup when I get my 10GbE network going. I plan on having 10GbE inside the rack with a 10GbE uplink to the switch, using a dual-10GbE card, meaning the server could easily see 20Gb/s if I really hit it - especially when migrating data from older server(s) and/or working with disk images while streaming. Oh, and to clarify: df only reports mounted filesystems.
  12. Interestingly, running grep "rdevStatus" didn't work, but grep -i "rdevstatus" did:

     root@BlackHole:~# mdcmd status | grep -i "rdevstatus"
     rdevStatus.0=DISK_OK
     rdevStatus.1=DISK_OK
     rdevStatus.2=DISK_OK
     rdevStatus.3=DISK_OK
     rdevStatus.4=DISK_OK
     rdevStatus.5=DISK_OK
     rdevStatus.6=DISK_OK
     rdevStatus.7=DISK_OK
     rdevStatus.8=DISK_OK
     rdevStatus.9=DISK_OK
     rdevStatus.10=DISK_OK
     rdevStatus.11=DISK_OK
     rdevStatus.12=DISK_OK
     rdevStatus.13=DISK_OK
     rdevStatus.14=DISK_OK
     rdevStatus.15=DISK_OK
     rdevStatus.16=DISK_OK
     rdevStatus.17=DISK_OK
     rdevStatus.18=DISK_OK
     rdevStatus.19=DISK_OK
     rdevStatus.20=DISK_OK
     rdevStatus.21=DISK_OK
     rdevStatus.22=DISK_OK
     rdevStatus.23=DISK_NP
     rdevStatus.24=DISK_NP
     rdevStatus.25=DISK_NP
     rdevStatus.26=DISK_NP
     rdevStatus.27=DISK_NP
     rdevStatus.28=DISK_NP
     rdevStatus.29=DISK_OK
     root@BlackHole:~# mdcmd status | grep "rdevStatus"
     root@BlackHole:~#

     A similar thing happened with rdevName - I think it may be a web terminal issue, not sure:

     root@BlackHole:~# mdcmd status | grep "rdevName"
     root@BlackHole:~# mdcmd status | grep -i "rdevname"
     rdevName.0=sdy
     rdevName.1=sdw
     rdevName.2=sde
     rdevName.3=sdf
     rdevName.4=sdg
     rdevName.5=sdc
     rdevName.6=sdt
     rdevName.7=sdd
     rdevName.8=sdj
     rdevName.9=sdu
     rdevName.10=sdh
     rdevName.11=sdl
     rdevName.12=sdk
     rdevName.13=sdb
     rdevName.14=sdv
     rdevName.15=sdm
     rdevName.16=sdn
     rdevName.17=sdq
     rdevName.18=sdr
     rdevName.19=sdo
     rdevName.20=sds
     rdevName.21=sdi
     rdevName.22=sdp
     rdevName.23=
     rdevName.24=
     rdevName.25=
     rdevName.26=
     rdevName.27=
     rdevName.28=
     rdevName.29=sdx
     root@BlackHole:~#

     And finally df -h:

     root@BlackHole:~# df -h
     Filesystem      Size  Used Avail Use% Mounted on
     rootfs           24G  1.4G   23G   6% /
     tmpfs            32M  1.3M   31M   5% /run
     devtmpfs         24G     0   24G   0% /dev
     tmpfs            24G     0   24G   0% /dev/shm
     cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
     tmpfs           128M  904K  128M   1% /var/log
     /dev/sda1        59G  4.6G   54G   8% /boot
     /dev/loop0       20M   20M     0 100% /lib/modules
     /dev/loop1      5.9M  5.9M     0 100% /lib/firmware
     /dev/md1        7.3T  3.3T  4.1T  45% /mnt/disk1
     /dev/md2        7.3T  2.5T  4.9T  34% /mnt/disk2
     /dev/md3        7.3T  728G  6.6T  10% /mnt/disk3
     /dev/md4        7.3T  728G  6.6T  10% /mnt/disk4
     /dev/md5        7.3T  728G  6.6T  10% /mnt/disk5
     /dev/md6        7.3T  728G  6.6T  10% /mnt/disk6
     /dev/md7        7.3T  728G  6.6T  10% /mnt/disk7
     /dev/md8        7.3T  728G  6.6T  10% /mnt/disk8
     /dev/md9        7.3T  844G  6.5T  12% /mnt/disk9
     /dev/md10       7.3T  728G  6.6T  10% /mnt/disk10
     /dev/md11       7.3T  1.4T  6.0T  19% /mnt/disk11
     /dev/md12       7.3T  730G  6.6T  10% /mnt/disk12
     /dev/md13       7.3T  728G  6.6T  10% /mnt/disk13
     /dev/md14       7.3T  728G  6.6T  10% /mnt/disk14
     /dev/md15       7.3T  730G  6.6T  10% /mnt/disk15
     /dev/md16       7.3T  728G  6.6T  10% /mnt/disk16
     /dev/md17       7.3T  730G  6.6T  10% /mnt/disk17
     /dev/md18       7.3T  1.4T  6.0T  18% /mnt/disk18
     /dev/md19       7.3T  728G  6.6T  10% /mnt/disk19
     /dev/md20       7.3T  728G  6.6T  10% /mnt/disk20
     /dev/md21       7.3T  734G  6.6T  10% /mnt/disk21
     /dev/md22       7.3T  954G  6.4T  13% /mnt/disk22
     /dev/nvme0n1p1  954G  100G  854G  11% /mnt/cache
     shfs            161T   22T  139T  14% /mnt/user0
     shfs            161T   22T  140T  14% /mnt/user
     /dev/loop2       40G  7.9G   31G  21% /var/lib/docker
     /dev/loop3      1.0G   17M  905M   2% /etc/libvirt
     shm              64M     0   64M   0% /var/lib/docker/containers/ad97b37af764aa83b3276d7f03807a5486a8885f56fdb77f557e5b78f820e150/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/8d286807ba4757698d04b3160d399be1162d0b33dd8cfc6b86bde162bf95f1be/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/1eb3ea0e1e716beee08125eb1f4d65e421bf1182860515e1d0926a6f5f24500d/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/b6e07ad9a92216ffc1a5dd6ef6206852a466eb2aa4b9dfd5a38a990cc14f7d95/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/7935b46776f36856f516675b79cd89261734cea208e0ee25abe162293bde75a2/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/47458aa783d0ec7ca0f5bd3171dac3232d29e7f5bfea8058a8b422633afc486e/mounts/shm
     shm              64M  368K   64M   1% /var/lib/docker/containers/05c5042e739fd6f3e2ac99a4e2f4193ae1fb8059d305587ceb2efe960f280cd8/mounts/shm
     shm              64M  8.0K   64M   1% /var/lib/docker/containers/a38258d8e8231f6114f033bf8e5f4f36a99e4e0f6ed17948fec61ac54a7369d1/mounts/shm
     root@BlackHole:~#

     If you are wondering whether my NVMe is part of my array - no, it is not. Anyway, back to trying to find all the stuff for my actual computer setup so I can get off this tiny laptop, where I might actually be able to look at some code.
  13. Correct - you probably have an NVMe SSD reporting as a SCSI device in the kernel drivers. I'm not sure if this is a kernel change in 6.7.x, as Pauven (I believe) is running a 6.6.x build. But the change I posted above addresses this specific problem. There's some debugging that needs to be done with the disk reporting; those errors are purely informational output in the report and should not affect the results of the tester.
  14. This should be possible, just don't expect it to be easy.
  15. So uh, this will be a weekend project. Turns out I have three pieces of hardware that completely break that entire section of the script. The first one was an easy fix: the NVMe SSDs break the array declaration because you can't have a ":" in the name of a variable. I reworked your sed on line 132 to:

        < <( sed -e 's/://g' -e 's/\[/scsi/g' -e 's/]//g' < <( lsscsi -H ) )

     That takes care of that, but I feel I should get a bit more "advanced" with it, since we could strip all invalid characters from that area. From there, it vomits on my megaraid controller at line 215. My resulting output is kind of comical:

        SCSI Host Controllers and Connected Drives
        --------------------------------------------------
        [0] scsi0 usbstorage -
          [N:1:1:1] parity sdy /dev/nvme1n1 WDC WD80EMAZ-00W
        [1] scsi1 megaraidsas - MegaRAID SAS 2008 [Falcon]
          disk1 sdw WDC WD80EFAX-68L
          [N:1:1:1] parity sdy /dev/nvme1n1 WDC WD80EMAZ-00W
        [N0] scsiN0 devnvme0 -
        [N1] scsiN1 devnvme1 -

     I uh. I have 24 online 8TB Reds. It seems like the associative arrays are off by one, but that doesn't explain the duplicated output under the 0 and then the 1. I'll have to poke at it when I have time to sit down and really sink my teeth into it.
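     On the "strip all invalid characters" idea: one untested option would be to keep the existing [ -> scsi substitution and then whitelist only the characters bash allows in variable names (plus spaces), rather than listing the offending characters one by one:

        < <( sed -e 's/\[/scsi/g' -e 's/[^A-Za-z0-9_ ]//g' < <( lsscsi -H ) )

     That still turns "[0]" into "scsi0", and in the same pass drops colons, slashes, hyphens, and anything else that would break the array declarations.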