Unoid

Everything posted by Unoid

  1. Update after loading 6.5TB of movies back to the zpool:

       pool: speedteam
      state: ONLINE
       scan: scrub in progress since Sat Feb 10 12:50:25 2024
             8.40T scanned at 0B/s, 355G issued at 2.01G/s, 8.40T total
             0B repaired, 4.13% done, 01:08:28 to go
  2. I loaded 800GB of movies (which love the 1M recordsize) onto the zpool. Scrubs now average over 11GB/s reads (4 NVMes at ~2.9GB/s each). I'm curious whether the previous zpool, once it was filled to 55% capacity, could really have slowed the scrub down to 50-100MB/s?
  3. nvme-cli: when I toggle it on, it never downloads and installs. Anyone else seen this?
  4. I ran tests after backing up the data to HDD. I only varied the raid type [0, z1, 2-vdev mirror]. I left ashift at 12 since that's the default the GUI uses, and changed recordsize=[16k, 128k, 512k, 1M]. I noticed that, per raid type, the first fio run writes its 8 x 10GB job files at the pool's default 128k recordsize (which takes a while), and every later test with a different recordsize set on the pool reuses those files, still written at the initial 128K recordsize. That introduces error into these results. The fio command was taken from this article on benchmarking NVMe under ZFS while keeping ARC from fudging the numbers: https://pv-tech.eu/posts/common-pitfall-when-benchmarking-zfs-with-fio/

     Sharing the results anyway.

     fio command:
     fio --rw=read --bs=1m --direct=1 --ioengine=libaio --size=10G --group_reporting --filename=/mnt/user/speedtest/bucket --name=job1 --offset=0G --name=job2 --offset=10G --name=job3 --offset=20G --name=job4 --offset=30G --name=job5 --offset=40G --name=job6 --offset=50G --name=job7 --offset=60G --name=job8 --offset=70G

     4x4TB TeamGroup MP34. Results labeled type_(recordsize, ashift):

     r0_(16K,12):
       READ:  bw=3049MiB/s (3197MB/s), io=80.0GiB (85.9GB), run=26867msec
       WRITE: bw=778MiB/s (816MB/s), io=80.0GiB (85.9GB), run=105330msec
     r0_(128K,12):
       READ:  bw=3057MiB/s (3206MB/s), io=80.0GiB (85.9GB), run=26796msec
       WRITE: bw=6693MiB/s (7018MB/s), io=80.0GiB (85.9GB), run=12239msec
     r0_(512K,12):
       READ:  bw=3063MiB/s (3212MB/s), io=80.0GiB (85.9GB), run=26746msec
       WRITE: bw=3902MiB/s (4092MB/s), io=80.0GiB (85.9GB), run=20994msec
     r0_(1M,12):
       READ:  bw=3059MiB/s (3208MB/s), io=80.0GiB (85.9GB), run=26776msec
       WRITE: bw=3969MiB/s (4162MB/s), io=80.0GiB (85.9GB), run=20639msec

     z1_(16K,12):
       READ:  bw=3050MiB/s (3198MB/s), io=80.0GiB (85.9GB), run=26860msec
       WRITE: bw=410MiB/s (430MB/s), io=80.0GiB (85.9GB), run=199875msec
     z1_(128K,12):
       READ:  bw=2984MiB/s (3129MB/s), io=80.0GiB (85.9GB), run=27456msec
       WRITE: bw=5873MiB/s (6158MB/s), io=80.0GiB (85.9GB), run=13949msec
     z1_(512K,12):
       READ:  bw=2990MiB/s (3135MB/s), io=80.0GiB (85.9GB), run=27402msec
       WRITE: bw=1596MiB/s (1674MB/s), io=80.0GiB (85.9GB), run=51318msec
     z1_(1M,12):
       READ:  bw=1086MiB/s (1139MB/s), io=80.0GiB (85.9GB), run=75447msec
       WRITE: bw=1949MiB/s (2043MB/s), io=80.0GiB (85.9GB), run=42039msec

     2vdev mirror_(16K,12):
       READ:  bw=3091MiB/s (3241MB/s), io=80.0GiB (85.9GB), run=26506msec
       WRITE: bw=1521MiB/s (1595MB/s), io=80.0GiB (85.9GB), run=53867msec
     2vdev mirror_(128K,12):
       READ:  bw=3085MiB/s (3234MB/s), io=80.0GiB (85.9GB), run=26558msec
       WRITE: bw=4421MiB/s (4636MB/s), io=80.0GiB (85.9GB), run=18529msec
     2vdev mirror_(512K,12):
       READ:  bw=3090MiB/s (3240MB/s), io=80.0GiB (85.9GB), run=26510msec
       WRITE: bw=3486MiB/s (3655MB/s), io=80.0GiB (85.9GB), run=23500msec
     2vdev mirror_(1M,12):
       READ:  bw=3104MiB/s (3255MB/s), io=80.0GiB (85.9GB), run=26393msec
       WRITE: bw=3579MiB/s (3753MB/s), io=80.0GiB (85.9GB), run=22891msec

     Re-run of 2vdev mirror_(1M,12) after deleting the fio bucket file, since the original bucket had been written during the first default 128K run:
       READ:  bw=3258MiB/s (3416MB/s), io=80.0GiB (85.9GB), run=25145msec
       WRITE: bw=4440MiB/s (4656MB/s), io=80.0GiB (85.9GB), run=18451msec

     ^^^ A significant difference, confirming that running this test without deleting the fio bucket file between runs affects the speeds.

     I want to gather data points for ashift=[9,12,13], but that isn't exposed in the GUI on zpool creation. I may get time to create the pool in bash and set ashift there, then do the format and mount (unsure whether the GUI can pick it up if I do it via the CLI; see the sketch after this post).

     edit: I remade the pool in my desired layout of raidz1 and immediately set recordsize=1M. Check out the fio bucket written at 128k vs 1M:

     z1_(1M,12), 128k fio bucket:
       READ:  bw=1086MiB/s (1139MB/s), io=80.0GiB (85.9GB), run=75447msec
       WRITE: bw=1949MiB/s (2043MB/s), io=80.0GiB (85.9GB), run=42039msec
     z1_(1M,12), 1M fio bucket:
       READ:  bw=3221MiB/s (3378MB/s), io=80.0GiB (85.9GB), run=25432msec
       WRITE: bw=6124MiB/s (6422MB/s), io=80.0GiB (85.9GB), run=13376msec

     Makes me wish I had deleted the fio bucket after every run. I'm settling on 1M and z1. I may still try ashift changes.
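     Since the post above mentions creating the pool from bash to control ashift, here is a minimal sketch of what that could look like, reusing the pool name and device names from the zpool status output; whether the Unraid GUI imports a pool created this way is exactly the open question from the post.

     # sketch only -- a raidz1 pool with explicit ashift and 1M records from the start
     # (whole-disk paths shown; Unraid's own pools use partition 1 of each device)
     zpool create -o ashift=12 -O recordsize=1M -O compression=on \
         speedteam raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

     # confirm the properties actually took
     zpool get ashift speedteam
     zfs get recordsize,compression speedteam

     # between fio runs, delete the bucket so it is rewritten at the current recordsize
     rm /mnt/user/speedtest/bucket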
  5. Are they 4k sector size disks?
  6. May I ask what topology you have in your NVMe zpool? z1? Mirrored vdevs? How many disks?
  7. I've been doing a LOT of reading on ZFS on NVMe. My drives only expose a 512-byte sector size, not 4k, which seems weird for a newish PCI-E 3.0 4TB device; ashift=9 corresponds to that (2^9 = 512). I have a spreadsheet of tests to run in different configurations, and hopefully I'll find out what is slowing these NVMes down so horribly. JorgeB, if I create ZFS vdevs/pools with various options in bash, does the OS on /boot know how to persist what I did? That's why I asked whether I need to modify the zfs.conf file on /boot.
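     A minimal sketch, assuming nvme-cli and smartmontools are installed, of checking which LBA formats the drives actually advertise; some NVMe drives list a 4K format that simply isn't the one currently in use:

     # list the supported LBA formats; the active one is marked "(in use)"
     nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
     # smartctl reports the formatted LBA size as well
     smartctl -a /dev/nvme0n1 | grep -i "LBA Size"
     # if a 4K format is listed, "nvme format" can switch to it, but that erases the drive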
  8. Random Unraid question: for the ZFS tunables in /sys/module/zfs/parameters/*, am I able to set each of them in /boot/modprobe.d/zfs.conf? I'm thinking of changing settings for ashift, the default recordsize, etc. Also, can I use the CLI as root to zfs create instead of using the GUI? (Sketch of the modprobe.d syntax below.)
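     A minimal sketch of the modprobe.d syntax for module-level tunables; the values are purely illustrative. Note that ashift and recordsize are pool/dataset properties set with zpool/zfs at creation or via zfs set, not module parameters, so they don't belong in this file.

     # /boot/modprobe.d/zfs.conf -- intended to be read when the zfs module loads
     options zfs zfs_arc_max=34359738368 zfs_vdev_max_auto_ashift=14

     # pool/dataset properties are set per pool instead, e.g.:
     #   zfs set recordsize=1M speedteam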
  9. JorgeB: thank you for walking me through the troubleshooting steps. At this point I'm going to set the shares to send data to my HDD main array and run mover, then remake the zpool and run more tests. Extended SMART test = 0 errors. The PCI-E link is correct at 8GT/s x4 lanes (PCI-E 3.0 NVMes in a carrier card in a PCI-E 4.0 x16 slot). I can't tell what the issue may be. Once the data is moved I'll try disk benchmarks on each NVMe separately.
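     For reference, a sketch of the two checks mentioned above (the bus address is a placeholder; the first command finds it):

     # locate the NVMe controllers, then check the negotiated link per device
     lspci | grep -i "non-volatile"
     lspci -vv -s <bus:dev.fn> | grep -i LnkSta      # expect "Speed 8GT/s, Width x4"
     # run the extended SMART self-test, then review the results afterwards
     smartctl -t long /dev/nvme0n1
     smartctl -a /dev/nvme0n1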
  10. I did the same from a Windows 11 gaming desktop; same speed as I showed.
  11. I copied a 27GB file to and from the same zpool mount. My desktop, which mounts it over SMB, is limited to 5GBps, and this result is eerily like reading and writing over the network and maxing out that 5GBps link. These speeds should instead be bottlenecked by the pool's write speed of around 2GB/s...
  12. Last time I tested, it was doing multiple GB/s on large sequential writes. I just ran a test copying a 9GB Blu-ray rip (h265) I have:

     rsync: sent 8,868,009,936 bytes, received 35 bytes, 311,158,244.60 bytes/sec

     Grafana only showed a peak of 129MB/s write for the rsync. Running

     dd if=/dev/zero of=/mnt/user/speedtest oflag=direct bs=128k count=32k

     reported 685MB/s in the terminal, but Grafana showed 2.03GB/s of writes to one NVMe. A VM with a vdisk on the same zpool share running KDiskMark showed speeds similar to the screenshot in the original post. (See the note on /dev/zero and compression below.)
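     An editorial caveat, not from the original post: with compression=on, a /dev/zero stream compresses to almost nothing, so dd numbers can mislead in both directions. A sketch of the same test with incompressible data (the file path is illustrative, and /dev/urandom itself can be a bottleneck on some kernels):

     # write incompressible data so compression can't skew the result
     dd if=/dev/urandom of=/mnt/user/speedtest/ddtest.bin oflag=direct bs=1M count=4096 status=progress
     # read it back, bypassing the page cache on the read side
     dd if=/mnt/user/speedtest/ddtest.bin of=/dev/null iflag=direct bs=1M status=progress
     rm /mnt/user/speedtest/ddtest.bin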
  13. I have compression=on. I'm running a scrub again to gather metrics; I don't see any extra logs to give me more insight. NVMe temps are all under 50C (I just added heatsinks to them). I captured the first 5 minutes of the scrub AFTER it does the initial "indexing" or whatever that operation is:

       pool: speedteam
      state: ONLINE
       scan: scrub in progress since Mon Feb  5 10:30:29 2024
             7.92T scanned at 0B/s, 188G issued at 421M/s, 7.92T total
             0B repaired, 2.32% done, 05:20:51 to go

     Disk IO: writes are all less than 150kB/s; reads are in the screenshot. The CPU is barely being taxed, mostly sitting around 3-10% across all threads with occasional spikes to 50-100%; total CPU average is under 6%.
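     A sketch of watching per-device activity while the scrub runs, which would show whether a single NVMe is lagging the rest (the 5-second interval is arbitrary):

     # per-vdev bandwidth and IOPS every 5 seconds
     zpool iostat -v speedteam 5
     # latency histograms, if the installed OpenZFS supports the -w flag
     zpool iostat -w speedteam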
  14. Unraid 6.12.6, EPYC 7302 (16 cores, 3.3GHz), 256GB PC3200 ECC.

     Pool in question: 4x4TB TeamGroup MP34 NVMe (PCIe 3.0) on a 4x4x4x4 bifurcation card, raidz1, 32GB RAM cache, pool at about 50% use, compression=on.

     Scrub speeds range from 90MB/s to 250MB/s. Shouldn't scrub speeds be a good bit faster? These TeamGroup MP34s do 3000MB/s sequential reads and roughly 2400MB/s sequential writes, TLC NAND with DRAM. 5 hours 11 minutes for 6.3TB used of an 11.8TB pool:

       pool: speedteam
      state: ONLINE
       scan: scrub repaired 0B in 05:11:58 with 0 errors on Thu Feb  1 06:11:59 2024
     config:

             NAME                STATE     READ WRITE CKSUM
             speedteam           ONLINE       0     0     0
               raidz1-0          ONLINE       0     0     0
                 /dev/nvme0n1p1  ONLINE       0     0     0
                 /dev/nvme1n1p1  ONLINE       0     0     0
                 /dev/nvme2n1p1  ONLINE       0     0     0
                 /dev/nvme3n1p1  ONLINE       0     0     0

     errors: No known data errors
  15. I really appreciate you going above and beyond Docker-related support. I tried multiple network options (bridge, br0, host, etc.), and my OPNsense forwarding is spot on. For giggles I fired up an Ubuntu 22.04 server VM, went through the steamcmd fun, and changed my port forward destination to its IP, and my friends can get in. I copied over the same palsettings.ini file from the save directory and everything. I tried nc -u host port against the Docker container from a remote server and got through, so I really don't know what's going on. (UDP test sketch below.)
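     A minimal sketch of that kind of UDP reachability test (addresses are placeholders; nc flag syntax varies between netcat variants, and with UDP "success" only means the datagram wasn't visibly rejected):

     # on the game server host, listen on the Palworld port (OpenBSD-style nc)
     nc -u -l 8211
     # from a remote machine, send a test datagram at the public IP
     echo ping | nc -u <public-ip> 8211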
  16. I have 8211 set as the default in the Docker config, port forwarded correctly like my other working services, and the Docker allocation shows it listening on it as well. When I or anyone else tries to connect, we get a network timeout. OPNsense was only showing a PASS for outbound traffic via 8211 UDP. When I checked the firewall info logs, they showed inbound packets from a known host (a friend, or me on a wifi hotspot) being BLOCKED, with IP:random source port like 20304 > server dest 8211. I agree it's probably a networking issue, but I've done everything right.
  17. Palworld: I can connect from a local machine to my Unraid server using the LAN IP. For some reason I see my OPNsense blocking incoming connections targeting my local server on :8211, but the source ports are all over the place, from the 20,000s to the 50,000s, and I won't open such a huge range in my OPNsense port forward. It's like the game sends its UDP traffic from random ports rather than actually using 8211 like it should.
  18. As of last week telegraf will no longer start:

     [telegraf] Error running agent: could not initialize input inputs.smart: smartctl not found: verify that smartctl is installed and it is in your PATH (or specified in config): provided path does not exist: []

     I have the .conf using sudo for execution, and PATH is fine: which smartctl shows /usr/sbin/smartctl, the binary runs fine without adding the path to the command, and echo $PATH includes /usr/sbin. What's going on? EDIT: I see users above already mentioned this; I'm on 6.8.3 however.
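     A sketch of pointing the plugin at the binary explicitly in telegraf.conf, which may sidestep the PATH lookup; option names follow recent telegraf releases (older builds used a single path setting):

     [[inputs.smart]]
       # tell the plugin exactly where smartctl lives instead of relying on PATH
       path_smartctl = "/usr/sbin/smartctl"
       use_sudo = true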
  19. Well, this is weird. I moved the USB stick to a different port and it had more errors; then I fiddled with how it was plugged in and magically it's booting again now. Doesn't make much sense. I'm going to back up the USB stick again and buy a replacement. I suppose I don't need help anymore.
  20. Today I noticed my power flickered and my Unraid server's UPS kicked in temporarily. Then my Unraid USB drive became unresponsive. I wrote down some syslog problems and did a reboot thinking it'd come back, but it didn't. I popped the USB stick into my Win10 box and the files seem intact. Any ideas? I do have backups of the USB stick, but they're on one of the disks in my Unraid server; I'd have to pop one out, mount it on Windows/Linux, and try to reformat the USB stick and copy the old files back.

     EDIT: I plugged a monitor into my server. It does boot off the USB, and if I pick the default Unraid OS it stalls at the first step of reading /bzimage.

     EDIT 2: Trying to boot again, it only says: SYSLINUX 6.03 EDD / Boot error

     The syslog errors I wrote down:

     Apr 11 12:06:51 Tower kernel: usb 2-1.2: USB disconnect, device number 3
     Apr 11 12:06:52 Tower kernel: usb 2-1.2: new full-speed USB device number 4 using ehci-pci
     Apr 11 12:06:52 Tower kernel: hid-generic 0003:0764:0501.0002: hiddev96,hidraw0: USB HID v1.10 Device [CPS CP1350PFCLCD] on usb-0000:00:1d.0-1.2/input0
     Apr 11 12:06:53 Tower kernel: xhci_hcd 0000:03:00.0: Cannot set link state.
     Apr 11 12:06:53 Tower kernel: usb usb4-port2: cannot disable (err = -32)
     Apr 11 12:06:53 Tower kernel: xhci_hcd 0000:03:00.0: Cannot set link state.
     Apr 11 12:06:53 Tower kernel: usb usb4-port2: cannot disable (err = -32)
     Apr 11 12:06:53 Tower kernel: usb 4-2: USB disconnect, device number 2
     Apr 11 12:06:53 Tower kernel: sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
     Apr 11 12:06:53 Tower kernel: sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x28 28 00 00 16 d9 70 00 00 40 00
     Apr 11 12:06:53 Tower kernel: print_req_error: I/O error, dev sda, sector 1497456
     Apr 11 12:06:53 Tower kernel: sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
     Apr 11 12:06:53 Tower kernel: sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x28 28 00 00 16 d9 b0 00 00 80 00
     Apr 11 12:06:53 Tower kernel: print_req_error: I/O error, dev sda, sector 1497520
     Apr 11 12:06:53 Tower kernel: print_req_error: I/O error, dev loop0, sector 15008
     Apr 11 12:06:53 Tower kernel: SQUASHFS error: squashfs_read_data failed to read block 0x74fa5c
     Apr 11 12:06:53 Tower kernel: SQUASHFS error: Unable to read fragment cache entry [74fa5c]
     Apr 11 12:06:53 Tower kernel: SQUASHFS error: Unable to read page, block 74fa5c, size 159e4
     ... (repeated SQUASHFS "Unable to read fragment cache entry [74fa5c]" / "Unable to read page, block 74fa5c, size 159e4" errors, and I/O errors on dev loop0 for sectors 15010 through 15022) ...
     Apr 11 12:06:53 Tower kernel: SQUASHFS error: squashfs_read_data failed to read block 0x3f5d8c
     Apr 11 12:06:53 Tower kernel: SQUASHFS error: squashfs_read_data failed to read block 0x3f5d8c
     Apr 11 12:06:53 Tower kernel: blk_partition_remap: fail for partition 1
     Apr 11 12:06:54 Tower kernel: FAT-fs (sda1): Directory bread(block 324928) failed
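     A sketch of checking the stick's FAT filesystem from another Linux box before rebuilding it (the device name is a placeholder; confirm it with lsblk so the wrong disk isn't touched):

     # identify the flash drive first
     lsblk -o NAME,SIZE,LABEL,FSTYPE
     # check and interactively repair the FAT partition (dosfstools)
     fsck.fat -r /dev/sdX1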
  21. Binhex, I host my own cloud OpenVPN server that I wish to route through. The env variables for setup don't seem to support that. Is there a quick guide on using my own .ovpn file? I could just run a VM instead, but that's much more resource intensive than your Docker container.
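     For what it's worth, a guess at how a custom provider is commonly wired up on the binhex VPN containers, not confirmed against the docs for this particular image: drop the .ovpn (plus any credentials file) into the container's openvpn config folder and set the provider to custom.

     # host path is an example; it maps to /config inside the container
     cp mycloud.ovpn /mnt/user/appdata/binhex-container/openvpn/
     # env vars set in the Unraid Docker template:
     #   VPN_ENABLED=yes
     #   VPN_PROV=custom
     #   VPN_CLIENT=openvpn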
  22. I made my Windows 10 local account use the same user:pass as an Unraid user. I set the shares to private and gave the user read/write. I can access the correct shares without logging in, however they still show as read-only in Windows properties. I'm at a loss as to what else to do. I'll have to settle for running rclone on Unraid locally to get Google Drive access. (Remount sketch below.)
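     A sketch of clearing any cached guest session and remounting with explicit credentials from the Windows side (server, share, and user names are placeholders):

     :: drop existing SMB connections so Windows stops reusing a cached session
     net use * /delete
     :: remount with the Unraid user's credentials (the * prompts for the password)
     net use Z: \\TOWER\sharename /user:unoid * /persistent:yes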
  23. Because it's annoying that I can't mount my Google Drive onto the network share, or do the equivalent with other Windows services I'd like to run.
  24. This is the exact same issue I'm seeing. I've gone into the permission and sharing options on the Windows side and did the equivalent of 777 for all users I could see; it still won't fix the issue. The next step, when I get home, is to try downgrading Windows to SMB 2.0 and compare.
  25. I remounted the network drive without any login credentials (shares are all public), and I still see folders listed with a faux "read only" flag in Windows.