chris smashe

Everything posted by chris smashe

  1. @ich777 I reseated the card and re-plugged the power cable. So far so good: I have had it running for over an hour transcoding a couple of videos with no issues. The only thing now is that GPU stats is no longer showing the power consumption, but as long as it's not crashing I'm good to go.
  2. I have not; I will try that. The PSU is a Corsair RM850x. Yes, I can move it to my Windows machine if reseating it does not work. Will let you know the results. Thanks
  3. Just happened again. Attached are the diagnostics: smashenas-diagnostics-20230707-2045.zip
  4. OS: 6.11.4. Driver version: latest (v535.54.03). I have been using the same setup for a long time without an issue. In the past 3 days I have twice had an issue where my GPU's (1070) fans start going at full blast. When I look at the server, the GPU is no longer visible, and the logs show that it crashed. I thought I saved a copy of the system log but I did not; I do have the GPU logs, which I will attach. The first time, I had used the GPU to transcode a TV show earlier in the night. Today the GPU was doing nothing and had not done anything for a couple of days (since the last time this happened). I have to reboot to fix it. I have Space Invader One's nvidia power save and nvidia power save hourly scripts installed. The last time it happened was around x:45, so if the hourly script runs on the hour, it would not have been running. They have been installed since I set up the server. nvidia-bug-report.log.gz
  5. Is there a way to tie a VM to an IP for VNC connections, so instead of using serverip:portnumber you would just use an IP address? I would like to do this for local DNS records/firewall rules. Thanks
  6. Apps > NerdTools. After that is installed, go to Settings > NerdTools and you can install it from there.
  7. Yeah, my TV and player support DV (LG OLED/Apple TV 4K), but Plex is having an issue with it (some licensing thing, I guess). This is one issue that the tester I was looking for would not fix, since the file is valid but Plex is not playing it. If there is a tester to validate videos, it would still be nice to have. It might not fix this issue, but it would be good to test all downloaded videos to make sure they are not corrupted. Thanks for your time.
  8. I did not, because what I am looking for would have nothing to do with Plex. It would sit between Radarr and Plex.
  9. Is there an app that can validate a show/movie after it is downloaded, so that it is known to play? I am sometimes getting "color space is not supported" from Plex (along with other unable-to-play issues) and am looking for a way to know this is going to happen before it happens for a user. It's not a huge deal when it happens to me, but it is a problem when it happens for someone else.
  10. I had it set to 70% so it would start moving before it was full, to account for that. I have since swapped my cache drive for a bigger one today, so it won't be such an issue, but it would be a nice feature, and one I thought the plugin already had.
  11. Ok, I must have misunderstood what it did. I was trying to have something that would start the mover if I went crazy with the downloads and the cache drive filled up before the scheduled task fired. It would be very helpful if that could be added. Thanks for your help.
  12. @hugenbdd What is the "Move All from Cache-yes shares pool percentage" for, then? Does it not start the mover once the drive gets to a certain percentage full?
  13. @hugenbdd I updated the "only move" threshold to 5% and the "move all from cache" percentage to 20% so it will run faster. I also turned on the logs. It still does not run, and nothing about the mover is shown in the logs.
  14. Thanks @hugenbdd. I clicked the mover button by accident. Once this is done running, I will try your suggestions and let you know.
  15. My "move when the drive gets to a certain percentage" is no longer working. I am pretty sure it used to work, but now I am unable to get it to fire. My settings are below. I have it set so that when the cache gets to 70% full it runs the mover, but I am at 70% (and I have tried higher) and the mover does not start. Any ideas? Thanks
  16. Open up a terminal and type cd /mnt/cache, then du | sort -n. The largest files/folders will be at the bottom. With that you can make a better determination of what is taking up all of your space.
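The check above can be done in a single pipeline; a minimal sketch, assuming the cache is mounted at /mnt/cache (override TARGET to inspect another path):

```shell
#!/bin/sh
# Summarize each top-level item on the cache, smallest to largest,
# so the space hogs end up at the bottom of the output.
TARGET=${TARGET:-/mnt/cache}   # Unraid cache mount; adjust if yours differs
du -sh "$TARGET"/* 2>/dev/null | sort -h
```

du -sh prints one human-readable total per item, and sort -h understands suffixes like M and G, which plain sort -n does not.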
  17. You're comparing apples to oranges. To get power draw as low as your examples of low-power machines, you would need to: get rid of most of your RAM (the mini has 8GB, the NAS has 4); remove most of your hard drives (the mini has one SSD, and the NAS figures most likely do not take hard drives into account since it does not come with drives); remove your SAS controller (neither of your examples has one); and lastly remove your 10Gb NIC and go back to the one built into your motherboard (10Gb uses more power than 1Gb, and you are most likely still powering the onboard one as well). You can get lower power use, but you will have to remove items and/or replace them with more power-efficient ones. You're probably best off only powering on the server when you are going to use it, or looking into adding renewable energy sources to your house if you can. If you are going to start replacing hardware, figure out how long the lower power bills will take to offset what the new hardware costs you, if that is the only reason you are buying it.
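That payback calculation can be sketched with made-up numbers (the 50 W saved, $0.15/kWh rate, and $200 hardware cost are all hypothetical placeholders; plug in your own):

```shell
# Years for a hardware upgrade to pay for itself from electricity savings alone.
awk 'BEGIN {
  watts_saved = 50        # hypothetical reduction in draw
  price_per_kwh = 0.15    # hypothetical electricity rate, $/kWh
  hardware_cost = 200     # hypothetical upgrade cost, $
  yearly = watts_saved / 1000 * 24 * 365 * price_per_kwh
  printf "saves $%.2f/year, pays for itself in %.1f years\n", yearly, hardware_cost / yearly
}'
# -> saves $65.70/year, pays for itself in 3.0 years
```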
  18. I still have some 30% iowait, but it's no longer hanging my machine. I hope it helps you figure it out.
  19. If you are going to rebuild each disk from parity, you should have upgraded your parity disk first and rebuilt parity. From what I am reading, your parity drive is currently at most 10TB and you put in a 12TB data disk. Your parity drive needs to be as big as or bigger than your biggest data disk, or it pretends that your 12TB disk is only 10TB. If you don't care about the parity, like you said, you could:
1. Remove the parity drive, then go through each drive one by one.
2. Add the replacement drive as an unassigned disk.
3. Copy the data from one of your old disks to the new disk using Midnight Commander or something better that I don't know about.
4. Remove the old disk from the array and replace it with the new disk.
5. Repeat with each disk until you are done transferring data.
6. Add the parity disk back in and rebuild parity.
You will still have all the data on your old disks in case you need it for some reason, but if one fails while this is happening, that data will be gone, since you no longer have parity to restore from. Additional note: stop all Dockers and anything else that writes to the array while you are doing this transfer. If a new file gets added to a disk while it is being copied, that file might not come along, so after you swap drives it will no longer be there. You could add the old disk back in as an unassigned disk and copy the file over, but it is best just not to write to the array during the transfer.
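For step 3, rsync is one alternative to Midnight Commander; a sketch with placeholder mount points (/mnt/disk1 for the old disk and /mnt/disks/new12tb for the Unassigned Devices mount are examples, not the poster's actual paths):

```shell
#!/bin/sh
# Copy one array disk's contents onto the replacement disk.
SRC=${SRC:-/mnt/disk1}           # old array disk (placeholder path)
DST=${DST:-/mnt/disks/new12tb}   # new disk via Unassigned Devices (placeholder)
if [ -d "$SRC" ] && [ -d "$DST" ]; then
  # -a preserves permissions/timestamps; the trailing slash on SRC copies its
  # contents rather than the directory itself. Re-running after stopping all
  # writers is a cheap check that nothing changed mid-copy.
  rsync -avh --progress "$SRC"/ "$DST"/
fi
```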
  20. I have fixed the problem, I think. So far, so good. I wanted to try everything before I started updating the hardware, so I found and updated my motherboard BIOS to the latest version available. I still have some iowaits that go up to the 50s in certain situations, but so far nothing I have thrown at it has locked up the server while it waits on I/O. Most situations I tested still let me transfer my test file at full speed with no degradation, even if an iowait was reported. The two situations where I was able to make the speed slow down slightly were Tdarr transcoding multiple files (3 at a time, plus up to 1 copy running) and Plex detecting intros. Both made my transfer speed drop about 20% at most, but it no longer drops to 0 and locks up the server until the I/O catches up. I would like little to no iowait, but I will take the server always responding without issue as a win for now.
While trying to figure this out over the past 2 days, I found multiple posts about people having the same issue, but none of them had solutions, so I want to post what helped me here. First, update your BIOS if there is an update available. Something about the old BIOS my server was running and btrfs did not work well together. This fixed my issue, so save yourself 2 days and try this first.
If that does not fix it, here are a few things you can do to diagnose what is causing it. First, install Glances from Apps. It is an easy way to view the iowait and the tasks using your CPU, and you can watch/log iowaits while you try different scenarios. You can get the same information with the top command, but Glances makes it easier to view and logs the last 9 warning/critical issues. Next, run iostat with the following command: iostat -hymx 1 4. This will give you a response like the one below. Look to see if any disk sits at 100% usage, as one of mine did.
If one does, that is the disk having issues. Next, install DiskSpeed from Apps and run the benchmark on that drive. In my case, the drive looked fine when I ran the benchmark with everything else turned off, but that helped me rule out the disk (somewhat, since I still thought it could be a disk issue). Lastly, install NerdTools, and then inside NerdTools install iotop. After installing iotop, run it with the command: iotop -o. This lists all the tasks using I/O in real time. Keep it open and do things on your server that cause a high iowait percentage to see which task is causing the problem. In my case it was the btrfs worker (shown in the YouTube video linked in this thread), so if the BIOS update had not worked, I was going to swap out the cache drive and change it from btrfs to xfs as a last resort. If iotop shows the percentage as unavailable, run the following command and then run iotop again; it might still show unavailable for a minute, but then it started showing for me: echo 1 > /proc/sys/kernel/task_delayacct. Hopefully this helps anyone else having this issue. If it starts happening again, I will open this thread back up.
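The diagnostic commands from the walkthrough above, gathered into one place (each step is guarded so it is simply skipped where the tool is not installed):

```shell
#!/bin/sh
# 1. Per-disk utilization: 4 one-second samples; look for a disk pinned near 100%.
command -v iostat >/dev/null && iostat -hymx 1 4

# 2. iotop's IO% column needs the kernel's per-task delay accounting turned on.
[ -w /proc/sys/kernel/task_delayacct ] && echo 1 > /proc/sys/kernel/task_delayacct

# 3. Show only tasks actually doing I/O (iotop is installed through NerdTools).
command -v iotop >/dev/null && iotop -o

true  # the guards above may short-circuit; don't report that as a failure
```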
  21. So it is definitely an issue with the cache drive or btrfs. I ran iotop, and here is a video of it running while I do a transfer and Deluge is running. A good portion of the time the btrfs worker is at 99% I/O but has no reads or writes listed. Then at 31 seconds it is the only thing showing (I used the -o flag to only show processes that have I/O); it looks like it is blocking everything else from running and causing the wait. I have also seen nothing listed at all for a couple of seconds in my tests, but that is not shown in this video. Also attached is an iostat image. My next steps are to swap the drive for another one and, if that does not work, change the filesystem from btrfs to xfs and see if that helps. That will be later, so if anyone has any ideas before I start, that would be helpful.
  22. I changed out the SATA cable. No change. I stopped Docker and ran the file transfer above, and when it hung I ran ps auxf. The only thing that has a STAT of D is [kworker/u16:5+btrfs-worker]. I don't know what this means, but hopefully this helps:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2 0.0 0.0 0 0 ? S 15:57 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [rcu_gp]
root 4 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [rcu_par_gp]
root 5 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [slub_flushwq]
root 6 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [netns]
root 8 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/0:0H-events_highpri]
root 10 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/0:1H-kblockd]
root 11 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [mm_percpu_wq]
root 12 0.0 0.0 0 0 ? I 15:57 0:00 \_ [rcu_tasks_kthread]
root 13 0.0 0.0 0 0 ? I 15:57 0:00 \_ [rcu_tasks_rude_kthread]
root 14 0.0 0.0 0 0 ? I 15:57 0:00 \_ [rcu_tasks_trace_kthread]
root 15 0.0 0.0 0 0 ? S 15:57 0:00 \_ [ksoftirqd/0]
root 16 0.1 0.0 0 0 ? I 15:57 0:02 \_ [rcu_preempt]
root 17 0.0 0.0 0 0 ? S 15:57 0:00 \_ [migration/0]
root 18 0.0 0.0 0 0 ? S 15:57 0:00 \_ [cpuhp/0]
root 19 0.0 0.0 0 0 ? S 15:57 0:00 \_ [cpuhp/1]
root 20 0.0 0.0 0 0 ? S 15:57 0:00 \_ [migration/1]
root 21 0.0 0.0 0 0 ? S 15:57 0:00 \_ [ksoftirqd/1]
root 23 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/1:0H-events_highpri]
root 24 0.0 0.0 0 0 ? S 15:57 0:00 \_ [cpuhp/2]
root 25 0.0 0.0 0 0 ? S 15:57 0:00 \_ [migration/2]
root 26 0.0 0.0 0 0 ? S 15:57 0:00 \_ [ksoftirqd/2]
root 28 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/2:0H-events_highpri]
root 29 0.0 0.0 0 0 ? S 15:57 0:00 \_ [cpuhp/3]
root 30 0.0 0.0 0 0 ? S 15:57 0:00 \_ [migration/3]
root 31 0.0 0.0 0 0 ? S 15:57 0:00 \_ [ksoftirqd/3]
root 33 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/3:0H-events_highpri]
root 34 0.0 0.0 0 0 ? S 15:57 0:00 \_ [cpuhp/4]
root 35 0.0 0.0 0 0 ? S 15:57 0:00 \_ [migration/4]
root 36 0.0 0.0 0 0 ? S 15:57 0:00 \_ [ksoftirqd/4]
root 37 0.0 0.0 0 0 ? I 15:57 0:00 \_ [kworker/4:0-mm_percpu_wq]
root 38 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/4:0H-events_highpri]
root 39 0.0 0.0 0 0 ? S 15:57 0:00 \_ [cpuhp/5]
root 40 0.0 0.0 0 0 ? S 15:57 0:00 \_ [migration/5]
root 41 0.0 0.0 0 0 ? S 15:57 0:00 \_ [ksoftirqd/5]
root 43 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/5:0H-events_highpri]
root 44 0.0 0.0 0 0 ? S 15:57 0:00 \_ [cpuhp/6]
root 45 0.0 0.0 0 0 ? S 15:57 0:00 \_ [migration/6]
root 46 0.0 0.0 0 0 ? S 15:57 0:00 \_ [ksoftirqd/6]
root 48 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/6:0H-kblockd]
root 49 0.0 0.0 0 0 ? S 15:57 0:00 \_ [cpuhp/7]
root 50 0.0 0.0 0 0 ? S 15:57 0:00 \_ [migration/7]
root 51 0.0 0.0 0 0 ? S 15:57 0:00 \_ [ksoftirqd/7]
root 53 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/7:0H-kblockd]
root 54 0.0 0.0 0 0 ? S 15:57 0:00 \_ [kdevtmpfs]
root 55 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [inet_frag_wq]
root 57 0.0 0.0 0 0 ? I 15:57 0:00 \_ [kworker/1:1-events]
root 59 0.0 0.0 0 0 ? S 15:57 0:00 \_ [oom_reaper]
root 61 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [writeback]
root 62 0.0 0.0 0 0 ? S 15:57 0:00 \_ [kcompactd0]
root 63 0.0 0.0 0 0 ? SN 15:57 0:00 \_ [ksmd]
root 64 0.0 0.0 0 0 ? SN 15:57 0:00 \_ [khugepaged]
root 65 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kintegrityd]
root 66 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kblockd]
root 67 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [blkcg_punt_bio]
root 68 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [ata_sff]
root 69 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [devfreq_wq]
root 71 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/1:1H-kblockd]
root 94 0.2 0.0 0 0 ? I 15:57 0:07 \_ [kworker/u16:4-btrfs-endio-write]
root 116 0.0 0.0 0 0 ? S 15:57 0:00 \_ [kswapd0]
root 120 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kthrotld]
root 125 0.0 0.0 0 0 ? I 15:57 0:00 \_ [kworker/7:2-rcu_gp]
root 171 0.0 0.0 0 0 ? S 15:57 0:00 \_ [xenbus_probe]
root 241 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [target_completi]
root 242 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [target_submissi]
root 243 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [xcopy_wq]
root 250 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [vfio-irqfd-clea]
root 268 0.2 0.0 0 0 ? I 15:57 0:05 \_ [kworker/u16:6-btrfs-endio-write]
root 274 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/2:1H-kblockd]
root 335 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kstrp]
root 380 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/4:1H-kblockd]
root 381 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/3:1H-kblockd]
root 385 0.0 0.0 0 0 ? S 15:57 0:00 \_ [scsi_eh_0]
root 386 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [scsi_tmf_0]
root 388 0.0 0.0 0 0 ? S 15:57 0:00 \_ [usb-storage]
root 422 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/7:1H-events_highpri]
root 423 0.0 0.0 0 0 ? I< 15:57 0:00 \_ [kworker/6:1H-events_highpri]
root 764 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [acpi_thermal_pm]
root 774 0.0 0.0 0 0 ? S 15:59 0:00 \_ [scsi_eh_1]
root 775 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [kworker/5:1H-kblockd]
root 776 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [scsi_tmf_1]
root 777 0.0 0.0 0 0 ? S 15:59 0:00 \_ [scsi_eh_2]
root 778 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [scsi_tmf_2]
root 779 0.0 0.0 0 0 ? S 15:59 0:00 \_ [scsi_eh_3]
root 780 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [scsi_tmf_3]
root 781 0.0 0.0 0 0 ? S 15:59 0:00 \_ [scsi_eh_4]
root 782 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [scsi_tmf_4]
root 784 0.0 0.0 0 0 ? S 15:59 0:00 \_ [scsi_eh_5]
root 785 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [scsi_tmf_5]
root 786 0.0 0.0 0 0 ? S 15:59 0:00 \_ [scsi_eh_6]
root 787 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [scsi_tmf_6]
root 794 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [cryptd]
root 829 0.0 0.0 0 0 ? I 15:59 0:00 \_ [kworker/4:2-rcu_gp]
root 836 0.0 0.0 0 0 ? S 15:59 0:00 \_ [nv_queue]
root 837 0.0 0.0 0 0 ? S 15:59 0:00 \_ [nv_queue]
root 844 0.0 0.0 0 0 ? S 15:59 0:00 \_ [nvidia-modeset/kthread_q]
root 845 0.0 0.0 0 0 ? S 15:59 0:00 \_ [nvidia-modeset/deferred_close_kthread_q]
root 987 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [mld]
root 988 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [ipv6_addrconf]
root 999 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [bond0]
root 1024 0.0 0.0 0 0 ? I 15:59 0:00 \_ [kworker/3:3-events]
root 1136 0.0 0.0 0 0 ? I< 15:59 0:00 \_ [wg-crypt-wg0]
root 4581 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [md]
root 4582 0.0 0.0 0 0 ? S 16:00 0:00 \_ [mdrecoveryd]
root 5163 0.0 0.0 0 0 ? S 16:00 0:00 \_ [unraidd0]
root 5164 0.0 0.0 0 0 ? S 16:00 0:00 \_ [unraidd1]
root 5165 0.0 0.0 0 0 ? S 16:00 0:00 \_ [unraidd2]
root 5166 0.0 0.0 0 0 ? S 16:00 0:00 \_ [unraidd3]
root 5199 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfsalloc]
root 5200 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs_mru_cache]
root 5202 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-buf/md1]
root 5203 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-conv/md1]
root 5204 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-reclaim/md1]
root 5205 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-blockgc/md1]
root 5206 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-inodegc/md1]
root 5207 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-log/md1]
root 5208 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-cil/md1]
root 5209 0.0 0.0 0 0 ? S 16:00 0:00 \_ [xfsaild/md1]
root 5220 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-buf/md2]
root 5221 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-conv/md2]
root 5222 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-reclaim/md2]
root 5223 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-blockgc/md2]
root 5224 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-inodegc/md2]
root 5225 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-log/md2]
root 5226 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-cil/md2]
root 5227 0.0 0.0 0 0 ? S 16:00 0:00 \_ [xfsaild/md2]
root 5238 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-buf/md3]
root 5239 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-conv/md3]
root 5240 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-reclaim/md3]
root 5241 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-blockgc/md3]
root 5242 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-inodegc/md3]
root 5243 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-log/md3]
root 5244 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [xfs-cil/md3]
root 5245 0.0 0.0 0 0 ? S 16:00 0:00 \_ [xfsaild/md3]
root 5257 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-worker]
root 5259 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-worker-hi]
root 5260 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-delalloc]
root 5261 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-flush_del]
root 5262 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-cache]
root 5263 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-fixup]
root 5264 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-endio]
root 5265 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-endio-met]
root 5266 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-endio-met]
root 5267 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-endio-rai]
root 5268 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-rmw]
root 5269 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-endio-wri]
root 5270 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-freespace]
root 5271 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-delayed-m]
root 5272 0.0 0.0 0 0 ? I< 16:00 0:00 \_ [btrfs-qgroup-re]
root 5273 0.5 0.0 0 0 ? S 16:00 0:12 \_ [btrfs-cleaner]
root 5274 0.0 0.0 0 0 ? S 16:00 0:01 \_ [btrfs-transaction]
root 7859 0.0 0.0 0 0 ? S 16:01 0:00 \_ [irq/128-nvidia]
root 7860 0.0 0.0 0 0 ? S 16:01 0:00 \_ [nvidia]
root 7861 0.0 0.0 0 0 ? S 16:01 0:00 \_ [nv_queue]
root 8129 0.0 0.0 0 0 ? I< 16:01 0:00 \_ [dio/sdd1]
root 9607 0.0 0.0 0 0 ? I 16:02 0:00 \_ [kworker/0:3-mm_percpu_wq]
root 9720 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-worker]
root 9722 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-worker-hi]
root 9723 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-delalloc]
root 9724 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-flush_del]
root 9725 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-cache]
root 9726 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-fixup]
root 9727 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-endio]
root 9728 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-endio-met]
root 9729 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-endio-met]
root 9730 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-endio-rai]
root 9731 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-rmw]
root 9732 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-endio-wri]
root 9733 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-freespace]
root 9734 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-delayed-m]
root 9735 0.0 0.0 0 0 ? I< 16:02 0:00 \_ [btrfs-qgroup-re]
root 9767 0.0 0.0 0 0 ? S 16:02 0:00 \_ [btrfs-cleaner]
root 9768 0.0 0.0 0 0 ? S 16:02 0:00 \_ [btrfs-transaction]
root 12543 0.0 0.0 0 0 ? S 16:02 0:00 \_ [UVM global queue]
root 12544 0.0 0.0 0 0 ? S 16:02 0:00 \_ [UVM deferred release queue]
root 12546 0.0 0.0 0 0 ? S 16:02 0:00 \_ [UVM Tools Event Queue]
root 16555 0.0 0.0 0 0 ? I 16:03 0:00 \_ [kworker/2:4-events]
root 16557 0.0 0.0 0 0 ? I 16:03 0:00 \_ [kworker/2:6-rcu_gp]
root 27361 0.0 0.0 0 0 ? I 16:09 0:00 \_ [kworker/7:0-events]
root 31055 0.0 0.0 0 0 ? I 16:11 0:00 \_ [kworker/0:0-inet_frag_wq]
root 31982 0.0 0.0 0 0 ? I 16:11 0:00 \_ [kworker/1:0-events]
root 1258 0.0 0.0 0 0 ? I 16:12 0:00 \_ [kworker/3:1-mm_percpu_wq]
root 1949 0.0 0.0 0 0 ? I 16:13 0:00 \_ [kworker/6:11-cgroup_destroy]
root 1974 0.0 0.0 0 0 ? I 16:13 0:00 \_ [kworker/5:12-cgroup_destroy]
root 1975 0.0 0.0 0 0 ? I 16:13 0:00 \_ [kworker/5:13-events]
root 29577 0.0 0.0 0 0 ? I 16:27 0:00 \_ [kworker/6:0-mm_percpu_wq]
root 29912 0.2 0.0 0 0 ? I 16:27 0:01 \_ [kworker/u16:3-btrfs-endio-write]
root 29958 0.2 0.0 0 0 ? I 16:27 0:01 \_ [kworker/u16:7-btrfs-endio-write]
root 30289 0.0 0.0 0 0 ? I< 16:28 0:00 \_ [kworker/u17:1-btrfs-worker-high]
root 30290 0.0 0.0 0 0 ? I< 16:28 0:00 \_ [kworker/u17:2-btrfs-worker-high]
root 30293 0.0 0.0 0 0 ? I< 16:28 0:00 \_ [kworker/u17:5-btrfs-worker-high]
root 30331 0.3 0.0 0 0 ? I 16:28 0:02 \_ [kworker/u16:13-events_unbound]
root 6970 0.3 0.0 0 0 ? I 16:33 0:01 \_ [kworker/u16:0-bond0]
root 6971 0.3 0.0 0 0 ? I 16:33 0:01 \_ [kworker/u16:1-events_unbound]
root 8726 0.1 0.0 0 0 ? I 16:34 0:00 \_ [kworker/u16:2-btrfs-worker]
root 12431 0.0 0.0 0 0 ? I 16:36 0:00 \_ [kworker/6:1-cgroup_pidlist_destroy]
root 12573 0.0 0.0 0 0 ? I 16:36 0:00 \_ [kworker/5:0-events]
root 12586 0.0 0.0 0 0 ? I 16:36 0:00 \_ [kworker/5:1-mm_percpu_wq]
root 12696 0.0 0.0 0 0 ? I 16:36 0:00 \_ [kworker/1:2-mm_percpu_wq]
root 12730 0.0 0.0 0 0 ? I 16:36 0:00 \_ [kworker/7:1]
root 12739 0.0 0.0 0 0 ? I 16:36 0:00 \_ [kworker/2:0-mm_percpu_wq]
root 13085 0.0 0.0 0 0 ? I 16:36 0:00 \_ [kworker/0:1-events]
root 13094 0.0 0.0 0 0 ? I 16:36 0:00 \_ [kworker/4:1-rcu_gp]
root 13306 0.0 0.0 0 0 ? D 16:36 0:00 \_ [kworker/u16:5+btrfs-worker]
root 13860 0.0 0.0 0 0 ? I< 16:37 0:00 \_ [kworker/u17:0-btrfs-worker-high]
root 13861 0.0 0.0 0 0 ? I< 16:37 0:00 \_ [kworker/u17:3]
root 13862 0.0 0.0 0 0 ? I< 16:37 0:00 \_ [kworker/u17:4]
root 1 0.0 0.0 2592 920 ? Ss 15:57 0:00 init
root 734 0.0 0.0 18016 3540 ? Ss 15:59 0:00 /sbin/udevd --daemon
root 926 0.0 0.0 211388 4168 ? Ssl 15:59 0:00 /usr/sbin/rsyslogd -i /var/run/rsyslogd.pid
dhcpcd 1034 0.0 0.0 3032 1976 ? S 15:59 0:00 dhcpcd: br0 [ip4]
root 1035 0.0 0.0 3056 2140 ? S 15:59 0:00 \_ dhcpcd: [privileged proxy] br0 [ip4]
dhcpcd 1047 0.0 0.0 3044 288 ? S 15:59 0:00 | \_ dhcpcd: [BPF ARP] br0 192.168.1.104
dhcpcd 1087 0.0 0.0 3056 292 ? S 15:59 0:00 | \_ dhcpcd: [network proxy] 192.168.1.104
dhcpcd 1036 0.0 0.0 3032 272 ? S 15:59 0:00 \_ dhcpcd: [control proxy] br0 [ip4]
root 1178 0.0 0.0 3184 164 ? Ss 15:59 0:00 /usr/sbin/mcelog --daemon
message+ 1188 0.0 0.0 5208 2080 ? Ss 15:59 0:00 /usr/bin/dbus-daemon --system
root 1201 0.0 0.0 4272 2416 ? S 15:59 0:00 elogind-daemon
ntp 1234 0.0 0.0 74552 4512 ? Ssl 15:59 0:00 /usr/sbin/ntpd -g -u ntp:ntp
root 1241 0.0 0.0 2600 104 ? Ss 15:59 0:00 /usr/sbin/acpid
root 1255 0.0 0.0 2576 1788 ? Ss 15:59 0:00 /usr/sbin/crond
daemon 1259 0.0 0.0 2656 1540 ? Ss 15:59 0:00 /usr/sbin/atd -b 15 -l 1
daemon 2483 0.0 0.0 5148 3080 ? S 16:00 0:00 \_ /usr/sbin/atd -b 15 -l 1
root 2486 0.0 0.0 3980 2992 ? S 16:00 0:00 \_ sh
root 2487 0.0 0.0 4344 2756 ? S 16:00 0:00 \_ sh
root 2488 0.0 0.0 2772 860 ? S 16:00 0:00 \_ inotifywait -q /boot/changes.txt -e move_self,delete_self,modify
root 4537 0.0 0.0 2608 916 tty1 Ss+ 16:00 0:00 /sbin/agetty --noclear 38400 tty1 linux
root 4538 0.0 0.0 2608 932 tty2 Ss+ 16:00 0:00 /sbin/agetty 38400 tty2 linux
root 4539 0.0 0.0 2608 916 tty3 Ss+ 16:00 0:00 /sbin/agetty 38400 tty3 linux
root 4540 0.0 0.0 2608 888 tty4 Ss+ 16:00 0:00 /sbin/agetty 38400 tty4 linux
root 4541 0.0 0.0 2608 908 tty5 Ss+ 16:00 0:00 /sbin/agetty 38400 tty5 linux
root 4542 0.0 0.0 2608 912 tty6 Ss+ 16:00 0:00 /sbin/agetty 38400 tty6 linux
root 4574 0.0 0.0 4344 240 ? Ss 16:00 0:00 /usr/sbin/inetd
root 4575 0.3 0.0 412916 6640 ? Ssl 16:00 0:06 /usr/local/sbin/emhttpd
root 4603 0.0 0.0 142600 2176 ? Ssl 16:00 0:00 /sbin/apcupsd
root 4706 0.0 0.0 88096 9984 ? Ss 16:00 0:00 php-fpm: master process (/etc/php-fpm/php-fpm.conf)
root 10164 0.1 0.0 91736 14100 ? S 16:35 0:00 \_ php-fpm: pool www
root 11758 0.1 0.0 91864 14608 ? S 16:35 0:00 \_ php-fpm: pool www
root 12837 0.1 0.0 91672 13516 ? S 16:36 0:00 \_ php-fpm: pool www
root 13632 0.1 0.0 91672 13704 ? S 16:37 0:00 \_ php-fpm: pool www
root 14014 0.1 0.0 91672 13504 ? S 16:37 0:00 \_ php-fpm: pool www
root 14518 0.1 0.0 91672 13516 ? S 16:37 0:00 \_ php-fpm: pool www
root 14609 0.1 0.0 91672 13516 ? S 16:37 0:00 \_ php-fpm: pool www
root 4919 0.0 0.0 147708 4388 ? Ss 16:00 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
root 4920 0.5 0.0 149504 10508 ? S 16:00 0:12 \_ nginx: worker process
root 4951 0.9 0.4 11417656 157852 ? SLsl 16:00 0:21 /usr/local/bin/unraid-api/unraid-api /snapshot/api/dist/unraid-api.cjs start
root 4972 0.0 0.0 5172 3632 ? S 16:00 0:00 /bin/bash /etc/rc.d/rc.flash_backup watch
root 14638 0.0 0.0 2576 872 ? S 16:37 0:00 \_ sleep 60
root 5371 0.0 0.0 91220 29040 ? SL 16:00 0:00 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/notify_poller
root 5373 0.0 0.0 91220 29016 ? SL 16:00 0:00 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/session_check
root 5375 0.0 0.0 91284 29728 ? SL 16:00 0:00 /usr/bin/php -q /usr/local/emhttp/plugins/dynamix.system.temp/nchan/system_temp
root 5377 0.1 0.0 91456 29240 ? SL 16:00 0:03 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/device_list
root 5379 0.0 0.0 4016 3036 ? S 16:00 0:01 /bin/bash /usr/local/emhttp/webGui/nchan/disk_load
root 16043 0.0 0.0 2576 932 ? S 16:38 0:00 \_ sleep 2
root 5381 0.1 0.0 91220 29188 ? SL 16:00 0:02 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/parity_list
root 5775 0.0 0.0 91284 29540 ? SL 16:01 0:01 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/wg_poller
root 5777 0.0 0.0 91220 29216 ? SL 16:01 0:00 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/update_1
root 5779 0.0 0.0 91416 29616 ? SL 16:01 0:01 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/update_2
root 5781 0.1 0.0 91220 29264 ? SL 16:01 0:03 /usr/bin/php -q /usr/local/emhttp/webGui/nchan/update_3
root 7727 0.0 0.0 140236 304 ? Ssl 16:01 0:00 /usr/local/sbin/shfs /mnt/user0 -disks 14 -o default_permissions,allow_other,noat
root 7739 3.1 0.0 693092 21496 ? Ssl 16:01 1:08 /usr/local/sbin/shfs /mnt/user -disks 15 -o default_permissions,allow_other,noati
root 9785 0.0 0.0 31864 9560 ? S 16:02 0:00 /usr/sbin/virtlockd -d -f /etc/libvirt/virtlockd.conf -p /var/run/libvirt/virtloc
root 9790 0.0 0.0 31936 10460 ? S 16:02 0:00 /usr/sbin/virtlogd -d -f /etc/libvirt/virtlogd.conf -p /var/run/libvirt/virtlogd.
root 9809 0.0 0.0 1391124 21664 ? Sl 16:02 0:00 /usr/sbin/libvirtd -d -l -f /etc/libvirt/libvirtd.conf -p /var/run/libvirt/libvir
nobody 9918 0.0 0.0 7484 3096 ? S 16:02 0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-r
root 9919 0.0 0.0 7352 332 ? S 16:02 0:00 \_ /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefi
root 4564 0.0 0.0 12324 5000 ? Sl 16:32 0:00 /usr/bin/ttyd -d0 -t disableLeaveAlert true -t theme {'background':'black'} -t fo
root 4569 0.0 0.0 7604 4308 pts/0 Ss 16:32 0:00 \_ bash --login
root 16077 0.0 0.0 5172 3004 pts/0 R+ 16:38 0:00 | \_ ps auxf
root 14384 0.0 0.0 7604 4240 pts/1 Ss 16:37 0:00 \_ bash --login
root 14460 0.1 0.0 5580 3608 pts/1 S+ 16:37 0:00 \_ top
root 12001 0.0 0.0 79008 13516 ? Ss 16:36 0:00 /usr/sbin/smbd -D
root 12006 0.0 0.0 77216 9116 ? S 16:36 0:00 \_ /usr/sbin/smbd -D
root 12007 0.0 0.0 77208 4352 ? S 16:36 0:00 \_ /usr/sbin/smbd -D
csmashe 14228 40.1 0.0 80940 17064 ? S 16:37 0:30 \_ /usr/sbin/smbd -D
root 12014 0.0 0.0 2588 2004 ? Ss 16:36 0:00 /usr/sbin/wsdd2 -d
root 12017 0.0 0.0 75032 11940 ? Ss 16:36 0:00 /usr/sbin/winbindd -D
root 12020 0.0 0.0 75232 12792 ? S 16:36 0:00 \_ /usr/sbin/winbindd -D
root 13492 0.0 0.0 75860 10448 ? S 16:37 0:00 \_ /usr/sbin/winbindd -D
avahi 12049 0.0 0.0 5640 3404 ? S 16:36 0:00 avahi-daemon: running [SmasheNas.local]
avahi 12052 0.0 0.0 5344 268 ? S 16:36 0:00 \_ avahi-daemon: chroot helper
root 12060 0.0 0.0 2676 120 ? S 16:36 0:00 /usr/sbin/avahi-dnsconfd -D
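Rather than scanning the full listing by eye, tasks in the D state can be filtered out directly; a small sketch (STAT is the 8th column of ps aux output):

```shell
# Print the header plus any task in uninterruptible sleep (STAT starts with D);
# those are the tasks blocked waiting on I/O, like the btrfs worker above.
ps aux | awk 'NR == 1 || $8 ~ /^D/'
```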
  23. I changed the share for the test of moving the 24GB file from the cache drive to the array. It stayed at 113MB/s longer before it dropped, and when it did drop, it went as low as 53MB/s. Not as bad as going to the cache drive, but still not great. The server is not doing anything else but this. iowait hovers around 16% in this configuration, which is lower than when writing to the cache drive, but still not great. Nothing should be saturating the array, and definitely not the SSD. I'm not 100% sure what causes iowaits other than the disk, so if anyone can help me figure out why this is happening, that would be greatly appreciated.