robobub

Members
  • Content Count

    36
Community Reputation

3 Neutral

About robobub

  • Rank
    Advanced Member


  1. I never realized I could change this; I had left it at the default of 24. Not a problem at all, it's a low-priority issue and the plugin otherwise works great. Thanks!
  2. I noticed my cache read speeds were quite slow and did some investigation. I had recently switched from a btrfs-encrypted cache to an xfs-encrypted cache. It seems that on my system, read speeds with an xfs- or reiserfs-encrypted cache are extremely slow, while unencrypted or btrfs-encrypted is much faster. Any ideas what could be going on? Anyone else have this issue?

     reiserfs-encrypted: 130 MB/s write, 25 MB/s read
     xfs-encrypted: 130 MB/s write, 20 MB/s read
     xfs-unencrypted: 200 MB/s write, 450 MB/s read
     btrfs-encrypted: 130 MB/s write, 450 MB/s read

     Methodology:

     # urandom to ensure it's not due to the SSD controller compressing the data
     dd if=/dev/urandom of=/mnt/cache/test bs=4M count=256 conv=fsync
     # drop the page cache before reading
     sync; echo 3 > /proc/sys/vm/drop_caches
     dd of=/dev/null if=/mnt/cache/test bs=4M count=256

     Also verified speeds with dstat -D sdb. Writing to and reading from the array itself is as expected, ~80 MB/s for both.

     SSD cache drive: ATA SAMSUNG SSD 830 (scsi)
     CPU: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
     Unraid: 6.8.3

     Changes to blockdev readahead, the I/O scheduler, and nr_requests all had no effect; everything is back at defaults and the system has been rebooted. All dockers/VMs/scripts are off, and all shares are set to not use the cache.
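     One more isolation step I can try is to repeat the read benchmark below the filesystem, against the dm-crypt device and the raw partition, to see whether the slowdown comes from the filesystem or the encryption layer. This is only a sketch: the sdb1 names are placeholders for whatever ls /dev/mapper shows on my system.

     # read-only tests against the raw devices (the second one reads ciphertext)
     sync; echo 3 > /proc/sys/vm/drop_caches
     dd of=/dev/null if=/dev/mapper/sdb1 bs=4M count=256   # through dm-crypt, below the filesystem
     sync; echo 3 > /proc/sys/vm/drop_caches
     dd of=/dev/null if=/dev/sdb1 bs=4M count=256          # raw partition, bypassing encryption
     cryptsetup benchmark                                  # raw cipher throughput on this CPU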
  3. How is the temperature-based pausing and resuming of the parity check supposed to work? It seems like it ran through the entire window without pausing, even though the debug messages clearly show drives were hot. The scheduled resume did happen with all drives already hot/warm; does it only pause when a cool drive transitions to warm/hot? Perhaps there should also be a check on whether to start at all given the current temperatures: some days are hot enough to push the drives into the warm zone, and the check shouldn't run in the first place.

     Here is a log where the scheduled start time is 6am and the end time is noon, but 2 drives are already hot and 3 are warm. It appears to have run the whole way, repeating the message "drives=24, hot=2, warm=3, cool=19. Correcting Parity Check with all drives below temperature threshold for a Pause" every 5 minutes without pausing. Highlights from the attached log:

     Also, I have 5 drives (2 hot, 3 warm), not 24; the 19 cool drives do not exist.

     Unraid 6.8.3
     Parity Tuning 2019.10.23
     20200703_parity_tuning_temperature.log
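     The kind of pre-start check I have in mind is roughly the following. This is only a sketch of the idea, not the plugin's actual logic; the 45C threshold and the sd[b-f] device range are made-up placeholders.

     #!/bin/bash
     # Hypothetical pre-flight check: skip the scheduled start if any drive is already at/above the threshold
     THRESHOLD=45
     for dev in /dev/sd[b-f]; do
       temp=$(smartctl -A "$dev" | awk '/Temperature_Celsius/ {print $10; exit}')
       if [ -n "$temp" ] && [ "$temp" -ge "$THRESHOLD" ]; then
         echo "$dev is already at ${temp}C; skipping this parity check window"
         exit 0
       fi
     done
     # ...otherwise start or resume the parity check here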
  4. Using Android here, so it might be a different issue in my case? Also, for me it works without the letsencrypt reverse proxy (on LAN) and fails with the letsencrypt reverse proxy (still on LAN, same upload speed).
  5. Having an issue uploading large files to nextcloud, but only when using the letsencrypt reverse proxy; it works fine without letsencrypt. Even with just a 2.3 GB file: the upload completes on the client, and I can see nextcloud processing and copying the file into its final location under nextcloud/<user>/files/<path>. However, this only lasts around 1 minute, then it stops writing the file and tells the client it timed out. Watching the file get written, it reaches somewhere in the range of 800~1200 MB. If I turn the reverse proxy off and revert those settings, it works fine and the "processing" step of copying into the final location runs for longer than that minute.

     All the guides I've seen about configuring letsencrypt involve removing client_max_body_size, but that was already removed back on 01/21/2019. I'm on the latest nextcloud and letsencrypt dockers. There are some timeout settings in letsencrypt/nginx/proxy.conf (send_timeout, proxy_*_timeout); increasing those significantly and restarting yielded the same result. Same with modifying proxy_max_temp_file_size in letsencrypt/nginx/proxy-confs/nextcloud.*.conf.

     I'm not really seeing anything in letsencrypt's or nextcloud's log/[nginx,php]/*.log either. Is there a loglevel I should be changing?
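     One way I can try to narrow this down is to upload directly over WebDAV with curl, once through the reverse proxy and once against the container's LAN address, and compare where the transfer stalls. This is only a sketch: the hostname, address, and username below are placeholders, not my real setup.

     # create a test file of roughly the size that fails
     dd if=/dev/urandom of=/tmp/bigfile bs=1M count=2300
     # through the letsencrypt reverse proxy (placeholder hostname/user)
     curl -u myuser -T /tmp/bigfile https://cloud.example.com/remote.php/dav/files/myuser/bigfile
     # directly against the nextcloud container on the LAN (placeholder address)
     curl -k -u myuser -T /tmp/bigfile https://192.168.1.10/remote.php/dav/files/myuser/bigfile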
  6. Ah, with a new share the issue doesn't exist. The issue is caused by using any form of btrfs compression on the disk. Please see if you can reproduce:

     ## no compression, no issue
     root@Tower:/mnt/user/test# rm -rf /mnt/user/test/1;
     root@Tower:/mnt/user/test# rm -rf /mnt/cache/test/1   # should be no-op, but just in case
     root@Tower:/mnt/user/test# cd /mnt/disk1/test
     root@Tower:/mnt/disk1/test# v
     total 0
     root@Tower:/mnt/disk1/test# btrfs property get .
     root@Tower:/mnt/disk1/test#

     ## no compression, no issue
     root@Tower:/mnt/disk1/test# mkdir -p 1/2/3
     root@Tower:/mnt/disk1/test# cd /mnt/user/test/1/2/3/
     root@Tower:/mnt/user/test/1/2/3# touch a
     root@Tower:/mnt/user/test/1/2/3# v /mnt/cache/test/1/2/3/a
     -rw-rw-rw- 1 root root 0 Jan 31 18:13 /mnt/cache/test/1/2/3/a
     root@Tower:/mnt/user/test/1/2/3# cd /mnt/user/test

     ## re-do the test with compression
     root@Tower:/mnt/user/test# rm -rf /mnt/user/test/1;
     root@Tower:/mnt/user/test# rm -rf /mnt/cache/test/1   # should be no-op, but just in case
     root@Tower:/mnt/user/test# cd /mnt/disk1/test
     root@Tower:/mnt/disk1/test# btrfs property get .
     root@Tower:/mnt/disk1/test# btrfs property set . compression zlib
     root@t1000:/mnt/disk1/test# btrfs property get .
     compression=zlib
     root@Tower:/mnt/disk1/test# mkdir -p 1/2/3
     root@Tower:/mnt/disk1/test# cd /mnt/user/test/1/2/3/
     root@Tower:/mnt/user/test/1/2/3# touch a
     touch: cannot touch 'a': Operation not supported
     root@Tower:/mnt/user/test/1/2/3# tree /mnt/cache/test/
     /mnt/cache/test/
     └── 1

     1 directory, 0 files
     root@Tower:/mnt/user/test/1/2/3# touch a
     touch: cannot touch 'a': Operation not supported
     root@Tower:/mnt/user/test/1/2/3# tree /mnt/cache/test/
     /mnt/cache/test/
     └── 1
         └── 2

     2 directories, 0 files

     Occurs with zlib, zstd, lzo, or chattr +C.
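     For what it's worth, to put the test directory back afterwards (a sketch; if I remember right an empty value clears the property, while "none" would instead force compression off):

     root@Tower:/mnt/disk1/test# btrfs property set . compression ""
     root@Tower:/mnt/disk1/test# btrfs property get .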
  7. With btrfs-encrypted array disks and an xfs-encrypted cache disk? Any diagnostic recommendations on my end?
  8. You can see what is launching that find process by going up a few lines: it's the Dynamix Cache Dirs plugin. Add your external hard drive to the excluded folders path of that folder caching plugin.
  9. Something is scanning your /mnt/disks/Seagate_Backup_Plus_Drive with the find command, PID 25568 (the 2nd column in your screenshot is the PID). Your diagnostics don't show what process it is or what spawned it. You can try to trace what is running it with:

     ps axjf | egrep 25568

     The first 4 columns will list other processes associated with it, so add those to the egrep command. If any of the columns are 0, 1, or 2, I would exclude them, as those are root processes:

     ps axjf | egrep '25568|<column 1>|<column 2>|<column 3>'

     The PID 25568 can change, so make sure to run lsof again to find out what command and PID is currently accessing the drive.

     Also, the command I listed earlier to change the schedulers doesn't work; I forgot that you need to use tee when writing to multiple devices. Though of course, finding what is scanning your drive is more important.

     # Change all of them excluding the flash drive with sd[b-z], or enumerate specific ones with sd[b,c,e]
     # note: changing the scheduler will reset nr_requests
     echo none | tee /sys/block/sd[b-z]/queue/scheduler
     echo 4 | tee /sys/block/sd[b-z]/queue/nr_requests
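     For reference, the lsof check I mean is something along these lines (a sketch; +D recurses into every directory, so it can take a while on a large drive):

     lsof +D /mnt/disks/Seagate_Backup_Plus_Drive   # every open file under that path
     lsof /mnt/disks/Seagate_Backup_Plus_Drive      # if it's the mount point, everything open on that filesystem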
  10. I don't have your exact situation, so I'm not guaranteeing it will fix it. But it takes 1 second to try, and you can always change it back (just cat the same location to see what it was before changing). It is a bit odd that it goes up without any copying having started; I would look to see if something is scanning it (e.g. iotop+lsof, or a fancier tool), along the lines of the sketch below.
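      A concrete way to watch for that (a sketch using iotop's standard flags):

      # only processes actually doing I/O, per process, with accumulated totals
      iotop -oPa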
  11. Okay? Your USB drive will have a device ID, and you can change the I/O scheduler and queue size for that device; I suggested changing both the source and destination drives. Not that it's too relevant, but the post I linked was about moving data from the array to the cache drive; my suggestion applies to any data transfer. It's a simple test that has a chance of helping. To find which device node the USB drive is, see the sketch below.
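      Something like this will show which sdX node is the USB drive (a sketch; names will differ on your system):

      lsblk -o NAME,TRAN,SIZE,MODEL          # the TRAN column shows the transport (usb, sata, ...)
      ls -l /dev/disk/by-id/ | grep -i usb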
  12. One thing you can try is adjusting the I/O scheduler and queue size on the USB drive (and perhaps also on the array drives being written to). I had an issue where large writes would lock up my system with high I/O wait, perhaps because the drive controller didn't like it.

      # Enumerate the drives being written to and read from, or do all of them with sd*
      # note: changing the scheduler will reset nr_requests
      echo none > /sys/block/sd[c,d,e]/queue/scheduler
      echo 4 > /sys/block/sd[c,d,e]/queue/nr_requests
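      Before changing anything, you can note the current values from the same sysfs locations so they're easy to restore later (the active scheduler is the one shown in brackets):

      grep . /sys/block/sd[c,d,e]/queue/scheduler /sys/block/sd[c,d,e]/queue/nr_requests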
  13. My BIOS has a legacy boot mode option, but it's called CSM (Compatibility Support Module). Google your motherboard model together with some combination of Legacy, UEFI, CSM, etc., and also go through your manual page by page. I suppose it's possible that you don't have the option, but it's unlikely. That should only be necessary when using the unRAID USB; there should be a way to run it under UEFI if you make your own memtest boot drive. Perhaps try PassMark's MemTest86 v8: https://www.memtest86.com/download.htm. The wiki says v5 should support UEFI, and also confirms it was sold to PassMark; the free version is limited to 4 passes or so. https://en.wikipedia.org/wiki/Memtest86
  14. I normally ssh in, but I verified this also occurs on 6.8.2 with Chrome 79.0.3945.130 on Windows.