IamSpartacus

Members
  • Content Count: 799
  • Joined
  • Last visited

Community Reputation: 32 Good

1 Follower

About IamSpartacus

  • Rank: Advanced Member

  • Gender: Male
  • Location: NY

  1. In reply to "I updated fio. Try again and let me know": This worked, thank you!
  2. Thanks for the response. I'm sure my issue is because Unraid is running in a VM. I'm probably SoL.
  3. Were you running those fio tests on Unraid or another system? I ask because I'm testing a scenario where Unraid runs in a VM on Proxmox, with an all-NVMe ZFS pool added to Unraid as a single cache "drive". However, I'm unable to run fio even after installing it via NerdTools; I just get "illegal instruction" no matter which switches I use.
  4. Has anyone gotten fio to run successfully? I just get "illegal instruction" even with --disable-native set. I've tried this on both 6.9 beta 25 and 6.8.3 stable. (A sample fio invocation is sketched after this list.)
  5. I'm seeing major SMB issues in 6.9 beta 25. Simply browsing my shares, it often takes 20-30 seconds for each subfolder to load in Windows Explorer. I'm seeing the same behavior on 4 different Windows 10 1909 machines on my network that I've tested with. I'm also seeing the following error messages in my syslog as soon as I first access a share from Windows Explorer:
     Aug 20 15:51:07 SPARTA smbd[35954]: sys_path_to_bdev() failed for path [.]! (repeated 8 times)
     Aug 20 15:51:27 SPARTA smbd[36254]: sys_path_to_bdev() failed for path [.]! (repeated 8 times)
     Diagnostics attached: sparta-diagnostics-20200820-2147.zip
  6. Does anyone know if NFS v4 is/will be supported in 6.9?
  7. Fixed this by simply doing an erase on the disk via the preclear plugin.
  8. Having trouble finding the command to delete a (non-cache) btrfs pool that was created using this method. The pool currently has a single device in it, but I can't for the life of me find the command needed to blow the pool away completely. Trying to delete the partition from UD fails as well. Any ideas? (One possible approach is sketched after this list.)
  9. I'm seeing the Samba service use a lot of memory and not release it on a consistent basis. I have Samba shares from this server mounted on another Unraid server and do lots of file transfers between them (radarr/sonarr imports, tdarr conversions), but even unmounting those shares does not release the memory used by Samba. Maybe someone can enlighten me on what could be causing this memory usage to persist even when no shares are in use. (A quick way to check per-process smbd memory is sketched after this list.) athens-diagnostics-20200528-1444.zip
  10. You probably have to set the "number of backups to keep" setting to something other than 0 first.
  11. Well then there's my answer. I suspect Radarr is the culprit, as it's constantly upgrading movie qualities.
  12. Ok, so I see all the extra share .cfg files. Most of those shares don't exist anymore, and I've confirmed none of those folders exist on any of my disks. The only shares I have left are the following: Furthermore, while those shares were originally created with capital letters, they've all been converted to lower case. I guess the 'mv' command, while changing the directory case, does not change the share.cfg file; I'll have to fix that. The only shares that are set to Use Cache = Yes (meaning they eventually write to the array) are 4k, media, and data, and on every disk those top-level folders are indeed lower case. I've also confirmed all my docker templates reference the lower-case shares, and I've just done a test file transfer to each share; every transfer wound up on cache and not on any of the disks. So if all the top-level folders on each disk match the current shares, I'm not sure what could be causing the disk spin-ups. QUESTION: If, say, radarr/sonarr is upgrading the quality of a file that exists on the array, would that cause a parity write during the write to cache, since radarr/sonarr is technically deleting the previous file and replacing it with the new, better-quality version?
  13. I'm looking for some help identifying what is causing my parity drives to constantly spin up. Every single one of my shares uses the cache drive, and I only run the mover once per day. Yet my parity drives keep spinning up throughout the day, even though my disk spin-down is set to 1 hour of idle time. My data disks spin down and stay down unless the shares are being accessed, but I'm constantly seeing the parity drives spun up during the day. So my question is: what could be causing parity writes to the array when all my shares use cache and none write directly to the array? (A way to watch for array writes is sketched after this list.) unraid.zip
  14. It does, I'm using it now and it works well.
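
Regarding posts 3 and 4 above: a minimal sketch of the kind of fio test being attempted, assuming fio is installed via NerdTools and that /mnt/cache is the pool under test (the path and job parameters are illustrative, not taken from the posts). The "illegal instruction" error usually means the fio binary was built for CPU instructions the host or virtualized CPU doesn't expose, so the choice of switches is unlikely to matter.

    # Hypothetical 30-second 4k random-read test against the cache pool;
    # adjust --directory, --size and --runtime for your own setup.
    fio --name=randread-test \
        --directory=/mnt/cache \
        --ioengine=libaio \
        --rw=randread \
        --bs=4k \
        --iodepth=32 \
        --numjobs=1 \
        --size=1G \
        --runtime=30 \
        --time_based \
        --group_reporting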
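
For post 8: a hedged sketch of one common way to wipe a single-device btrfs pool from the command line. /dev/sdX is a placeholder for the actual device; verify it with lsblk first, make sure nothing from it is still mounted, and note that wipefs is destructive.

    # Unmount anything still using the old pool (ignore errors if not mounted).
    umount /dev/sdX1 2>/dev/null

    # Remove the btrfs filesystem signature from the partition...
    wipefs -a /dev/sdX1

    # ...and optionally clear the partition table on the whole device so it
    # shows up as a blank disk in Unassigned Devices again.
    wipefs -a /dev/sdX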
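
For post 9: a quick way one might check how much resident memory the smbd processes are actually holding. This is plain ps/awk, nothing Unraid-specific, and the commands are illustrative rather than from the original post.

    # Show each smbd process with its resident set size (in KB) and age.
    ps -C smbd -o pid,rss,etime,args

    # Sum the resident memory of all smbd processes, reported in MB.
    ps -C smbd -o rss= | awk '{sum += $1} END {printf "%.1f MB\n", sum/1024}'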
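
For post 13: one way to find out what is writing to the array (and therefore forcing parity updates) is to watch the array disk mounts for write events. This is a sketch assuming inotify-tools is available (e.g. via NerdTools) and that the array disks are mounted at /mnt/disk*; the paths and event choices are illustrative, and recursive watches on a large array can take a while to set up.

    # Log every create/modify/delete/move on the array disks so the share
    # and file responsible for the spin-ups shows up by name.
    inotifywait -m -r \
        -e create -e modify -e delete -e moved_to \
        --timefmt '%F %T' --format '%T %w%f %e' \
        /mnt/disk* >> /tmp/array_writes.log 2>&1 &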