Everything posted by je82

  1. nvm, I figured out what was going on: apparently some files on the cache were still queued for disk 4, since they had been transferred to the cache before the split setting change.
  2. All shares have the same split setting, but I only applied that setting a few hours ago and I have not restarted/remounted the array since the change. Is that necessary?
  3. Hello. My settings are as follows: Allocation method: Fill-up. Why is the mover writing to and reading from two drives? It should only be hitting disk 2 per the Fill-up rule, yet it is hitting both disk 2 and disk 4 when moving files from the cache to the array. Why? How can I get help understanding the mover and why it is doing what it is doing? As I understand the Fill-up method, the mover should pick the next drive in line that still has more free space than the minimum free space set on the share, which is disk 2 in this case. Why is it doing anything with disk 4?
  4. Okay, the problem now is that I have already filled one disk; it only has 28 KB free. How do I move files from it to another drive? I see I can simply SSH in and run a mv /mnt/disk#/ /mnt/disk# command, but will this keep parity in sync, i.e. will the parity drives do their magic and write correct data when I move files between disks from a shell? (See the sketch below.)
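     A minimal sketch of how that shuffle could look, assuming a hypothetical share named "Media" with a folder to shift from disk 4 to disk 2. Writes that go through the /mnt/diskN mount points pass through Unraid's md driver, so parity is kept in sync as the files land; the usual caution is to stay on disk paths and not mix /mnt/user and /mnt/diskN in the same command:

         # copy from the full disk to the emptier one, preserving attributes,
         # and delete each source file once it has been copied successfully
         rsync -avX --remove-source-files /mnt/disk4/Media/ /mnt/disk2/Media/
         # rsync leaves the now-empty directory tree behind; clean it up
         find /mnt/disk4/Media -type d -empty -delete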
  5. I read in the wiki (which may or may not be very outdated) a warning against filling drives completely. Is this still true for XFS and modern Unraid installs? I like to use the Fill-up setting because the data I am writing is never going to move, so I prefer to just fill each disk ASAP and move on to the next, but is this a mistake? If I should not fill drives to 100%, where is the setting to limit how full they get? That setting would be necessary when using the Fill-up allocation method if filling drives completely is indeed bad practice. Thanks.
  6. Signed. I would love to see a nice progress bar on the Main/Dashboard page showing the mover's progress, if possible. Keep up the good work, guys!
  7. Not exactly sure how to answer this. I'm thinking of something like a VeraCrypt container, which to Unraid looks like a single file; you then mount it on a system to gain access to its contents. The mounted contents are displayed as a drive on the local system that mounted the volume, but the container itself is hosted on the Unraid array. How Unraid sees the container I have no idea; my guess is it just sees it as one file. I was interested in knowing how Unraid deals with such scenarios, because if Unraid split the VeraCrypt container's data over multiple drives I believe the container would be corrupted, but perhaps that cannot happen because Unraid only sees it as one file? That would also mean it bypasses the cache completely? I guess the only way to find out is to try it. I personally don't need this, but a client wanted certain documents encrypted in their own containers on an already encrypted array. And if Unraid only sees it as one file, how do the parity drives deal with data being added to the container on a technical level? Is the container restorable via parity? I am intrigued how this works.
  8. Hello. How will Unraid deal with a large encrypted container that is stored on the array and mounted on a remote computer via SMB? I mean, Unraid has some kind of intelligence that sends "new files" to the cache for the nightly move to the array, but if you mount an encrypted container that is already stored on the array on a remote computer and then send files into that container, how does this work on Unraid? How does Unraid know there are new files when the mount point is not on Unraid? Will new files created in the container hit the array directly, or will they hit the cache first? If the share hosting the encrypted container has more than one drive available, what happens when Unraid essentially splits the container over multiple drives? Any ideas whether it is possible to use encrypted containers with Unraid, and theoretically how it would work?
  9. Just stumbled upon this and wow, I feel like this data display should be the default. It's much more interesting than the reads/writes IMO, and I had no idea you could toggle to it!
  10. Aye, I do have ECC. I've just had some issues with SMB transfers going directly to the array due to poor array performance, and I don't know whether that is working as intended or whether there is indeed an issue somewhere. I'll keep an eye on the mover and see how the data looks. I doubt there will be issues, but for some reason whenever the array has been heavily loaded with write operations SMB has been troublesome, locking files and whatnot. It's my first Unraid build, so I have no reference point for whether my system is working as intended or whether there's a problem somewhere.
  11. So there's no CRC checking on the data being transferred between cache and array?
  12. Hi. When Unraid triggers the mover to move cached data to the array, are there any checks in place to make sure the data is written correctly? I have had issues writing directly to my array over SMB, so I am a bit worried about how it is going to go when the mover writes from cache to array. (A spot-check sketch is below.)
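      One way to spot-check this myself, a minimal sketch assuming a hypothetical share named "Backups" whose new files are still sitting on the cache: build a checksum manifest before the mover runs, then verify it against the user share afterwards (by then the paths resolve to the array copies):

          # before the mover runs: checksum everything currently on the cache
          cd /mnt/cache/Backups && find . -type f -exec md5sum {} + > /tmp/backups.md5
          # after the mover has finished: verify the same files through the user share
          cd /mnt/user/Backups && md5sum -c /tmp/backups.md5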
  13. I believe I found the issue: it is related to my array's performance, and it happens when the array is under heavy write load. When writing to the cache pool there are no issues whatsoever. My follow-up question would be: what can I do to improve array write performance? My two parity drives are 2x 12 TB IronWolf HDDs. In my chassis I have connected one parity drive to my first LSI card and the second parity drive to the second LSI card; I figured this would be a good idea so that if one card goes bad and starts writing corrupt data, the other card still has a working parity drive. Would it be better to put both parity drives on the same LSI card? Are there any other good practices for improving write performance directly to the array?
      It's obviously not a huge issue. I have 2x 1 TB SSDs for cache, and once my data is all moved this won't be a problem anymore. The only thing that surprised me is that the "CA Auto Turbo Write Mode" plugin, which I more or less thought would improve writing directly to the array, didn't really do anything for me. Maybe I failed to actually enable it, but it said it was on in the plugin options window. (See the note below on toggling turbo write by hand.)
      I'm marking this as solved, as it appears it is not a Samba issue after all; it is the array having trouble dealing with big writes directly to the array, resulting in files locking and so on. Read performance directly from the array seems fine. I guess it is all about those parity drives doing their magic that really slows down writing to the array directly.
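      For reference, a hedged sketch of what turbo write (reconstruct write) looks like from the shell; as far as I understand, this is the setting the CA Auto Turbo Write Mode plugin flips, and it assumes the stock mdcmd tool with the array started:

          # force "reconstruct write" (turbo write): build parity from the full stripe
          # by reading the other data disks, instead of read/modify/write on parity
          mdcmd set md_write_method 1
          # revert to the default read/modify/write behaviour
          mdcmd set md_write_method 0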
  14. I now know what the issue is and a temporary way of fixing it, but in the long run I would really like to see the Unraid team recreate the issue on their end and fix it; perhaps upgrading Samba on Unraid would be something to do? Here's a thread explaining the issue:
  15. Hello. I have issues with the SMB protocol running on my Unraid server. Symptoms are as follows: 1. I set up a share to be private, with read/write only for a particular user. 2. I authenticate as that user in Windows and get access to the share; everything is good so far. 3. I start sending a large amount of data to the SMB share, around 10 TB. Soon enough, when SMB tries to close/open new connections during the transfer, Unraid replies with access denied. 4. SMB waits a period of time (a few seconds), attempts a new connection, and then the server grants access. It's as if, under load, SMB fails to properly authenticate or ignores the authentication, while not under heavy load everything works properly.
      Example of a fix that works 100%: I set the SMB share to public for everyone, so Unraid no longer checks credentials, and now SMB works properly all the time; there's no weird timeout and SMB runs smoothly without issue. I have isolated that the issue is NOT any of my Windows machines, and it is not an issue with Credential Manager having multiple entries; this is an issue on the Unraid server, so I believe anyone running Unraid 6.7.2 can re-create it in their lab rather easily.
      It's hard to spot this issue if you are transferring files with Windows Explorer. It's much easier to detect using something like FreeFileSync, which by default has a feature called "Fail-safe file copy": whenever a transfer starts it names the destination file filename.extension.randomhash.ffs_tmp, and once the file has been written completely it does a CRC check and renames the file to its original name, removing the .randomhash.ffs_tmp. When the Unraid server is having trouble handling the credentials, it returns access denied while FreeFileSync is doing its rename, and FreeFileSync pops up a dialog telling you it has no access to the file it is trying to rename. The same thing does occur in Windows Explorer, but Explorer does not give an error; it stalls to 0 KB/s transfer speed, waits 5-10 seconds, tries again, and then your transfer continues, which makes the error harder to spot.
      Is there anything I can do to fix this issue without making the share public? I was looking at /etc/samba/smb.conf and there's a setting called "ntlm auth = Yes"; what would happen if we forced more modern credential handling such as NTLMv2? (A rough sketch of how that could be tested is below.) Your ideas are welcome.
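      A rough sketch of one way to test that, assuming the stock setup where Unraid includes /boot/config/smb-extra.conf (the Settings > SMB > "Samba extra configuration" box) into the [global] section; "ntlmv2-only" is a standard Samba value for the ntlm auth parameter, though whether it actually helps here is an open question:

          # append a stricter NTLM setting to the SMB extras include, then reload Samba
          echo 'ntlm auth = ntlmv2-only' >> /boot/config/smb-extra.conf
          smbcontrol all reload-config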
  16. So I've been doing a lot of testing, transferring over 40 TB of data using SMB. Logging brought some clues as to what's going on, but I do not think the problem is related to SMB; it's probably an Unraid thing? It only occurs at the end of one file / the beginning of the next transfer. SMB logging randomly throws "access denied" on the file; it's as if the Unraid system sometimes has the file "in use" as soon as it has completed a transfer, while most of the time it doesn't. If I throttle the transfer speed the problem does not occur, so it is probably a mixture of things resulting in a file being randomly locked and SMB throwing access denied. I am doing all my transfers directly to the array with the default disk settings. I did try the "speed improvement" plugin, but I couldn't see it yielding any better performance; I am averaging 60 MB/s writing directly to the array after the initial memory cache runs out. Once I am done transferring my data to the Unraid system I can start telling whether this SMB lock also occurs when streaming data. It most likely won't. Also, I will enable my cache drives once the data is moved, so new data will land on those initially, and hopefully that will work properly. If you have ideas, feel free to comment!
  17. It may look similar, but my SMB only slows to a complete halt, waits for around 10-15 seconds, then resumes at 110 MB/s again. Speeds are fine; it just seems to become unresponsive at random times. It is definitely not the network: the web interface responds quickly while SMB is timing out, and pinging the Unraid server gives constant 1 ms ping times. I turned off SMB logging as it was eating USB memory fast. I cannot confirm it, but it appears to happen whenever a file has completed its transfer and a new file is being created; at that moment it sometimes becomes unresponsive. I'll do more testing this weekend when I have more time to spend on it.
  18. The strange problem here is that I have access, then suddenly SMB throws me an access denied when writing a lot of data to the array. SMB drops to a stall and then, 30 seconds later, it tries again and resumes operation. I'm using a rather popular motherboard, the Supermicro X10SRL-F, which I doubt has any compatibility issues with Unraid as it is used by many people as far as I can tell. I'm going to have to do more testing before I can figure out exactly what's going on. I'm no expert in SMB, but in the verbose logging it seems strange that I start transferring a file and access is granted, then mid-stream SMB suddenly reports access denied and it resets. It only occurs after transferring a rather large amount of random data; it never happened during a smaller transfer of around 5 GB, and it is more likely to occur around 100 GB of data transferred.
  19. I have now hardcoded two entries into the credential vault on the Windows machine sending the files. My idea is that Windows SMB is, for whatever reason, randomly authenticating with the wrong credentials, resulting in a read/write error. The strange thing is that, as far as I can tell, this did not occur with my old Unraid test installation, which I sent over 200 GB of files to via SMB. Will report back with how it goes. EDIT: Thinking about it, I may have had the share set to public access for everyone during the Unraid test. I only verified that account security worked properly on one share, but that was not the share I decided to fill with data when testing.
  20. So I have SMB logging enabled and it's very verbose; hard to find what I am looking for. I sent around 50 GB of random files until it occurred again. From what I can gather, SMB is throwing access denied mid-stream while a file is being transferred, see below:
      Nov 6 11:57:18 NAS smbd[28881]: smb2: fnum 848165825, file Unsorted New/3DRenderTest2.mp4>
      Nov 6 11:57:18 NAS smbd[28881]: [2019/11/06 11:57:18.241739, 3] ../lib/util/access.c:365(allow_access)
      Nov 6 11:57:18 NAS smbd[28881]: Allowed connection from 1.1.1.2 (1.1.1.2)
      Nov 6 11:57:18 NAS smbd[28881]: [2019/11/06 11:57:18.241765, 1] ../source3/smbd/service.c:346(create_connection_session_info)
      Nov 6 11:57:18 NAS smbd[28881]: create_connection_session_info: guest user (from session setup) not permitted to access this >
      Nov 6 11:57:18 NAS smbd[28881]: [2019/11/06 11:57:18.241778, 1] ../source3/smbd/service.c:529(make_connection_snum)
      Nov 6 11:57:18 NAS smbd[28881]: create_connection_session_info failed: NT_STATUS_ACCESS_DENIED
      Nov 6 11:57:18 NAS smbd[28881]: smb2: fnum 848165825, file Unsorted New/3DRenderTest2.mp4>
      Later during the same file transfer:
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.654102, 3] ../auth/ntlmssp/ntlmssp_util.c:72(debug_ntlmssp_flags)
      Nov 6 12:00:37 NAS smbd[28881]: Got NTLMSSP neg_flags=0xe2088297
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.715919, 3] ../auth/ntlmssp/ntlmssp_server.c:552(ntlmssp_server_preauth)
      Nov 6 12:00:37 NAS smbd[28881]: Got user=[] domain=[] workstation=[DESKTOP] len1=1 len2=0
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.715962, 3] ../source3/param/loadparm.c:3872(lp_load_ex)
      Nov 6 12:00:37 NAS smbd[28881]: lp_load_ex: refreshing parameters
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.716040, 3] ../source3/param/loadparm.c:548(init_globals)
      Nov 6 12:00:37 NAS smbd[28881]: Initialising global parameters
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.716171, 3] ../source3/param/loadparm.c:2786(lp_do_section)
      Nov 6 12:00:37 NAS smbd[28881]: Processing section "[global]"
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.716336, 1] ../lib/param/loadparm.c:1822(lpcfg_do_global_parameter)
      Nov 6 12:00:37 NAS smbd[28881]: WARNING: The "null passwords" option is deprecated
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.716610, 2] ../source3/param/loadparm.c:2803(lp_do_section)
      Nov 6 12:00:37 NAS smbd[28881]: Processing section "[flash]"
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.716765, 2] ../source3/param/loadparm.c:2803(lp_do_section)
      Nov 6 12:00:37 NAS smbd[28881]: Processing section "[Work]"
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.716841, 3] ../source3/param/loadparm.c:1621(lp_add_ipc)
      Nov 6 12:00:37 NAS smbd[28881]: adding IPC service
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.716867, 3] ../source3/auth/auth.c:189(auth_check_ntlm_password)
      Nov 6 12:00:37 NAS smbd[28881]: check_ntlm_password: Checking password for unmapped user []\[]@[DESKTOP] with the new passwo>
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.716882, 3] ../source3/auth/auth.c:192(auth_check_ntlm_password)
      Nov 6 12:00:37 NAS smbd[28881]: check_ntlm_password: mapped user is: []\[]@[DESKTOP]
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.716897, 3] ../source3/auth/auth.c:256(auth_check_ntlm_password)
      Nov 6 12:00:37 NAS smbd[28881]: auth_check_ntlm_password: anonymous authentication for user [] succeeded
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.716921, 3] ../auth/auth_log.c:610(log_authentication_event_human_readable)
      Nov 6 12:00:37 NAS smbd[28881]: Auth: [SMB2,(null)] user []\[] at [Wed, 06 Nov 2019 12:00:37.716911 CET] with [(null)] status>
      Nov 6 12:00:37 NAS smbd[28881]: {"timestamp": "2019-11-06T12:00:37.716954+0100", "type": "Authentication", "Authentication": >
      Nov 6 12:00:37 NAS smbd[28881]: [2019/11/06 12:00:37.717204, 3] ../source3/smbd/smb2_write.c:215(smb2_write_complete_internal)
      I can't be certain this is the cause, but whenever the speed dropped to 0 and SMB timed out, this message was the only thing out of the ordinary as far as I can tell. I have no entries in my credential vault on Windows, and I use the same credentials on my Windows machine as the account that has access to the Unraid SMB share. Any ideas whether this is the problem? I do find it strange that other clients would experience access issues at exactly the same time the Unraid server is throwing denials at the machine doing the transfer, but maybe? I will have to do more file transfers.
  21. My preclear of the disks will be done in 2 hours; I will then enable SMB logging, see if SMB is crashing, and provide logs. My setup is as follows: 1. No cache is being used at the moment. I have 2x 1 TB SSD caches, but they are not in use right now because I am trying to migrate over 100 TB of data via SMB, and the share I am sending data to does not have cache enabled. 2. Tunable (md_write_method) is set to "auto", which I guess is the default value? I have tested with the "CA Auto Turbo Write Mode" plugin enabled and disabled, but I don't really see any difference in speeds, nor do I see SMB being any more stable with or without it.
  22. One batch of 200 GB of random files ranging between 700 MB and 4 GB per file, media stuff, just Ctrl-C / Ctrl-V in Explorer to the Unraid share. Update: I tried the thing you mentioned, and Unraid SMB becomes inaccessible from another client machine at the same time SMB drops on the machine transferring the files; both computers can still ping the Unraid server, so network connectivity is not lost. It's definitely something with SMB. I read here about an issue that seems very close to what I am experiencing: https://www.ixsystems.com/community/threads/smb-shares-dropping-during-write.73204/ https://redmine.ixsystems.com/issues/43558#change-495697 Update: verified that it is not a simple bad network cable; the same thing occurs with a different cable. Also verified that when SMB drops, the web GUI is still up and accessible.
  23. If it was a connectivity issue, wouldn't pings be dropped when the network errors occur? I start transferring files, and suddenly, after a couple of minutes, the transfer halts and I get a permission error. Right after this error occurs I cannot access the Unraid install via \\hostname\sharename\, but I can still ping the Unraid server and my SSH session does not drop either; I can also ping out from the Unraid server to the network while this is happening. I believe it is the smbd daemon that hangs and restarts. I will have to wait until the preclear is done to enable SMB logging and see what's going on (a quick sketch of what I plan to try is below); no errors are being logged in the regular logs as far as I can tell.
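      When that time comes, a minimal sketch of one way to do it, assuming the usual Samba tools on Unraid (the exact log file path is an assumption and may differ):

          # raise Samba's debug level on the fly, without editing smb.conf or restarting
          smbcontrol all debug 3
          # watch the log while reproducing the stall; per-client logs such as
          # /var/log/samba/log.<clientname> may also exist
          tail -f /var/log/samba/log.smbd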
  24. Not sure yet, as it is only occurring when I am transferring large amounts of files, and all my Linux installs are tiny web servers that don't have any big files to transfer. I would have to set up a new Linux machine to do that test if need be, but first I am going to enable SMB logging and see what that tells me. Right now I am preclearing some disks, so it will have to wait until tomorrow.