ptr727

Members • 139 posts

  1. If you follow the details of the thread, my research shows that the problem is NOT SMB; the problem is the Unraid FUSE filesystem. Mount your SMB share under a user share (Unraid FUSE code in play) and you get poor performance. Mount your SMB share under a disk share (no Unraid code) and you get normal performance.
  2. For the time being I've given up on Unraid fixing this and moved to Proxmox with ZFS: Removed link at the request of @limetech
  3. Seems like a lot of dangling volumes for the once or twice I changed a tag, and I can't control what the container author does. Should these not be cleaned up the same way orphan containers are detected and cleaned? I'll delete them for now.
  4. I noticed that my server has many, many dangling docker volumes:

     root@Server-1:~# docker volume ls --filter dangling=true
     DRIVER    VOLUME NAME
     local     0ba5094dffcf33bd70ec9bb78ed0d3641a4fb64e0c791429b38df5153a66af50
     local     0bae1eb00b48ae1ef5355af7dd7eca6f4f3ac13ce800b66a5102479a6859673c
     local     1c45b2c3475f654a3105aa0190e4afa61af2683a87969685e892246f54257d38
     [output truncated: well over a hundred dangling local volumes listed in total]
     root@Server-1:~#

     Why are the volumes left dangling? Can I safely delete them using docker volume prune?
  5. Unfortunately not, but we (90266) only have about two months of 90F+ in my garage; the rest of the year it's mid-70s. My garage is insulated and drywalled, I have a 24/7 extractor fan in the ceiling, four top-mounted extractor fans on the rack, and SNMP and email-alert environment monitoring in the rack. On really hot days I run a high-speed floor fan and leave the garage door open about 6". The biggest problem I have is when we park cars with hot engines in the garage; the servers don't like that. I've trained the family to let the cars cool down before parking inside in the summer. I wish I had planned better when we built the house and run ducting from the rack to the outside.
  6. Hi, I'm wondering if BTRFS is the right solution for a resilient cache / storage setup? I run two Unraid servers: the primary has 40TB of disk plus a 2TB cache (4 x 1TB SSD), the secondary 26TB of disk plus a 2TB cache (4 x 1TB SSD). On two occasions I've lost my entire cache volume due to one of the drives "failing". I say failing, but really both times it was my own fault: I didn't want to shut down, I pulled the wrong drive, and I immediately plugged it back in. But this is no different from a drive failing or a connection failing; pulling disks during certification of large resilient storage systems is a perfectly good test.

     One would expect the loss of 1 disk in a 4-disk BTRFS RAID10 config to be a non-issue. Not so: first the log started showing BTRFS corruption issues that were not being auto-fixed; then I ran a cache scrub, which reported no errors while errors remained in the log; a scrub with repair reported everything repaired. Then I started getting docker write failures; it seems my cache became read-only and BTRFS corrupt. In both cases I resorted to rebuilding the cache from scratch and restoring appdata backups, and I lost the VMs (unlike docker stop/restart, there is no easy way to back up VMs).

     I've run hardware RAID for a long time, including hardware that uses SSD caching. I've lost disks and pulled disks, but in all cases the array eventually came back on its own. I simply do not have the same trust in Unraid's cache; I think it is fragile, unreliable to the point where it needs to be backed up constantly. I'd like to see Unraid/Limetech publish their resiliency and performance test plans: what is tested for, what are the known failure scenarios, what are the known recoverable scenarios, and are my expectations of resiliency and performance unfounded? And this is not about BTRFS, this is about Unraid. I don't care what Unraid uses for the cache volume; it could have supported SSDs in data volumes so no cache would be required, or it could have used ZFS and we would have different problems. BTRFS was an Unraid choice, and I find it fragile. What are your experiences with cache resiliency?
  7. How are your SSD's in the array? My understanding is that SSD's are not supported in the array, something to do with how parity calculations are done.
  8. My two servers are in a rack in my garage, and I do all work remotely: BIOS updates, BIOS config, OS installs, boot media selection, etc. No need for a keyboard or monitor, and no need to physically go to the rack for software maintenance tasks. I would not use a mobo without IPMI support.
  9. When I installed the SAS3 cards, I figured I might as well go with a SAS3 backplane, so I bought refurbished SAS3 backplanes and replaced my SAS2 backplanes; much cheaper than a SAS3 case. See: https://blog.insanegenius.com/2020/02/02/recovering-the-firmware-on-a-supermicro-bpn-sas3-846el1-backplane/
  10. I use LSI-9340-8i, SM SAS3 backplane, and 4 x 860 Pro's in BTRFS cache in two Unraid servers. I won't call it a recommendation, but I've not seen errors with this hardware combo.
  11. I tested EVO 840, Pro 850 and Pro 860. Pro 860 works with LSI and TRIM. See: https://blog.insanegenius.com/2020/01/10/unraid-repeat-parity-errors-on-reboot/
  12. I am having a weird problem that I can't explain. I use HandBrakeCLI to convert a file \\server\share\foo.mkv to \\server\share\foo.tmp, rename \\server\share\foo.tmp to \\server\share\foo.mkv, and store the \\server\share\foo.mkv modification time for later use. The \\server\share\foo.mkv attributes are read by MediaInfo, FFprobe, and MKVMerge. I come back later, and the stored time no longer matches the file modified time. No other apps are modifying the file, and I can't figure out why the modification time would change. I am running from Win10 x64 to Unraid over SMB to a user share.

      I ran Process Monitor on the Win10 machine, filtering by the file name, and after HandBrakeCLI exits, no modifications are made to the file from the Win10 system. The pattern is: HandBrakeCLI open write close; my code open read-attributes close; FFprobe/MediaInfo/MKVMerge open read close; wait; my code open read-attributes close, and the timestamp has changed from the last read timestamp.

      I wrote a little monitoring app that compares the file modified time with the previously observed time every second (a minimal sketch of that kind of monitor follows after this list). The pattern is always the same: the file modified time changes on HandBrakeCLI exit, then it changes two more times. E.g.:

      4/30/2020 10:31:25 PM : 4/30/2020 10:31:21 PM != 4/27/2020 7:40:43 PM
      4/30/2020 10:31:45 PM : 4/30/2020 10:31:36 PM != 4/30/2020 10:31:21 PM
      4/30/2020 10:31:48 PM : 4/30/2020 10:31:48 PM != 4/30/2020 10:31:36 PM

      4/30/2020 10:54:02 PM : 4/30/2020 10:54:01 PM != 4/27/2020 7:40:43 PM
      4/30/2020 10:54:12 PM : 4/30/2020 10:54:03 PM != 4/30/2020 10:54:01 PM
      4/30/2020 10:54:17 PM : 4/30/2020 10:54:17 PM != 4/30/2020 10:54:03 PM

      4/30/2020 11:16:13 PM : 4/30/2020 11:16:12 PM != 4/27/2020 7:40:43 PM
      4/30/2020 11:16:24 PM : 4/30/2020 11:16:15 PM != 4/30/2020 11:16:12 PM
      4/30/2020 11:16:26 PM : 4/30/2020 11:16:26 PM != 4/30/2020 11:16:15 PM

      I am speculating that the file modification is happening on the Unraid side. A wild guess: maybe the FUSE code buffers the write, the last buffered write updates the modified time, and when Samba comes back later the now-modified time is read instead of the time at SMB file close. Any other ideas?
  13. I have some code that uses the .NET Core FileSystemWatcher to trigger when changes are made to directories. When the directory is an SMB share on Unraid, and changes are made from a docker container to the underlying directory, the SMB share does not trigger the change. E.g. the SMB share \\server\share\foo points to /mnt/user/foo; the Windows client monitors for changes in \\server\share\foo; a docker container writes changes to /mnt/user/foo; the Windows client is not notified of the changes (see the watcher sketch after this list). Is this expected behavior with Samba on Linux (I did not test), or is this something with Unraid user shares not triggering Samba change notifications?
  14. Seems highly unlikely that this is an LSI controller issue. My guess is the user share FUSE code locks all IO while waiting for a disk mount to spin up.
  15. I noticed that existing SMB network IO will halt while a new disk spins up, even if that disk has nothing to do with servicing the existing IO requests. E.g. start an ffmpeg encode session with source and destination media on the SMB network share, wait for the other disks to spin down, and ffmpeg chugs along. Now open File Explorer and browse around the SMB filesystem: every time you hit a share with disks spun down there is a delay while the disks spin up, and while they are spinning up the ffmpeg transcode halts until the disk is ready. Expected behavior is that existing IO is not halted while unrelated disks, which have nothing to do with servicing that IO, spin up.
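
The polling monitor described in post 12 is only summarized there, so below is a minimal sketch of that kind of tool, assuming a .NET console app. The UNC path and the one-second interval come from the post; the class name and output format are illustrative, not the author's actual code.

    // Minimal sketch: poll a file's last-write time once per second and log
    // every change against the previously observed value, as described in post 12.
    using System;
    using System.IO;
    using System.Threading;

    class ModTimeMonitor
    {
        static void Main(string[] args)
        {
            // Hypothetical default path; pass a different file as the first argument.
            string path = args.Length > 0 ? args[0] : @"\\server\share\foo.mkv";

            DateTime last = File.GetLastWriteTimeUtc(path);
            Console.WriteLine($"Start: {last:O}");

            while (true)
            {
                Thread.Sleep(1000);
                DateTime current = File.GetLastWriteTimeUtc(path);
                if (current != last)
                {
                    // Same shape as the log lines in the post: wall clock, new time, previous time.
                    Console.WriteLine($"{DateTime.Now:O} : {current:O} != {last:O}");
                    last = current;
                }
            }
        }
    }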
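Likewise, for the FileSystemWatcher behavior described in post 13, here is a minimal sketch of the kind of watcher in question, again assuming a .NET console app. The UNC path and the chosen notify filters are illustrative assumptions, not taken from the original code.

    // Minimal sketch: watch an SMB share (UNC path) for changes, as in post 13.
    using System;
    using System.IO;

    class ShareWatcher
    {
        static void Main()
        {
            // Hypothetical share; on Unraid, \\server\share\foo maps to /mnt/user/foo.
            using var watcher = new FileSystemWatcher(@"\\server\share\foo")
            {
                IncludeSubdirectories = true,
                NotifyFilter = NotifyFilters.FileName
                             | NotifyFilters.DirectoryName
                             | NotifyFilters.LastWrite
                             | NotifyFilters.Size
            };

            watcher.Created += (s, e) => Console.WriteLine($"Created: {e.FullPath}");
            watcher.Changed += (s, e) => Console.WriteLine($"Changed: {e.FullPath}");
            watcher.Renamed += (s, e) => Console.WriteLine($"Renamed: {e.OldFullPath} -> {e.FullPath}");
            watcher.Deleted += (s, e) => Console.WriteLine($"Deleted: {e.FullPath}");

            // SMB change notifications are generated by the server; writes made directly
            // on the server (e.g. a Docker container writing to /mnt/user/foo) may never
            // surface as events on the client, which matches the behavior described above.
            watcher.EnableRaisingEvents = true;

            Console.WriteLine("Watching; press Enter to exit.");
            Console.ReadLine();
        }
    }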