TheWoodser Posted November 28, 2020

On 11/23/2020 at 1:21 PM, jbartlett said: Edit the settings and verify that "Web Port" has a value. It defaults to "18888".

Yep..... 18888
jbartlett Posted November 28, 2020 (Author)

28 minutes ago, TheWoodser said: Yep..... 18888

View the advanced settings and check that the "WebUI" setting has "http://[IP]:[PORT:18888]/". If it does, it sounds like a bug not related to my Docker.
TheWoodser Posted November 29, 2020

22 hours ago, jbartlett said: View the advanced settings and check that the "WebUI" setting has "http://[IP]:[PORT:18888]/". If it does, it sounds like a bug not related to my Docker.

This is a fresh install. Does it matter that this address is NOT in the subnet of the IP of the UnRAID server?
TheWoodser Posted November 29, 2020

Just now, TheWoodser said: This is a fresh install. Does it matter that this address is NOT in the subnet of the IP of the UnRAID server?

I noticed after posting that this says "8888" vice "18888". Changing that now... and restarting.
jbartlett Posted November 29, 2020 (Author)

2 hours ago, TheWoodser said: This is a fresh install. Does it matter that this address is NOT in the subnet of the IP of the UnRAID server?

The first IP is the IP of the Docker and it's fine. The 2nd set after the arrow should have the IP of your Unraid box plus :18888. Mine is "172.17.0.2:8888/TCP <--> 192.168.1.7:18888". I also have Network Type set to "Bridge", Console Shell Command set to "Shell", and Privileged set to ON.

Try this: Edit the Docker and change the name to something other than "DiskSpeed" to change the local cached version's name, then delete the Docker. Install DiskSpeed via the AppStore or from the XML file in the 1st post of this thread.

If nothing works, you may want to post in the 6.8.1 general support thread or the 6.9.0 beta thread (depending on which version of Unraid you're running) reporting that the Web UI link isn't being displayed.

-John
TheWoodser Posted November 29, 2020

2 minutes ago, jbartlett said: The first IP is the IP of the Docker and it's fine. [...]

John, thanks for the help. I think there is something wonky with my network settings. I opened a new thread... no Docker I install will give me a WebUI option.

Woodser
jbartlett Posted December 2, 2020 (Author)

Feature in progress - file fragmentation & allocation map. One thing I've learned is that, for some reason, the underlying OS likes to split up files all over the drive. This is from a drive on my backup NAS; files are just copied there, but I see that large files are broken up all over the drive in most cases.

Black = Allocated, non-fragmented
Red = Allocated, fragmented
White = Unallocated
jbartlett Posted December 4, 2020 (Author)

I just noticed a trend in the file fragmentation. It seems the OS has a tendency to break the file up into chunks of a set size. I found this really strange because, well, why do it at all? Noticing the trend of 32768, I did some research: some file systems (like ext4) have a maximum number of blocks that can fit in one extent, and then the OS has to create a new extent to continue, thus creating a forced fragmentation chain by design. But it's interesting that some extents break this barrier (btrfs in this case), typically the last two extents but not always.

I had designed my app to display fragments in red, but I think I'll just default everything to black with the option of displaying fragmented files in red after the scan.

File size of 10,000 BC.m4v is 2995331020 (731282 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..   32767:     793856..    826623:  32768:
   1:    32768..   65535:     848352..    881119:  32768:     826624:
   2:    65536..   98303:     913888..    946655:  32768:     881120:
   3:    98304..  131071:     979424..   1012191:  32768:     946656:
   4:   131072..  163839:    1056000..   1088767:  32768:    1012192:
   5:   163840..  196607:    1121536..   1154303:  32768:    1088768:
   6:   196608..  327679:    1165152..   1296223: 131072:    1154304:
   7:   327680..  731281:    1318144..   1721745: 403602:    1296224: last,eof
10,000 BC.m4v: 8 extents found

Also adding a "Directory Hog" feature to show you where your drive space is going.
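The listing above is the output format of `filefrag -e` (part of e2fsprogs). If you want to reproduce the fragmentation count on your own files, a small shell sketch can parse that layout by checking whether each extent's physical start follows on from the previous extent's end. The two-row sample and the `count_frags` name here are purely illustrative, not part of DiskSpeed:

```shell
#!/bin/sh
# Count extents and fragmentation breaks in `filefrag -e` output.
# In real use you would pipe in live output instead of the canned sample:
#   filefrag -e "/path/to/10,000 BC.m4v" | count_frags
count_frags() {
  awk '
    $1 ~ /^[0-9]+:$/ {            # extent rows begin with "<n>:"
      ext++
      start = $4 + 0              # physical start ("793856.." -> 793856)
      end   = $5 + 0              # physical end   ("826623:" -> 826623)
      if (ext > 1 && start != prev_end + 1)
        frag++                    # gap from the previous extent = fragmented
      prev_end = end
    }
    END { printf "%d extents, %d fragmented\n", ext, frag + 0 }
  '
}

count_frags <<'EOF'
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..   32767:     793856..    826623:  32768:
   1:    32768..   65535:     848352..    881119:  32768:     826624:
EOF
```

Run against the full 8-extent listing above, it reports 7 of 8 extents as fragmented - matching the `expected:` column filefrag prints whenever an extent doesn't start where the previous one ended.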
servidude Posted December 28, 2020

Thanks, @jbartlett, for the WIP updates! I've been using your excellent docker for some time now and have a couple of questions:

1. Would you expect these speed tests to show up any difference in file system performance? I'm trying to see if I can measure a quantifiable difference between my XFS and BTRFS formatted drives (the graphs don't show anything obvious).
2. Any possibility or plans to measure write (not just read) speeds?

Thanks again, and happy holidays!
jbartlett Posted December 29, 2020 (Author)

1. I don't think there would be any difference in speed when reading existing files, regardless of file system. For the creation and deletion of files, the file system can have an impact on the speed of the operations, but that is also highly variable - tree depth, number of files in a directory, even drive utilization percentage. I haven't thought about trying to benchmark that; I'm not sure there's enough value in the results to warrant it.

2. I've thought about it, but I don't think many people would truly test such logic with their data, even if the write is writing what was already there back onto it. If I were to implement it, it might be only on an unpartitioned drive.
servidude Posted December 29, 2020

8 hours ago, jbartlett said: 1. I don't think there would be any difference in speed when reading existing files, regardless of file system. [...]

Thanks for your thoughts on this. I've tried, somewhat naively, to time a file copy from /mnt/diskX to /mnt/diskY (and /mnt/diskZ) and compare the results. They weren't particularly illuminating (nor consistent). If I understand this correctly, the write speed would be affected by where on the drive's platter the data is being deposited, which is usually something beyond the user's control. Is my assumption correct?
jbartlett Posted December 31, 2020 (Author)

On 12/29/2020 at 5:51 AM, servidude said: If I understand this correctly, the write speed would be affected by where on the drive's platter the data is being deposited, which is usually something beyond the user's control. Is my assumption correct?

You are correct. I had an empty XFS volume that I copied large files to, and when I created a file allocation map, they were NOT located at the start of the drive but spread out in three general areas across the drive.
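For anyone wanting a slightly more controlled version of that copy-timing experiment, a hedged sketch: write a fixed amount of data with dd and force it to media with conv=fdatasync before the timing stops, so the page cache doesn't flatter the result. The target path and sizes are placeholders, and this still says nothing about where on the platter the file system puts the data:

```shell
#!/bin/sh
# Rough write-throughput probe. Writes size_mb of zeroes to target and
# reports elapsed seconds; conv=fdatasync makes dd flush to disk before
# exiting, so cached-but-unwritten data isn't counted as "done".
write_bench() {
  target="$1"                     # placeholder: a file on the disk to test
  size_mb="$2"
  start=$(date +%s)
  dd if=/dev/zero of="$target" bs=1M count="$size_mb" conv=fdatasync 2>/dev/null
  end=$(date +%s)
  secs=$(( end - start ))
  [ "$secs" -lt 1 ] && secs=1     # avoid divide-by-zero on tiny writes
  echo "wrote ${size_mb} MB in ~${secs}s ($(( size_mb / secs )) MB/s)"
  rm -f "$target"
}

# Example against a scratch file (swap in e.g. /mnt/disk1/ddtest.bin):
write_bench "$(mktemp)" 8
```

Second-granularity timing means you'd want a write large enough to take tens of seconds for a meaningful number.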
Kiefer Posted January 2, 2021

I'm relatively new to UNRAID and learning how to improve performance now. This utility is awesome! My server is cobbled together from old parts and I'm replacing / reassigning hardware as I can.

When running a speed benchmark, is there any reason the parity drive output would be expected to look different than the smooth curve of the others? The array is running, but there shouldn't be any data transfer or parity checks ongoing at this time. If the answer is no, then I guess I just have a really wonky drive?
Squid Posted January 2, 2021

Assuming that nothing else was going on at the time, then it would appear the drive has slow spots. The SMART report might explain it.
Kiefer Posted January 2, 2021

@Squid, I think you're right. Thanks! I'll probably just pull the drive, swap drive assignments (parity to the fastest), and add in an SSD cache. We'll see if that helps with data transfer performance.
jbartlett Posted January 3, 2021 (Author)

Check the SMART report to see if there are pending reallocated sectors - that could explain slow spots, because the drive has to attempt multiple reads of a sector. You can force a check for bad sectors by performing a preclear on it with no pre or post reads. Note that this is only worthwhile if you intend to use the drive as long-term, write-once storage where nothing is at risk. I had a drive with similar slow spots that developed over 20k pending sectors after a preclear.
jbartlett Posted January 4, 2021 (Author)

That's a trend across pretty much every SSD, and I don't have an answer for you as to why. On the HDDB, I take the peak speed and report that as the transfer speed.
LammeN3rd Posted January 5, 2021

On 1/4/2021 at 5:50 AM, jbartlett said: That's a trend across pretty much every SSD, and I don't have an answer for you as to why. On the HDDB, I take the peak speed and report that as the transfer speed.

That's the result of TRIM. When data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks / pages as zeroes. When you try to read from those blocks, the SSD controller does not actually read the flash - it just sends you zeroes as fast as it can. This is the reason SSDs used in the Unraid parity array cannot be used with TRIM, since that would invalidate the parity.
Fireball3 Posted January 5, 2021

Quoting LammeN3rd's TRIM explanation above: just to make sure I understand it right - the flat line that basically indicates the max interface throughput is trimmed (empty) space on the SSD?
LammeN3rd Posted January 5, 2021

26 minutes ago, Fireball3 said: Just to make sure I understand it right - the flat line that basically indicates the max interface throughput is trimmed (empty) space on the SSD?

Yes, the controller or interface is probably the bottleneck.
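The zero-fast-path behavior is easy to demonstrate with a sparse file, which behaves like a trimmed region: the blocks were never written, so reads are satisfied without touching the media and come back as zeroes. A sketch - the 4 KiB / 1 MiB sizes are arbitrary, and `truncate` is from GNU coreutils:

```shell
#!/bin/sh
# Write 4 KiB of real data, extend the file to 1 MiB without writing the
# rest (a hole, analogous to a trimmed SSD region), then count non-zero
# bytes in the never-written tail.
hole_nonzero_bytes() {
  f="$(mktemp)"
  dd if=/dev/urandom of="$f" bs=4096 count=1 2>/dev/null
  truncate -s 1M "$f"             # bytes 4096..1MiB-1 are never written
  tail -c +4097 "$f" | tr -d '\0' | wc -c
  rm -f "$f"
}

hole_nonzero_bytes                # prints 0: the hole reads as pure zeroes
```

On an SSD the same thing happens at the device level, which is why a benchmark over trimmed space measures the interface rather than the flash.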
jbartlett Posted January 5, 2021 (Author)

So the correct logic would be to take the lowest speed as the maximum read speed? There will be interface issues on some systems, so an average of the lowest speed of every reported SSD of the same make/model/revision would be more representative of the whole.
LammeN3rd Posted January 6, 2021

You could have a look at the used space on a drive level, but that's not that easy when drives are used in BTRFS RAID other than 2 drives in RAID1. NVMe drives usually report namespace utilisation, so looking at that number and testing only the Namespace 1 utilization would do the trick.

[Attached: the benchmark graph from one of my NVMe drives, and its used space (274GB).]
LammeN3rd Posted January 6, 2021

To be honest, I don't think it makes real sense to test more than the first 10% of an SSD; that would bypass this issue on all but completely empty SSDs. And SSDs don't have any speed difference for different positions of used flash when doing a 100% read speed test - for a spinning disk position makes total sense, but from a flash perspective a read workload is no different as long as there is data there.
Fireball3 Posted January 6, 2021

Is it true that solid state drives are always filled up according to those graphs posted? Will empty space always be shifted to the right side of the graph/drive, or will it be more like a sawtooth pattern on a well-run-in drive where data may have been deleted in random areas of the drive?
jbartlett Posted January 6, 2021 (Author)

docker exec -it DiskSpeed bash
root@af468d0f3720:/usr/local/tomcat# nvme id-ns /dev/nvme0n1
NVME Identify Namespace 1:
nsze : 0x3a386030
ncap : 0x3a386030
nuse : 0x10facc48

nsze: Total size of the namespace in LBAs
ncap: Max number of LBAs
nuse: LBAs allocated to the namespace

It looks like a dd read on the device, starting at the beginning and not exceeding "nuse", would return actual data. If nuse is under a given duration/size, a benchmark cannot be done. Alternately, if a file is found in excess of a given size that has no unwritten extents reported by "filefrag -e <fn>", it can be read for a given number of seconds. Thoughts?
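To put numbers on that: nuse is a count of LBAs, so multiplying by the namespace's LBA size gives the byte ceiling for a dd read of allocated data. A sketch using the values above - the 512-byte LBA size is an assumption here; check the in-use "lbaf" line in your own `nvme id-ns` output:

```shell
#!/bin/sh
# Convert the nuse LBA count from `nvme id-ns` into a byte limit for dd.
# 512 is an assumed LBA size - read the in-use lbaf entry to confirm yours.
used_bytes() {
  nuse_hex="$1"
  lba_size="$2"
  echo $(( nuse_hex * lba_size ))  # shell arithmetic accepts 0x... hex
}

limit=$(used_bytes 0x10facc48 512)
echo "allocated: $limit bytes"
# Benchmark only the allocated region (device path is a placeholder):
#   dd if=/dev/nvme0n1 of=/dev/null bs=1M count=$(( limit / 1048576 ))
```

Note this assumes allocated LBAs sit at the front of the namespace, which matches the graphs posted above but isn't guaranteed by the spec.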