david11129

Members
  • Posts: 70
  • Joined
  • Last visited

  1. When you added the 16 4TB drives, I'm guessing you had them all on the same controller, so hitting every drive at once like parity does will slow things down; the controller becomes the bottleneck at that point. 8 drives didn't max it out, but 24 sure will. I'd recommend adding a second HBA, going with an HBA that has the bandwidth to support 24 drives at full speed, or moving some drives to SATA ports to lower the bandwidth hit. I'm sure you've already resolved your issue, but I wanted to add this for anyone who stumbles on this in the future. You could also simply live with it; in my experience it's not often that every drive is active simultaneously, mainly during parity operations. Writing to and reading from a few drives at once won't show any slowdown until you have enough going to use all the available bandwidth on the bus or card.
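     To put rough numbers on that, here's a back-of-the-envelope sketch assuming (purely for illustration, not confirmed hardware) that all 24 bays sit behind a single SAS2 x4 link with about 2200 MB/s of usable bandwidth:
        # Back-of-the-envelope math, assuming ~2200 MB/s of usable bandwidth on a
        # single SAS2 x4 link feeding every bay (an assumption, not the OP's hardware):
        echo "$((2200 / 8)) MB/s per drive with 8 drives active"    # ~275 MB/s each, no bottleneck
        echo "$((2200 / 24)) MB/s per drive with 24 drives active"  # ~91 MB/s each, parity crawls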
  2. When you RDP into the VM after assigning the GPU and not getting a signal, is the GPU seen by the guest OS at all in Windows Device Manager? Since this is a Ryzen machine, did you redo the steps to keep Unraid from grabbing the GPU after changing slots? I haven't done this myself so I may be way off base, but my understanding was that you have to blacklist the card so that Unraid doesn't grab it for use as the only display adaptor. Since it was working before, I'm sure you at least know what I'm attempting to ask haha. Also, sometimes going back to basics helps, so don't laugh at the suggestion, but did you make sure that the HDMI/DP cable is fully inserted into the GPU and the monitor, and that the monitor is set to the right input? Again, sometimes the obvious things trip us up, so it's worth verifying. Even the best of us have overlooked these things before.
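     For what it's worth, a general way on a Linux host to find the card and keep the host from grabbing it is binding it to vfio-pci; the exact Unraid steps may differ, and the IDs below are made up:
        # Find the GPU's PCI address and vendor:device IDs (example output; IDs are hypothetical):
        lspci -nn | grep -iE 'vga|3d'
        #   0a:00.0 VGA compatible controller [0300]: NVIDIA ... [10de:1b81]
        # Binding that ID to vfio-pci (e.g. vfio-pci.ids=10de:1b81 on the kernel command line)
        # stops the host from initializing the card. Note the PCI address changes when the
        # card moves to a different slot, so an address-based binding has to be redone
        # after a slot change.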
  3. That's amazing!! I would love to know what's going on behind the scenes that is fixing it, but I'm sure you're just glad to have decent write speeds now! Did you set the CPU governor to something else too, by chance? If you open the terminal and enter "watch -n1 lscpu" you can see what speed the CPU is operating at. I used conservative for a long time because it ramped down well when extra power wasn't needed and ramped up when it was; switching governors made about a 15 watt difference according to my Kill A Watt. I have it set to performance now, because with 17 drives, dual CPUs, and 3 SSDs, low power is out of my reach. My idle isn't that bad actually: with performance mode and all disks spun down, I idle a hair below 100 watts. With everything ramped up and moderate CPU usage, it maxes around 400 watts.
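     For anyone curious, the governor can also be read and set straight from sysfs; as far as I understand it, this is roughly what the Tips and Tweaks plugin is toggling under the hood:
        # Show the current governor and watch the live core clocks:
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
        watch -n1 "grep MHz /proc/cpuinfo"
        # Switch every core to the performance governor (swap in "conservative" to go back):
        for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > "$g"; done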
  4. I'll expand on my test results later, but I wanted to let you know that the default RAM cache ratio is 20%, or 9.6GB in your case. I changed it via the Tips and Tweaks plugin; the setting "Disk Cache 'vm.dirty_ratio' (%):" is where you change it. I had it set to 10 because I was having out-of-memory errors a while back, related to something I had misconfigured, not sure what anymore. When I changed it to 35% today, I did notice it stated the default is 20%. I've never really thought about changing it back to the default, mainly because I have enough sticks on hand to increase my RAM to 256GB. Also, realize that increasing the RAM cache just puts off the point at which your transfer slows down.
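     If you'd rather not use the plugin, the same knob can be checked and changed from the terminal with sysctl (a change made this way doesn't survive a reboot):
        sysctl vm.dirty_ratio vm.dirty_background_ratio   # show the current percentages
        sysctl -w vm.dirty_ratio=35                       # same change the plugin makes (not persistent)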
  5. Just theorizing here, but it may be that when the second transfer occurs, the first is still being written out of RAM. Linux uses RAM as a cache for writes and lots of other things, so instead of one sequential write, the disk spindle has to move around a lot to write the new data plus the data that was still in RAM from the first transfer. Those writes most likely aren't landing right next to each other, and the extra spindle movement is going to increase overhead. I have 96GB of RAM, and I believe I changed the amount of RAM used to cache writes. After parity completes, I will attempt to increase the RAM cache size and report the results.
     Edit: just checked, and my RAM cache is set to 10% of RAM size. That means up to 9.6GB will be used to cache writes, which makes sense because that is about where the transfer falls off for me. So after the RAM cache is filled, it is both flushing that data to disk and, I'm sure, writing the second transfer straight to disk.
     Edit: increasing the RAM cache to 35% did work; I transferred 32GB of files at full line speed. Once the cache fills up, that would still drop off a bunch, especially with the parity operation going on now. I believe this is probably just normal behavior for a parity-protected array that isn't striped: spindle overhead and parity calculations come into play once the RAM cache is filled. I hope someone more helpful can chime in here; I believe I've done what I can as far as confirming what you are seeing. Ultimately I guess the answer is to disable parity during the initial data sync.
     I can say that I have experienced no issues with Unraid in the 3 years since I set it up. My 1TB cache is large enough to hold all writes until the mover runs, and even when downloading TBs to the array with no cache, my internet speed becomes a bottleneck before the disks do. I have 400Mbps down and all downloads run at max speed. Unraid is great for media serving and data protection: because it isn't striped, even if you lose more disks than parity can recover, you don't lose the data on the remaining disks, and you can read them on any Linux distro. If you need constant writes at your max line speed at all times, I suspect a striped solution is going to fit your needs better.
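     If anyone wants to watch this happen, the amount of dirty (not yet written) cache shows up in /proc/meminfo while a transfer runs:
        # Watch the write cache fill and drain during a transfer; the copy slows down
        # once Dirty approaches vm.dirty_ratio's share of RAM (10% of 96GB = ~9.6GB here).
        watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'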
  6. OK, my cache transfer ran at full line speed regardless of the share setting; neither setting had an impact on the transfer, but I am also only running gigabit. I have to wait for my parity to rebuild before I can run any other tests on this.
  7. I disabled parity and it transfers the entire time at line speed. I suspect this is why they suggest disabling parity until your initial data transfer is finished. Presently I am running a transfer to my cache drive and will update when it's finished.
  8. Well, it isn't helpful in solving the problem, but I can confirm I experience the exact same behavior. I duplicated your setup, and using TeraCopy or File Explorer in Windows, I see the same results. I am disabling parity right now and will confirm whether that solves it or not. Though, regardless of the disk fill-up setting, both transfers dropped off for me after about 8GB were written.
  9. The amount of data actually has nothing to do with parity speed. The only thing that matters is the size of the parity drives, in your case 10TB. I am only running single parity with an 8TB drive, and my monthly checks take ~18 hours. Edit: just checked the history and my parity time is ~18 hours, not the 22 I originally stated.
     Edit: I'll try my best to simplify how parity works. Here's a link for more info: https://wiki.unraid.net/Parity#How_parity_works
     Drives store data as 0s and 1s, and your 10TB disk can hold 10TB of them (01010101010 etc.). Let's imagine you have two data drives and a single parity disk:
        Disk 1: 01010101
        Disk 2: 10110110
        Parity: 11100011
     If you add them up position by position from the beginning: 0+1 is 1, which is odd, so set the parity bit to 1. In the 4th position, 1+1 is 2, which is even, so set the parity bit to 0, and so on for each position. If disk 1 died, you would calculate in reverse: since the first parity bit is a 1, and we know the first bit on disk 2 is also a 1, the first bit on the dead disk has to be a 0, because that is the only value that keeps the total odd. This is super simplified and ignores dual parity completely, but it's how I think about it when attempting to wrap my head around what is actually happening. Now you see why data size is irrelevant: data is simply stored as 0s and 1s, and the parity drive is only going to hold so many of them, 10TB in your case.
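     That even/odd rule is just XOR, so you can replay the example above in the terminal with the same 8-bit values (purely illustrative):
        disk1=$((2#01010101)); disk2=$((2#10110110))
        parity=$(( disk1 ^ disk2 ))      # 2#11100011, matching the example above
        rebuilt=$(( parity ^ disk2 ))    # a "dead" disk1 recovered: 2#01010101
        echo "parity=$parity rebuilt=$rebuilt"   # prints the decimal values 227 and 85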
  10. Once the data is written, the parity will be created at the same speed it would during any normal check. I do those monthly anyways. So worst case, you're just having to redo the parity an extra time.
  11. If I were you, I would screenshot the Main page so you have the disk assignments available. Then stop the array, change both parity drives to none, and start the array again. Once you are finished transferring files, do the same thing, assign each disk to the parity position it was in previously, and start the array. Then run a parity check with "Write corrections to parity" left checked. This should let you transfer way quicker, then build the parity later. I don't see a reason why that wouldn't work, but I suggest you read a bit and confirm. AFAIK, doing it this way shouldn't lead to any negative consequences.
  12. Are those the speeds you're seeing on the Main page, or are you getting the speed from whatever you are transferring from? Did the disk being written to change when the next video queued up? If so, I suspect the system is still reading the data needed for parity from the first movie when the second movie begins writing. In reconstruct write, all disks are read except the one being written to. In theory, if the disk being written to changes before the parity from the last transfer is generated, you will end up with a disk being written to for movie #2 while it is still being read to create parity for movie #1. These are all educated guesses on my part; they seem logical, but someone could prove me wrong.
  13. Are you basing the speeds off of what the Main page says? As you can see, when I started transfers to two different disks, my parity speed looked similar to yours, but my actual transfer speed never changed from gigabit line speed. I am not sure if the listed speed is the speed at which parity is being updated, or if the Main page is just unreliable for reported disk speed. I can 100% say that the Windows transfer window never dipped below 112MB/s. To show this, I ran the transfer again, watched the Main page list the speed at ~55MB/s, then took a screenshot of iotop showing the actual speed.
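     For reference, a simple iotop invocation for this kind of check (iotop isn't part of every base install, so it may need to be added separately):
        # -o: only show processes/threads actually doing I/O; -d 1: refresh every second
        iotop -o -d 1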
  14. What happens if you change the share setting to High-water? I suspect that parity is being reconstructed, but it hasn't had a chance to finish because you keep changing which disk is being written to. As in, it's still reading to generate parity for the last movie you transferred when the next movie starts transferring to a new disk. If you minimize the number of disks you are writing to at a time, you shouldn't see the parity still being built by the time the transfer moves to the next drive. I use High-water because it lets me leave more of my disks spun down when watching movies etc. You have 2 parity drives, so for you to lose data you would need to have 3 drives fail. Edit: I would stop the transfer first, then change the share setting.
  15. I'm not sure what the problem is then, especially if you set it to reconstruct write and hit apply under Disk Settings. I just tested my server's behavior during a large write: it writes the file to one drive and reads the rest to build the parity, and the drive being written to does not see much in the way of reads. How are the disks connected? Are they attached directly to the motherboard SATA ports, or do you have an HBA in the mix? To be honest, with you seeing high reads and writes simultaneously, I would try setting reconstruct write again. Like I said, for me the disk being written to writes at line speed with minimal read activity; if I fire up two transfers, then I start to see both high reads and high writes on my drives.
     Also, and I don't believe it has to do with your problem, but I would install the Tips and Tweaks plugin from the Apps tab and set your CPU scaling governor to performance. I was having some stuttering and other issues because the CPU speed was not ramping up and was stuck at 800MHz. When I set it to performance, it ramped up as needed. Conservative works as well.
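     If you're not sure how the disks are attached, a quick way to see what controllers are in play from the terminal (output will obviously differ per system):
        # List the SATA/SAS/RAID controllers in the box:
        lspci | grep -iE 'sata|sas|raid'
        # Map each disk to the controller/port it is attached to:
        ls -l /dev/disk/by-path/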