david11129

Everything posted by david11129

  1. My server is showing tons of /dev/sda errors, failing sectors, etc. It's still running right now. Is there an easy way to back up my system before I reboot and reinstall? I have backups of my files and stuff on the array, but my USB backup is rather old.
  2. When you added 16 4TB drives, I'm guessing you had them all on the same controller, so hitting every drive at once like a parity operation does will slow things down. The controller becomes the bottleneck at that point. 8 drives didn't max it out, but 24 sure will. I'd recommend either adding a second HBA, going with an HBA that has the bandwidth to support 24 drives at full speed, or moving some drives to motherboard SATA ports to lower the bandwidth hit. I'm sure you've already resolved your issue, but I wanted to add this for anyone who stumbles on this in the future. You could also simply live with it. In my experience it's not often that every drive is active simultaneously, mainly during parity operations. Writing to and reading from a few drives at once won't show any slowdown until you have enough going on to use all the available bandwidth on the bus or card (rough math sketched below).
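To put rough numbers on that bottleneck, here's a quick back-of-the-envelope sketch. The per-drive throughput and HBA bandwidth figures are assumptions picked for illustration, not measurements from this particular system.

```python
# Back-of-the-envelope check: can one HBA feed every drive at full speed
# during a parity check? Both figures below are assumptions for illustration.

DRIVE_SEQ_MBPS = 200      # assumed sequential throughput of one 4TB drive (MB/s)
HBA_USABLE_MBPS = 4000    # assumed usable bandwidth of a single HBA (MB/s)

for drives in (8, 16, 24):
    demand = drives * DRIVE_SEQ_MBPS
    per_drive = min(DRIVE_SEQ_MBPS, HBA_USABLE_MBPS / drives)
    print(f"{drives:2d} drives want ~{demand} MB/s total -> "
          f"~{per_drive:.0f} MB/s per drive through this HBA")

# 8 drives need ~1600 MB/s, well under 4000 -> no slowdown.
# 24 drives need ~4800 MB/s, over 4000      -> each drive drops to ~167 MB/s.
```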
  3. When you RDP into the VM after assigning the GPU and not getting a signal, is the GPU seen by the guest OS at all in Windows Device Manager? Since this is a Ryzen machine, did you redo the steps to keep Unraid from grabbing the GPU after changing slots? I haven't done this, so I may be way off base, but my understanding is that you have to blacklist the card so Unraid doesn't grab it for use as the only display adapter. Since it was working before, I'm sure you at least know what I'm attempting to ask, haha. Also, sometimes going back to the basics helps, so don't laugh at the suggestion, but did you make sure that the HDMI/DP cable is inserted fully into the GPU and the monitor? And that the monitor is set to the right input? Again, sometimes the obvious things trip us up, so it's worth verifying. Even the best of us have overlooked these things before.
  4. That's amazing!! I would love to know what's going on behind the scenes that is fixing it, but I'm sure you're just glad to have decent write speeds now! Did you set the CPU governor to something else too by chance? If you open the terminal and enter "watch -n1 lscpu" you can see what speed the CPU is operating at (there's also a small script below this post that reads the same info). I used conservative for a long time because it ramped down well when extra power wasn't needed and ramped up when it was; it saved me about 15 watts according to my Kill A Watt. I have it set to performance now, because with 17 drives, dual CPUs, and 3 SSDs, low power is out of my reach. My idle isn't that bad actually: with performance mode and all disks spun down, I idle a hair below 100 watts. With everything ramped up and moderate CPU usage, it maxes around 400 watts.
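If anyone wants to check the governor and current clock without lscpu, here's a minimal sketch that reads the cpufreq sysfs entries directly. It assumes a Linux box where the cpufreq driver is active; the paths simply won't exist otherwise.

```python
# Print each core's scaling governor and current frequency from sysfs.
# Assumes /sys/devices/system/cpu/cpuN/cpufreq/ is present (cpufreq driver loaded).
from pathlib import Path

for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    gov_file = cpu / "cpufreq" / "scaling_governor"
    freq_file = cpu / "cpufreq" / "scaling_cur_freq"
    if gov_file.exists() and freq_file.exists():
        governor = gov_file.read_text().strip()
        mhz = int(freq_file.read_text()) / 1000   # sysfs reports kHz
        print(f"{cpu.name}: governor={governor}, current={mhz:.0f} MHz")
```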
  5. I'll expand on my test results later, but I wanted to let you know that the default cache ratio for RAM is 20%, 9.6GB in your case. I changed it via the Tips and Tweaks plugin; the setting "Disk Cache 'vm.dirty_ratio' (%):" is where you can change it. I had it set to 10% because I was having out-of-memory errors a while back, related to something I had misconfigured, not sure what anymore. When I changed it to 35% today, I did notice it stated the default is 20%. I've never really thought about changing it back to the default, mainly because I have enough sticks on hand to increase my RAM to 256GB. Also, realize that increasing the RAM cache is just going to put off the point at which your transfer slows down (there's a small sketch below for working out that point).
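For anyone who wants to work out where that falloff should happen on their own box, here's a minimal sketch that reads the live values from /proc on the server itself (Linux only):

```python
# Estimate the kernel write cache: vm.dirty_ratio percent of total RAM is roughly
# how much written data can sit in memory before writers get throttled to disk speed.

def read_dirty_ratio():
    with open("/proc/sys/vm/dirty_ratio") as f:
        return int(f.read())

def read_mem_total_kib():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])   # reported in kB
    raise RuntimeError("MemTotal not found")

ratio = read_dirty_ratio()
total_gib = read_mem_total_kib() / 1024**2
print(f"vm.dirty_ratio = {ratio}% of {total_gib:.1f} GiB "
      f"=> roughly {total_gib * ratio / 100:.1f} GiB of writes can land in RAM")

# Example: 10% of 96 GiB is ~9.6 GiB, which is about where my transfers fell off.
```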
  6. Just theorizing here, but it may be that when the second transfer occurs, the first is still being written out from RAM. Linux uses RAM as a cache for writes and lots of other things. So instead of one clean sequential write, the disk spindle has to move a bunch to write both the new data and the data that was still in RAM from the first transfer. They most likely aren't being written right next to each other, and the extra spindle movement is going to increase overhead. I have 96GB of RAM, and I believe I changed the amount of RAM used to cache writes. After parity completes, I will attempt to increase the RAM cache size and report results.
     Edit: just checked, and my RAM cache is set to 10% of RAM size. That means that up to 9.6GB will be used to cache writes, which makes sense because that is about when the transfer falls off for me. So after the RAM cache is filled, it is both writing that data to the disk and, I'm sure, writing the second transfer straight to the disk.
     Edit: Increasing the RAM cache to 35% did work. I transferred 32GB of files at full line speed. Once the cache fills up, that would drop off a bunch, especially with the parity operation going on now. I believe this is probably just normal behavior for a parity-protected array that isn't striped. Spindle overhead and parity calculations are going to come into play once the RAM cache is filled. I hope someone more helpful can chime in here; I believe I have done what I can as far as confirming what you are seeing. Ultimately I guess the answer is to disable parity during the initial data sync. I can say that I have experienced no issues with Unraid in the 3 years since I set it up. My 1TB cache is large enough to absorb all writes until the mover runs, and even when downloading TBs to the array with no cache, my internet speed is a bottleneck before the disks become one. I have 400Mbps down and all downloads run at max speed. Unraid is great for media serving and data protection. Because it isn't striped, even if you lose more disks than parity can recover, you don't lose all your data; you can still read the remaining disks on any Linux distro. If you need constant writes at your max line speed at all times, I suspect a striped solution is going to fit your needs better.
  7. OK, my cache transfer ran at full line speed regardless of share setting. Neither one had an impact on the transfer, but I am also only running gigabit. I have to wait for my parity to rebuild before I can run any other tests on this.
  8. I disabled parity and the transfer runs at line speed the entire time. I suspect this is why they suggest disabling parity until your initial data transfer is finished. Presently I am running a transfer to my cache drive and will update when it's finished.
  9. Well, it isn't helpful in solving the problem, but I can confirm I experience the exact same behavior. I duplicated your setup, and using TeraCopy or File Explorer in Windows, I see the same results. I am disabling parity right now and will confirm whether that solves it or not. Though, regardless of the disk fill-up setting, both transfers dropped off for me after about 8GB were written.
  10. The amount of data actually has nothing to do with parity speed. The only thing that matters is the size of the parity drives, in your case 10TB. I am only running single parity with an 8TB drive, and my monthly checks take ~18 hours.
      Edit: just checked the history and my parity time is ~18 hours, not 22 as originally stated.
      Edit: I'll try my best to simplify how parity works. Here's a link for more info: https://wiki.unraid.net/Parity#How_parity_works Drives store data as 0s and 1s, and your 10TB disk can contain 10TB of them: 01010101010 etc. Let's imagine you have two data drives and a single parity disk. Disk one contains 01010101. Disk two contains 10110110. Parity in this case would be 11100011. Working through it from the beginning: 0+1 is 1; 1 is an odd number, so set the parity bit to 1. In the 4th position, 1+1 is 2; 2 is even, so set the parity bit to 0, and so on for each position. If disk one died, you would calculate in reverse. Since the first bit in parity is a 1, and we know the first bit on disk two is also a 1, the first bit on the dead disk has to be 0, because that is the only value that keeps the count of 1s odd to match the parity bit. This is super simplified and ignores dual parity completely, but it's how I think about it when attempting to wrap my head around what is actually happening (there's a small code version of the same example below). Now you see why data size is irrelevant: data is simply stored as 0s and 1s, and the parity drive is only going to hold so many of those, 10TB in your case.
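If the worked example reads easier as code, here's a toy sketch of the same bits. It is a simplification (real parity runs bit-by-bit across entire drives, and this ignores dual parity), just the arithmetic from the post above:

```python
# Toy single-parity example using the exact bits from the post above.
disk1 = [0, 1, 0, 1, 0, 1, 0, 1]
disk2 = [1, 0, 1, 1, 0, 1, 1, 0]

# Parity bit = 1 when the data bits in that position sum to an odd number (XOR).
parity = [(a + b) % 2 for a, b in zip(disk1, disk2)]
print("parity:        ", parity)   # [1, 1, 1, 0, 0, 0, 1, 1]

# Pretend disk1 died: rebuild it from parity plus the surviving disk.
rebuilt = [(p + b) % 2 for p, b in zip(parity, disk2)]
print("rebuilt disk1: ", rebuilt)  # matches the original disk1
assert rebuilt == disk1
```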
  11. Once the data is written, the parity will be created at the same speed it would during any normal check. I do those monthly anyways. So worst case, you're just having to redo the parity an extra time.
  12. If I were you, I would screenshot the Main page so you have the disk assignments available. Then stop the array, change both parity drives to none, and start the array again. Once you are finished transferring files, do the same thing, assign each disk to the parity position it was in previously, and start the array. Then check parity and leave "write corrections to parity" checked. This should let you transfer way quicker, then build the parity later. I don't see a reason why that wouldn't work, but I suggest you read a bit and confirm. AFAIK doing it this way shouldn't lead to any negative consequences.
  13. Are those the speeds you're seeing on the Main page, or are you getting the speed from whatever you are transferring from? Did the disk being written to change when the next video queued up? If so, I suspect the system is still reading the data needed for parity from the first movie when the second movie begins writing. In reconstruct write, all disks are read except the one being written to. In theory, if the disk being written to changes before the parity is generated from the last transfer, you will run into a disk being written to for movie #2 while still being read to create parity for movie #1. These are all educated guesses on my part; they're logical, but someone could prove me wrong.
  14. Are you basing the speeds off of what the Main page says? As you can see, when I started transfers to two different disks, my parity speed looks similar to yours. My actual transfer speed did not change from ~1Gb/s (112MB/s). I am not sure if the listed speed is the speed at which it is updating parity, or if the Main page is just unreliable for reported disk speed. I can 100% say that the Windows transfer speed window never dipped from 112MB/s. To show this, I did the transfer again, watched my Main page list the speed as around ~55MB/s, then took a screenshot of iotop showing the actual speed.
  15. What happens if you change your share setting to high-water? I suspect that parity is being reconstructed, but it hasn't had a chance to do so because you keep changing which disk is being written to. As in, it's still reading to generate parity from the last movie you transferred when the next movie is being transferred to a new disk. If you minimize the number of disks you are writing to at a time, you shouldn't see the parity being written until the transfer moves to the next drive. I use high-water because I am able to leave more of my disks spun down when watching movies etc. You have 2 parity drives, so for you to lose data you would need to have 3 drives fail. Edit: I would stop the transfer, then change the share setting.
  16. I'm not sure what the problem is then, especially if you set it to reconstruct write and hit Apply under Disk Settings. I just tested my server's behavior during a large write: it writes the file to one drive, then reads the rest to build the parity. The drive being written to does not have much in the way of reads occurring. How are the disks connected? Are the disks directly connected to the motherboard SATA ports, or do you have an HBA present? To be honest, with you seeing high reads and writes simultaneously, I would try to set reconstruct write again. Like I said, for me, the disk being written to writes at line speed and has minimal read activity. If I fire up two transfers, then I start to see both high reads and high writes on my drives. Also, and I don't believe it has to do with your problem, but I would install the Tips and Tweaks plugin from the Apps tab and set your CPU scaling governor to performance. I was having some stuttering and other issues because the CPU speed was not ramping up and was stuck at 800MHz. When I set it to performance, it ramped up as needed. Conservative works as well.
  17. After it finishes, I would run it again and make sure it's 0 again. I've had them pop up when writing during an unclean shutdown before. As long as the error count is fixed and stays 0 on the next check, I wouldn't worry about it.
  18. I don't have much to add in the way of helping you, but I can say you should be able to write at the max speed of whichever drive in Unraid is being written to. I only have 1Gb Ethernet, and can saturate that easily; my array writes at ~112MB/s. In my system, the fastest disk is capable of ~150MB/s, so I don't see the point of 10Gb in Unraid unless you have a large cache drive to write to that can actually utilize the speed. Those 10TB IronWolfs aren't shingled, right? As in SMR drives? Final question: you are using MB/s correctly, right? As in 70 megabytes per second? I ask because you mention you could be seeing higher speeds with USB 2.0. USB 2.0's theoretical max speed is 480Mbps, megabits per second, while 70MB/s is 560Mbps (quick conversion below). If you mean that you are seeing 70Mbps, then something is definitely not working correctly.
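Since megabytes and megabits trip people up constantly, here's the two-line sanity check (8 bits per byte):

```python
# Convert megabytes per second to megabits per second.
def mbytes_to_mbits(mb_per_s):
    return mb_per_s * 8

print(mbytes_to_mbits(70))    # 70 MB/s  = 560 Mbps, already past USB 2.0's 480 Mbps cap
print(mbytes_to_mbits(112))   # 112 MB/s = 896 Mbps, roughly what saturated gigabit looks like
```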
  19. I would love to replace my original Lime-Tech Badge on my server! I'll try and post pics later if anybody is interested.
  20. Welcome! I have been using Unraid for almost 2 years now and have been amazed with its features and reliability. I can't comment on your Synology questions, but allow me to touch on the drives. You can add your two SSDs as cache. The 6TB drives can be added to the array, so long as the largest drive is parity; if you have the 6TB drives and a bunch of smaller drives, one of the 6TB ones would be dedicated to parity. You can choose not to run parity, but in my opinion that removes one of Unraid's best features, data protection. Keep in mind that if you choose to add the 6TB drives or the SSDs, they will most likely need formatting, so please be sure to transfer all the data off them first to be safe. What did you mean about combining the Debian server into Unraid exactly? Transferring the CPU and RAM over? That should work just fine if the sockets and memory are compatible. I've upgraded my server many times, even changing sockets from a dual core to a quad core, and finally to a dual-socket Xeon build. Unraid always picked up where it left off and never complained about the hardware change. Again, welcome to Unraid and the forums. If you have any other questions, don't hesitate to ask. I've found the people here are always willing to help, and quite a few are very knowledgeable.
  21. Would this explain the segfault as well? Any idea why the event wasn't detected by the BIOS? I pulled the first CPU's RAM and replaced it with 4x8GB sticks. I have had ECC errors with other RAM sticks, and those were all reported in the BIOS event log. Also, this machine was my ESXi host for a while. Is ESXi just not as in-your-face with system warnings? I consider this a good thing.
  22. I recently upgraded my server from an E3-1285Lv3 system to a dual E5-2680 system with 128GB of RAM. The RAM is new to the server and not carried over. The last couple of days I got a warning for machine check errors, and then today I got a segfault warning from Plex. I'm suspecting a possible bad RAM stick, but surprisingly the BIOS doesn't have any ECC-related logs. I am going to pull half the RAM and try to narrow it down some. Does this sound like the right direction? I am attaching diagnostics for help. Thanks in advance! The problems started yesterday at approximately 1am. tower-diagnostics-20190719-0146.zip syslog.txt
  23. Basically the title. When I went to swap motherboards and CPUs out, I happened to check the Stats page before shutting down. At some point during the last 3 weeks or so it was up, the stats showed a very high number in the RAM-used graph, in the negative bytes. Once the mobo, CPU, and RAM were changed out and the server restarted, I checked the Stats page again and the issue is still present. A screenshot and diagnostics are attached. Many thanks! tower-diagnostics-20180829-2241.zip
  24. This isn't a bad deal either: https://www.newegg.com/Product/Product.aspx?item=N82E16822149628 $110 after promo for the 5TB version. I have 4 of them currently, and the only problems I had seem to have been caused by some sort of PSU issue; once I swapped PSUs, the reallocated sector count quit rising. Both of these deals work out to around $0.022 per gig (quick math below), and the 5TB is going to give more storage for the bay it occupies.
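The cost-per-gig math, for anyone comparing deals (using the drive makers' 1TB = 1000GB):

```python
# Dollars per gigabyte for a drive deal.
def dollars_per_gb(price_usd, capacity_tb):
    return price_usd / (capacity_tb * 1000)

print(f"${dollars_per_gb(110, 5):.3f} per GB")   # $110 for 5TB -> about $0.022/GB
```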
  25. I don't need more than one x16 slot; I would have been fine with several x8 slots. I basically need room for a full-size GPU and 2 HBAs. I'm actually leaning towards installing ESXi rather than Unraid on this board. My X10SL-F mobo and E3-1285Lv3 CPU are both hardly being taxed other than the occasional Plex stream, and they also use much less power than I imagine this system is going to. I already have 96GB of RAM to put in it. Do you know if this problem only affects Unraid, or is it going to be a problem with ESXi as well?