Everything posted by Marino

  1. It is already set to reconstruct write. I thought this would be better when transferring more than 9TB.
  2. I don't know why, but the speeds don't seem normal to me. I'm copying files that are 20GB and larger.
  3. I have the 200-08P too. This one is for my Ubiquiti gear (CloudKey, three APs), along with a 200-26, and the house electronics run on a 200-26P. Good devices. But it's sometimes a bit annoying that the 08 is the only one whose UI differs from the others.
  4. I got the note. But it isn't fun anymore when both of my servers are in the living room and mostly not accessible. A 10G NIC, or even link aggregation, would be really nice. In my case I'm working on a MacBook Pro and using wired LAN for that. What cables are you using? Maybe there is a way for me to use this in the future, too. My apartment is fully cabled with CAT7, but there is no wall plug to use it. Unraid > 10G to Cisco SG200 (SFP+) > living room would be cool if I could use it with my MacBook. For something like this I really enjoy spending time on research. I have a 6-month-old daughter, so I have to set priorities, and these are the things I like to do. I'm crossing my fingers that your test is successful! Sounds really nice.
  5. Now I think you got me wrong: 3x12TB in total, not 2. One for parity and the other two for replacing old drives, plus 3x4TB to swap into another server. In total 24TB, with 14-16TB in use (plus data reordered from the other server).
  6. Preclearing finished successfully on both. But only one shows "precleared" in the FS column, and it did so long before it finished. At the moment both of them are being cleared.
  7. Many people said it would be better because it gives the disks a test run. You're right, it does stress the disks a bit to run 60 hours in one task at full load. On the one hand that's a good test; on the other hand it takes a long time and the disks get hotter than they would in normal use, which is a disadvantage. Sometimes, when it was very warm here, I took four hard-drive mounting brackets out of the freezer and placed them on the disks. There was no condensation, I checked for it, but it cooled the disks down by 5-6°C. I have to reorder some things. Externally I have 4TB of videos and photos, some data from my other server will go onto this server, and in addition I lost a 3TB drive on my other server and will move to dual parity, which takes another 4TB data drive. Two disks are going to be removed, the 1TB and 1.5TB ones. That's a loss of 9.5TB of data capacity on the server in total. So the two 12TB disks are going to be a little more than half full (14-16TB plus the reordering with the other server), and for that I need a second disk in this array. I don't want to use a 3-8TB drive for that when I have a 12TB parity.
  8. This annoys me a little. After nearly 60 hours of preclearing I could finally add the two 12TB disks to my array, but Unraid didn't recognize them as precleared and started clearing them again before adding. When this task is done (14 hours), I have to copy the data off 4 disks (9.5TB) and then remove those 4 disks. After removing them, Unraid has to calculate parity again. This whole thing takes forever. Don't get me wrong, preclearing tests the drives and gives them over 60 hours of running time, which is great. But I thought that with zeros on them I could add them instantly. Kind of. And finally, when this is done, I can add the other disks to my other array to build dual parity, copy the offline 3TB disk to the new one, and calculate parity. I always thought Unraid increases availability, but my servers will be offline for over a week...
  9. Two 12TB HDDs are in the post-read phase of preclearing, at 27%. One of them shows as "Precleared" but the other does not. Only 12 hours left before I can assign them, yay.
  10. I thought it would be less stressful for the data disks. I also assumed the drives would spin the whole time, but when I looked at my server a few seconds ago, the parity-computation speed had increased to 185MB/s and none of the data drives were spinning anymore while parity was being computed.
  11. Thank you. I was confused because I hoped to copy the parity data; if that works with a parity swap, why not here? Now I understand why I can't unassign the drive in step 2 and again in step 10: the first refers to the data drive and the second obviously to the parity drive. Then this should work perfectly, and I have to reconstruct onto the old parity drive (3TB) and not onto a 4TB drive as I originally wanted, because the old parity drive is placed as Disk7 and the 4TB is for swapping.
  12. Okay, if this only covers that special case, I'm not surprised there was no copy button. What is then the right way to replace an existing parity drive with a larger one? If there is no option to copy, which would be very nice, do I always have to compute parity from scratch, and is the only precaution I can take to remove the valid parity drive in case something happens during the parity build?
  13. Okay, so in this case, where I wanted to swap the 4TB for the 12TB drive, it couldn't work this way because I didn't have a data disk that I wanted to replace?
  14. I did a reboot (steps 5 and 8). Maybe I misunderstood the wiki. If I unassigned the parity drive in step 2, how can I unassign it again after the reboot in step 10? After the reboot (step 5) it wasn't assigned anymore. I left the drive in the case because I wanted to copy from it. Because I disabled array autostart, I didn't have to stop it (step 9). Step 12: where did I have to reassign it? What I've done: stopped the array, unassigned the old parity drive, started the array (there was no "Yes, I want to do this" checkbox), stopped the array, powered down, powered on (the array was not started, no autostart enabled), assigned the new drive as parity. Because of the missing "copy" button I assigned the old parity drive as a data drive (maybe that was wrong). But if there is a new parity drive and I want to replace a data drive (step 12), how can I restore data without parity and with one data device less?
  15. Oh, I am on 6.5.3 on both servers. But why didn't the copy button show up? If exactly the same thing happens on the other server, the parity could become invalid and I'd have problems recovering the data on Disk7.
  16. I uploaded this in the other thread before. This is my other server. When I am done here, I'll free up three 4TB drives. Disk7 has to be rebuilt. Two of the 4TB drives should become dual parity and one is for replacing Disk7. Because they're 4TB, I have to swap one parity drive before rebuilding onto a 4TB drive.
  17. You're welcome. I have to thank you for helping me solve this problem. So: "Thank you guys for your awesome help!" We can blame Marvell for this. After computing parity, adding two more 12TB disks, and copying from the old disks, I'll have only 2 data drives, 1 parity drive, and a cache. So, next week as it seems. That gives me 2 more ports on the Intel controller. That said, I have to fill about 15TB of data before I need them. If I need more ports, I'd go with an extra controller instead of using the Marvell one. @johnnie.black This is posted every time: https://wiki.unraid.net/The_parity_swap_procedure Maybe I misunderstood, but #13 says there should be a copy button to copy the parity data to the new drive. Am I wrong? I am doing this upgrade to get three 4TB drives out of this server. I had problems with my old Unraid server, which has one parity drive and 8 3TB drives in total. I have to reconstruct one drive because of cable issues. Two of the 4TB drives will build dual parity and the third is for rebuilding without touching the drive that caused the problem. Therefore I have to swap one parity drive and add one more (at the end). If the swap procedure is wrong, what is the right way to do it without computing parity for at least one drive? We can discuss this in my other thread if that's the better place. While I tried the swap (wiki), I managed to make the parity invalid. In no case should this happen on the other server, because I need a 4TB drive for reconstruction in a 3TB environment. Before I can do that, a parity upgrade (3TB to 4TB) is needed. If the 3TB parity becomes invalid in any way, I'll have problems reconstructing the drive that is offline! This is my other thread, where I have to do a parity swap when this server is ready, without losing parity, because I need it to reconstruct a disk.
  18. Preclear is done. I've connected the drive to the Intel controller and it works immediately, so I can blame the Marvell controller. I just wanted to do a parity swap by copying the parity disk, as the parity swap procedure in the wiki describes. But sadly there is no copy button, so my only option is to compute parity again. Because it's 12TB, it'll take 35 hours; copying 4TB would have been much faster. EDIT: I wonder if it is possible to upgrade the Marvell firmware on the Gigabyte mainboard.
  19. The log looked like it had never started, so I decided to end the whole thing and start from scratch. I was lucky: preclear could resume the clearing and is now at 0% of step 5. I wish I'd known that before; preclear would have been done in a few hours. So it'll take another 13 to 15 hours, and I could have saved myself some time researching this problem.
  20. Yes, it is in the dock at the moment. The LED is constantly on instead of flashing. The drop in temperature also suggests it is done, but the preview shows no success in step 5 and it doesn't say it is ready?!?
  21. Hmm. Preclear is almost ready, but now it is stuck. It says "unRAID's signature on the MBR is valid" and the preview shows "Step 5 of 5 - Post-Read in progress ...". The elapsed time has been frozen for about 9 hours and the disk is getting cooler: from 43-44°C while writing zeros down to 38°C. How can I see whether preclear is doing something or not?
  22. I misunderstood preclear when it showed almost 100%. This may take a while, because it is now in step 2/5 (zeroing). What I'll test next, once preclear is finished, is connecting it to the Intel controller in place of the cache drive. And I should take a look into the BIOS because of virtualization and Marvell. Maybe it's a controller problem with the Marvell, because 4Kn should work (as Gigabyte support said). If nothing helps, I'll disconnect P3 one way or another. I'll keep you up to date, even if it takes a "few" hours to finish preclearing.
  23. Good to know. I just saw that my cache disk (SSD on Intel port 6) isn't recognized anymore either. Preclear needs another hour, so I can test another port this evening. Maybe the cache disk cable was touched. I'll see.
  24. So swapping a disk is okay? Just move it to another port and reassign it in Unraid? Sooner or later there will be only 12TB disks, so for 7+ disks I'll need another controller, if that's the problem. EDIT: The disk that is working on it is a 250GB 2.5" WD Blue. Maybe that doesn't show the same effect as really large ones?!?
  25. Assuming the controller has problems, I would lose 4 usable ports. Could I take the cache off and place the 12TB drive there, then do a parity swap and replace it with another disk (parity) on port 1? I could use port 6 now (if it works) and after the parity swap connect it to port 1. Is this problematic, or is it possible to stop the array and assign a working disk to another port?
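Post 21 asks how to tell whether a stalled-looking preclear is still doing anything. One low-level check from the Unraid console is to compare the sectors-read counter for the disk in /proc/diskstats over a short interval; if it keeps climbing, the post-read is still running. This is a rough sketch of my own, not part of the preclear script, and the device name sdb is just a placeholder:

```python
import time

def sectors_read(diskstats_text: str, dev: str) -> int:
    """Return the cumulative sectors-read counter for one device.

    /proc/diskstats fields per line: major, minor, device name,
    reads completed, reads merged, sectors read, ... so sectors
    read is field index 5 (0-based)."""
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) > 5 and fields[2] == dev:
            return int(fields[5])
    raise ValueError(f"device {dev!r} not found in diskstats")

def preclear_activity(dev: str = "sdb", interval: float = 5.0) -> int:
    """Sample /proc/diskstats twice; a positive delta means data is
    still being read off the disk despite the frozen display."""
    with open("/proc/diskstats") as f:
        before = sectors_read(f.read(), dev)
    time.sleep(interval)
    with open("/proc/diskstats") as f:
        after = sectors_read(f.read(), dev)
    return after - before
```

If the delta stays at zero across several samples while preclear claims to be mid-post-read, the process is genuinely stalled rather than just slow to update its display.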
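Post 18's 35-hour estimate for rebuilding parity on a 12TB drive is consistent with a sustained average of roughly 95MB/s end to end. A quick back-of-the-envelope helper (my own arithmetic sketch, not an Unraid tool; drive vendors use decimal TB and MB):

```python
def parity_build_hours(capacity_tb: float, avg_mb_per_s: float) -> float:
    """Hours to pass over a whole drive once at a sustained average
    rate: capacity in decimal terabytes, rate in decimal MB/s."""
    seconds = capacity_tb * 1e12 / (avg_mb_per_s * 1e6)
    return seconds / 3600

# 12TB at ~95MB/s comes out near the 35 hours quoted above, while
# copying an existing 4TB parity drive at the same rate would take
# about a third of that, which is why the missing copy button hurts.
```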