rcrh

Members
  • Posts: 227

Everything posted by rcrh

  1. I've been through the BIOS settings and can't find anything related to PAE. The BIOS info page shows 8GB installed. I've been trying to get a BIOS upgrade installed, but that's a whole other kettle of fish.
  2. That didn't work. Here is my syslinux.cfg:

     default menu.c32
     menu title Lime Technology LLC
     prompt 0
     timeout 50
     label unRAID OS
       menu default
       kernel bzimage
       append initrd=bzroot
     label Memtest86+
       kernel memtest
  3. OK, I did a bunch more searches and found more conflicting results, so I thought I'd go right to Lime-Tech. Here is my question to Tom: "Can unRaid 5.0.6 use RAM above 4GB?" Here is Tom's response: "Yes, unRaid-5 uses what's called a 'PAE' kernel which has a 64GB limit." So it appears I have a hardware issue somewhere or an unRaid config issue. Anyone else have any ideas?
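If Tom is right that the v5 kernel is a PAE build with a 64GB ceiling, the next thing worth ruling out is the CPU and the booted kernel themselves. A minimal sketch from the unRAID console (assuming a standard Linux shell; the `pae` flag only shows up on x86 CPUs):

```shell
# Does the CPU advertise PAE support? (x86-specific flag in /proc/cpuinfo)
grep -o -m 1 '\bpae\b' /proc/cpuinfo

# Which kernel build is actually booted?
uname -r
```

If the first command prints nothing, the CPU isn't reporting PAE and the kernel can't use it regardless of config.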
  4. Wow, I'm completely surprised by this. The old v5 wiki even says that 4GB+ is recommended. http://goo.gl/R8jWPp It sure seems like my entire hardware upgrade to a 64-bit processor was a waste :'(
  5. Yup, version 5.0.6. Does it really have a 4GB limit? I thought I'd seen lots of posts around here where people talk about using more RAM. I've even seen posts about special tweaks to limit RAM usage to 4GB in v5.x. Have I really missed that limitation?
  6. I'm running the Dynamix GUI so I get slightly different info. The Info button says "Memory: 32768 MB (max. 4 GB)" but the dashboard reports "Memory size: allocated 3291 MB, installed 8192 MB (max. 4 GB)". I also seem to be running around 80% memory utilization. All this seems to suggest only 4GB is being used by unRaid.
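When the GUI numbers disagree like this, the kernel's own accounting in /proc/meminfo is the tiebreaker. A minimal sketch (plain Linux, nothing unRAID-specific assumed):

```shell
# MemTotal in /proc/meminfo is what the kernel can actually address (in kB);
# convert to MB. If this reports ~3300 MB on an 8 GB box, the limit is in the
# kernel/boot config, not in the Dynamix display.
awk '/^MemTotal/ {printf "%d MB\n", $2/1024}' /proc/meminfo
```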
  7. So it ran through the night and only got to 1.25tb of the parity. I knew my data drives are only 1tb, so I killed the process, shut it down, and moved ahead with the upgrade of the next data drive. That data rebuild went perfectly, and I ran another parity rebuild that also worked as expected. I have one more data drive to upgrade, but it appears that everything is on track. Thanks again Garycase.
  8. I'm wrapping up a mobo/processor upgrade and am having an issue. My BIOS reports the full 8GB of RAM that is installed, but unRaid is only seeing & using 4GB. Since this isn't a new build and my old system was a 32-bit system, I'm wondering if there might be something in my config that is limiting unRaid. I do have two options on my boot menu and one is limited to 4GB, but that is NOT the option that I'm running. Anyone have any suggestions?
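For reference, a 4GB-limited boot entry in unRAID's syslinux.cfg is usually just the normal entry plus a `mem=` kernel parameter. A sketch (the label wording and exact value here are my assumption; check the flash drive's syslinux.cfg to see which entry actually carries it):

```
label unRAID OS (limit to 4GB)
  kernel bzimage
  append initrd=bzroot mem=4095M
```

If the entry you boot by default has no `mem=` clamp on its `append` line, the 4GB ceiling is coming from somewhere else.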
  9. I've just upgraded my parity drive and now have a 2tb drive followed by three 1tb data drives. I started a parity check and expected it to end at about the 50% point once it passed all of the data on the 1tb drives. But at the moment it is at 52% (1.04tb), says it is currently running at 3.71MB/sec, and estimates a finish in 4372 minutes. Oh, and the three data drives have spun down. It does seem to be writing new sync error corrections, so maybe it is zeroing out the balance of the 2tb drive. Anyone have any insight into this? I'm going to let it continue running, but I fear it is going to be awhile at 3.71MB/sec!
  10. ...FWIW I think you also chose a drive smaller than you should have for the replacement. With drives up to 8TB these days, the "sweet spot" in terms of cost/TB is 3TB (or even 4TB with some sales). Unless you have the 2tb drives on hand, in which case that price-to-size sweet spot is even sweeter!
  11. All in all, I'm happy with the process except for being forced to rebuild the data to the old parity drive. You should be able to finish the copy process and be left in the position you were in before you began: a working parity drive & one missing data drive. At that point you should be able to rebuild either to the old parity drive or to a new drive. I get that the process is called a "swap" and it did swap the parity & data around. But I don't think my case is unusual: if you're increasing the size of your parity, you're probably also increasing the size of your data, which in turn likely means you're not going to reuse your old parity disk. It just seems like this should be an option. In my case it would have saved me about 8 hours. Oh well. The bigger point is that it worked and I didn't lose any data.
  12. I think it worked. W00t! I'm going to run a parity check and then move on to upgrading the data drives to 2tb. Thanks to everyone here for their help.
  13. So the parity copy finished, and here is the message I get: "Stopped. Upgrading disk/swapping parity. Start will expand the file system of the data disk (if possible); and then bring the array on-line and start Data-Rebuild. Yes I want to do this." I'm going to click "yes" and let it "expand the file system". I guess I'll be able to put the second new drive in tomorrow.
  14. Not sure what else I could have done. I assigned it into the only slot available. The third & fourth slots are occupied by my remaining data drives. Yeah, with hindsight I see that, but I still had another new 2tb drive to put in. I had no intention of using the old parity drive, so rebuilding data to it was a waste of time. Even with this mistake, I'm still not sure why it didn't come back up and recognize the first new drive as the parity. I can only guess that bringing up the array does something to "lock in" the parity copy. There will be more to report tomorrow.
  15. OK garycase, we've established that I'm not being clear. Let me try again.
      1) powered down
      2) removed redball drive
      3) installed new 2tb drive
      4) powered up
      5) unassigned parity drive
      6) assigned new drive as parity
      7) assigned old parity as data
      8) checked the box for parity swap
      9) clicked COPY
      10) went to bed
      11) in the AM, confirmed that the copy had completed
      12) powered down
      13) removed old parity drive & installed second new 2tb drive
      14) powered up
      15) saw a message that I had too many missing drives
      15a) asked a question here
      16) powered down
      17) removed the second new 2tb drive and reinstalled the old parity drive
      18) powered up
      19) unassigned parity drive
      20) assigned the first new 2tb drive as parity
      21) assigned old parity as data
      22) checked the box for parity swap
      23) clicked COPY
      That was at about 12:30 and it's now 55% complete. I think this is absolutely everything that I did. Sorry if this is still unclear. But frankly, until the second attempt at a parity swap finishes, this is kinda pointless. Thanks for your help.
  16. garycase, that's EXACTLY what I did, and then I went on to add a second 2tb drive to replace the old parity drive. I didn't know that the parity swap did a data rebuild. But my plan was to swap the parity & then rebuild the data onto a second new disk.
      original setup: parity 1tb, redball 1tb, data 1tb, data 1tb
      target setup: parity 2tb, data 2tb, data 1tb, data 1tb
      When I thought the parity swap was done, I powered down and added the second 2tb drive. It was at this point that I got messages about too many missing disks. I'll report back tonight when the second attempt at a parity swap finishes.
  17. itimpi, OK, I see what you're saying. I'll report back tonight when the copy finishes. I can't believe I would have moved on if the process wasn't reporting 100% complete, but I've done stranger things before my second cup of coffee. Thanks.
  18. itimpi, "Not sure what you mean by this. The parity swap process requires the array to be removed and the data rebuilt to the old parity drive." I'm not sure where this "requirement" is. The copy process finished and the next thing I did was power down the system. Nothing "required" me to do anything else. It seems like if you MUST start the array and rebuild the data to the old disk then unraid should just go ahead and do this. In any case I'm doing that now. The parity copy is running and I'll start the array to rebuild the data in about five or six hours.
  19. Here is everything I did:
      1) powered down
      2) removed failing drive
      3) installed new drive
      4) powered up
      5) ran parity swap
      6) powered down (note that I didn't start the array at this point and rebuild the data to the old parity drive)
      7) removed old parity drive
      8) installed new drive
      9) powered up
      10) looked (with horror) at a message saying too many missing drives
      I'm going to put the old parity drive back in and see what things look like. *** The old parity is back in and I'm rerunning the parity copy. This time I'm going to start the array once the copy finishes. This does seem like a waste of time since I don't plan on using the old parity drive, but if that is what it takes, I'll do it.
  20. So I had a data drive red-ball and was taking the opportunity to move from 1tb drives to 2tb drives. This required doing a parity swap since the replacement drive was larger than the parity drive. Yesterday I removed the failing drive and ran the parity swap overnight. This morning I downed the system, removed the old parity drive, and replaced it with a second new 2tb drive. When I started up the system, the main console still lists the old parity drive, and if I change it to the new one it tells me I have too many missing drives and won't do anything. I thought this would be a two-step process but am now wondering if I have to do it in three or four steps as follows: 1) replace the failing drive and run a parity swap, 2) rebuild data to the former parity drive, 3) update parity (is this needed?), 4) replace the old parity (now data) drive with the new 2tb data drive and again rebuild data. Is this right? I really thought this would be a two-step process at most. Thanks in advance.
  21. I have a lot of data to move between two v5.x servers. I've mounted a share from the second server as CIFS and am using Midnight Commander to copy files. Is there a speed advantage to using rsync, or would there be a better way to mount the share? I'm hoping someone can offer up a trick that will save me DAYS of copying. Thanks in advance.
  22. Thanks to all for the feedback. This has been an interesting read. One thing I'll add is that while I mentioned Plex or Emby, I have no desire to have them do transcoding on the server. I want them to manage a centralized database and scrape metadata, but I don't need them to transcode. This should significantly reduce the demands on the processor.