elmetal

Members
  • Posts: 80

Everything posted by elmetal

  1. Been using this tool forever! Love it. Recently I removed the image (because I'm an idiot and didn't read the prompt when it asked if I had chosen the right Docker container to remove). Anyway, I re-added it via the CA app store, and here's the error I get on startup... https://pastebin.com/z6xzhPj7
  2. That's true. As someone who just upgraded from 6.8.3 to 6.9, how would you go about this? Move everything to the array, remove the cache pool, create a new cache pool, move everything back, change the Docker location to a folder in the Docker settings, add each Docker container and point it at the proper appdata location, etc., and everything should be good to go?
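     Not from the thread, but a rough console-side sketch of that plan, assuming the share settings are flipped in the GUI first (mover is the stock unRAID binary):

         # With every cached share temporarily set to "Use cache: Yes" in the GUI,
         # flush the pool to the array before removing it:
         /usr/local/sbin/mover

         # ...stop the array, rebuild the cache pool, set the shares to
         # "Prefer", then start the array and pull everything back:
         /usr/local/sbin/mover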
  3. Maybe a stupid question, but wouldn't libvirt.img also benefit from this change?
  4. Was it your image? I am having the same issue.
  5. I too am getting the "Array index [2] out of range, array size is [1]" error when trying to benchmark.
  6. It's a thread with 4 pages. Which instructions are you talking about?
  7. Hey guys. Been running unRAID for about 7 years now, and I just got ahold of a "new to me" server running a Phenom II X6. I am wondering if the newest build of unRAID supports Cool'n'Quiet, C1E, and all the other power-saving functions, so I am not needlessly wasting money on electricity. Also, how do I check that?
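     (Not an answer from the thread, just a generic way to check from the console, assuming a cpufreq driver loaded for the CPU:)

         # Which scaling driver/governor is active, and which clocks are available:
         cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
         cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
         cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies

         # Watch the clock drop at idle and ramp up under load:
         watch -n1 "grep MHz /proc/cpuinfo"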
  8. Dead links in the main post. Anybody have the final-version VMDK for 5.0?
  9. I have the exact same issue. Is there a way to tell unRAID to ignore the red ball and continue as if the drive were green, and then rebuild the truly missing drive? I am certain my red ball happened because of an unsafe shutdown; the drive is 100% OK.
  10. Hey guys, running 5.x. I have 10 drives in my array: 9 data drives and a parity. Here's what happened. A drive red-balled, and within a couple of hours I was ready to replace it, since I keep some warm spares in my server. Drive #8 was the one that red-balled, so I planned to replace it with a precleared drive I had ready to swap. As soon as I took the array offline, drive #3 disappeared. It didn't go red; it just disappeared. I've experimented with moving it to a different slot in my case (I have NORCO 5-in-3 enclosures), and no matter what I do, when I boot unRAID I cannot see that drive to put it back in slot #3.

      Now here's the fun part: drive #8 passes a long SMART test fine. I assume it red-balled due to a hard shutdown when a power surge killed the battery on my UPS. Then somewhere along the way drive #3 disappeared. The drive was there on the array, everything fine; as soon as I stopped the array, drive #3 was never seen by the OS again.

      My question is: can I tell unRAID "drive 8 is fine, just use it," and then rebuild drive 3 onto a clean drive? Because as of right now I have a dual drive failure. I thought the TRUST MY ARRAY procedure would work (I haven't tried it), but I want unRAID to accept my current drive 8 as if it had never failed. Is there a setting somewhere that makes unRAID forget that it red-balled? I'm willing to accept the risk.
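      (Side note, not from the thread: before trusting the drive, it's worth re-running the self-test from the console; the device name here is only an example:)

          smartctl -t long /dev/sdh    # kick off a long self-test on what was drive #8
          smartctl -a /dev/sdh         # afterwards, check overall health and the self-test log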
  11. So an M1015 in IR mode can be used as a datastore RAID 1, right?
  12. At that price I can just get another M1015, flash it to IR mode, and do that for under 90 bucks... right?
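      (For reference, the usual M1015 crossflash step from a DOS boot disk, using the stock LSI 9211-8i firmware filenames; this assumes the original IBM firmware was already wiped per the standard guides:)

          sas2flsh -o -f 2118ir.bin -b mptsas2.rom    # flash IR firmware plus boot ROM
          sas2flsh -o -f 2118it.bin                   # or IT mode instead, for plain passthrough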
  13. I deleted the log off my Dropbox. I'll have to get a new log soon.
  14. I've rebooted it a million times; the problem persists.
  15. Does this quit cache_dirs? That's the biggest thing for me right now: it doesn't stop cache_dirs, so the array can never unmount.
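      (A blunt workaround, not from the thread: kill it by hand from the console before stopping the array:)

          pkill -f cache_dirs    # stop any running cache_dirs so the disks can unmount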
  16. My cache_dirs will randomly stop running (I don't know if it has to do with the fact that I am running preclears). Does anyone have a quick script I can add to my crontab to make sure it's always running (say, run the script every 30 minutes)? I can whip something up right quick, but if someone already has one, that'd be sweet. Thanks, guys.
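      (Nothing came back in the thread, but a minimal watchdog sketch; the cache_dirs path and flags are assumptions, so point them at wherever your copy lives:)

          #!/bin/bash
          # /boot/custom/cdirs-watchdog.sh
          # Relaunch cache_dirs if it is no longer running. Named without
          # "cache_dirs" so the pgrep pattern below can't match this script itself.
          if ! pgrep -f cache_dirs >/dev/null; then
              /boot/cache_dirs    # add whatever flags you normally start it with
          fi

      and the matching crontab entry:

          */30 * * * * /bin/bash /boot/custom/cdirs-watchdog.sh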
  17. RAID 1 write performance isn't that great on the M1015 (or on any card that lacks a cache/battery, really). I would suggest the Dell PERC 5/i or 6/i. The 5/i is a pretty good card that is cheap, has a cheap BBU, and performs well for its price. The 6/i is better, 6G, and more expensive. Anything Dell is almost guaranteed to work out of the box with ESXi. The P400 isn't too bad; the only thing I don't like about HP cards in general is that they almost always gate extra features behind an "advanced license" of some sort.

      I currently use two SSDs as my datastores. What would be the cheapest way for me to RAID 1 those datastores (right now ssd1 is the main and ssd2 is the backup) with OK performance? I already have two M1015s (which I use for unRAID), so I'm familiar with the card and the flashing. I don't want to break the bank, but I also don't want to throw away the speed of the SSDs.
  18. https://dl.dropboxusercontent.com/u/116430/new%20%202.txt (too large for an attachment, apparently). Guess I forgot to mention: latest version (5.0.4).
  19. So I have about 8 shares, but for the purpose of this issue let's say I have 2: tv and backup. If I turn on NFS sharing on both and then try to mount the tv share elsewhere, I end up with the backup files; mounting backup also gives the backup files.

      So I went into /boot/config/shares/sharename.cfg (where sharename is the share name), and it looks as though all my shares have a different NFS ID, except these 2 share an ID (102). My shares have 100, 102, 102, 103, 104, 105, 106, and so on: they overlap on 102 and nothing has 101. I've tried manually changing one to 101, saving, and restarting NFS, but no go; it goes right back to 102.

      What do I do??? For now I've just disabled NFS on backup, but I need a solution.
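      (For anyone hitting the same thing, a quick way to spot the duplicate from the console; this just greps whatever line holds the NFS ID in each share config:)

          grep -Hi nfsid /boot/config/shares/*.cfg    # duplicate IDs jump out immediately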
  20. Did you ever find a fix for this? I detest SMB and I love how fast NFS is, but if I can't get this fixed I'll have to SMB it... Anyone?