Kilack

Everything posted by Kilack

  1. I migrated all the drives to XFS, took a long time.. but worth it... a stable system... loving it. Now I am really enjoying the step up from Version 5
  2. A really nice plugin, thanks. It would be great if there was a select all / deselect all option. There also seems to be a small display bug in gather mode. If I go into gather mode, expand a share, and select an item, it lists the drives that the item appears on, which is perfect:

     Item A - Disk 1, Disk 3

     But when I then select a second item it breaks:

     Item A - Disk 6
     Item B - Disk 1, Disk 3

     So it seems to get confused: it puts Item A's drives against the newly selected item, and what should be Item B's drives against the first selected item. Quite easy to reproduce; let me know if you need anything.
  3. Yeah this is exactly what I did last week, changed from the same cards over to M1015's (LSI). Just made sure I put the cables into the same ports, rebooted, no issues at all, just came straight up.
  4. Hi, I was using version 5 of Unraid up until about a year ago, then shifted to version 6, which I really like. Unfortunately I had freezes that would occur randomly; sometimes it would go two weeks, other times just a few days. It seemed to happen when there was a lot of disk access from multiple sources, e.g. something trying to write while another process was trying to read. What happened is that the web interface would stop responding and the shares would die. You could still telnet in, though, and what I found was that if I went into /mnt and ran an ls command against each disk in turn, I would eventually strike a disk that wasn't responding. I had two Marvell SAS cards and thought maybe it was an incompatibility with those (some known issues with version 6). Recently I swapped those out for IBM M1015's cross-flashed into IT mode, but the same thing is happening. Before, the ls command would hang on different drives; so far it has been the same drive three times in a row. No errors on it if I run parity checks etc., but for some reason it freezes up and stops responding entirely. Is this a type of behaviour a drive could exhibit? Anything else I should be looking for? It is a WD Red 3TB.
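A minimal sketch of that disk-by-disk probe, assuming data disks are mounted at /mnt/disk1, /mnt/disk2, and so on (the usual Unraid layout). A hung disk typically makes `ls` block forever, so a timeout flags the frozen one instead of stalling the whole check:

```python
import glob
import subprocess

def probe(path, timeout=10):
    """Return 'OK', 'TIMEOUT', or 'ERROR' for one mount point."""
    try:
        subprocess.run(["ls", path], capture_output=True,
                       timeout=timeout, check=True)
        return "OK"
    except subprocess.TimeoutExpired:
        return "TIMEOUT"   # a hung disk usually blocks ls; the timeout catches it
    except subprocess.CalledProcessError:
        return "ERROR"     # ls ran but failed (e.g. missing mount point)

if __name__ == "__main__":
    # /mnt/disk* as the mount pattern is an assumption; adjust to your setup.
    for disk in sorted(glob.glob("/mnt/disk*")):
        print(disk, probe(disk))
```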
  5. Hi, on 6.3 I ran out of space on one drive, so I moved a big folder over to another drive. Now in Windows I cannot get into that folder; it gives an access denied error. I have rebooted etc. but it is still the same, and I even ran a disk check on the drive where the data now sits. I checked that the folder was actually removed from the old drive too: yes. If I telnet into Unraid, all the directory structure is there along with all the files. I am stumped. Any ideas?
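If the files are visible over telnet but denied over SMB, Linux ownership/permissions left over from the move are a likely suspect. A hedged sketch that opens them back up, loosely modeled on what Unraid's newperms script does (not a copy of it; the uid 99 / gid 100 for nobody:users is an assumption, check with `id nobody` on your box):

```python
import os

def reset_perms(root, uid=99, gid=100):
    """Recursively reset ownership and open up permissions under `root`."""
    for dirpath, _dirnames, filenames in os.walk(root):
        if os.geteuid() == 0:          # chown needs root
            os.chown(dirpath, uid, gid)
        os.chmod(dirpath, 0o777)       # rwx for everyone on directories
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.geteuid() == 0:
                os.chown(path, uid, gid)
            os.chmod(path, 0o666)      # rw for everyone on files
```

Run it against the moved folder (e.g. `reset_perms("/mnt/disk2/Movies")`), then retry from Windows.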
  6. Two days ago I took the plunge and upgraded to 6.2.4 from a 5.xx beta after running it for several years with no issues. I had never upgraded earlier as it was working just fine and I am a firm believer in "if it ain't broke, don't change it." However, two parity drives was a big pull. I'm not worried about VMs or Docker etc., as I already have a standalone machine that runs those. I upgraded, did a clean install, wiped the flash drive, and started clean. All was fine, and the interface is nice! I like it. So everything was fine until about 20 hours in. Then I saw some apps complaining they couldn't see the share and starting to alert. Checked on a local computer: cannot access the share. Cannot access the webui either. Weirdly, one computer that was already attached can still see the shares, but no others in the house can. The computer that can still see the SMB shares cannot see the interface though; it just times out. I can still telnet into Unraid from any machine, but none can access the webui or the shares now, even though they could to start with. Please keep in mind: no network changes have been made at all, and the setup has worked fine for 2-3 years with no changes apart from disk additions. So I am stumped. I restarted Unraid and it worked fine with all PCs using it, but after another 8 or so hours, the same issue: new connections not tolerated. Some computers can still access shares but can't get into the webui. I'm tempted to roll back to beta 13, I believe it was, but I'm hoping someone can help me. Any ideas? tower-diagnostics-20170102-1921.zip
  7. I'd second that. I have been on beta 11 for a month or so now without issue, using these cards. I had the blk error on beta 12 and quickly moved off it. I can't really comment on parity check times, as I can't remember how long it took when I was using beta 12 for that brief period. Thanks Mike, I think I'll give it a try; I'd rather have some slow read/write speeds than a completely unusable setup. Correct me if I'm wrong: the best way, if I have a SASLP-MV8, is to downgrade to 5b11, right?
  8. I had two WDs fail in the same week, and they were only about a month old. Very rare though. Apart from that, I have some PCs with hard drives that are about 12-15 years old and still going strong; I have old Bigfoots that still work. The only drives I have ever had fail are those two WDs, and seriously, I have about 50 hard drives in computers ranging back to 386s. I don't throw hardware out; I always buy a new machine. But HDs, at least for me, have been very reliable. Still, I would never risk data without at least some protection (Unraid) and a backup of the really serious data (ZFS raidz2).
  9. Assuming you have a license for the new flash drive then yes.
  10. As for the file allocation method, it really is up to you, as in each circumstance you might want something different. For my TV shares I always use most-free, because I want new shows to go on the disk with the most free space; TV shows grow a lot over time, so new shows belong on the disk with the most room. For movies, fill-up or high-water would be suitable, because once you have a movie it doesn't grow the way a TV show does. Some like high-water because it distributes data more evenly, which they consider safer: if two drives fail, you lose less data.
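The difference between most-free and fill-up comes down to a one-line selection rule. A simplified sketch, for illustration only (real Unraid also honors split levels, minimum-free settings, and a more involved high-water threshold scheme):

```python
def pick_disk(disks, size, method):
    """Pick the data disk a new file of `size` bytes would land on.
    `disks` is a list of (name, free_bytes) tuples in disk-number order."""
    candidates = [(name, free) for name, free in disks if free >= size]
    if not candidates:
        return None  # nowhere to put it
    if method == "most-free":
        # new files go wherever there is currently the most room
        return max(candidates, key=lambda d: d[1])[0]
    if method == "fill-up":
        # stick with the lowest-numbered disk until it runs out of space
        return candidates[0][0]
    raise ValueError(f"unknown method: {method}")
```

With most-free, a growing TV share keeps landing on the emptiest disk; with fill-up, movies pack one disk at a time, keeping spun-up drives (and backup targets) to a minimum.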
  11. Sounds on the low side, but 60MB/s+ is definitely on the high side unless you are running SSDs or using a cache. I think the vast majority are around 20-30MB/s. I have the latest WD Black drive as parity and EARXs as data drives, and I get around 30MB/s. So yes, you should be aiming for more, but 60 is way unrealistic without using a cache drive.
  12. My thoughts are: unless there is some feature in 5 beta 12 that you desperately must have, don't bother upgrading. It is a beta. If what you are using now is working, why change? If it ain't broke, don't change it! Patience is a virtue; wait until it's a non-beta if you can.
  13. Doesn't sound good. I guess you have tried flashing it again, like running 5it.bat etc.? Or even the bat that wipes it; is that 3.bat? I'd go through them all again and hope that one might pick it up. Very weird though.
  14. I'm using beta 11 with the AOC-SASLP-MV8 and have no issues at all. Most people having issues seem to be on beta 12 and using two of these cards; I just have one in at the moment. I have a couple of M1015's that have been flashed to generic LSI firmware in IT mode, and I also got a couple of AOC-USASLP-L8i's (also LSI cards), but I haven't used them with Unraid yet. If you are going to buy something, go for one of the LSI ones. They leave more options open too, as they are supported by Solaris and BSD, whereas the Marvell ones aren't. It means if you ever did want to experiment with ZFS etc. you wouldn't need to buy new cards. I run both Unraid and some ZFS on Solaris, so in my opinion it makes sense to get cards that can be used by both, so I can swap them around.
  15. With regards to your setup, yes, it will spread them over 3 disks. Ideal? Who knows; it depends whether you want all 3 drives spun up when you access your movies. Some people seem to like the high-water allocation; I prefer the fill-up method myself. It means you generally use fewer disks, and it also makes backups easier as your data sits on fewer drives. Oops, after reading your message I see it does seem to be running at gigabit speed, so I have removed my first part.
  16. Unfortunately yes. I hope a second parity drive will be added eventually, because once you can have 20 drives there is a real chance of two drives failing, especially during a rebuild. Also, and I may be wrong here, at the moment if there are parity errors you can never be 100% sure whether it's a data drive that has the error or the parity drive. Drives can randomly flip bits, bitrot etc. With two parity drives it would be possible to compare both parity drives against the data drives and work out which drive is actually wrong, which would be pretty awesome.
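The limitation above can be demonstrated with the XOR arithmetic single parity uses. A toy sketch treating each drive as a byte string: a drive that is known-dead is fully recoverable, but a silent bit flip only makes the parity mismatch, without identifying the culprit drive (which is the motivation for a second, differently computed parity):

```python
from functools import reduce

def parity(drives):
    """Byte-wise XOR parity across equal-length drive images."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

def rebuild(surviving, p):
    """Rebuild the single missing drive: XOR of survivors and parity."""
    return parity(surviving + [p])

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
p = parity(data)

# A known-dead drive is recoverable:
assert rebuild([data[0], data[2]], p) == data[1]

# But a silent bit flip only tells you *something* is wrong, not where:
flipped = [data[0], bytes([data[1][0] ^ 0x01, data[1][1]]), data[2]]
assert parity(flipped) != p
```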
  17. telnet into the server and remove them directly there
  18. Remove the tick from the option "Correct any Parity-Check errors by writing the Parity disk with corrected parity," then run the parity check. If a drive fails a write it will automatically be removed from the array and a red dot placed next to it. The array won't stop; that is the point of Unraid: you can access the data of the entire array even with one drive removed. However, you need to fix things, because if a second drive dies you will lose data.
  19. Perfect, thanks ohlwiler. I ran initconfig, which blue-balled all of my drives. When I started the array, a parity sync started right up. Exactly what I wanted... only 556 minutes left. I'm assuming this parity sync will completely overwrite any parity that existed previously. Hopefully no more disks fail during the parity build so I can get my array into a protected state, and hopefully Newegg gets me a replacement disk soon. Worth buying an extra drive for situations like this.
  20. I just tried this on a second M1015, and when I get to the stage of running 5it.bat it comes back with "ERROR: failed to initialize PAL. Exiting." Any ideas? Update: ah, reading the thread I see it is caused by the chipset, which makes sense as I am flashing this one in a different PC than I used last time. OK, I'll have to take my desktop PC apart again and use that.
  21. Thanks guys, I'll leave them be for now then but keep an eye on those values.
  22. Couldn't help but notice this thread as I have all WD drives currently: EARXs, EARSs, EADSs, and an FAEX as parity. Should I be running wdidle on these? Is that the recommended practice? What kind of LCC should I be seeing per power-on hour, roughly? A small sample:

     EADS: Power_On_Hours 18092, Load_Cycle_Count 34107
     EARS: Power_On_Hours 3158, Load_Cycle_Count 5222
     EARX: Power_On_Hours 1162, Load_Cycle_Count 4205
     FAEX: Power_On_Hours 1163, Load_Cycle_Count 179

     Wow, parity is low. Is that because it's not a green drive and doesn't suffer from this issue? So should I run that program then?
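The "LCC per power-on hour" question is simple arithmetic on those two SMART attributes. A quick sketch; the ~300,000 load/unload cycle figure is the commonly cited rating for these drives, treat it as an assumption rather than a spec for your exact model:

```python
def lcc_per_hour(load_cycles, power_on_hours):
    """Load cycles accumulated per power-on hour."""
    return load_cycles / power_on_hours

def hours_until_limit(load_cycles, power_on_hours, limit=300_000):
    """Rough power-on hours left before the rated cycle limit
    at the current parking rate."""
    return (limit - load_cycles) / lcc_per_hour(load_cycles, power_on_hours)

# Figures from the SMART sample above:
print(round(lcc_per_hour(34107, 18092), 2))  # EADS, the worst of the bunch
print(round(lcc_per_hour(179, 1163), 2))     # FAEX parity: barely parking at all
```

Anything near one cycle per hour or more is usually taken as a sign the green-drive head parking is too aggressive and wdidle is worth running.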
  23. Yeah, I got one of those readers for my second Pro license, well worth it. I would have done it that way from the start but didn't even think about card readers. Definitely the best way to go.
  24. You have a similar board to me; I have the P5Q Pro. I managed to get into the LSI BIOS by going into the main BIOS and disabling the Marvell IDE boot ROM. Yours is probably the same, try that.

     (Quoted reply:) "Have same issue, I have a P5B Plus... going to check on that next reboot. I tried Brit's thing by removing all cards from the mobo but that didn't help. I can enter the Adaptec and my other cards' BIOSes, just not the BR10i. It doesn't really bother me; it would only save a little time at boot, as far as I understand, if you disable int13? Anyway, next reboot I'll check for disabling IDE, although I can't remember seeing this. I have a JMB363 onboard but am using that for my cache drive... will try disabling and see if I can get in."

     Well, if it's the same on your board, it will be under Advanced in the BIOS, then "Onboard Devices Configuration"; there is a Marvell section there, and you just need to disable the boot ROM. That won't stop HDs connected to the Marvell controller from being used either.