uproden

Members
  • Posts
    12
  • Gender
    Undisclosed

  1. I have discovered that you don't have to mess with the exports file: just hit F12 in your browser (developer tools) and change the maxlength of the "shareHostListNFS" text field to whatever you want, and it will save just fine. So the problem seems to be only in the front-end HTML; it's a shame it hasn't been fixed yet, since it should be a three-second fix.
  2. There seems to be some limit on the length of the rules you can specify in the web UI for an NFS share (Share --> NFS Security Settings --> Rule). I have a LOT of Linux machines that mount shares and need to specify more rules than the field allows. It seems to work fine if I put them in the exports file (see the sketch after this list), but any change made via the UI truncates the rules again. Is there some way around this limitation?
  3. Well, miracle of all miracles: somehow overnight the rebuild continued and is now at 75%, and the constant stream of errors in the logs has stopped, at least for now. I have used this HBA for about 4 years now with no hiccups at all. I do have a backup controller, but it's the same model; is this for sure a controller issue? Also, I wasn't sure what to post, really; I can grab the full diagnostics if that will help.
  4. Long-time Unraid user; I've really never had any huge problems until recently. Basically, I started having streaming issues with a new drive that I put in. I dealt with it for a few weeks, but it started to get bad, so I tried to verify that the drive was having issues, and sure enough any video files I tried to play from that drive (drive 10) would stutter. So I ordered a new drive and sent that drive back. I got the new drive in, replaced it in Unraid, and the rebuild started; it got about 20% in and reported read errors on the new drive. I didn't think much of it at the time because I have had DOA drives before, so I sent that one back and ordered yet another new drive. I put the new one in, and it failed after about 10%.

     Now I was thinking it's hardware. I had looked in the logs a few times during all of this and didn't see anything alarming, so I thought maybe it was a cable or something, and I reseated all of the SAS connectors to the backplane, the expander, and the HBA. I started a rebuild again: same thing. Somewhere along the way, errors started creeping into the logs, lots of retries, etc. So again I shut it down, re-seated everything, and replaced the SAS cables for the row on the backplane that contained the new drive. Started a rebuild: same thing. Then it started to go downhill: after starting the rebuild for the umpteenth time, all of a sudden another drive failed. I have two parity drives, so I'm kinda okay, but I'm seeing a pattern develop here.

     Which leads me to right now. I moved the new drive to another bay, and the array is rebuilding with the first replaced drive being emulated and one drive marked as dead. I have no expectation that the rebuild is going to work, and I'm scared s$@#$less that my Unraid is going to collapse in on itself. I'm attaching the latest syslog; any help would be unbelievably and greatly appreciated! nasur1-syslog-20180609-1932.zip
  5. Thanks, that's what I was afraid might be the case. If I am sure the 4 drives are totally dead, is there any way to bring up the array as it is now, data loss and all? Thanks again...
  6. I'm quite gutted at the moment, so I may not be thinking clearly, but it appears that somehow 4 out of my 15 Unraid XFS drives have failed (2 of them were parity). I was replacing some fans in my case, and when I powered it back on, 4 drives could no longer be "seen" by Unraid. I've tried different connectors and power cables; I took the 4 drives out and put them in another machine and cannot read them at all, while I can read all the rest of them. I'm still trying things, and maybe it's just something silly, but if in the end I have had 4 drives fail, is there any way to leverage the two parity drives to get any of the data back at all? Thank you in advance.
  7. I thought I had read that XFS has bitrot protection, but I could be wrong. I would really like to know whether it's possible / easy to read a single XFS drive outside of the array (see the mount sketch after this list). Does anybody know? I guess I could bring up a test server and try it all out; I just figured asking on here would save time.
  8. Please excuse me if this is a stupid question, but I've been searching for quite a while for an answer and I haven't come up with anything concrete. One of the reasons I chose Unraid for my home setup is that if some kind of unforeseen failure occurs, you can always pull your array drives out and read them individually in another machine; as long as the drives are physically okay, you will get your data back. I am currently using ReiserFS, and I want to switch to XFS for, at the very least, the bit rot protection it is supposed to provide. My question is: will I be losing the ability to "easily" read an individual drive should a failure occur (see the mount sketch after this list)? Again, I apologize if this is obvious, but I haven't found an answer on these forums, or I have not thought of the correct search terms when looking. TIA! Uproden...
  9. I finally upgraded from v5 to v6 B15 today, and everything seemed to go very smoothly. I believe I followed all of the instructions to a T, and everything appeared to be working; however, about 10 hours in, SMB stopped responding and then the web interface died. I was able to SSH in, and there was nothing in any log file that I could see that looked remotely abnormal.

     Things got much worse, though, when I decided to try to stop the array and reboot. Stopping Samba seemed to work fine, but when I went to umount, several drives said they were in use; fuser -mv showed they were in use by root with no other details (no other process I could see), nothing looked pegged in top, and memory seemed fine. I tried to kill a bunch of ancillary processes like Docker, but nothing allowed me to umount. In the end I resigned myself to rebooting, but reboot on the command line would not work; it said the system was going down, but it never did. I ended up having to reset the machine. Mind you, on v5 this machine ran for over a year with no problems whatsoever. The only errors I have seen in any logs are regarding the sensors, which I have yet to resolve, but I would imagine that should not cause this problem.

     I have disabled Docker, as I am not running any containers on this machine yet, and I am now, of course, running a parity check, which seems to be more "intense" than in v5, as it now takes a very long time to access the shares and folders remotely while the check is running. Anyway, I thought I would pass this along. I am not a Linux expert by any means, but I'm not a total noob either, and I believe I checked the basics and found nothing. I would really like some advice on what I could do / check / look at if this happens again (a diagnostic sketch follows this list) to see if I can determine what the problem is. I am hoping it was just a fluke....
  10. I have an existing Unraid setup that I need to expand from 1 Supermicro AOC-SAS2LP-MV8 card to 3 cards. I already own the AOC-SAS2LP-MV8 cards, so there is no option to change them at this time. I know there are some problems using more than one, but I think some people have gotten it to work with various settings; I will try to figure those out separately from this post. I would like to know whether anyone out there is successfully using 3 of these cards in their system right now, and what mobo they are using that has at least 3 PCIe x8 slots. TIA!
  11. First of all, I am very new to Unraid, so please forgive me if I'm missing something obvious. I woke up this morning to an unresponsive Unraid box, the first time this has happened. I was unable to get to the web interface, but I could SSH in. Poking around with ps, I saw about 400 instances of smbd running (a check/kill sketch follows this list). When I tried to kill them, nothing happened; I tried killall -HUP, kill -9, etc., and nothing worked. Finally I had to power-cycle the box :-( I am running 5.0.5. I looked through the syslog (didn't save it, oops, sorry) but didn't see anything obvious. The box has been running perfectly for about a week now, with no problems at all. I am running the cached drives script and Dynamix, if that makes any difference. Any ideas on what may have caused this, how to prevent it, or better yet what to look for if it happens again?... 20-hour parity check, here I come. TIA!!!
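
Sketches referenced in the posts above

For the truncated NFS rules in item 2, the workaround described there is to put the full rule list into the exports file directly. A minimal sketch of what that could look like follows; the share path, host addresses, and export options are made-up examples rather than anything from the original post, and whether exportfs is the right reload step on a given Unraid release is an assumption. As noted in item 2, the web UI may rewrite this file and truncate the rules again.

    # Hypothetical example: one export line carrying more host rules than the
    # web UI field accepts (path, hosts, and options are illustrative only).
    echo '"/mnt/user/media" 192.168.1.10(sec=sys,rw) 192.168.1.11(sec=sys,rw) 192.168.1.12(sec=sys,ro)' >> /etc/exports

    # Re-read the export table (standard nfs-utils tool).
    exportfs -ra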
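
Items 7 and 8 both ask whether a single XFS data drive can be read outside the array. On a generic Linux machine, an individual XFS-formatted data disk can normally be mounted on its own; here is a minimal read-only sketch, assuming the disk shows up as /dev/sdX and its data partition as /dev/sdX1 (both placeholders).

    # Identify the disk and its partition/filesystem layout (placeholder device name).
    lsblk -f /dev/sdX

    # Mount the XFS data partition read-only so nothing is written to it.
    # If mount complains about a dirty log, the XFS "norecovery" option can be
    # added, at the cost of possibly missing the last unflushed changes.
    mkdir -p /mnt/recovery
    mount -t xfs -o ro /dev/sdX1 /mnt/recovery

    # Browse or copy files off, then unmount when done.
    ls /mnt/recovery
    umount /mnt/recovery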
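
For the stuck unmounts in item 9 (drives reported in use with no obvious process), this sketch just collects the kinds of checks described in that post in one place; /mnt/disk1 is a placeholder for whichever mount refuses to unmount.

    # Show which processes hold the mount open (the fuser check from the post).
    fuser -vm /mnt/disk1

    # Cross-check with lsof: given a mount point, it lists all open files on
    # that filesystem.
    lsof /mnt/disk1

    # General health checks mentioned in the post: CPU/memory and recent
    # kernel messages.
    top -b -n 1 | head -n 20
    dmesg | tail -n 50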
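
For the runaway smbd processes in item 11, here is a quick way to capture evidence before power-cycling, using the same tools named in that post (ps, killall). Writing the snapshots to /boot assumes a typical Unraid install where /boot is the persistent flash device.

    # Count and snapshot the smbd processes before doing anything destructive.
    ps -C smbd --no-headers | wc -l
    ps -C smbd -o pid,ppid,stat,etime,args > /boot/smbd-snapshot.txt

    # Save the syslog somewhere persistent (item 11 notes it was lost on reboot).
    cp /var/log/syslog /boot/syslog-smbd-incident.txt

    # Try a graceful reload first, then force-kill if nothing responds.
    # Note: processes stuck in uninterruptible I/O (STAT "D") will ignore even -9.
    killall -HUP smbd
    killall -9 smbd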