Burning_Beard

Everything posted by Burning_Beard

  1. Hello All. I recently got the error for /var/log being full from the Fix Common Problems plugin: "Either your server has an extremely long uptime, or your syslog could be potentially being spammed with error messages. A reboot of your server will at least temporarily solve this problem, but ideally you should seek assistance in the forums." I upgraded to 6.8.2 almost a month ago, so I don't have a long uptime. I first noticed an issue when I wasn't able to play anything from Plex. I tried to restart the Docker container, but it would fail with a 403 error. I stopped the array, started it again, and the containers seemed to come back after that. Since then I have been noticing some random CPU spikes for unknown reasons, and I have also been getting a high-temp alert from the cache drives at random. I have not done a full reboot yet; from reading around, it seems like that is more of a temporary way to clear things than an actual fix. Any suggestions on where to start? TIA diagnostics-20210619-1016.zip
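Before rebooting, a few terminal commands can usually show what is filling /var/log. This is a sketch using standard paths; the exact log file names on a given system may differ:

```shell
# On Unraid, /var/log is a small tmpfs (RAM disk), so one chatty
# daemon can fill it quickly. First: how full is it?
df -h /var/log

# Which log files are the largest? (largest first)
du -sh /var/log/* 2>/dev/null | sort -rh | head

# Look for a message repeating over and over at the end of the syslog
if [ -f /var/log/syslog ]; then
    tail -n 50 /var/log/syslog
fi
```

Whichever file is huge usually names the misbehaving service, which is a better lead for the forums than the generic "log is full" warning.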
  2. I've tried adding internal IPs into the allowed hosts section of the template. I assume that isn't correct? (Sorry, I'm very much a novice at Docker.)
  3. I'm having an issue accessing the webui: I'm getting a Bad Request (400) error. I'm sure it's user error on my part, but I'm not sure what I'm doing wrong. I filled in the required info on the template and left the non-required fields blank. I have since gone back and entered info into all the fields, and I still get the same results. Any suggested troubleshooting?
  4. Array is back up and parity is valid. It really boiled down to a hardware issue (with some user error thrown in there too). Either I got two bad cards and two bad sets of cables, or (more likely) there is a compatibility issue between my MB, the SAS card, and the HD cage backplane. The drives all function fine when connected via SATA, bypassing the SAS card. I ended up just going all SATA directly to the MB and shrank the array by two drives. As noted earlier, some of the drives were just placeholders, and since I don't need to worry about that going all SATA, I'm not really missing the drives at this point. Thanks to those who helped.
  5. These are old errors, I think. I have moved this drive to different connections and it has continued to have issues. I did a quick preclear on the drive and it completed successfully. Still no luck. I just started a more in-depth preclear to see how it responds.
  6. Well, I have made some progress. All files have been backed up to an external drive, so I have a little more freedom to try things. I went ahead and downgraded to version 6.3.5, as was suggested in another post from a few months back. This was surprisingly successful...to an extent. Once I did this, all but two of my drives came back online; Disk 6 and the cache drive stayed unmountable (the cache was mountable prior to downgrading). I then updated back to 6.5.2 and everything stayed as is. So, a big jump forward. All my files are still available at this point, which surprised me. I was able to get the cache drive working again by removing it from the array, adding a new cache drive, starting the array, stopping the array, and adding the original back. The Dockers seem to be working, although I'm sure the swapping of the drives may have caused some issues. Guess I will find out. My focus has mainly been on Disk 6. I have tried about everything, and I think the drive is just done for. SMART shows errors occasionally and passes fine other times. I have tried formatting the drive many different ways and Unraid still doesn't like it. Sometimes the drive will mount outside of the array and other times it won't. Strangely, I was able to plug it into a USB adapter on my Windows PC: Disk Management saw it fine, so I formatted it as NTFS and it worked just fine. It mounts outside the array as NTFS just fine too. Once I try to put it back to XFS on Unraid, the same mounting problems come back. Sorry for rambling. I have a new diagnostic attached. I'm out of ideas and open to suggestions. diagnostics-20180607-1946.zip
  7. I have found a little workaround that seems to be working. I left one array drive unassigned and started the array. I mounted the unassigned drive in UD along with the external, started up Krusader, and began copying the files from the UD drive to the external drive. It is painfully slow (I suspect the external being USB is the bottleneck), but it seems to be working so far. At the current pace it will be sometime next week until I have all the files backed up to the external. I am open to other ideas if someone knows how to make this faster, but until then I will keep on trucking like this. Thanks to those who have helped so far. I will post an update as things progress, and I ultimately hope to be able to put a (Solved) next to my post title sometime next week.
  8. Well, it looks like the card may be the culprit. I replaced it, and I am now able to mount the drives outside of the array. I tried to start the array with 'parity is valid' selected, and the drives still say Unsupported partition layout. The exception is Disk 3, which seems to be fine: it starts in the array and the files seem to be there. I didn't dig into the folders/shares much, but I saw files in there. I picked up an external drive to use as a backup drive. Since I am able to mount the drives in UD, I am hopeful I can back them up to the external and more or less start from scratch. Is there a good way to move all my data to the external with the drives in UD? My Linux/terminal skills are lacking, so the only thing I know to do is to move them via the shares on the network, which will take forever.
  9. I actually just pulled a drive, put it on a SATA-to-USB adapter, and it was able to mount. I am going to try plugging SATA directly from the motherboard to a couple of the drive cage ports (I have two open SATA ports) and see if the backplane of the drive cage is the culprit or if it's the SAS card. Converting back to the old card will take a little while, and I only have a short time before I need to head out. Thanks for the suggestion. The 'just because' drive add was partially planning ahead. The way the old card worked, every time a drive was added I had to reboot, go into the card settings, and set the drive to JBOD. So instead of needing to reboot every time I wanted to expand, I just added a few drives as placeholders, then upgraded them as needed for capacity. ETA: When plugged directly from the motherboard to the drive cage, I am able to mount in Unassigned. So it's either the card or the SAS cables. Good call, jonathanm. ETA2: Disk 3 is even mounting and showing files now. It is clearly a card or cables issue. I will need to get some more parts tomorrow and see how it goes. Seems like a good enough reason to duck out of work early. Thanks for the assistance so far. I'm sure I'll have more questions tomorrow.
  10. The plot seems to be thickening. I have unassigned the drives from the array and tried to mount them via Unassigned Devices. I am not having much luck with them mounting. Two of the drives mount with no issues; problem is, those two drives are empty. I don't believe they ever had anything written to them, though. (They were just some drives I had around that I threw in the array when I first built it.) So far, all of the drives that do have data on them (at least I hope they still do) will not mount. When I click mount, the button goes grey and says mounting, but the drive never mounts. Sometimes I get a green check mark, the page refreshes, and the drive is still unmounted. Now I'm starting to get very worried.
  11. Thank you for the heads up. I fear my parity may already be invalid, but I will keep the different commands in mind.
  12. That's not the answer I wanted to hear. I will take a look and see how UD likes the drives, and start working on a plan to get the data moved around. Thanks for the info.
  13. I have been thinking of other ideas, and I came up with a few, but I'm not sure they would work. First idea: do a New Config and set up my current drives (minus Disk 3, or even with the old Disk 3) in an array without the parity drive. Would that fix the Unsupported Partition issue, since I wouldn't need to select the parity is valid option? Or would I just make a bigger mess? Second idea: do a New Config with all drives as is, and let it run the parity check. Since the "new" Disk 3 won't mount, I figure this won't get me anywhere. Third idea: a New Config with the "old" Disk 3 in place, and let it run the parity check. The "new" disk was only in place for a few days; it only had a couple of DVR recordings added since replacing the old drive. I don't mind losing those few recordings if it will fix the issues.
  14. Thanks. I will give that a shot. (I'm stuck at work right now so it will be a while before I can try it)
  15. OK, I didn't realize that I could run xfs_repair on it without it being assigned and in maintenance mode. See, I'm already learning things.
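For reference, the unassigned-drive approach looks roughly like this from the terminal. The device name /dev/sdX is a placeholder; check the Unassigned Devices page or `lsblk` for the real one, and make sure the drive is unmounted first:

```shell
# xfs_repair runs against the partition, not the whole disk;
# on Unraid data drives the filesystem lives on partition 1.
first_part() { echo "${1}1"; }

DEV=/dev/sdX    # placeholder -- set to your real device before running
if [ -e "$DEV" ]; then
    # Dry run first: -n reports problems without writing any changes
    xfs_repair -n "$(first_part "$DEV")"
    # If the report looks sane, repeat without -n to actually repair
fi
```

Running the `-n` check first is the safe habit: it tells you what the repair would do before anything is modified.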
  16. Thanks for the suggestion. I will try to mount one of the disks with Unassigned Devices and see what it shows me. Since none of my disks are showing as available inside the array, if I do the Unassigned approach, that would mean I basically need to do a full rebuild of the array (start from scratch), right?
  17. I'll start this off by acknowledging that I have made a mess of this. I just hope I can salvage my data. I am having an issue with all drives (except Parity and Cache) showing Unmountable: Unsupported partition layout. Here is how I got to this issue: I had a RAID SAS card set up as JBOD that ran 6 disks at 3Gb/s each. It also did not pass SMART drive info beyond the green thumbs-up. I decided to upgrade to a SAS card that runs 6Gb/s with SAS-to-SATA breakout cables. This card also passes the SMART info. All was working great prior to the card upgrade. After getting everything installed, unRAID booted right up but showed 5 of 7 drives as "wrong" even though they were the same drives (the Parity drive was right, and Disk 3 was right). After reading some posts, I proceeded to do a New Config (preserving all current assignments) and then chose Parity is Valid before starting the array. This is where the many problems began. Upon starting, the Parity drive stayed green; Disks 1, 2, 4, 5, and 6 were green but have the Unmountable: Unsupported partition layout error. Disk 3 (which was green prior to the New Config) now shows Unmountable: No File System. Ugh.... I will also note that this Disk 3 is less than a week old (I replaced/upgraded a 2T drive that was starting to get a few errors), and the new drive was running fine before the SAS card upgrade. After some more digging around the forums and Google, I saw it mentioned that updating (or downgrading) versions of unRAID could clear the Partition Layout error. I was on 6.4.0 (I think) and upgraded to 6.5.2. Still no luck with anything. (I am still running 6.5.2.) I proceeded to try to xfs_repair Disk 3 in hopes that once it came back, the parity would be "valid" again. The repair tool seemed to have fixed a few issues, but hit a fatal error: couldn't map inode. I tried with -L and got the same error.
I am now at the point where I'm not sure what to try for fear of losing data (I realize Disk 3 may be lost). In a desperate attempt, I put my old Disk 3 in to see if maybe it would clear the Partition Layout error on the other disks...it doesn't. A couple of notes: I have run SMART tests on the disks and they return no errors. When the "bad" Disk 3 is installed I can't take any other drives offline, but when the "old" Disk 3 is installed I can. I have included 3 diagnostics files: one from before I went crazy swapping drives, and 2 with the different drives installed. TIA for any assistance, and sorry for the long post and for making a mess of things lol. BAD DISK3-diagnostics-20180530-1754.zip diagnostics-20180529-2222.zip OLD DISK3-diagnostics-20180530-1751.zip