Everything posted by jaj08

  1. And it worked on the rest; I have 3 environments: personal, small business, and small business testing.
  2. Just worked for me on one of my instances, now to move on to my others.
  3. NextCloud

    Looking forward to someone putting this together. Currently an ownCloud user, but it seems like following the main developer over to his new project is probably the smarter move.
  4. I installed the power down plugin and the Open Files plugin, so the next time this happens perhaps the open files list will help me track something down. Assuming it doesn't just lock up the interface again, of course. I suppose for now I sit and wait until the next time this problem happens.
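    In the meantime, the manual equivalent from a putty session would be something like this (I'm assuming the Open Files plugin is basically a front end for lsof; the paths here are just examples):

      # list what has files open under a user share
      lsof +D /mnt/user/Archive 2>/dev/null

      # or check a single data disk, which is usually quicker
      lsof +D /mnt/disk9 2>/dev/null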
  5. So tracking down what was happening during the time the system stops responding to file share requests is difficult, as this is actually used in an office environment. We use unRAID as a way to archive data off our front-line servers, so at any point there could be a few data copies going on at once. At one point a user thought he was triggering the lockups by renaming a folder, as he swore the system locked up each time after he attempted to rename a particular folder. Of course, when I then went and tried to rename the same folder, no lockups, and we actually went another month or so before the next lockup occurred after having a few back to back.

    I ended up forcing a reboot. On reboot I noticed the Active Directory component showed that it was no longer joined to the domain. Once joined back to the domain, all file shares once again worked as expected, with a disk consistency check in progress. I had also manually done a disk check on the flash drive before I booted it up.

    System specs:
    Supermicro X8SIL
    Intel® Core™ i3 CPU 550 @ 3.20GHz
    8GB RAM
    22 data drives, no cache drive

    In the past, when I tried to do a shutdown from the web interface, the clean shutdown procedure always seemed to lock up on the "syncing file system" step and would then never complete the shutdown process. This time I locked up the web interface trying to grab the diagnostics. I'm sure it's not as helpful, but here are the diagnostic reports after the reboot. stlo-unraid2-diagnostics-20160317-0928.zip
  6. Still nothing; the browser window has never generated an error, it just shows that it's spinning/working. And the putty session is not generating anything. Something in how the system is locked up is preventing these tasks from completing.
  7. How long does this usually take? I'm questioning whether the same issue that locked up the web interface is at play, as so far I have received no output in my putty session.
  8. So I have been having this problem randomly for quite some time now. In the past when it would happen I would reboot the server, although most of the time the reboot would hang, and I would end up having to force power the server off/on to get it back to life. Then on bootup a disk check would be forced, and all would work again as expected.

    The problem just happened again, so this time I thought I would try to dig in a little deeper instead of just rebooting the problem away. Right now I can get to the web interface without issues, and the initial listing of file shares displays, but I can't actually open any file shares when browsing from a Windows system. I actually went under Diagnostics and tried to download a report, but now the web interface appears hung after 30 minutes of no changes and no exported log files.

    Any ideas on how I can manually retrieve this information to assist in troubleshooting? Running unRAID 6.1.8.
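    In case anyone else gets stuck the same way, what I tried from a putty session was roughly this (I believe the command-line diagnostics tool ships with 6.1.x and writes a zip to the flash drive, but double check the paths):

      # collect the same bundle the web interface builds
      diagnostics

      # fall back to grabbing the syslog by hand if that hangs too
      cp /var/log/syslog /boot/syslog-$(date +%Y%m%d).txt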
  9. File system corruption should not really affect a parity run, as the parity process is not aware of file systems. However, all's well that ends well. Well, there goes that theory then. Strange; I will be keeping a closer eye on SMART checks and file system checks moving forward.
  10. Turns out I had some corrupted file systems that I documented in more detail here. http://lime-technology.com/forum/index.php?topic=41955.msg400882#msg400882 Parity is once again at a happy point on a 6tb drive.
  11. OK, so I'm a little concerned as to what initiated these problems, but here are the end results. After forcing the drive that was showing unformatted back into the array, I did the following:

    I ran a SMART extended test on every drive; every drive came back healthy, no new reallocated sectors, etc. I then ran reiserfsck on each drive. Two of my drives, disk9 and disk11, reported needing to run the --rebuild-tree option. I then manually backed up the data off these two disks; I believe disk9 had maybe 10 files that it reported it was unable to copy, all others seemed to back up. I then ran the rebuild-tree option, it performed some repairs, and I followed it up by running a regular check again. At that point I was confident that all file systems were healthy.

    The last step was to allow the parity to build; previously this would come to a standstill and never complete, I am guessing because of the corrupted files. This time parity rebuilt without issue.

    The original goal when I started this ordeal was to go from a 4tb to a 6tb parity drive. It seems this upgrade uncovered some issues. With the SMART tests passing, though, I am not fully convinced I should blame the hardware of disk9 and disk11, so I may just keep a close eye on things and plan to do some file system checks again in the near future.
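    For reference, the per-drive file system pass was roughly the following (md9 is just my disk9 slot as an example, and the array was started in maintenance mode so the file systems weren't mounted):

      # read-only check first; only disk9 and disk11 asked for a tree rebuild
      reiserfsck --check /dev/md9

      # after backing up what I could, the repair and a follow-up check
      reiserfsck --rebuild-tree /dev/md9
      reiserfsck --check /dev/md9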
  12. That's it, I see now where it has the red X and states the drive is being emulated. I do not have a proper parity in place at this point, so what's the proper way to just ignore the parity/emulated setup?
  13. OK, well, I decided to move this drive to another unRAID system, and while the GUI still showed unmountable/unknown file system, I was able to manually mount the drive, perform a reiserfsck, and then browse through some of the folders on this drive. I then placed the drive back in my primary unRAID server and it still shows as unmountable. I then did "mount /dev/sdq1 /mnt/t" and once again can see folders on this drive. Any ideas?
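    For completeness, the manual steps that worked were roughly these (sdq1 just happens to be the partition name on my system; mounting read-only is probably the safer way to do it):

      # check the file system on the raw partition
      reiserfsck --check /dev/sdq1

      # mount it read-only somewhere temporary and browse
      mkdir -p /mnt/t
      mount -o ro -t reiserfs /dev/sdq1 /mnt/t
      ls /mnt/t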
  14. I am sure; the system was upgraded to 6, but all drives remain reiser.
  15. I've seen that, but I see the following note: "reiserfsck --rebuild-sb /dev/md3 -> rebuilds superblock based on series of questions, answers MUST be accurate!" Is there a list of the proper questions/answers, or does it default to all the proper answers?
  16. So after experiencing a slow parity rebuild while I was upgrading the parity drive, it turns out I have a drive that is probably going bad. The SMART info doesn't show anything particularly wrong with the drive, but unRAID won't mount it. When I tried to do a reiserfsck it suggests using --rebuild-sb, as a file system is not being seen. Can anyone assist with the proper way to attempt recovery of the file system on this drive? I have no parity at this time, so I would like to attempt to recover this data. Every drive in this system is reiser, as I know none has been converted to the new default of XFS.
  17. Yes, it is reiser; none of my drives have been converted to XFS, and I haven't added a new drive since XFS became the default.
  18. OK, I thought it was strange that some were still running while others were not... But I think I now have a better idea of what is going on. Disk3 is now showing as unmountable after the last reboot, and reiserfsck suggests the --rebuild-sb option. Of course this problem happened during a parity upgrade, so I need to attempt to recover as much as I can, as I have no other way to recover this data. Uploading another diagnostic report now that I have a drive claiming to be unmountable. Any advice on recovering this data and forcing it to mount? The rebuild-sb option had a lot of questions to it, so I opted not to complete that process as of yet. stlo-unraid2-diagnostics-20150725-1519.zip
  19. So I decided to kick off a long SMART test on each drive, and sure enough a handful of them claim "aborted by system", whereas others continue to run. I tried to find a pattern, such as all being connected to the same controller card, but that doesn't appear to be the case. I wonder if this could be a power supply issue causing this strangeness. Disks 1, 2, 3, 4, 5, 8 and 17 all stopped running the long SMART test... the other drives are still showing in progress.
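    For anyone following along, the tests were kicked off and checked from the console with roughly this (sdb is just an example device name):

      # start the extended/long self-test on a drive
      smartctl -t long /dev/sdb

      # later, check whether it completed or was aborted
      smartctl -l selftest /dev/sdb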
  20. Looks like I'm having similar stability issues with the 4tb drive now... so something is giving me issues. I had done a couple of parity checks before replacing the drive, so it will be frustrating if I end up with a failing hard drive that is causing these problems and I can't recover the data manually. I am seeing the same temperature reading disappearing with the 4tb drive as well. I wonder if this could be a failing port/cable for the slot I use for parity; these are all hot-swap ports on a 24-drive case. I'm technically not back in the office until August, but I may drop in to try and move the parity drive to another slot before I depart.
  21. I noticed that when it drops to the KB/sec rebuild speed, the temperature reading shows blank for the parity drive and the SMART data won't load. When I attempt to stop the rebuild, the system is locked up, forcing a manual shutdown. For the time being I have put the 4tb drive back in to allow it to rebuild again and get back to a parity protected array.
  22. I am trying to upgrade from a 4tb to a 6tb parity drive using an HGST NAS drive. The rebuild starts at normal speeds, but at some point it drops down to KB/sec. Thinking maybe the drive went bad, I killed it, moved the drive to my desktop, and ran a wipe pass on it, but performance remained consistent through the entire drive during that process and the SMART info looked good. I thought I would try it again, so I put the drive back in and started a parity rebuild, and the same thing happened: it starts off at normal speed, and at some point drops to KB/sec. I believe the first time it showed it was only in the 1.x tb range of rebuilding... this time it appears it made it to 2.x tb.

    The motherboard is a Supermicro X8SIL-F with Supermicro AOC-SASLP-MV8 controller cards. Without ripping the case open I can't say for sure which controller the port is plugged into. I have done rebuilds and added a drive while running 6.0.0 recently; I'm not sure if it was 6.0.1 the last time a rebuild happened. Any ideas? Including my diagnostics. stlo-unraid2-diagnostics-20150724-0725.zip
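    Next time it drops I plan to check the logs while it is still crawling, along these lines (this is just where I'd look first, assuming the slowdown is a drive/cable/controller problem rather than unRAID itself):

      # recent kernel messages, looking for ATA resets or read errors
      dmesg | tail -n 100

      # same thing across the running syslog
      grep -iE "ata|reset|error" /var/log/syslog | tail -n 50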
  23. +1. I was just wondering the same thing, as I noticed version 6 seems to scan OK through Spiceworks now, but version 5 did not; it is probably using SSH to do its inventory. But with this change I was hopeful SNMP would now be supported. I use Cacti for pretty graphs of all our server utilization and it would be nice to add unRAID. Would a plugin allow support for this?
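    If a plugin did add an SNMP agent, the check from the Cacti box would just be standard SNMP, something like this (the community string and hostname are placeholders, obviously):

      # confirm the agent answers before building any graphs
      snmpwalk -v2c -c public unraid-host system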
  24. Good to go. I had to modify the my.service file like Leifgg mentioned; once I did that I can now access the GUI using your docker desktop app. This is perfect, thanks for the assistance.
  25. I wasn't aware that a docker GUI was now available; I would much rather not have a Windows system managing the GUI. Is this included in the default install, or do I need to do some searching to find this additional docker?