Micaiah12

Everything posted by Micaiah12

  1. Hey everyone. Having an issue with Radarr and SABnzbd that I thought I had resolved. The mover is having issues: it keeps posting to the logs that the destination path does not exist. I have screenshots of the config posted. Thanks.
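A frequent cause of a mover or import step reporting that the destination path does not exist is the two containers seeing the download folder under different paths. As a hedged sketch only (the host share names, ports, and image tags below are assumptions, not taken from the posted config), giving both containers the same host folder mounted at the same container path keeps the handoff consistent:

      # Hypothetical mappings -- adjust the host paths to your own shares.
      docker run -d --name sabnzbd \
        -p 8080:8080 \
        -v /mnt/user/downloads:/downloads \
        linuxserver/sabnzbd

      docker run -d --name radarr \
        -p 7878:7878 \
        -v /mnt/user/downloads:/downloads \
        -v /mnt/user/movies:/movies \
        linuxserver/radarr

With both containers agreeing on /downloads, the completed-download path one container reports actually exists inside the other.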
  2. Hey all. After an extensive teardown of my network with new Ethernet cables, a new switch, and resetting the router to defaults, the server is now back online. No idea what caused it, but we are working now. Thanks all!
  3. I have deleted the file and have also tried some new Ethernet cables without any luck.
  4. The router is most likely only a 100 Mbps version. However, I've never had a problem with it until now; over thirty days with this router and no problems. I've already changed the cables.
  5. Unfortunately no, the motherboard only has one. I'm not home right now, but when I get home I can boot Linux and test the Ethernet port. However, it's unlikely that it would just go bad like that; the motherboard isn't even a year old.
  6. That was the first thing I did. I replaced the Ethernet cable and tried every other port on the router. Still no luck.
  7. How is that possible when it's currently set to dynamic? My router is working just fine, and all other devices work. Does unRAID require something like a DNS flush?
  8. Hello all. I started having some issues with my server after 6 days of uptime. I went to update some of my Dockers, and after the update I could not reach my server from the web GUI. I also didn't have SSH access, so I logged in locally and rebooted the server. After the server came back up, I logged in locally and tested the internet with a ping command; its output was "Network unreachable". I changed the network config from static to dynamic and rebooted again. This did not fix it. Any ideas? I have posted the diagnostics. Thanks. tower-diagnostics-20170614-2100.zip
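For a "Network unreachable" result like the one described above, a few stock Linux commands run from the local console usually show whether the problem is the link, the address, or a missing default route. This is only a generic sketch (eth0 and the gateway address are placeholders, not values from the diagnostics):

      # Is the link up, and did the NIC get an address?
      ip link show eth0
      ip addr show eth0

      # Is there a default route? "Network unreachable" usually means there isn't.
      ip route

      # Can the router itself be reached? Replace 192.168.1.1 with your gateway.
      ping -c 3 192.168.1.1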
  9. We did have a power outage this morning; the whole town was down for a few hours. I do have a UPS on it, but come to think of it, I don't remember if I set the server up to use it. That may be something I need to check. Thanks though!
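If the UPS was never configured, it is quick to verify from the console. A minimal sketch assuming an APC unit monitored by unRAID's built-in apcupsd support (if a different UPS daemon is in use, the commands will differ):

      # Is the UPS daemon running at all?
      ps aux | grep [a]pcupsd

      # If it is, report the UPS status, battery charge, and runtime.
      apcaccess status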
  10. After the xfs_repair, it looks like it created a lost+found folder. I took the array out of maintenance mode and started it normally, and it looks like all the shares are there. I am about to verify the data. Any idea what could have caused that? And is there any way to keep it from happening again? Thanks.
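For reference, xfs_repair puts any inode it cannot reconnect into lost+found under its numeric inode number, so the entries have to be identified by hand. A small sketch, assuming the repaired cache file system is mounted at /mnt/cache (that mount point is an assumption):

      # List what the repair orphaned; the names are inode numbers, not file names.
      ls -la /mnt/cache/lost+found

      # Ask what kind of data each recovered entry holds before moving it back by hand.
      file /mnt/cache/lost+found/*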
  11. Here is the reply from the console after running that command:

        - scan filesystem freespace and inode maps...
freeblk count 5 != flcount 6 in ag 3
agi unlinked bucket 57 is 202361 in ag 3 (inode=201528953)
sb_icount 4544, counted 13824
sb_ifree 349, counted 243
sb_fdblocks 17397874, counted 8314928
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
correcting nblocks for inode 201528953, was 145361 - counted 145233
correcting nextents for inode 201528953, was 2365 - counted 2363
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 2
        - agno = 1
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 201528953, moving to lost+found
Phase 7 - verify and correct link counts...
Maximum metadata LSN (51:65442) is ahead of log (1:2).
Format log to cycle 54.

        XFS_REPAIR Summary    Sat Mar 18 17:51:38 2017

Phase           Start           End             Duration
Phase 1:        03/18 17:51:33  03/18 17:51:33
Phase 2:        03/18 17:51:33  03/18 17:51:34  1 second
Phase 3:        03/18 17:51:34  03/18 17:51:35  1 second
Phase 4:        03/18 17:51:35  03/18 17:51:35
Phase 5:        03/18 17:51:35  03/18 17:51:35
Phase 6:        03/18 17:51:35  03/18 17:51:36  1 second
Phase 7:        03/18 17:51:36  03/18 17:51:36

Total run time: 3 seconds
done
  12. Lol, my bad. I ran xfs_repair on /dev/sde, but it's a cache drive, so it needs to be run against the partition: xfs_repair -v /dev/sde1. Apparently it was in the footnotes.
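For anyone hitting the same wall, the "could not find valid secondary superblock" message is typical of pointing xfs_repair at the whole device instead of the partition that holds the file system. A minimal sketch of the safer order of operations, assuming the cache really is /dev/sde and the file system is not mounted (array stopped or in maintenance mode):

      # Confirm the partition layout first.
      lsblk /dev/sde

      # Dry run: report what would be fixed without writing anything.
      xfs_repair -n /dev/sde1

      # Then the actual repair, verbosely, against the partition.
      xfs_repair -v /dev/sde1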
  13. I ran xfs_repair on the cache drive and it exited with this error: "Sorry, could not find valid secondary superblock. Exiting now." Any ideas?
  14. Will do boss, I will let you know how it goes.
  15. Sorry my bad. tower-diagnostics-20170318-1717.zip
  16. Hey all, I'm starting to have some really weird issues with my unRAID server. It started when I was in one of my torrent Dockers and things started getting flagged as files not found. I thought it was a weird thing with one of the other Dockers moving the files. I went to the URL of the other Docker and it wouldn't load, so I restarted the Let's Encrypt Docker, but it kept saying server error. So I stopped the array and restarted, and now when I start the array I can see the disks but none of the shares. Docker is turned off because it can't find the docker.img file. I have the array in maintenance mode after another reboot. I have the logs attached. Hopefully someone has some suggestions. Thanks! tower-diagnostics-20170318-1707.zip
  17. Loving this Docker; it's been working like a charm. One question though: I have a signed SSL certificate. How do I replace the self-signed SSL certificate for ownCloud with my signed one in this Docker image? Thanks.
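Not an authoritative answer for this image, just a hedged sketch: linuxserver-style containers usually keep the web server's certificate pair inside the mounted /config volume, so overwriting the generated self-signed pair with the signed one and restarting the container is generally enough. The keys path, file names, and appdata location below are assumptions; check the image's documentation for the real ones:

      # Assumed locations -- verify against the container's docs before copying.
      cp mydomain.crt /mnt/user/appdata/owncloud/keys/cert.crt
      cp mydomain.key /mnt/user/appdata/owncloud/keys/cert.key

      # Restart so the web server loads the new certificate.
      docker restart owncloud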
  18. I posted this on Reddit as well: https://www.reddit.com/r/unRAID/comments/5v8t8h/any_way_to_run_multiple_dockers/ Basically, I would like to run two Docker images on different ports, say two Plex servers on one machine or two Cloud9 IDE instances. How can I do that? Thanks.
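In general Docker is happy to run several containers from the same image, as long as each one gets a unique container name, its own config/data volume, and a different host-side port. A minimal sketch of the pattern using a stock nginx image (the names, host ports, and paths are placeholders; substitute the Plex or Cloud9 image and its real internal port):

      # Instance one: host port 8181 mapped to the container's port 80.
      docker run -d --name web-a -p 8181:80 -v /mnt/user/appdata/web-a:/usr/share/nginx/html:ro nginx

      # Instance two: same image, different name, host port, and volume.
      docker run -d --name web-b -p 8282:80 -v /mnt/user/appdata/web-b:/usr/share/nginx/html:ro nginx

The same idea generally carries over to unRAID's Docker templates: add the container a second time under a new name and change the host ports and appdata path before applying.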
  19. (Replying point by point to the review of my diagnostics; the reviewer's comments are kept as written, and my responses follow the points they answer.)

The syslog is only 5 minutes long, and the array isn't started, so there's little data available to conclude much. But just on what I can see, I'll make some comments. I'm afraid there's no easy way to say it: your hardware kind of stacks the deck against you. While unRAID does run on old systems, it's going to be hard to get good performance or reliability from your setup.

* The motherboard is old, nForce based, with a BIOS from 2009. I've had a couple myself, so believe me when I say *please* consider replacing it! Yours is newer than the original awful ones, with many bugs fixed, but it still has some issues. The two network ports are prone to failing, and I think one of yours has already failed and the other isn't working right; more on that later. The nForce boards, and boards based on derivative chipsets, are notorious for spurious IRQ 7s. On one of mine, I was able to reserve IRQ 7, effectively removing it from assignment, which saved me from the failures associated with the kernel noticing a spurious IRQ 7 and shutting it off, effectively disabling everything attached to it! On the newer kernels we use now, I've noticed that the kernel usually recognizes an nForce board and removes IRQ 7 from the available IRQs, but on yours, for some unknown reason, it's still available, which *may* be a cause of trouble. I don't see anything using it, but not every device reports the IRQ it's using. And I strongly recommend checking for a newer BIOS, which will usually work better with newer technologies like virtualization.

My response: I am looking into getting a new motherboard soon. I am most likely rebuilding this entire server once I get the funds later this week.

* It's an older CPU but appears fast enough, dual core. But I don't think it will be good enough for virtualization, especially with that old BIOS. I recommend turning off virtualization.

* You have added a HighPoint RocketRAID card, a model I don't see in the Hardware Compatibility wiki. It's either not fully compatible or not configured right, as it's not providing the correct drive identifications and it's not providing SMART access. I don't believe the RocketRAIDs have a good reputation here, but then I don't know of anyone with that card. You might try the advice in this thread; perhaps it will help. If it does correct the drive IDs and SMART access, you'll have to do a New Config and reassign the drive (and set 'Parity is already valid'). I don't know anything about that card or how to configure it correctly. The SATA ports on it use the hptiop driver, one I've never seen before, so I have no confidence in it. If it were decent, I would have seen others using it successfully. That doesn't mean it won't work, but... I'd recommend instead an ASM1062-based card; they are cheap (under $15) and fast, fully supported, but only 2 ports.

My response: I am going to replace the card. Is the ASM1062 your only suggestion? I would prefer a four-port card.

* The evidence is odd and conflicting, but bonding is enabled, and it's trying to bond both onboard network ports together. But the second port isn't working, and the first port is only able to do 100 Mbps. Both are supposed to be gigabit ports. Turn bonding off, it's only complicating the situation, not helping, and see if you can get the first port to do gigabit. Better yet, disable both and add an Intel gigabit network card, highly recommended around here.

My response: Is that a setting somewhere in unRAID? (See the link-speed check sketched after this post.)

* Your parity drive (the Hitachi) has a very nice SMART report that, even after about 38,000 hours, shows no evidence of any mechanical problems and no evidence of ever having to correct bad sectors. Which would be great if we could stop there! But it also has an error log showing multiple bad sectors in the past, and one IDNF! The IDNF was over 450 hours ago, probably before you even thought about unRAID, and the bad sectors were even longer before. But what is odd is that neither is reflected in any way in the SMART stats. An IDNF is a sector ID Not Found, something that should NEVER happen unless something is seriously wrong: either mechanical issues or serious corruption in the low-level formatting, something we can't correct. Yet the SMART data shows no evidence of it, or of the previous bad sectors! And you have prepared the drive for unRAID, and it's working fine! I don't understand it, and would definitely monitor this drive very closely.

My response: It is a very old drive. It has passed several diagnostics as well as a surface scan without any errors. Like I said though, it is an old drive. I have a 4TB drive being tested right now to replace the Hitachi. The other drives are spares that have been sitting around the office, mostly pulled out of donated computers. My whole idea with unRAID was to utilize these kinds of drives; if it has problems with them, I will most likely go out and get brand-new drives.

* Disk 1 (ST31000524AS) has fewer hours (20983) and no bad sectors currently, but has remapped 329, with a Reported_Uncorrect of 9527. It looks mechanically fine at the moment, but has had issues in the past. I would closely monitor this drive too.

* Disk 2 is an unknown drive on the RocketRAID, without SMART, so I cannot say anything about it.

* Disk 3 (ST31500341AS) has 29 remapped sectors and no current bad sectors, but has had a few mechanical issues. And the drive temperature at some point reached 58, over its limit of 55 (100 - 45). I'd monitor it too.

* Disk 4 is an APPLE_HDD_ST1000DM003, an Apple drive made by Seagate. Hopefully it's better than some of the other Seagate DM models! I don't like the drop in the Start_Stop_Count, and the power cycling seems very high for a Seagate, but otherwise the SMART report looks fine.

* The system indicates an unclean shutdown, so once you try to start the array, it's going to want to do a parity check.

My response: Here is the strange part: it actually did the parity check. It ran and finished everything with only 2 errors, and it has done 2 more since the unclean shutdown. It's been up for 3 whole days now with no problems.

* And finally, what we really need to see is the diagnostics after it hangs, so what I would advise is to install the Fix Common Problems plugin first and start its Troubleshooting Mode. That will automatically save syslogs and diagnostics repeatedly to the flash drive, so that once it hangs and you restart, you can retrieve and post them. We can then see what happened at the trouble point.

My response: It's running now as we speak; however, there have been no errors or problems yet. I will update you on anything. In the meantime I am going to replace the motherboard and the CPU, as well as the RAID card, eventually. Do you have any suggestions on a motherboard/CPU pairing that would do well for just NAS operations? I don't really plan to virtualize much on this server aside from a Linux OS to code and develop in. I plan to use this server mainly for storage and Docker containers for home automation, so a good CPU/mobo pairing would be nice for that. Let me know if you have any suggestions, and I will keep you updated if anything goes wrong with the server. So far so good.
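Since the review above flags both a port stuck at 100 Mbps and several drives worth watching, here is a small generic sketch of how each can be checked from the console (eth0 and /dev/sdb are placeholder names, not taken from the diagnostics):

      # Show the negotiated speed and duplex of the first onboard port.
      ethtool eth0

      # Full SMART report for one drive; repeat for each device.
      smartctl -a /dev/sdb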
  20. Hello everyone. Ten days after my previous issues, I am having maybe the same or a different problem. The server worked flawlessly for 10 days with new hard drives. I have also moved some of the drives over to a SATA card to take load off the chipset on the motherboard. It was working just fine, and now it seems to be having the same problem. Whenever a parity check is started, it will go until about 80-90%, then the entire unRAID system will freeze. I can ping it, but I cannot access the web GUI, shares, or Dockers. I have to hard reboot it to get everything back online. Anyone have any idea what is going on? I have looked at the logs, but I am still learning how to read them. I have attached them for further review. My current setup is 3 drives on my onboard SATA controller and one drive on my SATA PCI card. Thanks all! tower-diagnostics-20170114-1650.zip
  21. Just an update: the parity check is complete after weeding out the bad drives. There were a few errors, but it said it corrected them all. So far so good. The data seems to be steady now; I'm not losing any folders or files, and the web portal hasn't frozen up. I'll replace the drives with good ones and carry on from there. Thanks all for your help!
  22. I have a RocketRAID controller. However, none of the disks are seen when I connect them to it, and I have looked all over for a solution for that model and have found none.
  23. Quoting the earlier reply: "In my experience that often means that the system is having problems with one or more drives not reading reliably. Looking at the syslog you are getting errors reported on ata5: ata5.00: ATA-8: WDC WD1003FBYZ-010FB0, WD-WCAW33LPFF9Y, 01.01V03, max UDMA/133"

My response: After taking a look at the dashboard, it does look like that drive is causing some problems. However, it has been disabled by unRAID for quite some time now due to the problems it was having. Is there any way it could still be causing this whole meltdown, even though it was disabled? Thanks.
  24. Hmm, ok, I will look at those two drives. If I remember right, one of those drives is an external. It does seem that once it starts its parity check, about 2-5 hours in, it will absolutely freeze everything. Bad hard drives very well may be the problem. I will remedy the situation when I get home and let you know. Thank you so much!
  25. I had to hard reset the tower today. Here are the diagnostics posted below. Thanks! tower-diagnostics-20170102-0646.zip