lolsamsam


  1. Consider this solved. I made a knucklehead mistake: I had only plugged in 5 of the TEN 4-pin Molex ports on the case backplane. (I feel like it was a miracle the thing ran at all.) After rectifying my mistake, I turned spin-down back on and crossed my fingers... and it made it through the night without errors! Thanks for providing support.
  2. Quick update: I took out the SAS expander and the problem persists. I am now thinking it's possibly the HDD backplane? I recently switched from a NORCO 4220 case to an iStarUSA E4M20 case. I am also now having trouble stopping the array without resorting to terminal commands to stop disk activity. Thank you for the help so far; I am at a loss. Could this be XFS related, such that I need to repair the filesystem?
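If it does turn out to be XFS related, a read-only filesystem check is the usual first step before any repair. A minimal dry-run sketch, assuming the data disks appear as /dev/md1../dev/md3 (placeholder device names, not from this thread) and the array is started in maintenance mode; the echo only prints the commands instead of executing them:

```shell
# Dry run: print an xfs_repair check command for each array data disk.
# /dev/md1../dev/md3 are placeholder Unraid md devices -- adjust to your array.
# Drop the echo (with the array in maintenance mode) to actually run the checks.
for dev in /dev/md1 /dev/md2 /dev/md3; do
    echo xfs_repair -n "$dev"    # -n: no-modify mode, only reports problems
done
```

Running the check in no-modify mode first shows whether a real repair pass is even needed.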
  3. It has been a few weeks since I last tested this; I have kept my server up by making sure all the disks keep spinning. I did some research on my end and was unable to find a newer firmware for this 12TB model, and I continue to have this spin-down issue. I will try my second LSI HBA (the one I previously had slotted) to see whether the issue continues. I am going to be pretty sad to find out if it really was the SAS expander (I am in denial that it is, but my mind keeps going back to this). I saw a few pics of folks using this particular SAS expander, which is what inspired me to use it. It is so bizarre, though: it is JUST when the drives spin down that everything goes haywire, with read errors everywhere.
  4. Thank you for the response. Per your recommendation, I disabled spin-down and the errors have not occurred since. Though I am not sure I like the idea of keeping the drives spun up 24/7; is that good or bad? As for recent changes in hardware, I recently added a SAS expander, a RES2SV240: 4 ports to the HDD backplanes, 2 ports to the LSI 9200 HBA card. Could the SAS expander be the culprit? I have no SAS drives; all are SATA drives.
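With spin-down disabled, one way to confirm the drives really are staying spun up is to poll their power state with `hdparm -C /dev/sdX`. The sketch below just parses sample hdparm-style output (the device names and states in the heredoc are made up for the demo, not taken from this server):

```shell
# Count drives reported in standby. The heredoc stands in for real output of:
#   for d in /dev/sd?; do hdparm -C "$d"; done
# Device names and states below are illustrative only.
grep -c 'standby' <<'EOF'
/dev/sdb: drive state is:  active/idle
/dev/sdc: drive state is:  standby
/dev/sdd: drive state is:  active/idle
EOF
# -> 1  (one drive in standby)
```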
  5. Hi guys, I have been trying to figure out over the past few days what may be causing these read errors that I am getting on what seem to be random drives. I've attached the diagnostics file for reference. At first I thought it was just bad disks, but the SMART tests did not fail. I did a parity sync to rebuild the array and went on my way. The very next day I would get multiple disk errors, so I figured this can't be a bad disk. I remade the array via the "New Config" method, but now I am getting 5 disk errors, and it seems endless. I was thinking it's possibly cabling, so I have ordered replacement cables, but I am still not sure. I am hoping to get some assistance in confirming the diagnosis. Thanks in advance. s-ephesus-diagnostics-20200630-2232.zip
  6. I attached the most updated syslog. Currently, I notice that my Emby Docker just hangs; I am not sure what the cause is there. Sonarr did not hang this time, but it had only been up for an hour. Thanks for the help in advance. ephesus-diagnostics-20170205-1712.zip
  7. Thanks for the catch; I am going to work on this. Could this be the source of the issue? Maybe: filesystem corruption can break user shares, and the Dockers could be filling up with errors just like syslog was. So I ran reiserfsck on all my disks and got a "No corruption found" message. Is this normal? In the meantime, my array is in maintenance mode running a parity check as we speak.
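For reference, "No corruption found" is exactly what a clean reiserfsck check prints, so that result is normal. A dry-run sketch of the per-disk check; /dev/md1../dev/md3 are placeholder md device names (the echo prints the commands rather than executing them, since the check should be run with the array in maintenance mode):

```shell
# Dry run: print a read-only reiserfsck check for each data disk.
# /dev/md1../dev/md3 are placeholders for the Unraid md devices.
for dev in /dev/md1 /dev/md2 /dev/md3; do
    echo reiserfsck --check "$dev"   # --check is read-only; reports corruption only
done
```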
  8. Thanks for the catch; I am going to work on this. Could this be the source of the issue?
  9. I am having an issue where almost always a docker or two will crash after a day or two. Sonarr will always crash. As of today, Sonarr and the Emby Docker have crashed and are unable to shut down. I had a nice streak of 30 days running back in November, but now a docker or two will always crash, and I am at a loss as to what is causing it. I am not sure whether it is memory related or CPU related, but I have attached the logs in the hope of some assistance. Thanks in advance! ephesus-diagnostics-20170131-1803.zip
  10. I am currently in a power outage that hit while unBALANCE was running and moving files; I have a UPS that should allow for a clean shutdown. Is there anything I should expect when the power comes back on and I power up my server? Any issues? Thanks for the great tool, btw.
  11. Yes, thanks to the FCP plugin I was able to fix that issue: I removed the illegal characters and replaced them with something compatible. The current error from FCP is about moving the docker image completely to the cache, which I am still trying to figure out how to do. (FCP does give me a suggestion on how to do it.)

      If the docker.img file is the only thing sitting in the share, then setting it to be cache-only should do the trick. But if there are other files/folders in it, then you should probably delete the docker.img file and recreate it either on the root of the cache drive or in a newly created share set to be cache-only. No idea how the system responds if the docker.img gets moved to the array while containers are still running.

      Turns out the docker.img in the user share had not been modified since 2015; it could have been a remnant from an old configuration. I was able to delete the share without issue. The new/current docker.img is kept on the cache drive.
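The stale-image diagnosis above generalizes: checking a docker.img's path, size, and last-modified time is enough to spot a leftover from an old configuration. A small sketch; it creates a stand-in file in a temp directory so it runs anywhere, but on the server IMG would point at the real image, e.g. /mnt/cache/docker.img (an assumed path, not quoted from the thread):

```shell
# Inspect a docker.img's location, size, and last-modified time.
# Demo uses a fake file in a temp dir; on Unraid, set IMG to the real
# image path (e.g. /mnt/cache/docker.img) instead.
IMG="$(mktemp -d)/docker.img"
truncate -s 1M "$IMG"                                  # stand-in 1 MiB image
stat -c 'path=%n size=%s modified=%y' "$IMG"
```

An image whose modified timestamp is years old, like the 2015 one here, is a strong hint it is no longer the live docker.img.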
  12. Yes, thanks to the FCP plugin I was able to fix that issue: I removed the illegal characters and replaced them with something compatible. The current error from FCP is about moving the docker image completely to the cache, which I am still trying to figure out how to do. (FCP does give me a suggestion on how to do it.)
  13. Save us the trouble of asking for it next time. I apologize; I have attached it for reference. I'll edit the original post as well. ephesus-diagnostics-20160904-1000.zip
  14. Hi everyone, I have been trying to figure out this issue for a few weeks now and have been pulling my hair out with frustration! I have an issue where I randomly lose access to the array and a docker crashes. I can access the server via telnet to enter commands; however, after creating logs via the powerdown package, the server will not restart and I have to do a hard reset. The last time I ran the server, it was up for a day and then crashed; before I did a reset, I checked on a few things. Dockers accessible after/during the crash: Sabnzbd, Couchpotato, Plex, Plexpy, Crashplan. Dockers not accessible after/during the crash: Sonarr. VM installed (did not check whether it was accessible after the crash): 1 Win 10 VM. I have been an unRAID user since the 4.7 days and have never had an issue quite like this since I switched over to 6.0 and dockers; I am at a loss right now to figure out what the issue is. I appreciate the help in advance! I attached a copy of syslog; however, I have the entire diagnostics package that the powerdown package creates if needed. EDIT: Attached package. syslog.txt ephesus-diagnostics-20160904-1000.zip
  15. Hi folks, I've been trying to figure out a solution to this problem and have been looking everywhere for what the cause may be, and I decided I need to post because I just have no idea what to do. My problem is that the ethernet speed is capped at 10Mb/s. I was trying to upgrade from 4.7 to 5.0rc8, but it got complicated enough that I decided to revert to 4.7 and deal with setting it up again another time. When I reverted, my ethernet transfer was capped at 10Mb/s. This is from my ethinfo in unMenu:

      NIC info (from ethtool)
      Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes: 10baseT/Half 10baseT/Full
                              100baseT/Half 100baseT/Full
                              1000baseT/Half 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes: 1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 10Mb/s
        Duplex: Half
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000033 (51)
        Link detected: yes

      Any suggestions? I think I messed with something that caused it to go bonkers like this. (Setting new permissions for 5.0, maybe?) I have no idea. The unRAID server is sitting in a closet connected via Cat6 to a switch and then Cat6 to the router. Thanks in advance.
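A quick way to watch the negotiated link while troubleshooting is to pull just the Speed/Duplex lines out of ethtool's output. The heredoc below mirrors the unMenu paste above so the sketch runs anywhere; on the server it would be `ethtool eth0 | awk ...`:

```shell
# Extract negotiated speed and duplex from ethtool-style output.
# The heredoc stands in for: ethtool eth0
awk -F': ' '/Speed|Duplex/ {print $1 "=" $2}' <<'EOF'
Speed: 10Mb/s
Duplex: Half
EOF
# -> Speed=10Mb/s
#    Duplex=Half
# If the link stays stuck at 10/half, one possible next step (syntax per the
# ethtool man page) is to force a renegotiation:
#   ethtool -s eth0 speed 1000 duplex full autoneg on
```

A 10Mb/s half-duplex result despite gigabit being advertised often points at the cable, port, or a failed auto-negotiation rather than the OS, which is why re-checking after each change is useful.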