momoz

Members • Posts: 42
Everything posted by momoz

  1. It does the same thing. I tried GUI mode and Safe Mode; the downgrade works fine, though. I validated that it said it was safe to reboot, and the plugins were all good.
  2. I've never had a problem with upgrades until today... I was on 6.12.3, upgraded to 6.12.6, and it runs through the boot sequence, then the screen goes black and the server never "comes online". I'm not completely sure whether it actually finishes booting, but I can't ping the server. I was able to downgrade back to 6.12.3 and it's booted up; however (and maybe I just haven't noticed this for a few versions), after the bzimage boot process it scrolls a bunch of output and then goes to a blank screen and the monitor sleeps. There are no recent diagnostics in my log folder. Where should I start looking?
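     For anyone else hitting this, my rough plan is to grab logs from the console before the screen blanks; something like this (paths are from memory, so treat it as a sketch):
         diagnostics                  # should write a zip of logs to /boot/logs on the flash drive
         tail -n 100 /var/log/syslog  # last thing logged before the display went dark
     and to turn on the syslog mirror to flash (Settings -> Syslog Server) so the next failed boot leaves something behind.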
  3. Having an issue the last few weeks with VPN. I use NordVPN (WireGuard configs) and it seems the tunnel stops working (the UI works, but no data flows, and I'm unable to ping when in the console). Stopping/restarting the container seems to fix the issue. Sometimes I've also noticed the container doesn't connect to the VPN at all, which then causes the UI to never load (it never gets to "listening on port..."). Anyone else seeing this issue?
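     In case it helps anyone reproduce it, this is roughly how I check whether the tunnel is dead (the container name is just my example):
         docker exec -it delugevpn ping -c 3 1.1.1.1   # no replies when the tunnel is wedged
         docker logs --tail 50 delugevpn               # stops before "listening on port..." when it never connects
         docker restart delugevpn                      # gets data flowing again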
  4. Same issue here with a Win10 VM writing to an Unraid share. I've not seen anything confirmed yet that identifies what is causing it or why.
  5. OK, so I can get the /dev/dri folder structure to appear, but it seems that once I start a VM the folder goes away. Can you not have Quick Sync / HW decode in both Docker and VMs? Mike
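     For reference, this is how I'm checking which driver owns the iGPU (as far as I can tell, the VM grabbing it for passthrough is what makes /dev/dri disappear):
         ls -l /dev/dri                 # card0/renderD128 only show up while i915 owns the iGPU
         lspci -nnk | grep -A3 VGA      # "Kernel driver in use" flips from i915 to vfio-pci once the VM starts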
  6. What issue are you talking about? Sorry, I didn't quote the similar issue. My issue was that Deluge would start (the WebUI) but the daemon wouldn't start and would throw errors. I ended up solving it by deleting (renaming) the session.state file. That let the daemon start up and I'm back in business. Thanks, and sorry for the totally unclear post... LOL. Mike
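     For anyone searching later, the fix was roughly this (the appdata path and container name are just my examples; use wherever your Deluge config lives):
         cd /mnt/user/appdata/deluge
         mv session.state session.state.bad   # rename rather than delete, just in case
         docker restart deluge                # the daemon starts cleanly and recreates its state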
  7. Did anyone else upgrade to the latest Unraid, 6.3.1? That is the only thing that has changed for me... Could it be a Python issue too?
  8. I'd had 6.0 running since release and then had a disk failure that has turned into a nightmare. I ended up losing two disks at once. Since I was having to spend time on my Unraid anyway, I decided to enhance it (maybe a big mistake?). After recovering data from one drive successfully and getting the entire system stable with a complete, successful parity check, I upgraded to 6.2 and then 6.2.1. Another drive failed (failed SMART test), which prompted me to replace my Supermicro AOC-SASLP-MV8 with an AOC-SAS2LP-MV8, thinking it might be the controller. I rebuilt and was fully stable for a day, and then another drive threw errors, was disabled, and failed SMART tests. I ended up doing a parity swap to install my newest 8TB drives (Seagates). During the parity rebuild of disk5 I noticed that my array still has some lingering issues: the /mnt/user share throws a "Structure needs cleaning" error and my user shares were not working. I decided to remove disk5 and see how it looked as just the emulated disk5; same situation. How can I have had a successful parity build and then have the emulated disk5 show "Structure needs cleaning"? Any thoughts?
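     If it helps, the check I'm planning to run against the emulated disk5 is roughly this (array started in maintenance mode; pick the line that matches the disk's filesystem):
         xfs_repair -n /dev/md5          # read-only check if disk5 is XFS
         reiserfsck --check /dev/md5     # or this if it's still ReiserFS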
  9. That was my thought too. I replaced the SAS breakout cable and reseated it, and it happened after that. I've ordered a replacement controller just in case it happens again. Very odd, but this morning the parity check/correct completed and my array is stable once again... Whew!
  10. I've had an interesting weekend... not one I want to repeat anytime soon. Long story short, I had a drive failure which led me to replace the drive, and I ended up screwing up my disk1 (I was able to recover the most important stuff). During my journey I was able to bring the new replacement disk1 online, do a parity rebuild, and get an optimal array. So, on to recovering more data from the failed old disk1. I added it to the Unraid box and used SNAP to mount the disk. I performed an rsync which ended up hitting the failed drive's main issue (read errors). These read errors caused all the drives on the same SAS/SATA breakout cable to go offline, and my new disk1 went into a disabled state. The new disk1 is fine; I've done all the scans and it's good. I had to reboot to get the disks to show back up, and of course disk1 was in the disabled state. Given that I didn't really care about the data on disk1 at this point, I did the "trust my array" procedure and brought everything back online fine, parity check, all good. So, foolishly, I did the rsync again and it did the same darn thing again. Disk1, disk2, disk3 and the old disk1 are on the same SAS breakout cable on a Supermicro SAS/SATA 8-port. I've since removed the old disk1 and brought everything back online, and I will attempt further data recovery on a separate box and move the data over from a known good drive. Has anyone seen this behavior? Very odd that it causes the controller to whack out. Thanks, Mike
  11. "Try deleting the .gz files in /boot/config/plugins/dockerMan/ and then rebooting (assuming, of course, that your cache drive is NOT showing unformatted)." Very strange, but that did the trick. Thanks again to Squid!!
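     For the record, what I actually ran before rebooting was roughly:
         rm /boot/config/plugins/dockerMan/*.gz   # the cached template archives Squid mentioned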
  12. Hi guys, I upgraded to B13 and my Docker was working earlier today. I powered down my server to perform some other "fun" work and got everything back up, but now Docker will not start. I see nothing in the syslog to indicate problems. My docker.img file sits on a cache drive formatted BTRFS. I've read a little through the thread and saw that most of the issues were with an unformatted cache drive. My cache drive appears to be working fine. Any ideas on how to diagnose and possibly fix this? Thanks, Mike
  13. Thanks Squid, that is exactly what I ran, and the result was "no volume" or something; basically "nothing to process". I can't recall exactly.
  14. So, I did the reiserfsck this morning; I can't recall the exact message, but basically it found nothing. I am sure I hosed it up badly. At any rate, my new disk1 and the array are fully online, and I was able to mount the old disk1 with SNAP and recover the most critical data that was on the drive. Now I am recovering all the "tier 2" stuff that I wouldn't be so upset about losing. TBH, I caused my own nightmare by not paying attention and not having replaced a failed drive in over a year or two. I do think Unraid should give some extra "Hey idiot... you may not want to do that" warning, but honestly, this was all on me. Thanks all for the help! Mike
  15. So, no go on the reiserfsck... Can I add the old disk1 as disk6 to the array?
  16. So... the question I have is: what about the XFS format, when the original was ReiserFS? For the record, while I still remember it, this is the detail of what happened:
      1) Rebooted Unraid
      2) Disk1 was redballed (ReiserFS)
      3) Shut down the array, powered off, and replaced disk1 (2TB) with a new 3TB precleared disk
      4) Booted Unraid
      5) Selected the new disk as the disk1 placement
      6) Started the array figuring it would trigger the rebuild, and it showed an unformatted disk
      7) Selected the checkbox to format, and Unraid started formatting disk1 as XFS <-- apparently... BAD! UGH!
      8) Array showed the drive rebuilding
      If this doesn't go well and disk1 is empty, can I add the original redballed drive back as a disk in the array? How should I go about adding it back to the system to try and copy off the data?
  17. "ReiserFS is extremely resilient; you may still be able to recover the data from the new disk after the rebuild is complete. DO NOT WRITE ANYTHING TO DISK1. Rough outline of what needs to happen to start to try to recover disk1's data: 1. Finish the rebuild. 2. Restart the array in maintenance mode. 3. Run reiserfsck --check /dev/md1. 4. Post the output of that command here on the forum and solicit further advice." Even though it was formatted XFS when it was originally ReiserFS?
  18. How should I go about connecting the original drive back to see if my data is salvageable?
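     What I'm thinking of trying (the device name and paths here are just examples) is hooking it up outside the array, mounting it read-only, and copying off whatever it will give up:
         mkdir /mnt/olddisk1
         mount -o ro -t reiserfs /dev/sdX1 /mnt/olddisk1   # sdX = the old drive, not an array member
         rsync -av /mnt/olddisk1/critical/ /mnt/disk2/recovered/
         umount /mnt/olddisk1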
  19. Well, crap. It was ReiserFS and was redballed. I removed the drive, installed a new precleared disk, formatted it (XFS), and started the rebuild.
  20. So, I haven't had a failed drive in a year or two... I rebooted my Unraid and disk1 won't come back to life. I swapped in a brand new disk that I had precleared, set it as XFS, and it started to rebuild. Should I see data on /mnt/disk1 as it's rebuilding? When I telnet into Unraid and look at /mnt/disk1 there is nothing there. Thanks
  21. Needo - Any chance you can update your SickRage docker to fix the EDGE issue? I have the database error too. It has been broken for a bit. I finally fixed it and pushed it to Docker Hub so I could use it myself. You are welcome to pull from it if you want. It's just the same as Needo's docker, but with a fixed EDGE file. https://hub.docker.com/u/ninthwalker/
      docker command:
      docker run -d --name="sickrage" -v /path/to/sickrage/data:/config -v /path/to/downloads:/downloads -v /path/to/tv:/tv -v /etc/localtime:/etc/localtime:ro -e EDGE=1 -p 8081:8081 ninthwalker/brentrage
  22. Help! I did the "Update Now" in Sickbeard, which must have advanced the database. Now the docker app will not start, since the database version is higher than the needo Sickbeard docker supports. I tried updating the docker app, but it is already upgraded and EDGE is set to 1. How can I get this fixed?? Thanks, Mike. Update: Well, I jumped the gun posting, as I fixed it by reverting to the previous database version. After posting I recalled that SB backs up the database with a version number. Still, though, I need to figure out how to keep dockers current for situations like this. I had a similar issue with CouchPotato a few weeks ago and ended up rebuilding CP completely.
  23. Needo - Any way you could modify the mariaDB docker to allow cron and a backup folder? http://lime-technology.com/forum/index.php?topic=34202.msg318445#msg318445 Thanks for all the work!! I finally moved all of my plugins to dockers this weekend!
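     In the meantime, something along these lines from the host side might tide me over (the container name, password, and paths are made up for the example):
         # host-side crontab entry: dump all databases from the mariaDB container to a share at 3am
         0 3 * * * docker exec mariadb sh -c 'mysqldump -uroot -pEXAMPLEPASS --all-databases' > /mnt/user/backups/mariadb-$(date +\%F).sql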
  24. Looking for the same info... currently in the same state.