limawaken's Achievements


Apprentice (3/14)



  1. Has any experienced Docker guru seen this problem before? Do I need to map some paths or something for Unraid to be able to get the logs?
  2. I managed to get the eclipse-mosquitto Docker container (the official version from Docker Hub) running, and it seems to work; using MQTT Explorer I was able to connect and see it working. Whenever I start a container I like to view the logs, which can conveniently be accessed via the GUI, but for this container it doesn't work: I only see an empty log. I have installed other containers from Docker Hub before and their logs all show up as one would expect. Is there some kind of config for getting the GUI logs to work?
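For reference, one common cause (an assumption here, not confirmed from the post) is a mounted mosquitto.conf that doesn't log to stdout. The Unraid GUI log viewer just shows `docker logs` output, which only contains what the process writes to stdout/stderr, so a config along these lines makes the log appear:

```
# /mosquitto/config/mosquitto.conf (the path the official eclipse-mosquitto image reads)
listener 1883
allow_anonymous true

# log to the container's stdout so `docker logs` / the Unraid GUI log viewer can see it
log_dest stdout
```

A config with `log_dest file /some/path` instead would explain an empty GUI log even though the broker itself is healthy.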
  3. For installing or updating drivers, is manually installing from the terminal the only way? Then every time the container starts, the drivers need to be installed again? Is it possible to set this up in the config file or something like that? My current CUPS setup is an Ubuntu VM, because I had to install my printer's drivers.
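One way to avoid reinstalling on every start is to bake the drivers into a custom image. A minimal sketch, assuming a Debian/Ubuntu-based CUPS image; the base image name and driver package below are placeholders, not from the original post:

```dockerfile
# replace "some/cups-image" with the CUPS image you actually run
FROM some/cups-image

# bake the printer driver into the image so it survives container restarts
# (printer-driver-gutenprint is just an example package; substitute your printer's driver)
RUN apt-get update \
 && apt-get install -y --no-install-recommends printer-driver-gutenprint \
 && rm -rf /var/lib/apt/lists/*
```

Build it once with `docker build -t cups-with-drivers .` and point the Unraid template's Repository field at `cups-with-drivers`; the drivers then exist in every container created from it.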
  4. I tried running diagnostics from SSH, but it seemed to get stuck. I also tried powerdown -r, and it got stuck too. Finally I had to hold the power button to force a shutdown. Unraid started back up and, of course, began a parity check. All the Docker containers appeared up to date, but some couldn't start, so I removed them and added them back. It's really great that we can re-add containers from templates, so that bit was super easy. Things seem to be back to normal now, but since I couldn't get the diagnostics, I guess we won't be able to figure out what went wrong. I think it most probably had something to do with Docker. Maybe just a glitch.
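For next time: Unraid has a command-line diagnostics collector, so even with the GUI stuck the archive can be grabbed over SSH before forcing a reboot (a sketch, assuming a reasonably recent Unraid release):

```shell
# collect diagnostics from SSH; the zip lands on the flash drive
diagnostics
ls -lh /boot/logs/
```

If `diagnostics` itself hangs, copying /var/log/syslog to /boot first at least preserves the system log across the hard reset.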
  5. Several of the Docker containers and plugins had updates, so I started with the plugins, and those updated fine. I moved on to the containers and did Update All, as I normally do. This time I noticed it was trying to stop the first container for a long time. I opened the Docker page in another browser tab; it showed the container was already up to date, but I couldn't access it, and the container's log said something about it being marked to be destroyed, or something like that. I think the containers were still being updated even though it looked like it was stuck stopping the first one, because the next containers also started showing up to date when I came back to the page. After a while I couldn't access the webUI, and none of the containers seemed to be working. I thought I'd give it some time and have now waited about an hour; the webUI is still down. I get the nginx internal server error 500 page, but SSH is still OK. VMs are also OK (pfSense is still working). Any suggestions on what I should do? Should I try restarting Unraid? What's the command for restarting? Or try to restart the webUI, if that's possible? I'm curious whether this has anything to do with my btrfs cache pool failing two days ago; the Docker image, appdata, and VMs were on it. I moved everything from it to a new XFS cache pool and everything seemed fine. Whatever the issue is, the Docker update seems to have triggered it. Maybe a corrupted Docker image?
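In case someone lands here with the same symptom (GUI returning 500 but SSH alive), two things can be tried from SSH before a hard reset. A sketch, assuming stock Unraid; the rc script path is an assumption based on Unraid's Slackware base:

```shell
# restart just the web server; the Unraid GUI sits behind nginx
/etc/rc.d/rc.nginx restart

# if the GUI is still dead, a clean reboot that stops the array properly
powerdown -r
```

Either way is far gentler than holding the power button, since the array gets stopped before the restart.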
  6. Oh, I see... I always thought that turning on less secure app access was needed for other applications to use Google SMTP. I set up app passwords, and Google basically gave me a random password that I pasted over my current password (in my Unraid SMTP settings). All my other settings were unchanged. I hit Test and received the test email. Interestingly, after setting up app passwords, my option for less secure app access disappeared, so I had to start setting up app passwords for my other things. Phew, that caused me some alarm; I really thought that without the less secure app access option we wouldn't be able to use Gmail SMTP! Thanks, guys!
  7. I received a notice from Google that less secure sign-in will not be available after May 30. I have been using Google SMTP to send email notifications, so I believe this change will affect me, or am I mistaken? What changes would I need to make in order to continue getting email notifications from my server? I use Google SMTP for several other things, so it would seem that this change will affect several other devices and applications as well. Are there any suggestions, or alternative SMTP services that others are using? Thanks!
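For anyone else getting the same notice: Gmail SMTP keeps working after the cutoff; you just authenticate with an app password (which requires 2-Step Verification on the Google account) instead of the account password. The SMTP settings themselves don't change. A sketch of what the Unraid notification settings end up looking like (field names may differ slightly between Unraid versions):

```
Server:   smtp.gmail.com
Port:     587 (STARTTLS) or 465 (SSL/TLS)
Auth:     yes
User:     yourname@gmail.com
Password: the 16-character app password Google generates at
          https://myaccount.google.com/apppasswords
```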
  8. Yes, that is something to be very concerned about... and I really do need that alternative routing solution you mentioned. With the My Servers plugin I think I could get the most current flash backup, but that would add a couple of steps to getting back up and running. It'd be nice if it could always be ready to go, but I guess it's better to be safe.
  9. Sorry, those diagnostic reports are like a foreign language to me... What tipped me off was looking at the container's (Shinobi's) log.
  10. Maybe for me, having two disks in a pool kind of doubles the probability of corruption. It could be a poor connection between my motherboard's SATA controller and the disks (SATA cable, drive cage). The last memtest I ran, a couple of months ago, went fine. Incidentally, there was a lightning storm last month that knocked out my server... I had to replace my server's power supply, UPS, and network switches. Although that was a couple of weeks ago, maybe that sudden unclean shutdown contributed to the btrfs corruption.
  11. I think I had this before... I thought my USB was corrupted or something, and I was panicking and sweating... I shut the server down and, after calming down a bit, pulled the USB and blew on it like those old Nintendo carts... plugged it back in, and it worked again... gave thanks to the Lord. I really need to rethink my dependency on my server; it's like my whole life depends on it. Very bad in your case; maybe it's a bad USB stick.
  12. I had this problem before. Mine was caused by a Docker container (Shinobi). The disk Shinobi was storing recordings on was not available, so a lot of temporary files accumulated and used up my RAM. After all, Unraid basically runs from RAM. Maybe try looking closely at your containers? Yours could be a completely different issue, so this might not help you at all, but hopefully it does.
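To actually see where the RAM is going, generic Linux commands work from the Unraid console. A sketch; /var/log and /tmp are the usual RAM-backed mounts on Unraid, but check the `df` output for tmpfs/rootfs lines on your own box:

```shell
# how full are the RAM-backed filesystems?
df -h /var/log /tmp

# biggest directories under /var/log, largest last
du -xh --max-depth=1 /var/log 2>/dev/null | sort -h | tail -n 5
```

If one directory dominates and keeps growing, the container writing there is the likely culprit.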
  13. I actually hadn't thought of doing it this way... I was able to copy some files off the cache disk, so I guess all the data was intact, but would mover still work, since the filesystem was already corrupted and the cache had become read-only?
  14. I would like to copy my Unraid USB to a spare and have it ready to go in case my current one suddenly dies. I read the guide for replacing a USB, but it seems to involve replacing the current license. Basically, I want to set up another USB with the exact same config as my current one, so that if I swap the USB everything runs like normal: all the VMs and containers start up, same Unraid server name and IP, etc. Can this be done? I understand I would have to purchase another license. Would these be the correct steps for setting it up: 1. use the USB Creator tool to set up the new USB; 2. use another PC to boot it and register the license, since my pfSense runs on my current Unraid (I should be able to register a new license with my current sign-in, right?); 3. copy the config folder from my current USB over to the new USB, but don't overwrite the new Pro.key. A somewhat related question... if I had to use a new USB with a trial license on my server right now (say the old USB had already died), would I still be able to start up my pfSense VM so that I can get connected to the internet and request a replacement license?
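Assuming the standard Unraid flash layout, step 3 could look something like this from the Unraid console once the spare is mounted (the mount point and label below are made-up examples, not from any real setup):

```shell
SRC=/boot/config                     # config folder on the current flash
DST=/mnt/disks/UNRAID_SPARE/config   # same folder on the spare (example mount point)

# copy everything except key files, so the spare keeps its own license
rsync -a --exclude='*.key' "$SRC/" "$DST/"
```

Re-running this after any config change keeps the spare current.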
  15. Hello guys. For years prior to this I used only a single-drive XFS cache and never faced any problems. Last year I decided to give multi-disk cache pools a go, to have a bit of peace of mind in case of a cache disk failure. I know jonathanM isn't a big fan of btrfs, and despite coming across several hints throughout the forums to avoid btrfs like the plague... I thought I'd give it a try anyway. So I got two new 1 TB SSDs and set it up. I only used it for 8 months, and I have had to rebuild my cache pool 3 times. Same problem each time: VMs and containers suddenly crash. I use a pfSense VM, so I know quite soon when the problem hits, because everyone starts asking if the internet is down; it's usually most inconvenient when this happens. Usually the logs are filled with btrfs errors and the filesystem becomes read-only. There are not many topics on repairing btrfs filesystems, so rebuilding seems to be the easiest solution. I don't know if it's really btrfs reliability... it could be my hardware (still running on a pretty old i7-3770), but the disks work fine as individual XFS disks. I think it would be better to set up both disks as single-disk cache pools and clone the contents of the cache disk to the other disk. If the cache disk fails, I can quickly point my shares to the cloned cache pool. I have not experienced a single SSD failure in all my years, so I shouldn't have to do this very often. Taking this into consideration, it just doesn't make much sense for me to use a btrfs pool now. Is there a plugin that can clone or mirror my cache disk contents so I can simply point my shares to the new disk? I really like CA Backup/Restore Appdata; it has saved me many times, but the backup is a TAR.
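The clone-instead-of-pool idea can be sketched as a scheduled rsync, e.g. run nightly via the User Scripts plugin (the pool mount points below are examples, not from any particular setup):

```shell
# mirror the working cache pool onto a second single-disk pool
# --delete keeps the clone an exact copy (removes files deleted from the source)
rsync -a --delete /mnt/cache/ /mnt/cache2/
```

Run on a schedule, the second pool stays close enough to current that shares can be re-pointed at it after a failure.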