Everything posted by master.h

  1. The Docker FAQ was actually my first stop. The FAQ entry that seemed to apply to me was "Why does my docker.img file keep filling up still, when I've got everything configured correctly?" It says you can cap log file sizes by adding an Extra Parameter to each of your dockers. I double- and triple-checked (again just now, TeamViewered in to my system since I'm not at home atm) and none of my dockers were configured to save anything internally; every host path field in all of my dockers pointed to something on the cache drive.

     However, I found the offending docker (resilio-sync), though I don't know why it was filling docker.img. I wasn't actively using it yet, I just had the sync folders set up and indexed, so I went ahead and deleted it. Immediately docker.img dropped to 14GB used instead of the 38GB or whatever it was. So... why? There's a setting in there for the folder you want to sync, so I passed through /mnt/user (mapped internally to /sync) so I could create sync groups for each of my user shares. I guess it filled up because there's a ton of stuff I was trying to sync, but how do I avoid that in the future? I know there's the option to map internal paths to external paths (to the cache drive, for instance), but I have no idea where inside the docker Resilio was storing this sync information. Did I configure something wrong? A concrete example of the mapping I mean is sketched below.
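     To make the question more concrete, this is the shape of mapping I'm asking about. The port and host paths here are just from my own template, so treat them as placeholders rather than the image's documented defaults:

     # Keep both the synced data AND Resilio's own settings/database on host
     # storage so nothing accumulates inside docker.img.
     # /config = settings + sync database, /sync = the data being synced.
     docker run -d --name=resilio-sync \
       -p 8888:8888 \
       -v /mnt/cache/appdata/resilio-sync:/config \
       -v /mnt/user:/sync \
       linuxserver/resilio-sync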
  2. Just checked again, and my docker.img has grown to 88% used. Anyone have any input on tracking down where this growth is coming from?
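     For anyone following along, these are the checks I've been running to watch the growth. This assumes Docker's data lives under /var/lib/docker inside the loop-mounted docker.img, which I believe is how unRAID sets it up:

     # per-container size of the writable layer (the part that grows inside docker.img)
     docker ps -a -s
     # rough breakdown of what's using space inside the image
     du -h -d1 /var/lib/docker 2>/dev/null | sort -h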
  3. I've attached the settings of the docker page. I think I've got everything mapped correctly, pointing to something in /mnt/appdata (or at least something outside the docker), except for cAdvisor, and that one was set up after I started getting space notifications. The appdata user share is configured to be a cache-only share as well, if that matters.
  4. My docker.img was created at 50GB, and the Docker Settings page tells me it currently has 38GB of used space. I'm getting notifications in my browser that it's something like 74% full.

     Label: none  uuid: 08f2d3d8-faa3-4148-bdc4-0997d60d7194
     	Total devices 1 FS bytes used 35.34GiB
     	devid 1 size 50.00GiB used 38.02GiB path /dev/loop0

     I've installed cAdvisor to see if I can find out what is eating up all the space, but according to it, I'm significantly under 50GB (I should note I do have two instances of dropbox running, but only one is showing up here for some reason).

     Repository                  Tags    ID                        Virtual Size  Creation Time
     yujiod/minecraft-mineos     latest  sha256:b4e67aaf1a2f72bbd  413.40 MiB    6/12/2016, 9:14:00 AM
     sparklyballs/handbrake      latest  sha256:13dc3efa788e046c8  1004.53 MiB   3/1/2017, 9:08:12 PM
     roninkenji/dropbox-docker   latest  sha256:b7cba78d15383f97d  449.57 MiB    11/16/2016, 4:58:04 PM
     linuxserver/resilio-sync    latest  sha256:d2f42979c3f189355  35.86 MiB     4/21/2017, 5:49:28 PM
     linuxserver/plex            latest  sha256:c8900bbc5549da132  412.23 MiB    4/21/2017, 5:42:03 PM
     linuxserver/duckdns         latest  sha256:01b9e64da57ef17a9  21.20 MiB     4/21/2017, 5:48:37 PM
     hurricane/ubooquity         latest  sha256:28ad70a06a09c4b86  470.86 MiB    6/21/2016, 2:58:18 PM
     google/cadvisor             latest  sha256:f9ba08bafdeaf8158  54.66 MiB     3/9/2017, 5:30:29 PM
     aptalca/docker-rdp-calibre  latest  sha256:e5bda5ab506738375  1.37 GiB      9/16/2016, 11:02:39 PM

     I also tried to get into each container and find all files larger than 100MB, but kept getting an error "invalid number 100M":

     docker exec -it containername bash
     find / -xdev -type f -size +100M

     Any help figuring this out would be appreciated.
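     In case anyone else hits the same "invalid number" error: I believe it comes from containers whose find is the BusyBox version (Alpine-based images), which doesn't accept the M suffix. A plain byte count with the c suffix should work in both; this is just my workaround, not anything official:

     # 104857600 bytes = 100 MiB; the 'c' (bytes) suffix works in GNU and BusyBox find
     docker exec -it containername sh -c "find / -xdev -type f -size +104857600c"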
  5. @Fireball3 this worked for me as well. I've got some reverse breakout cables on order now, but it looks like all is well. Thanks for the instructions!
  6. AFAIK this came directly from a Dell T3400 (or something like that, some sort of desktop machine geared towards CAD applications) but it's sat in a drawer for about three years. This was the first time I've ever plugged it in, anyway. And yes, there was a message about megacli failing as well. I'll continue on with 5_DELL_IT and report back. Thank you kindly.
  7. I just tried to flash a Perc H310 with the toolset linked here (the update from 11.04.2017). While running 1.bat I got the error below. I typed quit to exit because I didn't know what to input for the firmware.

     C:\SAS2FLSH.EXE -l Adapters.txt -c 0 -list

     Adapter Selected is a LSI SAS: SAS2008(B2)
     Controller is not operational. A firmware download is required.
     Enter firmware file name or quit to exit:
     Due to error remaining commands will not be executed.
     Unable to Process Commands. Exiting SAS2Flash.
  8. Well, I ended up recreating the Win10 VM with SeaBIOS rather than OVMF, and even before installing qemu-ga-x64.msi I'm able to change the resolution, which is more than I could do before.
  9. Well, I just fixed it... it really helps if you read the setup instructions completely. Ugh. I missed the line that said "set your library to config on first run." Once I did that, I was able to add the environment variable and map it to my existing library without a problem. All my books are showing up in the docker and are being served out over port 13579 just fine.
  10. I just deleted and recreated the docker twice, and had the same issue both times. The first time I added in a preconfigured library as in the screenshot above, the second time I created a new library. Both custom and new library give the same error pages as I described earlier.
  11. Is this what you're referring to? The "library" entry is the location of my preconfigured Calibre library, and LIBRARYINTERNALPATH is the variable I got from your post here.
  12. I just set up the RDP-Calibre docker and was able to get a preexisting library mapped into it. When I open the WebUI, I see all my books and it's great. I'm not exactly sure how to make the library available outside the docker, though. I enabled the web server under preferences in Calibre and set the server port to 13579, added a username/password, and mapped docker port 8081 to 13579. When I go to saidin:13579, I get prompted for a username/password, and once I enter it, I end up at the normal web interface where you can view the library and download books. However, when I click on either "Newest" or "All Books", I get the error below. If I click on "Random Book" I get a 404 page saying this library has no books. How do I make my library available outside the docker?

      Error: No books found
      printStackTrace.implementation.prototype.createException@http://saidin:13579/static/stacktrace.js:81:13
      printStackTrace.implementation.prototype.run@http://saidin:13579/static/stacktrace.js:66:20
      printStackTrace@http://saidin:13579/static/stacktrace.js:57:60
      render_error@http://saidin:13579/static/browse/browse.js:134:18
      booklist@http://saidin:13579/static/browse/browse.js:271:29
      @http://saidin:13579/browse/category/allbooks:34:17
      .ready@http://saidin:13579/static/jquery.js:392:6
      DOMContentLoaded@http://saidin:13579/static/jquery.js:745:3
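      For reference, the rough shape of my container setup is below. The internal port, variable value, and paths here are my own guesses/settings rather than the container's documented defaults, so don't take them as gospel:

      # host port 13579 -> Calibre content server port inside the container,
      # with the existing library mounted in and pointed to via LIBRARYINTERNALPATH
      docker run -d --name=RDP-Calibre \
        -p 13579:8081 \
        -v /mnt/user/Books:/library \
        -e LIBRARYINTERNALPATH=/library \
        aptalca/docker-rdp-calibre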
  13. @JohnSnyder did you ever end up resolving this? I'm having the same issue, except with TeamViewer rather than NoMachine. Additionally, I'm not able to RDP to my Windows 10 box; it's on a different network than either my server or my desktop, so I'm not sure what the deal is there. Any help would be appreciated.
  14. @CHBMB You're probably right, just resetting the container caused some sort of DNS update. Whatever it actually did, it's working now with no issues. @mr-hexen I am aware of the quality settings on the Android client; I've had Plex Media Server running in a Windows VM in ESX for quite some time now and never saw this "indirect" connection to the server, so very likely it was because of some DNS thing like CHBMB suggested. I could be misunderstanding what I saw on the Plex forums, but what I found made it sound like the "indirect" connection status meant that Plex Media Server was employing some sort of relay service to reach my Android client, and that relay service was forcing the transcoding and quality settings. I'm sure I could have changed them on the Android client, but the impression I got was that changing those settings wouldn't have actually fixed the issue. Could be wrong though, it was getting pretty late (for me, anyway).
  15. Just as an FYI to anyone who might hit this same issue: I installed this docker and got it up and running no problem, and could stream locally to my Windows machines and Xbox One without issue. On the web UI I was getting the message "Fully accessible outside your network" with the green checkmark and everything. However, on my Android install of Plex (my cell phone) I kept getting an error "direct connection unavailable." I could see my server name in the list of available servers, but instead of being directly connected, there was a label of "indirect" on the server name. Apparently that means there is some sort of connectivity issue and Plex uses some sort of relay system to connect my phone over the cellular network to my server, which results in forced transcoding and crap quality. I'm not sure what the issue was, but I fixed it by changing the networking mode of this docker from Host to Bridged and then back again. I don't think this really changed anything aside from rebuilding the docker image, but all is well now.
  16. My favorite type of fix: an easy one. Thanks!
  17. I just tried to install the plugin for the first time on the system I just upgraded to 6.3.3, and I get this error. However, I can see the contents of the PLG in my web browser when I navigate directly to the install URL on the first page of this thread. Just tried on a second system I have and got the same issue.

      plugin: installing: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
      plugin: downloading https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
      plugin: downloading: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg ... failed (Network failure)
      plugin: wget: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg download failure (Network failure)
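      (A plain download test from the unRAID console should show whether the server itself can resolve and reach GitHub, as opposed to just my browser being able to. Nothing plugin-specific here, just the obvious check:)

      # run from the unRAID console; if this fails too, it's a server-side DNS/network problem
      wget -O /tmp/community.applications.plg https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg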
  18. I went through the process to fix my flash drive just after I posted this, and already it's jacked up again. It seems to have run a parity check last night, but here's what it returned: I'm thinking the flash drive is going bad; does anyone have any flash drive checking utilities they would recommend?
  19. I have Unraid 6.1 virtualized in ESXi 5.5 with the Plop method. It's been running this way since Unraid v5, and I've experienced this issue off and on, but it has become more frequent recently.

      The issue is this: after a seemingly random period of time, my flash drive becomes "corrupted" (I'm using quotes because I don't know what else to call it). I lose the ability to navigate to \\tower (I can hit it by IP but not by DNS name). The web GUI still works with http://tower, though. I have a folder on the root of the flash drive called "custom" that houses some scripts to copy data and the like. Once the drive becomes "corrupted" I can no longer navigate to anything but the stock folders on the flash drive. It's as if the custom folder doesn't exist: ls -l doesn't show anything, and if I try to cd /boot/custom, it just doesn't work; pressing Tab to autocomplete the path doesn't work either.

      To fix this, I have to power down Unraid, but as soon as I stop the array, the page refreshes, shows me the "Tools" tab, and tells me there is an error reading the flash GUID and to contact support (which I did, and Tom suggested I post here). I then change back to the Main tab, and Unraid tells me I have too many disks and need to upgrade my license. I then power off Unraid and insert the Unraid USB key into my PC, at which point Windows tells me there's a problem with the drive and it needs to be scanned. I always choose Scan & Fix, and it completes but never finds any errors. I've tried formatting the drive and installing Unraid from scratch many times (backing up the Config folder), but the issue still exists.

      Any help would be greatly appreciated. I've attached a screenshot of the flash GUID error page, and my most recent syslog can be downloaded from here: https://www.dropbox.com/s/rufgmjnh5hbfcbl/syslog_2015-08-31_21.39.02.txt?dl=0

      Edit: I've also noticed that once all this starts happening, ESXi no longer sees the flash drive, either.
  20. It looks like I've got it fixed. I copied the stock vsftpd.conf file as dlandon suggested and went from there. From the stock file I only changed a few lines: basically I turned off writing, changed the local root, and disabled the check against vsftpd.user_list.

      # vsftpd.conf for unRAID
      #
      connect_from_port_20=YES
      write_enable=NO
      local_root=/mnt/user/ExternalAccess
      local_umask=0
      #
      # No anonymous logins
      anonymous_enable=NO
      #
      # Allow local vsftpd.user_list users to log in.
      local_enable=YES
      userlist_enable=NO
      #userlist_deny=NO
      #userlist_file=/boot/config/vsftpd.user_list
      check_shell=NO
      #
      # Logging to syslog
      syslog_enable=YES
      log_ftp_protocol=NO
      xferlog_enable=NO
      #
      # Misc.
      dirmessage_enable=NO
      ls_recurse_enable=YES
      listen=NO
      seccomp_sandbox=NO
  21. I added those two entries as suggested, and it didn't make a difference. Same behavior: prompted for a login, but not able to authenticate.
  22. I also see entries in the log file where an FTP connection is made.

      Feb 27 10:08:25 Saidin vsftpd[8816]: connect from 63.77.139.252 (63.77.139.252)
      Feb 27 10:08:30 Saidin vsftpd[8845]: connect from xx.xx.xxx.xxx (xx.xx.xxx.xxx)
      Feb 27 10:08:32 Saidin vsftpd[8854]: connect from xx.xx.xxx.xxx (xx.xx.xxx.xxx)
      Feb 27 10:08:36 Saidin vsftpd[8876]: connect from xx.xx.xxx.xxx (xx.xx.xxx.xxx)
      Feb 27 10:08:41 Saidin vsftpd[8905]: connect from xx.xx.xxx.xxx (xx.xx.xxx.xxx)
  23. So I've been using the same vsftpd.conf settings since I first started with unRAID back in 5.0 beta 8. Yesterday I made the jump from 5.0.6 to 6.0b14b, and suddenly my FTP doesn't work. I get prompted for a username and password when trying to connect, so I know it's "working," I just can't authenticate any more. No usernames or passwords were changed. Here is my vsftpd.conf file (I should note that I also never had any users listed in the "FTP Users" box on the settings page). Does anyone have suggestions on how to get FTP working properly again?

      # vsftpd.conf for unRAID
      #
      write_enable=NO
      connect_from_port_20=YES
      anon_world_readable_only=NO
      #
      # No anonymous logins
      anonymous_enable=NO
      #
      # Allow local users to log in.
      local_enable=YES
      local_umask=077
      local_root=/mnt/user/ExternalAccess
      #check_shell=NO
      #
      # All file ownership will be 'root'
      guest_enable=YES
      guest_username=root
      anon_upload_enable=YES
      anon_other_write_enable=YES
      anon_mkdir_write_enable=YES
      #
      # Logging to syslog
      syslog_enable=YES
      log_ftp_protocol=NO
      xferlog_enable=NO
      #
      # Misc.
      dirmessage_enable=NO
      ls_recurse_enable=YES
  24. 1) OpenSSH, 2) Open VM Tools. I run unRAID as a VM in ESXi, so I have those two installed to 1) auto power down in the event of power loss and 2) manage the VM from vCenter if necessary.
  25. I've got two servers:
      1: ESXi with 2 datastore disks, then a passed-through PERC H310 with 8 drives for unRAID 5.0.5
      2: unRAID 5.0.5 bare metal with 5 disks