
master.h

Posts posted by master.h

  1. The Docker FAQ was actually my first stop. The FAQ entry that seemed to apply to me was "Why does my docker.img file keep filling up still, when I've got everything configured correctly?" That entry says you can force your dockers to cap the size of their log files by adding an Extra Parameter to each of them. I double- and triple-checked (again just now, TeamViewered into my system since I'm not at home atm) and none of my dockers were configured to save anything internally. Every host path field in all my dockers pointed to something on the cache drive.
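
    If it helps anyone else landing here from that FAQ entry: my understanding is that the "Extra Parameters" it refers to are just the standard Docker logging flags, added per container, something along these lines (the size and file count are values I picked as an example, not anything official):

    # Added to a container's "Extra Parameters" field (example values)
    --log-opt max-size=50m --log-opt max-file=1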

     

    However, I found the offending docker (resilio-sync), but don't know why it was filling docker.img. I wasn't actively using that docker yet, just had the sync folders set up and indexed, so I went ahead and deleted it. Immediately docker.img dropped down to 14GB used instead of like 38GB or whatever it was. So.... why? There's a setting in there for the folder you want to sync, so I passed through /mnt/user (which is mapped internally to /sync) so I could create sync groups for each of my user shares. I mean I guess it filled up because there's a ton of stuff I was trying to sync, but how do I avoid that in the future? I know there's the option to specify internal paths to map to external paths (to the cache drive for instance) but I have no idea where inside the docker Resilio was storing this sync information.

     

    Did I configure something wrong?

    Screenshot_20170426-134947.png

  2. I've attached a screenshot of the docker settings page. I think I've got everything mapped correctly, pointing to something in /mnt/appdata (or at least somewhere outside the docker). The exception is cAdvisor, and that one was set up after I started getting space notifications. The appdata user share is configured as a cache-only share as well, if that matters.

    Capture.JPG
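
    To double-check those mappings outside the GUI, I believe each container's host/container path pairs can also be dumped from the command line, something like this (the container name is just an example):

    # Show the host-path <-> container-path mappings for one container
    docker inspect -f '{{ json .Mounts }}' resilio-sync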

  3. My docker.img was created at 50GB, and the Docker Settings page tells me it currently has 38GB of used space. I'm also getting notifications in my browser that it's something like 74% full.

     

    
    Label: none  uuid: 08f2d3d8-faa3-4148-bdc4-0997d60d7194
    	Total devices 1 FS bytes used 35.34GiB
    	devid    1 size 50.00GiB used 38.02GiB path /dev/loop0

    I've installed cAdvisor to see if I can find out what is eating up all the space, but according to it, I'm significantly under 50GB (I should note I do have two instances of dropbox running, but only one is showing up here for some reason).

     

    Repository                  Tags    ID                        Virtual Size  Creation Time
    yujiod/minecraft-mineos     latest  sha256:b4e67aaf1a2f72bbd  413.40 MiB    6/12/2016, 9:14:00 AM
    sparklyballs/handbrake      latest  sha256:13dc3efa788e046c8  1004.53 MiB   3/1/2017, 9:08:12 PM
    roninkenji/dropbox-docker   latest  sha256:b7cba78d15383f97d  449.57 MiB    11/16/2016, 4:58:04 PM
    linuxserver/resilio-sync    latest  sha256:d2f42979c3f189355  35.86 MiB     4/21/2017, 5:49:28 PM
    linuxserver/plex            latest  sha256:c8900bbc5549da132  412.23 MiB    4/21/2017, 5:42:03 PM
    linuxserver/duckdns         latest  sha256:01b9e64da57ef17a9  21.20 MiB     4/21/2017, 5:48:37 PM
    hurricane/ubooquity         latest  sha256:28ad70a06a09c4b86  470.86 MiB    6/21/2016, 2:58:18 PM
    google/cadvisor             latest  sha256:f9ba08bafdeaf8158  54.66 MiB     3/9/2017, 5:30:29 PM
    aptalca/docker-rdp-calibre  latest  sha256:e5bda5ab506738375  1.37 GiB      9/16/2016, 11:02:39 PM

    I also tried to get into each container and find all files larger than 100MB, but kept getting an error "invalid number 100M":

    docker exec -it containername bash
    find / -xdev -type f -size +100M
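
    I'm guessing the find inside some of these containers is the BusyBox version, which doesn't seem to accept the M suffix; expressing the size in kilobytes might get around that, and docker can also report per-container writable-layer sizes from the host side:

    # From the host: show each container's writable-layer size
    docker ps -s

    # Inside a container: same search, but with the size in kilobytes
    # (BusyBox find accepts the k suffix where it rejects 100M)
    find / -xdev -type f -size +102400k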

    Any help figuring this out would be appreciated.

  4. 5 hours ago, Fireball3 said:

    @master.h

    OK, now you have the same output as @EdgarWallace. See 2 posts above.

    Have you been trying other things on this controller before you started 1.bat?

    On an untouched, stock card this is not normal!

    Obviously your card is also wiped already!

    Is the megacli command also failing? This is only shown on screen.

     

    Continue with step 5_DELL_IT

    You won't have a SAS address as the card is already clean, just copy the one inside the 6efi.bat and use that. Same for you @EdgarWallace

    AFAIK this came directly from a Dell T3400 (or something like that, some sort of desktop machine geared toward CAD applications), but it has been sitting in a drawer for about three years. This was the first time I've ever plugged it in, anyway. And yes, there was a message about megacli failing as well. I'll continue on with 5_DELL_IT and report back. Thank you kindly.

     

  5. I just tried to flash a Perc H310 with the toolset linked here (the update from 11.04.2017). While running 1.bat I got the error below. I typed quit to exit because I didn't know which firmware file to give it.

     

    C:\SAS2FLSH.EXE -l Adapters.txt -c 0 -list 
    	Adapter Selected is a LSI SAS: SAS2008(B2)   
    
    	Controller is not operational. A firmware download is required.
    Enter firmware file name or quit to exit: 	Due to error remaining commands will not be executed.
    	Unable to Process Commands.
    	Exiting SAS2Flash.

     

  6. Well, I just fixed it... it really helps if you read the setup instructions completely. Ugh. I missed the line that said "set your library to config on first run." Once I did that, I was able to add the environment variable and map it to my existing library. All my books are showing up in the docker and are being served out over port 13579 just fine.
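
    For anyone else setting this container up, my settings end up roughly equivalent to something like this on the command line (the host paths are just examples from my setup, and the internal paths and variable are my reading of the container's Docker Hub page, so double-check them there):

    # Rough sketch of the container settings described above (paths are examples)
    docker run -d --name=calibre \
      -p 13579:8081 \
      -v /mnt/user/appdata/calibre:/config \
      -v /mnt/user/Books:/library \
      -e LIBRARYINTERNALPATH=/library \
      aptalca/docker-rdp-calibre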

  7. 5 hours ago, master.h said:

    Is this what you're referring to? The "library" entry is the location of my preconfigured Calibre library, and LIBRARYINTERNALPATH is the variable I got from your post here.

    Capture.JPG

    I just deleted and recreated the docker twice, and had the same issue both times. The first time I added my preconfigured library as in the screenshot above; the second time I created a new library. Both the existing library and the new one give the same error pages I described earlier.

  8. 10 hours ago, aptalca said:

     


    Don't enable the server in the gui. There is already a separate server instance running at the other port. You probably didn't set the library path variable in the container settings. It's described on the docker hub page.

    Is this what you're referring to? The "library" entry is the location of my preconfigured Calibre library, and LIBRARYINTERNALPATH is the variable I got from your post here.

    Capture.JPG

  9. I just set up the RDP-Calibre docker, and was able to get a preexisting library mapped into it. When I open the WebUI, I see all my books and it's great. I'm not exactly sure how to make the library available outside the docker, though. I enabled the web server under preferences in Calibre and set the server port to 13579, added a username/password, and I've mapped docker port 8081 to 13579. When I go to saidin:13579, I get prompted for a username/password, and once I enter it, I end up at the normal web interface where you can view the library and download books. However, when I click on either "Newest" or "All Books", I get the error below. If I click on "Random Book" I get a 404 page saying this library has no books. How do I make my library available outside the docker?

     

    Error: No books found

    printStackTrace.implementation.prototype.createException@http://saidin:13579/static/stacktrace.js:81:13
    printStackTrace.implementation.prototype.run@http://saidin:13579/static/stacktrace.js:66:20
    printStackTrace@http://saidin:13579/static/stacktrace.js:57:60
    render_error@http://saidin:13579/static/browse/browse.js:134:18
    booklist@http://saidin:13579/static/browse/browse.js:271:29
    @http://saidin:13579/browse/category/allbooks:34:17
    .ready@http://saidin:13579/static/jquery.js:392:6
    DOMContentLoaded@http://saidin:13579/static/jquery.js:745:3

     

  10. On 2/24/2017 at 1:30 PM, JohnSnyder said:

    I've done that, and still all I get is the 800 x 600 option - nothing more.

     

    Anything else I can do?

    @JohnSnyder did you ever end up resolving this? I'm having the same issue, except with TeamViewer rather than NoMachine. Additionally, I'm not able to RDP to my Windows 10 box; it's on a different network than either my server or my desktop, so I'm not sure what the deal is there. Any help would be appreciated.

  11. @CHBMB You're probably right, just resetting the container caused some sort of DNS update. Whatever it actually did, it's working now with no issues.

    @mr-hexen I am aware of the quality settings on the Android client; I've had Plex Media Server running in a Windows VM in ESX for quite some time now and never saw this "indirect" connection to the server, so very likely it was a DNS thing like CHBMB suggested. I could be misunderstanding what I saw on the Plex forums, but what I found made it sound like the "indirect" status meant Plex Media Server was using some sort of relay service to reach my Android client, and that relay service was forcing the transcoding and quality settings. I'm sure I could have changed them on the Android client, but the impression I got was that changing those settings wouldn't have actually fixed the issue. Could be wrong though, it was getting pretty late (for me, anyway).

  12. Just as an FYI, I suppose, to anyone who might get this same issue... I installed this docker and got it up and running no problem, and could stream locally to my Windows machines and Xbox One without issue. On the web UI I was getting the message "Fully accessible outside your network" with the green checkmark and everything. However, on my Android install of Plex (my cell phone) I kept getting an error, "direct connection unavailable." I could see my server name in the list of available servers, but instead of being directly connected, there was a label of "indirect" on the server name. Apparently that means there is some sort of connectivity issue, and Plex uses a relay system to connect my phone over the cellular network to my server, which results in forced transcoding and crap for quality. I'm not sure what the issue was, but I fixed it by changing the networking mode of this docker from Host to Bridged and then back again. I don't think this really changed anything aside from rebuilding the docker image, but all is well now.

  13. I just tried to install the plugin for the first time on the system I just upgraded to 6.3.3, and I get this error. However, I can see the contents of the PLG in my web browser when I navigate directly to the install URL on the first page of this thread. I just tried on a second system I have and got the same issue.

     

    plugin: installing: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
    plugin: downloading https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
    plugin: downloading: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg ... failed (Network failure)
    plugin: wget: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg download failure (Network failure)
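
    In case it helps narrow things down: pulling the same .plg file straight from the Unraid console should show whether the server itself can reach GitHub, separate from the plugin system (just a diagnostic idea, I haven't confirmed that's the cause):

    # Try fetching the same file directly from the Unraid console
    wget -O /tmp/community.applications.plg https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg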

     

  14. I went through the process to fix my flash drive just after I posted this, and already it's jacked up again. It seems to have run a parity check last night, but here's what it returned:

     

    Last checked on Thu 03 Sep 2015 07:04:44 PM CDT (today), finding 0 errors.

    Duration: Warning: Division by zero in /usr/local/emhttp/plugins/dynamix/include/Helpers.php on line 98 . Average speed: 0.0 B/sec

     

     

    I'm thinking the flash drive is going bad; does anyone have any flash drive checking utilities they would recommend?
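
    For what it's worth, if I pull the stick and check it from a Linux machine, I believe a read-only FAT check would look roughly like this (the device name is just an example and will differ per machine):

    # Read-only check of the flash drive's FAT filesystem (device name is an example)
    fsck.vfat -n /dev/sdb1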

  15. I have Unraid 6.1 virtualized in ESXi 5.5 with the Plop method. It's been running this way since Unraid v5, and I've experienced this issue off and on, but it has become more frequent recently. The issue is this: after a seemingly random period of time, my flash drive becomes "corrupted" (I'm using quotes because I don't know what else to call it). I lose the ability to navigate to \\tower (I can hit it by IP but not by DNS name). The web GUI still works with http://tower, though. I have a folder on the root of the flash drive called "custom" that houses some scripts to copy data and the like. Once the drive becomes "corrupted" I can no longer navigate to anything but the stock folders on the flash drive. It's as if the custom folder doesn't exist: ls -l doesn't show it, cd /boot/custom just doesn't work, and pressing Tab to autocomplete the path doesn't work either.

     

    To fix this, I have to power down Unraid, but as soon as I stop the array, the page refreshes, shows me the "Tools" tab, and tells me there is an error reading the flash GUID and to contact support (which I did, and Tom suggested I post here). I then change back to the Main tab, and Unraid tells me I have too many disks and need to upgrade my license. I then power off Unraid and insert the Unraid USB key into my PC, at which point Windows tells me there's a problem with the drive and it needs to be scanned. I always choose Scan & Fix, and it completes but never finds any errors. I've tried formatting the drive and installing Unraid from scratch many times (backing up the config folder), but the issue still exists.

     

    Any help would be greatly appreciated. I've attached a screenshot of the flash GUID error page, and my most recent syslog can be downloaded from here: https://www.dropbox.com/s/rufgmjnh5hbfcbl/syslog_2015-08-31_21.39.02.txt?dl=0

     

     

    Edit: I also have noticed that once all this starts happening, ESXi no longer sees the flash drive, either.

    Capture.PNG

  16. It looks like I've got it fixed. I copied the stock vsftpd.conf file as dlandon suggested, and went from there. From the stock file, I only changed a few lines. Basically just turned off writing, changed the local root, and disabled the check against vsftpd.user_list.

     

     

    # vsftpd.conf for unRAID
    #
    connect_from_port_20=YES
    write_enable=NO
    local_root=/mnt/user/ExternalAccess
    local_umask=0
    #
    # No anonymous logins
    anonymous_enable=NO
    #
    # Allow local vsftpd.user_list users to log in.
    local_enable=YES
    userlist_enable=NO
    #userlist_deny=NO
    #userlist_file=/boot/config/vsftpd.user_list
    check_shell=NO
    #
    # Logging to syslog
    syslog_enable=YES
    log_ftp_protocol=NO
    xferlog_enable=NO
    #
    # Misc.
    dirmessage_enable=NO
    ls_recurse_enable=YES
    listen=NO
    seccomp_sandbox=NO
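
    A quick way to sanity-check it afterwards is an FTP directory listing with curl from another machine, something along these lines (the username is just a placeholder, and curl prompts for the password):

    # List the FTP root to confirm logins and read-only access work
    curl --user someuser ftp://tower/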
    

  17. I also see entries in the log file where an FTP connection is made.

     

     

    Feb 27 10:08:25 Saidin vsftpd[8816]: connect from 63.77.139.252 (63.77.139.252)
    Feb 27 10:08:30 Saidin vsftpd[8845]: connect from xx.xx.xxx.xxx (xx.xx.xxx.xxx)
    Feb 27 10:08:32 Saidin vsftpd[8854]: connect from xx.xx.xxx.xxx (xx.xx.xxx.xxx)
    Feb 27 10:08:36 Saidin vsftpd[8876]: connect from xx.xx.xxx.xxx (xx.xx.xxx.xxx)
    Feb 27 10:08:41 Saidin vsftpd[8905]: connect from xx.xx.xxx.xxx (xx.xx.xxx.xxx)

  18. So I've been using the same vsftpd.conf settings since I first started with unRAID back in 5.0 beta 8. Yesterday I made the jump from 5.0.6 to 6.0b14b, and suddenly my FTP doesn't work. I get prompted for a username and password when trying to connect, so I know it's "working"; I just can't authenticate any more. No usernames or passwords were changed. Here is my vsftpd.conf file (I should note that I also never had any users listed in the "FTP Users" box on the settings page). Does anyone have suggestions on how to get FTP working properly again?

     

    # vsftpd.conf for unRAID
    #
    write_enable=NO
    connect_from_port_20=YES
    anon_world_readable_only=NO
    #
    # No anonymous logins
    anonymous_enable=NO
    #
    # Allow local users to log in.
    local_enable=YES
    local_umask=077
    local_root=/mnt/user/ExternalAccess
    #check_shell=NO
    #
    # All file ownership will be 'root'
    guest_enable=YES
    guest_username=root
    anon_upload_enable=YES
    anon_other_write_enable=YES
    anon_mkdir_write_enable=YES
    #
    # Logging to syslog
    syslog_enable=YES
    log_ftp_protocol=NO
    xferlog_enable=NO
    #
    # Misc.
    dirmessage_enable=NO
    ls_recurse_enable=YES
