Posts posted by JustinAiken

  1. Quote

    Is this reproducible? I can't see how this would have happened unless there was a race condition where two processes were deleting the files at the same time.

     

    Just tried to "Purge Everything and Start Over"  (which seemed to work), then ran the benchmark again... got the same thing:

    Lucee 5.2.6.59 Error (application)
    Message	source file [/tmp/DiskSpeedTmp/benchmark_0000_01_00.0.txt] is not a file
    Stacktrace	The Error Occurred in
    /var/www/Benchmark.cfm: line 117 
    115: <CFIF URL.Restart EQ "Y">
    116: <CFLOOP index="CR" from="1" to="#BenchCheck.RecordCount#">
    117: <CFFILE action="Delete" file="/tmp/DiskSpeedTmp/#BenchCheck.Name[CR]#">
    118: </CFLOOP>
    119: <CFLOCATION URL="Benchmark.cfm" addtoken="NO">
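
    For anyone else hitting this, the blunt workaround I'd try next (just going off the path in the error above; the benchmark_* glob is my assumption) is to clear the temp directory from the shell before re-running the benchmark:

    # See what's actually left behind, then wipe the stale temp files:
    ls -l /tmp/DiskSpeedTmp/
    rm -f /tmp/DiskSpeedTmp/benchmark_*.txt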

     

  2. Updated to today's version, tried to run a benchmark, got this error:

     

    Lucee 5.2.6.59 Error (application)
    Message	source file [/tmp/DiskSpeedTmp/benchmark_0000_01_00.0.txt] is not a file
    Stacktrace	The Error Occurred in
    /var/www/Benchmark.cfm: line 117 
    115: <CFIF URL.Restart EQ "Y">
    116: <CFLOOP index="CR" from="1" to="#BenchCheck.RecordCount#">
    117: <CFFILE action="Delete" file="/tmp/DiskSpeedTmp/#BenchCheck.Name[CR]#">
    118: </CFLOOP>
    119: <CFLOCATION URL="Benchmark.cfm" addtoken="NO">
    

     

  3. 17 hours ago, coppit said:

     

    Did you set it up using the SpaceInvader One approach? I had to hard-reboot my server, and after a few days it ended up the same way. I'm wondering if a log or something is causing the container to lock up after a few days.

     

    Yep, walked through the SpaceInvader One video.

     

    The first time, I was using the container heavily and got a hard lock.

    Next I tried just running it for a few days without using it for DNS, so it was up 24/7 but handling 0 requests.

    After a few days, the docker container showed as "healthy", but was unkillable.

     

    Anecdotally, every hard lock a Docker container has ever caused on my unRaid box has come from a container using the s6 init system (gogs used to lock mine up occasionally, but it dropped s6 and hasn't locked up since).

  4. This container seems to be locking up my whole Docker setup - the Docker page in unRaid wouldn't load. Tried stopping Docker from the Settings tab; no luck.

     

    SSHing in, I see this:

     

    root@Tower:~# docker ps
    CONTAINER ID        IMAGE                   COMMAND             CREATED             STATUS                PORTS               NAMES
    62518c9bb052        diginc/pi-hole:latest   "/s6-init"          6 days ago          Up 4 days (healthy)                       pihole
    
    root@Tower:~# docker stop pihole
    # Hangs for a few minutes.  Eventually ctrl-c out
    
    root@Tower:~# docker kill pihole
    # Hangs forever
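
    In case anyone else ends up wedged like this, the next thing I'd try (the PID lookup is plain `docker inspect`; whether a SIGKILL from the host actually clears a stuck s6 init is just my guess) is to go after the container's main process directly:

    # Grab the container's PID on the host and send SIGKILL straight to it:
    PID=$(docker inspect --format '{{.State.Pid}}' pihole)
    kill -9 "$PID"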
    
    

     

  5. 45 minutes ago, Trylo said:

     

    I'm sorry, I'm not very fluent with container work. I don't see a parameter in the readme; there is "unmask" and there is also information about data volumes.

    Do I need to put in an extra parameter or change the unmask value?

     

    Neither; on the container settings page, click "Edit" next to the storage path mapping and a modal pops up. It has a few options, but you just need to switch "Access Mode" off of read-only.
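
    (If you were running the container by hand instead of through the unRaid template, that toggle is just the :ro / :rw suffix on the volume mapping - the host/container paths below are only examples:)

    # Read-only mapping vs. read/write on the docker run -v flag:
    #   -v /mnt/user/appdata/<app>:/data:ro    (read-only - what triggers the error)
    #   -v /mnt/user/appdata/<app>:/data:rw    (read/write - what you want)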

  6. On 11/16/2017 at 4:41 PM, mattekure said:

    I noticed that the /usr/local/emhttp/plugins/ssh folder has different permissions than all of the other plugin folders there.

    ....

    manually changing it stops the error from showing up.  

     

     

    For anyone wondering how:

     

    Run `chmod 755 /usr/local/emhttp/plugins/ssh` to fix it for your current session.

     

    Or add it to `/boot/config/go` so it's fixed each time the server boots:

     

    # Fix ssh plugin icon:
    chmod 755 /usr/local/emhttp/plugins/ssh
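
    (To double-check it took - 755 should show up as drwxr-xr-x, same as the neighboring plugin folders:)

    # Sanity check against the other plugin dirs:
    ls -ld /usr/local/emhttp/plugins/*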

     

  7. Just tried this after using the command-line `diskmv` from the MUSDL plugin for a while... it looks really nice, but doesn't seem to work for me. Just about every move I try stops immediately with an `exit status 23 : Partial transfer due to error`.

     

    But if I drop to the command line and manually paste in the rsync command, it goes through...

     

    Actually, watching the output from the `rsync` command I just copy-pasted in, I think `.git` repos are what break it...
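
    If it really is the `.git` dirs, excluding them would probably sidestep it - roughly something like this when running rsync by hand (source/destination paths here are just placeholders, not what the plugin actually generates):

    # Skip VCS metadata so whatever inside .git is erroring can't abort the whole move:
    rsync -av --exclude='.git' /mnt/disk1/Share/project/ /mnt/disk2/Share/project/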

     

     

    • Community Applications -> For installing dockers
    • Preclear Disks -> Makes it so nice and easy to preclear - no CLI needed
    • Server Layout -> So I can remember which drive is where
    • Nerd Tools -> Vim is better than nano
    • ssh Plugin -> Remember your key, so you can do passwordless ssh

     

  8. Troubling email today...  https://support.code42.com/Release_Notes/CrashPlan_for_Small_Business_version_6.6

     

    Quote
    • Although using CrashPlan for Small Business on a headless computer and installing CrashPlan for Small Business on a NAS device are unsupported, previous versions of the CrashPlan for Small Business app would still function in these configurations. However, beginning with version 6.6.0, the CrashPlan for Small Business app does not function in either of these configurations.   

     
