Posts posted by JustinAiken
-
48 minutes ago, jbartlett said:
Update your Docker and try the benchmark again and let me know if it worked
That worked! Pretty graphs loaded... wow, I need a faster cache drive!
-
Quote
Is this reproducible? I can't see how this would have happened unless there was a race condition where two processes were deleting the files at the same time.
Just tried to "Purge Everything and Start Over" (which seemed to work), then ran the benchmark again... got the same thing:
Lucee 5.2.6.59 Error (application)
Message: source file [/tmp/DiskSpeedTmp/benchmark_0000_01_00.0.txt] is not a file
Stacktrace: The Error Occurred in /var/www/Benchmark.cfm: line 117
115: <CFIF URL.Restart EQ "Y">
116: <CFLOOP index="CR" from="1" to="#BenchCheck.RecordCount#">
117: <CFFILE action="Delete" file="/tmp/DiskSpeedTmp/#BenchCheck.Name[CR]#">
118: </CFLOOP>
119: <CFLOCATION URL="Benchmark.cfm" addtoken="NO">
-
Updated to today's version, tried to run a benchmark, got this error:
Lucee 5.2.6.59 Error (application)
Message: source file [/tmp/DiskSpeedTmp/benchmark_0000_01_00.0.txt] is not a file
Stacktrace: The Error Occurred in /var/www/Benchmark.cfm: line 117
115: <CFIF URL.Restart EQ "Y">
116: <CFLOOP index="CR" from="1" to="#BenchCheck.RecordCount#">
117: <CFFILE action="Delete" file="/tmp/DiskSpeedTmp/#BenchCheck.Name[CR]#">
118: </CFLOOP>
119: <CFLOCATION URL="Benchmark.cfm" addtoken="NO">
-
17 hours ago, coppit said:
Did you set it up using the SpaceInvader One approach? I had to hard-reboot my server, and after a few days it ended up the same way. I'm wondering if a log or something is causing the container to lock up after a few days.
Yep, walked through the space invader video.
First time I was using the container heavily, got a hard lock.
Next I tried just running it for a few days, but not using it for DNS; so it was running 24/7, but handling 0 requests.
After a few days, the docker container showed as "healthy", but was unkillable.
Anecdotally speaking, every hard lock a Docker container has ever caused on my unRAID has come from a container using the s6 init system (gogs used to lock mine up occasionally, but it stopped using s6 and hasn't locked up since).
-
This container seems to be locking up my whole Docker setup - the Docker page in unRAID wouldn't load. Tried stopping Docker from the Settings tab; no luck.
SSHing I see this:
root@Tower:~# docker ps
CONTAINER ID   IMAGE                   COMMAND      CREATED      STATUS                PORTS   NAMES
62518c9bb052   diginc/pi-hole:latest   "/s6-init"   6 days ago   Up 4 days (healthy)           pihole
root@Tower:~# docker stop pihole   # Hangs for a few minutes. Eventually ctrl-c out
root@Tower:~# docker kill pihole   # Hangs forever
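For anyone stuck in the same spot: when `docker stop` and `docker kill` both hang, a last-resort sketch is to signal the container's init process on the host directly (container name `pihole` taken from the output above; this assumes you can still get an SSH shell in):

```shell
# look up the container's main process PID as seen from the host...
pid=$(docker inspect --format '{{.State.Pid}}' pihole)

# ...and send it an uncatchable SIGKILL, bypassing the docker daemon
kill -9 "$pid"
```

If even that doesn't free things up, the remaining option is usually a reboot.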
-
Ran the benchmark, was admiring the graphs (really nice looking data!), when suddenly it changed to this error: https://gist.github.com/JustinAiken/4a554195fefa3d5abf584d5779bcf26e
-
50 minutes ago, WannabeMKII said:
Found pihole-FTL process with PID 435 (my PID 699) - killing it ...
Different PIDs, but seeing the same thing. WebUI doesn't load at all
-
45 minutes ago, Trylo said:
I'm sorry, I'm not very fluent with container work. I don't see a parameter for it in the readme; there is "unmask", and there is also information about data volumes.
Do I need to put in extra parameter or change the unmask value?
Neither; on the container settings page, click "Edit" next to the storage path mapping, and a modal pops up. It has a few options, but all you need to change is "Access Mode" off of read-only.
-
Been running the 6.4.0rc18 now for about 5 days, and had no issues.
Really loving the GUI not hanging while waiting for Docker!
-
On 11/16/2017 at 4:41 PM, mattekure said:
I noticed that the /usr/local/emhttp/plugins/ssh folder has different permissions than all of the other plugin folders there.
....
manually changing it stops the error from showing up.
For anyone wondering how:
Run `chmod 755 /usr/local/emhttp/plugins/ssh` to fix it for your current session.
Or add it to `/boot/config/go` so it's fixed on every boot:
# Fix ssh plugin icon:
chmod 755 /usr/local/emhttp/plugins/ssh
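As a quick sanity check, `stat` (GNU coreutils) can confirm the mode before and after. A sketch on a throwaway directory, since the real /usr/local/emhttp/plugins/ssh only exists on an unRAID box:

```shell
# throwaway demo directory standing in for the plugin folder
mkdir -p /tmp/ssh-plugin-demo

# simulate the broken permissions, then apply the fix
chmod 700 /tmp/ssh-plugin-demo
stat -c '%a' /tmp/ssh-plugin-demo   # → 700
chmod 755 /tmp/ssh-plugin-demo
stat -c '%a' /tmp/ssh-plugin-demo   # → 755
```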
-
Just tried this after using the command-line `diskmv` from the MUSDL plugin for a while... looks really nice, but doesn't seem to work for me. Just about every move I try stops immediately with an `exit status 23 : Partial transfer due to error`.
But if I drop to the command line and manually paste in the rsync command, it goes through.
Actually, watching the output from the current `rsync` command I copy-pasted in, I think `.git` repos break it.
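If `.git` trees really are the culprit, a minimal sketch of skipping them with rsync's `--exclude` (the paths below are throwaway demo paths, not anything from the plugin):

```shell
# demo: a source tree containing a .git directory (throwaway paths)
mkdir -p /tmp/mv-src/project/.git /tmp/mv-dst
touch /tmp/mv-src/project/file.txt /tmp/mv-src/project/.git/config

# -a preserves attributes; --exclude='.git/' skips every .git directory
rsync -a --exclude='.git/' /tmp/mv-src/ /tmp/mv-dst/

ls /tmp/mv-dst/project    # file.txt copied; .git skipped
```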
-
One thing I noticed about the new terminal..
# From ssh or telnet:
$ echo $HOME
/root

# From the new web terminal:
$ echo $HOME
/
...which means that `/root/.bash_profile` doesn't get loaded in the web terminal, so you have to `source /root/.bash_profile` if you want the nice pretty prompt.
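The reason: bash resolves `~/.bash_profile` via `$HOME`, so with `HOME=/` the file under /root is never found, and sourcing it by absolute path is the workaround. A small demo using a throwaway HOME (nothing below is unRAID code):

```shell
# throwaway HOME for the demo
export HOME=/tmp/demo-home
mkdir -p "$HOME"
echo 'export DEMO_PROMPT=loaded' > "$HOME/.bash_profile"

# a plain (non-login) shell never reads the profile on its own:
bash -c 'echo "${DEMO_PROMPT:-unset}"'                          # → unset

# sourcing it explicitly, as in the web terminal workaround:
bash -c 'source "$HOME/.bash_profile"; echo "$DEMO_PROMPT"'     # → loaded
```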
-
12 hours ago, Ashe said:
The terminal does not show up in the black and white themes, only the azure and grey themes. Has anyone else seen this?
Terminal works with the black theme here (Chrome/Mac)
-
- Community Applications -> For installing dockers
- Preclear Disks -> Makes it so nice and easy to preclear - no CLI needed
- Server Layout -> So I can remember which drive is where
- Nerd Tools -> Vim is better than nano
- ssh Plugin -> Remember your key, so you can do passwordless ssh
-
- Updated from 6.3.5 without incident
- All dockers work as before
- All plugins work as before, with sole exception of server-layout, which was fixed with a `sed` tweak
- Terminal works great, nice feature!
- Holding off on the https webgui and encrypted drives - excited for both, but running base 6.4 features before trying the new stuff
-
23 hours ago, Gary489 said:
I'll buy one or two of the 2TB drives, as long as the price is right!
PM'd!
-
I just removed five 2TB drives and one 1.5TB drive from my server to go all 4TB/8TB...
The drives are all good - no SMART issues - but they're all well out of warranty. Are they worth selling at all?
-
On 12/10/2017 at 7:08 PM, jonathanm said:
You need to cover pin 3 of the SATA power connector on the drive so the drive will power up.
Worked like a charm, thanks!
-
> Beyond that, and the intermittent VM issues on RC15 and the continuing Ryzen issues (out of LT's control), 6.4 RCx is more "stable" than 6.3.5
So if I don't use VMs and have Intel... I'd be more stable switching at this point? (Tons of Docker usage..)
-
Just got a WD80EMAZ, shucked it without breaking the tabs. Plugged it into my Norco 4220, and it's not recognized.
-
Updated to the new Docker 2.0 version with the 6.6.0 Crashplan
- Required my email/pass to login
- Worked smoothly, continued on backing up
- GUI is all new... and terrible. But that's Crashplan's fault, not Djoss's!
-
Troubling email today... https://support.code42.com/Release_Notes/CrashPlan_for_Small_Business_version_6.6
Quote
Although using CrashPlan for Small Business on a headless computer and installing CrashPlan for Small Business on a NAS device are unsupported, previous versions of the CrashPlan for Small Business app would still function in these configurations. However, beginning with version 6.6.0, the CrashPlan for Small Business app does not function in either of these configurations.
-
Updated to the new 1.20 container today without issue.
-
After working ever since day 1 of the Pro container, it started failing-to-connect-to-backup-engine consistently yesterday. Bumped up the max mem to `2048M`, now it works great again
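For anyone else hitting this: with Djoss's images the engine heap is set through an environment variable. A hedged sketch - the `CRASHPLAN_SRV_MAX_MEM` name is from the container's README, and the container/image names below are assumptions, so check against your own setup:

```shell
# on unRAID, add this as a Variable on the container's edit page;
# the equivalent docker invocation looks roughly like:
docker run -d --name=crashplan-pro \
  -e CRASHPLAN_SRV_MAX_MEM=2048M \
  ...   # your existing volume/port mappings go here
```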
Posted in the "unRAID OS version 6.5.2 available" thread (Announcements):
Coming up on 5 days of uptime with 6.5.2, no issues so far