Posts posted by MJFox
-
did you ever find something? I'm getting tired of Fing and looking for a replacement as well
-
Hi!
I'm fairly new to the Unraid community; I built my first Unraid machine and purchased a license just about two weeks ago. Since then I've been wondering what the best practice is for setting up a LAMP environment with Unraid.
My first guess was to set up an Ubuntu VM and install Apache, PHP and MySQL. This worked well, but I'm struggling to find a good way to set up the webserver root (htdocs directory). What I would like to do is use an Unraid share inside the VM so I don't have to set up and manage another Samba configuration inside the VM.
First I tried to set up a mount tag and mount a 9p file system inside the VM. By raising the msize in the fstab mount I'm getting reasonable performance (it was quite bad with the default msize). But I noticed that Apache doesn't update a file until I read the file inside the VM over the console (e.g. with the Midnight Commander editor). Until then, Apache always seems to get a cached version of the file. I found two Apache settings:
EnableSendfile Off
EnableMMAP Off
After adding these to apache.conf I need to reload twice to get the latest version of a file, so it's still not perfect.
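For anyone trying the same approach, a 9p entry in the VM's /etc/fstab with a raised msize looks roughly like this (the mount tag name, mount point, and msize value here are examples, not the exact ones from my setup):

```
# /etc/fstab inside the VM
# "hostshare" is the mount tag configured in the VM settings (example name);
# msize=262144 is an example value -- tune it for your workload
hostshare  /var/www/html  9p  trans=virtio,version=9p2000.L,msize=262144,_netdev  0  0
```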
I tried to mount the htdocs directory over SMB, but I just got very weird results (Apache offering random files for download instead of serving them, files appearing and disappearing randomly for Apache, etc.).
I finally set up another Samba service inside the VM, but I don't like managing two different Samba configurations, creating another backup, etc.
What's your solution for your LAMP environment?
Cheers
MJFox
-
So what is the best alternative for mounting a (share) folder from the host into the VM if you want to avoid 9p? I tried mounting the share over SMB and over NFS and pointed an Apache web root at the mounted folder, and I got very weird problems: files disappearing and reappearing randomly, Apache offering the files for download instead of serving them, Apache not seeing the latest version of a file until I opened it on the VM console, etc.
-
-
thank you very much for this great script, it works very well! 👍
-
2 hours ago, Cessquill said:
Hopefully somebody else can answer your actual question, but as a side thought you could replace...
mdcmd spinup 0
mdcmd spinup 1
mdcmd spinup 2
...with...
disks=$(ls /dev/md* | sed "sX/dev/mdXX"); for disknum in $disks; do /usr/local/sbin/mdcmd spinup $disknum; done
I currently run this on a CRON schedule every 10 minutes between 19:00 and 23:00 to speed up Plex. I'd therefore be interested in your question too, as it's more useful.
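The schedule described above can be expressed as a crontab entry along these lines (the script path is a placeholder, not from the original post):

```
# every 10 minutes during the hours 19:00-23:59
*/10 19-23 * * * /boot/custom/spinup.sh
```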
thanks for your suggestion
I just tried and tested my script and it works well in the background. I just changed "tail -F" to "tail -n 1 -F" so my script only checks the last line of the log file.
give it a try
Cheers
MJFox
-
Hi!
Great little plugin, thanks a lot.
I have a question though: I want to start a script that runs in background after the disk array starts.
This is the content of the script:
#!/bin/bash
tail -F "/mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log" | \
while read line ; do
  echo "$line" | grep "authenticated user"
  if [ $? = 0 ]
  then
    mdcmd spinup 0
    mdcmd spinup 1
    mdcmd spinup 2
    sleep 30
  fi
done
What it does is spin up the disks whenever somebody opens Plex.
Obviously I want to keep the script running in the background when Unraid boots up... is it ok to just start the script with the "At Startup of Array" option? Will it keep running in the background then?
thanks
MJFox
[Plugin] CA Appdata Backup / Restore v2.5
in Plugin Support
Posted
thank you for this great plugin!
currently the process is:
-) stop all the containers
-) backup all the containers
-) start all the containers
I let the plugin create individual zip-files for each container
Since each container gets its own zip file anyway, wouldn't it be better to change the process like this:
-) stop the first container
-) backup the first container
-) start the first container
-) stop the second container
-) backup the second container
-) start the second container
etc.
this would greatly reduce the downtime for each container
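Sketched as a shell loop, the per-container flow would look something like this. The container names and backup step here are placeholders (the real plugin zips each container's appdata folder), and `docker` is stubbed when unavailable so the control flow can be dry-run:

```shell
#!/bin/bash
# Hypothetical sketch of the per-container flow described above.
# Container names are placeholders; "docker" falls back to a stub
# when not installed, so the loop can be dry-run outside of Unraid.
command -v docker >/dev/null 2>&1 || docker() { echo "(stub) docker $*"; }

containers="plex mariadb"
backup_dir="$(mktemp -d)"

for ct in $containers; do
  docker stop "$ct"                                      # stop only this container
  echo "archive of $ct appdata" > "$backup_dir/$ct.zip"  # real run: zip its appdata folder
  docker start "$ct"                                     # restart it before the next one
done

ls "$backup_dir"
```

With this ordering, each container is only down for the length of its own backup instead of the whole run.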