Everything posted by plowna

  1. Just a note for anyone else. I've been trying to set up pass-through authentication to this docker. The FreshRSS documentation (under nginx) is not great, and it took a while to figure out. There are three changes you need to make (once you've set it up with an initial user):

1. Edit the file config.php (located in /config/www/freshrss/data/) and change auth_type to http_auth:

    'auth_type' => 'http_auth',

This has to be done here, as by default it's greyed out in the web UI.

2. Start the FreshRSS docker and open a console into it, then create your .htpasswd file:

    htpasswd -c /config/nginx/site-confs/.htpasswd <freshrss username>

This will prompt for the user's password and write the file out there. Repeat for any other users if need be (drop the -c after the first user, or it will overwrite the file).

3. Edit the nginx site file (located at /config/nginx/site-confs/default):

    server {
        listen 80;
        listen 443 ssl;
        server_name _;
        ssl_certificate /config/keys/cert.crt;
        ssl_certificate_key /config/keys/cert.key;
        client_max_body_size 0;
        root /usr/share/webapps/freshrss/p;
        index index.php index.html index.htm;

        location ~ ^.+?\.php(/.*)?$ {
            fastcgi_pass 127.0.0.1:9000;  # adjust to your PHP-FPM address/socket
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }

        location / {
            try_files $uri $uri/ index.php;
        }

        location /i {
            auth_basic "Login";
            auth_basic_user_file /config/nginx/site-confs/.htpasswd;

            location ~ ^.+?\.php(/.*)?$ {
                fastcgi_param REMOTE_USER $remote_user;
                fastcgi_pass 127.0.0.1:9000;  # match the block above
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }
    }

Feel free to correct if necessary. This also means that if you're using something other than nginx or apache as the reverse proxy (e.g. an edge firewall/router), you can authenticate there and pass the auth straight through to the application.

Note the default behaviour is to just create a new user and log them in. There's another option in config.php to control that:

    'http_auth_auto_register' => false,

Thanks, seeya.
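If the htpasswd binary happens to be missing inside the container, an equivalent entry for the .htpasswd file can be generated with openssl instead. This is just a sketch: the user name, password and output path below are placeholders, and it assumes openssl is available.

```shell
# Generate an apr1 (htpasswd-compatible) hash and append a user:hash line.
# "joe", "secret" and /tmp/.htpasswd are placeholders; in this setup the
# real file lives at /config/nginx/site-confs/.htpasswd.
USER=joe
PASS=secret
OUT=/tmp/.htpasswd

printf '%s:%s\n' "$USER" "$(openssl passwd -apr1 "$PASS")" >> "$OUT"
cat "$OUT"
```

The resulting line is the same user:$apr1$... format htpasswd -m would produce, so nginx's auth_basic_user_file accepts it unchanged.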
  2. Add them to the users group. If you are logged in, edit the file /etc/group and add them to the end of the line, i.e.

    users:x:100:joe,jill,new_user,other_user

You get the idea.
  3. There will likely be others with far more experience in this regard. I can only speak from my recent experience (detailed here). I take it you want to keep the full slackware install on a drive that is not connected via SATA as you'd like to maximise the amount of drives connected to the array. Also I take it the proliant box has 6x SATA inputs & with the added card will add another 2. An option might be to get a portable hdd - might be better to use another 2.5 sata hdd & put it in an external case. You'd need to check to ensure the proliant can boot off it. Or run with 2x USB flash disks - one for the full install & the other as the 'dongle' so to speak. The 'OS' flash disk would appear as a /dev/sdX entry anyway so I don't imagine there'd be much difference in the slackware installation of it. So long as the slack installer can see it to install it. The biggest risk with any sort of setup like this (ie full slackware install) is it introduces more points of failure.
  4. Got my script written for starting VMs on bootup. I found a much more comprehensive one (for some reason the forum doesn't like my link; it points to a pastebin entry), but it seemed a bit too complicated for me. My script is made to be called by root but runs the guests as the chosen user (it addresses them by UUID). Some of it is a bit kludgey; I'm not the best of bash script writers. I've created a button for it in unMenu, which just calls the script. I'm still a bit hesitant about starting the VMs automatically on boot; I'd like to get the timing correct, i.e. only if the network interface and array are online. Anyway, here's the script:

    #!/bin/sh
    VBOX_MANAGE=/usr/bin/VBoxManage
    VBOX_HEADLESS=/usr/bin/VBoxHeadless
    VBOX_USER=vmadmin

    for uuid in $(su $VBOX_USER -c "$VBOX_MANAGE list vms" | sed -e 's/^.*{\(.*\)}$/\1/')
    do
        # print only the line for this UUID, with the " {uuid}" suffix stripped
        VMNAME=$(su $VBOX_USER -c "$VBOX_MANAGE list vms" | sed -n "s/ {$uuid}\$//p")
        echo "Starting $VMNAME ..."
        sudo -u $VBOX_USER -H -b $VBOX_HEADLESS -startvm $uuid
    done

(edit: improved it upon review ...) Shutting down the VMs is made a bit simpler by both of them being Linux guests. I've set up 'passwordless' ssh logins using keychain, so it's just a matter of sending a couple of shutdown commands to them. (I don't think they respond to ACPI events; I've never investigated this fully.) From there it's not difficult to add them to rc.local_shutdown. The last step will be adding a command to pause all open torrents that access the array when the array goes down.
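The shutdown side described above can be sketched like this. The guest host names are hypothetical, and the ssh command is parameterised through $SSH only so the loop can be dry-run; a real rc.local_shutdown entry would just call shutdown_guests with the actual guest names.

```shell
#!/bin/sh
# Sketch of the matching shutdown script (hypothetical guest names).
# Assumes passwordless ssh (e.g. via keychain) is already set up for root@guest.
SSH=${SSH:-ssh}

shutdown_guests() {
    for host in "$@"; do
        echo "Shutting down $host ..."
        # Ask each guest to power itself off; warn rather than abort on failure.
        $SSH root@"$host" 'shutdown -h now' || echo "warning: could not reach $host"
    done
}

# e.g. from rc.local_shutdown:
# shutdown_guests vm-deluge vm-mediatomb
```

Keeping it as a function means the same file can be sourced from both a manual stop script and rc.local_shutdown.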
  5. Got it installed & running OK. Ran a quick "cfgmaker public@localhost > mrtg.cfg"; I wasn't sure if the .cfg you put in was the same. I added a link to the 'Useful Links' page for easy access.
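For comparison, a minimal mrtg.cfg in the spirit of what that cfgmaker run generates might look like the following. The interface number, community string and WorkDir here are assumptions, not my actual values; check your own cfgmaker output.

```
WorkDir: /var/www/mrtg
Target[eth0]: 2:public@localhost:
MaxBytes[eth0]: 12500000
Title[eth0]: Traffic analysis for eth0
PageTop[eth0]: <h1>Traffic analysis for eth0</h1>
```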
  6. I'll give this a go - I know of the program but have never put in the effort to find out how it works. Am I correct in assuming I need to install the mrtg slackware package (wherever it is) and then use the .conf file provided to enable it within unMenu?
  7. Finally got unMenu set up & installed. It turned out the locally added share (i.e. mount.cifs // /media/my_share) for mediatomb caused the initial commands that uu invoked (i.e. df --block-size=1000) to get stuck in a loop; browsing to //tower:8080 would just stall at 'waiting for server'. Welp, that seals it: no more locally added shares!! Also installed unMenu to the flash drive (/flash/unmenu), then added symlinks to the relevant directories off /boot (/boot/packages, /boot/unmenu etc). All seems to be working OK. Just need to get myself one of these USB <-> serial cables now ...
  8. Replying to my own ... reply. I'm pretty sure deluge has a CLI interface, so if I can add in an extra command to tell the deluge server to pause all torrents when the array goes down ... that would be neat. I can always manually start them again, but it's the pausing when the array goes down that concerns me the most. Food for thought.
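One way to wire that up, sketched as a shell function: check whether the share's mountpoint is still present and, if not, run a pause command. The mountpoint and the deluge-console invocation below are assumptions; check "deluge-console help" for the exact syntax on your version.

```shell
#!/bin/sh
# pause_if_unmounted MOUNTPOINT CMD...
# Runs CMD when MOUNTPOINT is no longer listed in /proc/mounts.
pause_if_unmounted() {
    mp=$1; shift
    if grep -qs " $mp " /proc/mounts; then
        echo "$mp still mounted, nothing to do"
    else
        echo "$mp is gone, pausing torrents"
        "$@"
    fi
}

# Example invocation (mountpoint and deluge-console syntax are assumptions):
# pause_if_unmounted /media/tower deluge-console pause '*'
```

Called from cron every minute or so, this would pause the torrents shortly after the array (and hence the CIFS share) goes away.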
  9. Cheers! That helps tremendously. Shutting down access to the shares before stopping the array shouldn't be a problem: the point of the VMs is that they access the unRAID user shares via CIFS, so the normal way of shutting down the array should be sufficient (if I'm correct, it shuts down the SMB/CIFS shares first anyway). At this stage only a couple of things to work on:

- Auto-starting the VMs after everything has booted up ... I might use that startup script you've posted, it looks neat.
- Creating a script that does what clicking 'stop array' does, shuts down the VMs, and then performs a shutdown of the server in one hit ... again, one of your scripts there looks like a neat starting point.
- Once that's working ... getting the UPS I have talking to the server. It's an old Cabac one with a new battery; I still have to get the serial <-> USB cable for it though. I think unMenu can do it, but when I tried to install it yesterday it didn't work at all for me, and I didn't have time to investigate =(
- Getting deluge (a bittorrent server ... that's what one of the VMs is for) to pause all seeds when the share connection drops (i.e. when the array goes down). Otherwise it will instantly try to start downloading all my seeding torrents ...

I'll post notes as I get through all this stuff.
  10. First off, not sure which forum this comes under. I believe I'm finally finished setting this build up on my server at home, and I wanted to share some of my notes from the process in case someone else has similar problems or is setting up a similar environment. Apologies in advance for the length.

All up it took me 4 or 5 days; most of that time was spent making backups of existing data, partially restoring the backups, and waiting for parity checks/builds to complete ... I found a faulty drive in the process. At this stage I'm still 'testing' it (and will be for a few months due to using 5b14), but everything appears to be working fine. I'll be adding more drives (and yes, buying a couple of licenses in the near future), so I'll see how that turns out.

From the start, I wanted to create a build that satisfied these requirements:

- unRAID 5.0 beta 14
- Running from hard disk (Slackware 13.1, kernel 3.1.1)
- 64-bit system (Slackware64 w/ multilib)
- Latest VirtualBox installed & running 2x VM guests
- Mediatomb compiled & running (I have a custom import.js script, and play everything on my WD TV Live)

Notes:

- I used this guide for most of it. Very well written and really not that many problems! (I have compiled a few kernels in my time, so that may have been an advantage ...)
- For adding in multilib support, following the 'quick & dirty' part of this guide was all that was necessary.
- The forum thread here regarding 64-bit support mentions changing the symlinks to ensure unRAID uses the correct libraries; this wasn't necessary. (Actually, unRAID restored the original symlinks upon reboot!) My build displays HDDs the same way under the Main tab of the unRAID web interface as mentioned in the thread.
- The guide to installing VirtualBox in unRAID (here) was very informative but ultimately redundant in this setup. If you install the dev libraries with Slack64 during install and compile your own kernel (as part of the guide to installing unRAID on a full Slack distro), then the VirtualBox installer from Oracle's website should simply install without a hitch. Just remember to run guest VMs under a user, not root ...

Here's a list of the problems I encountered & their solutions:

Unable to boot from flash disk
Cause: flash disk not formatted & prepared correctly
Solution: proper formatting of the flash disk using guiformat.exe (fat32format)

Creating a USB flash boot disk for Slackware64
Cause: provided file & instructions are linux-specific
Solution: boot from an existing linux-based USB flash boot disk, and use dd from a root console to overwrite the 2nd USB flash boot disk (dd if=/folder/usbboot.img of=/dev/sdX bs=512)

Cannot boot the Slackware system once installed on HDD
Cause: question at the end of the Slack install process - 'Detected OS/2 or Partition Magic boot loader'
Solution: ANSWER NO at this question
Additional info: I suspect the Slack installer is detecting the unetbootin/tuxboot bootloader on the USB flash disk

Slackware USB install doesn't show any Slackware64 libraries to install!!
Cause: using the Slackware USB flash disk to install Slackware64
Solution: use the Slackware64 USB flash disk installer to install Slackware64 (sigh)

Unable to access newly created files on user shares
Cause: messing with users outside of unRAID ... I initially thought it was a samba bug
Solution: don't create/edit/touch users outside of unRAID to be used within unRAID!

Finally, on compiling & running Mediatomb:

ffmpeg: Used AlienBob's ffmpeg SlackBuild to set up ffmpeg. The pre-built packages didn't work as they are compiled in an x11 environment (my server is headless). Had to make a few customisations:

- You need YASM 1.0 at least to build x264. Used to find the latest.
- Added --disable-fontconfig to the libass configure.
- Disabled vaapi in the final ffmpeg compile - short answer is I couldn't be bothered getting the drivers set up for the AMD chipset on my MB.

This gives you ffmpeg 0.9.

mediatomb: Used slackBuilds from to add in support for libmp4v2 & SpiderMonkey js - I need the js for my custom import.js script in mediatomb. Compiling mediatomb against the latest ffmpeg (0.9) fails as specified in this bug report. There's a patch there which, when added to the Mediatomb sources, allows Mediatomb to be compiled with the slackBuild from SBo.

As it's probably not a good idea to access /mnt/diskX or /mnt/shares directly from within the server (and behind unRAID's back ... it might get offended), I added an entry to fstab to mount one of unRAID's CIFS shares - specified noauto as I'll figure out how to auto-mount it later. It would probably fail if it were mounted before emhttp brings the share up.

At the moment I only have the one 1Tb WD drive in there - it's not an EARS, but the 3x 2Tb drives I have are. One is used for parity, and one is holding a backup of just about everything I hold dear, so it's not going into the array any time soon. Maybe in another 6 months, if the array is still stable & no other bugs have turned up (maybe 5.0 will be out of beta by then too). The third 2Tb EARS was the faulty one (wasted a good 12 hours waiting for 2x parity checks to confirm) and I'll be sending it away for RMA tomorrow (today is a public holiday). I have a 4th one but can't use it yet (no comment).

Hardware is:

- AMD Phenom II X6 1090T
- 8gb g.skill memory (1300mhz I think)
- Gigabyte 890GPA-UD3H v2.1
- Antec 900-2 case, got about 10 fans all over it

cheers
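For anyone curious, the noauto CIFS fstab entry mentioned in the post above looked something like the following. Server name, share, mountpoint and credentials file are all placeholders here, not my actual values:

```
# /etc/fstab -- mount an unRAID user share over CIFS, manually for now
//tower/media  /mnt/tower-media  cifs  noauto,credentials=/root/.cifscreds,iocharset=utf8  0  0
```

With noauto set, the share is mounted by hand with "mount /mnt/tower-media" once emhttp has brought the array up.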