plowna

Members
  • Content Count
    9

Community Reputation

0 Neutral

About plowna

  • Rank
    Newbie

  1. Add them to the users group. While logged in as root, edit the file /etc/group & add them to the end of the users line, i.e. users:x:100:joe,jill,new_user,other_user; you get the idea (an alternative using usermod is sketched after this list).
  2. There will likely be others with far more experience in this regard; I can only speak from my recent experience (detailed here). I take it you want to keep the full Slackware install on a drive that isn't connected via SATA, as you'd like to maximise the number of drives connected to the array. I also take it the ProLiant box has 6 SATA ports & the added card will provide another 2. One option might be a portable HDD, though it might be better to put another 2.5" SATA HDD in an external case; you'd need to check that the ProLiant can boot off it. Or run with 2x ...
  3. Got my script written for starting VMs on bootup. I found a much more comprehensive one (okay, for some reason the forum doesn't like my link; it points to a pastebin entry), but it seemed a bit too complicated for me. My script is made to be called by root but runs the guests as the chosen user, calling them by UUID (a rough sketch follows after this list). Some of it is a bit kludgey; I'm not the best of bash script writers. I've created a button for it in unMenu, which just calls the script. I'm still a bit hesitant about starting the VMs automatically on boot; I'd like to get the timing correct, i.e. only if the network ...
  4. Got it installed & running OK. Ran a quick cfgmaker public@localhost > mrtg.cfg; I wasn't sure if the .cfg you put in was the same (the full command sequence is sketched after this list). I added a link to the 'Useful Links' page for easy access.
  5. I'll give this a go. I know of the program but have never put in the effort to find out how it works. Am I correct in assuming I need to install the MRTG Slackware package (wherever it is) and then use the .conf file provided to enable it within unMenu?
  6. Finally got unMenu set up & installed. It turned out the locally added share (i.e. mount.cifs //127.0.0.1/share /media/my_share) for mediatomb caused the initial commands that uu invoked (i.e. df --block-size=1000) to get stuck in a loop; browsing to //tower:8080 would just stall at 'waiting for server'. Welp, that seals it: no locally added shares!! I also installed unMenu to the flash drive (/flash/unmenu), then added symlinks to the relevant directories off /boot (/boot/packages, /boot/unmenu etc.); the commands are sketched after this list. All seems to be working OK. Just need to get myself one of these USB <-> serial c...
  7. Replying to my own ... reply. I'm pretty sure deluge has a CLI interface, so if I can add an extra command to tell the deluge server to pause all torrents when the array goes down ... that would be neat (a sketch follows after this list). I can always start them again manually, but it's the pausing when the array goes down that concerns me the most. Food for thought.
  8. Cheers! That helps tremendously. Shutting down access to the shares before stopping the array shouldn't be a problem; the point of the VMs is that they can then access the unRAID user shares via CIFS, so the normal way of shutting down the array should be sufficient (if I'm correct it shuts down the SMB/CIFS shares first anyway). At this stage there are only a couple of things to work on: auto-starting the VMs after everything has booted up (I might use that startup script thingo you've posted, it looks neat), and creating a script that will do what clicking 'stop array' does (sketched after this list), as well as ...
  9. First off, I'm not sure which forum this comes under. I believe I'm finally finished setting this build up on my server at home, and I just wanted to share some of my notes from going through the process in case someone else has similar problems or is setting up a similar environment. Also, apologies in advance for the length. All up it took me 4 or 5 days; most of that time was spent making backups of existing data, partially restoring the backups and waiting for parity checks/builds to complete ... found a faulty drive in the process. At this stage I'm still 'testing' it (and will be for ...
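For post 1: instead of hand-editing /etc/group, the same result can be had with usermod. A minimal sketch, assuming the new account is called new_user (a placeholder):

    # -a appends; without it, -G would replace the user's existing group list
    usermod -a -G users new_user

    # Verify the membership took
    grep '^users:' /etc/group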
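For post 3: the post only says the script is called by root, runs the guests as a chosen user, and addresses them by UUID. A rough sketch of that idea, assuming VirtualBox (not confirmed by the post; the user name & UUID are placeholders):

    #!/bin/bash
    # Called by root at boot; starts each guest as the chosen non-root user
    VM_USER=joe
    VM_UUIDS="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

    for uuid in $VM_UUIDS; do
        # Run each guest headless under the non-root account
        su - "$VM_USER" -c "VBoxManage startvm $uuid --type headless"
    done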
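For posts 4 & 5: the rough MRTG setup sequence on Slackware, assuming a stock package (the package filename & web paths below are placeholders; 'public' is just the default SNMP read community):

    # Install the Slackware package
    installpkg mrtg-2.x.x-x86_64-1.txz

    # Walk the local SNMP agent & generate a config
    cfgmaker public@localhost > /etc/mrtg.cfg

    # Build an index page for the graphs
    indexmaker /etc/mrtg.cfg > /var/www/mrtg/index.html

    # Poll once (normally run from cron every 5 minutes)
    mrtg /etc/mrtg.cfg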
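For post 6: the two pieces described there, with paths exactly as given in the post. The loopback mount is what hung df, so it's shown only for the record:

    # The locally added share that caused df --block-size=1000 to stall - avoid
    mount.cifs //127.0.0.1/share /media/my_share

    # unMenu installed on the flash drive, with symlinks back under /boot
    ln -s /flash/unmenu   /boot/unmenu
    ln -s /flash/packages /boot/packages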
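For post 7: deluge does ship a console client (deluge-console), so a stop script could pause everything before the array goes down. A sketch, assuming a 1.3-era console whose pause command accepts '*' for all torrents (check 'help pause' first):

    #!/bin/bash
    # Pause every torrent before stopping the array
    deluge-console "pause *"

    # ... stop the array here ...

    # To start them again later:
    # deluge-console "resume *"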
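For post 8: a hedged sketch of what the 'stop array' button boils down to on unRAID of that era: stop SMB, unmount the data disks, then stop the md driver (mdcmd is the stock unRAID tool; the VM shutdown & deluge pause from the other posts would slot in at the top):

    #!/bin/bash
    # Shut down the VMs / pause deluge first (see the other sketches)

    # Stop Samba so nothing holds the shares open
    /etc/rc.d/rc.samba stop

    # Unmount each data disk
    for d in /mnt/disk*; do
        umount "$d"
    done

    # Tell the unRAID md driver to stop the array
    /root/mdcmd stop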