Vexorg

Members
  • Content Count

    10
  • Joined

  • Last visited

Community Reputation

12 Good

About Vexorg

  • Rank
    Newbie

  1. Tips on how to update Splunk to the newest version, since it looks like this project is dead. Exec into the container and run the following (a quick post-upgrade check is sketched after this list):
     wget -O splunk-7.2.3-06d57c595b80-linux-2.6-amd64.deb "https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=7.2.3&product=splunk&filename=splunk-7.2.3-06d57c595b80-linux-2.6-amd64.deb&wget=true"
     dpkg -i splunk-7.2.3-06d57c595b80-linux-2.6-amd64.deb
     /opt/splunk/bin/splunk start --accept-license
     PS: this should not delete all your logs or burn down your house, but if it does, please don't blame me.
  2. Cool, just installed, so far so good.
  3. Sorry for replying to an old thread, but I'm also trying to get this container working a little better than I already have. I was able to install and run the container, but I would like to expose the /etc/raddb directory to the host, something like /mnt/user/appdata/freeradius/raddb. I've tried to add the mapping after the container was installed, but that borks things and it no longer starts (see the sketch after this list for one possible way around that). I assume the Dockerfile needs to be changed to alter how the install works, but being a Docker newbie I'm not really sure how I would do that.
  4. I'm back to answer my own question. Seems like I'm a boob and forgot what I did when I first set up this docker way back. I guess I updated this docker, and it looks like anything installed into the docker is lost :( The fix is to run this inside the container: apk add --no-cache ipmitool. That is what is needed to install ipmitool into the docker (see the sketch after this list for a way to make it survive updates). To the maintainer of this docker: is it possible to include this program in the base install?
  5. Hi all, I'm having a little issue with my telegraf docker. All was running nice and smooth, with all types of data being stored in influxdb. After a restart of the telegraf docker it stopped getting temps/fans from my motherboard. I have a Supermicro board and I can get the stats using ipmitool. Now in the telegraf log I get: 2018-09-10T05:11:00Z E! Error in plugin [inputs.ipmi_sensor]: ipmitool not found: verify that ipmitool is installed and that ipmitool is in your PATH. I checked and ipmitool is still in /usr/bin/ and is executable (see the sketch after this list for checking inside the container itself). My google-fu suggested that the docker n…
  6. I have 128GB of RAM and the transfers are 600GB and 1.6TB. My drives are relatively fast, with ~200MB/sec transfer speeds. I also have cache drives, but I'm not using them as they are 500GB and seem pointless for these big transfers.
  7. Hello all, I've just built my unraid server and I'm in the process of moving content from my old storage systems to it. I have two systems I'm transferring to unraid over a bonded connection using 802.3ad, via a Cisco switch that supports it. I've configured the two machines so that they each use one of the links; in theory I'll get 2x1Gig... On the unraid server I'm using krusader to transfer from the old server to unraid, skipping the middle man. The other transfer is from a Windows machine to the unraid server. When I start the transfers, the old server…
  8. I didn't have to do it again. I just had to do a reboot, and it seems the setting has stuck so far.
  9. After a ton of Google-fu I was able to resolve my problem. TL;DR: the write cache on the drive was disabled. I found a page called How to Disable or Enable Write Caching in Linux. The article covers both ATA and SCSI drives, which I needed, as SAS drives are SCSI and a totally different beast.
     root@Thor:/etc# sdparm -g WCE /dev/sdd
     /dev/sdd: HGST HUS726040ALS214 MS00
     WCE 0 [cha: y, def: 0, sav: 0]
     This shows that the write cache is disabled (see the sketch after this list for making the change stick across power cycles).
     root@Thor:/etc# sdparm --set=WCE /dev/sdd
     /dev/sdd: HGST HUS726040ALS214 MS00
     This…
  10. Hello all, I've just built a new unraid server, and after playing around with it a bit before really starting to use it, I found that drive writes seemed to be a bit slow. I first noticed the slowness while I was doing a badblocks run on the new drives: after 50 hours it had just finished the first pass, while for 4TB drives I understand it should take about 40 hours total. I did not see any errors, so I killed the process and proceeded to build the array with the drives, and when it was doing the parity build the stated write speed to the drive wi…
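
A quick follow-up sketch for the Splunk upgrade in post 1, assuming the same layout as that post (Splunk installed under /opt/splunk inside the container); the verification step is an addition, not part of the original post:
     # inside the container, confirm the upgrade took and the daemon came back up
     /opt/splunk/bin/splunk version
     /opt/splunk/bin/splunk status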
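
One possible way around the startup failure described in post 3, sketched here as an assumption rather than a tested recipe: a freshly mapped, empty host directory hides the config the image ships in /etc/raddb, so the config needs to be copied out before the mapping is added. The container name freeradius and the image placeholder are assumptions, not details from the original thread:
     # copy the shipped config out of the existing container first
     # (copying into /mnt/user/appdata/freeradius/ creates .../freeradius/raddb)
     docker cp freeradius:/etc/raddb /mnt/user/appdata/freeradius/
     # stop and remove the old container, then recreate it with the mapping in place
     docker stop freeradius && docker rm freeradius
     docker run -d --name freeradius -v /mnt/user/appdata/freeradius/raddb:/etc/raddb <freeradius-image>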
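
A sketch of one way to keep the ipmitool install from post 4 across container updates, assuming the container is based on the official Alpine telegraf image; the derived image name telegraf-ipmi is only an example:
     # build a small derived image that layers ipmitool on top of the stock image,
     # then point the container template at telegraf-ipmi instead of telegraf
     printf 'FROM telegraf:alpine\nRUN apk add --no-cache ipmitool\n' | docker build -t telegraf-ipmi -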
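
For the error in post 5: the log line means ipmitool is missing inside the telegraf container; having it in /usr/bin on the host does not help, because the container only sees its own filesystem. A quick check, assuming the container is named telegraf:
     # prints nothing and exits non-zero if ipmitool only exists on the host
     docker exec telegraf which ipmitool
     # reinstall it inside the (Alpine-based) container, as in post 4
     docker exec telegraf apk add --no-cache ipmitool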
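
Related to post 9: the [cha: y, def: 0, sav: 0] output says the WCE bit is changeable but its saved value is still 0, so a plain --set may not survive a power cycle. A sketch of the persistent form, using the same device name as that post:
     # enable the write cache and also store it in the drive's saved mode page
     sdparm --set=WCE --save /dev/sdd
     # verify: the current and saved values should now both read 1
     sdparm -g WCE /dev/sdd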