Everything posted by Vexorg

  1. I wonder what the longest gap between replies is, but I'd like to add my two cents. I was having problems running Tdarr Node's HandBrakeCLI encoding in Docker: it would only use one CPU core and therefore take forever to encode. I wanted to test whether HandBrake would run better on the host machine rather than in a Docker container or VM. I found this thread and, it being over a decade old, I really had no hopes, but I figured I'd give it a try, and I'm glad to say it still works! I had to tweak some things, but you can get this going. I'm running unRAID 6.9.2. Running ldd against HandBrakeCLI will show the libraries it needs; I would just search for the missing ones and install them. These are the packages I ended up installing:

     handbrake-1.5.1-x86_64-1alien.txz
     fontconfig-2.13.92-x86_64-3.txz
     freetype-2.12.1-x86_64-1.txz
     gcc-g++-12.1.0-x86_64-1.txz
     glibc-2.35-x86_64-2.txz
     graphite2-1.3.14-x86_64-3.txz
     harfbuzz-4.4.1-x86_64-1.txz
     libdrm-2.4.110-x86_64-1.txz
     libogg-1.3.5-x86_64-1.txz
     libtheora-1.1.1-x86_64-4.txz
     libva-2.14.0-x86_64-1.txz
     libvorbis-1.3.7-x86_64-3.txz
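The library hunt described above can be sketched as a short script. The binary path is an assumption (it depends on where the HandBrakeCLI package unpacked on your system):

```shell
#!/bin/sh
# Sketch, assuming HandBrakeCLI ended up at /usr/bin/HandBrakeCLI (path is a guess).
# ldd prints every shared library the binary needs; "not found" marks the missing ones.
BIN=${1:-/usr/bin/HandBrakeCLI}
ldd "$BIN" | awk '/not found/ {print $1}'
# For each name printed, find the matching Slackware .txz package and install it, e.g.:
#   installpkg libvorbis-1.3.7-x86_64-3.txz
```

Re-run it after each installpkg until nothing is printed; at that point the binary should start.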
  2. Map /temp to /dev/shm. /dev/shm is a tmpfs that by default gets half your RAM. I have 128G, so it can hold up to 64G, but it eats into total RAM as you use it: if there is nothing in the temp folder I have 128G free, but once I start using it my system's total RAM decreases, and if /dev/shm is completely full I only have 64G left for the rest of the system. Also, if something goes wrong with Tdarr it might leave files in there, so you'll have to delete them yourself or reboot. Please note: if this burns down your house, I'm not responsible.
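You can see the half-of-RAM cap and current usage for yourself with a quick, read-only check (nothing Tdarr-specific about it):

```shell
#!/bin/sh
# Show the size cap and current usage of the shared-memory tmpfs.
# "Size" is the cap (half of physical RAM by default); "Used" is what's eating into your free RAM.
df -h /dev/shm
# Leftover work files can be cleared by hand instead of rebooting; inspect first with:
#   ls -lh /dev/shm
```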
  3. Tips on how to update Splunk to the newest version, since it looks like this project is dead. Exec into the container and do:

     wget -O splunk-7.2.3-06d57c595b80-linux-2.6-amd64.deb ""
     dpkg -i splunk-7.2.3-06d57c595b80-linux-2.6-amd64.deb
     /opt/splunk/bin/splunk start --accept-license

     PS: if this deletes all your logs (it shouldn't) or burns down your house, please don't blame me.
  4. Cool, just installed, so far so good.
  5. Sorry for replying to an old thread, but I'm also trying to get this container working a little better than I already have. I was able to install and run the container, but I'd like to expose the /etc/raddb directory to the host, something like /mnt/user/appdata/freeradius/raddb. I've tried adding the mapping after the install, but that borks things and the container no longer starts. I assume the Dockerfile needs to be changed to alter how the install works, but being a Docker newbie I'm not really sure how I would do that.
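A common cause of exactly this symptom is that bind-mounting an empty host directory over /etc/raddb hides the config the image shipped with, so the server has nothing to start from. One way around it, sketched below and not tested against this particular image (container name, paths, and image name are assumptions), is to seed the host directory from the container first and only then add the mount:

```shell
#!/bin/sh
# Copy the container's shipped config out to the host once
# (assumes a container named "freeradius" already exists)
docker cp freeradius:/etc/raddb /mnt/user/appdata/freeradius/

# Then recreate the container with the seeded host copy mounted over /etc/raddb
docker run -d --name freeradius \
  -v /mnt/user/appdata/freeradius/raddb:/etc/raddb \
  freeradius/freeradius-server
```

After that, edits made on the host side show up inside the container and survive updates.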
  6. I'm back to answer my own question. Seems like I'm a boob and forgot what I did when I first set up this docker way back. I guess I updated the docker, and it looks like anything installed into the container is lost :(

     apk add --no-cache ipmitool

     This is what is needed to install ipmitool into the docker. To the maintainer of this docker: is it possible to include this program in the base install?
  7. Hi all, I'm having a little issue with my telegraf docker. All was running nice and smooth, with all types of data being stored in InfluxDB. After a restart of the telegraf docker it stopped getting temps/fans from my motherboard. I have a Supermicro board and I can get the stats using ipmitool. Now in the telegraf log I get:

     2018-09-10T05:11:00Z E! Error in plugin [inputs.ipmi_sensor]: ipmitool not found: verify that ipmitool is installed and that ipmitool is in your PATH

     I checked, and ipmitool is still in /usr/bin/ and is executable. My Google-fu suggested that the docker needs root access, and I think it does. Like I said, this was all working before.
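A quick way to reproduce what the plugin is checking is to run the same PATH lookup from inside the container (via something like `docker exec telegraf sh`; the container name is an assumption). If /usr/bin/ipmitool exists on the host but this reports "not in PATH" inside the container, the binary lives only on the host filesystem, not in the container's:

```shell
#!/bin/sh
# Reproduce telegraf's lookup: is ipmitool resolvable via PATH?
if command -v ipmitool >/dev/null 2>&1; then
  echo "found: $(command -v ipmitool)"
else
  echo "not in PATH ($PATH)"
fi
```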
  8. I have 128G of RAM and the transfers are 600GB and 1.6TB. My drives are relatively fast, with ~200MB/sec transfer. I also have cache drives, but I'm not using them: they are 500GB and seem pointless for these big transfers.
  9. Hello all, I've just built my unRAID server and I'm in the process of moving content from my old storage systems to it. I have two systems transferring to unRAID over a bonded connection using 802.3ad, via a Cisco switch that supports it. I've configured the two machines so that they each use one of the links; in theory I'll get 2x1Gig. On the unRAID server I'm using Krusader to transfer from the old server directly, skipping the middle man; the other transfer is from a Windows machine to the unRAID server. When I start the transfers, the old server transfers at about 450Mbit; being a decade-old server, I can live with that. From Windows I get the full gig. After about 4 minutes both transfers tank big time and never seem to recover. See the attached pic: it shows the two ports along with the port channel's combined speed. Not sure if I'm filling a Samba buffer, an MD buffer, or something else. In the long run it really doesn't matter, as after these big transfers the unRAID server will have all the old data on it. PS: I also turned on fast writes before this all started.
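One buffer worth ruling out for that "fast for a few minutes, then tanks" pattern is the kernel's write-back cache: with 128G of RAM, incoming data can land in page cache at line speed until the dirty-page limit is hit, after which transfers throttle to what the array can actually sustain. A read-only check of the relevant knobs:

```shell
#!/bin/sh
# Percent of RAM allowed to hold dirty (not-yet-written) pages before writers are throttled.
# A large burst fills this at network speed first; then throughput drops to real disk speed.
echo "dirty_ratio:            $(cat /proc/sys/vm/dirty_ratio)"
echo "dirty_background_ratio: $(cat /proc/sys/vm/dirty_background_ratio)"
```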
  10. I didn't have to do it again. I just had to reboot, and it seems the setting has stuck so far.
  11. After a ton of Google-fu I was able to resolve my problem. TL;DR: the write cache on the drive was disabled. I found a page called "How to Disable or Enable Write Caching in Linux". The article covers both ATA and SCSI drives, which I needed, as SAS drives are SCSI and a totally different beast.

      root@Thor:/etc# sdparm -g WCE /dev/sdd
      /dev/sdd: HGST HUS726040ALS214 MS00
      WCE 0 [cha: y, def: 0, sav: 0]

      This shows that the write cache is disabled.

      root@Thor:/etc# sdparm --set=WCE /dev/sdd
      /dev/sdd: HGST HUS726040ALS214 MS00

      This enables it, and my writes returned to the expected speeds.

      root@Thor:/etc# sdparm -g WCE /dev/sdd
      /dev/sdd: HGST HUS726040ALS214 MS00
      WCE 1 [cha: y, def: 0, sav: 0]

      This confirms the write cache has been set. Now I'm not totally sure why the write cache was disabled under unRAID: bug or feature? While doing my googling there was mention of a kernel bug from a few years ago where, if system RAM was more than 8G, the write cache got disabled. My current system has a little more than 8G, so maybe?
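Note the "sav: 0" in the output above: the change was only written to the current mode page, so it may revert at the next power cycle. sdparm's --save flag also writes the saved page, which should make the setting persistent (untested sketch; device node is just the example from above):

```shell
#!/bin/sh
# Also write the saved mode page so WCE survives a power cycle
sdparm --set=WCE --save /dev/sdd
sdparm -g WCE /dev/sdd    # "sav: 1" in the output indicates the setting is now persistent
```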
  12. Hello all, I've just built a new unRAID server, and while playing around with it before really starting to use it, I found drive writes seemed a bit slow. I first noticed the slowness during a badblocks run on the new drives: after 50 hours it had just finished the first pass, and for 4TB drives I understand the whole run should take around 40 hours. I did not see any errors, so I killed the process and proceeded to build the array with the drives. During the parity build, the stated write speed to the drives was 47MB/sec, which seemed slow. I checked the specs from the manufacturer, which say the drives should be closer to 200MB/sec, so I'm getting about 25% of the expected speed. Next I booted into Windows and ran a few benchmarking tools, and they all reported read/write speeds around 200MB/sec. I did find that one of the LSI cards was in a PCIe 4x slot instead of an 8x one, so I moved it to a new slot. Then I tried a live Linux USB to see whether the problem was with Linux in general, since things seemed fine in Windows; using Knoppix, a few tests put the drive speeds in the 200MB/sec range again?!? My conclusion is that the unRAID distro is the problem, not the hardware. Some hardware details of my system:

      Supermicro X10DRi-LN4+
      2x Intel E5-2630
      3x LSI 9211 HBAs connected to a Supermicro BPN-SAS-846A backplane
      6x HGST 4TB 3.5" SAS drives (HUS726040ALS214)

      root@Thor:~# hdparm -Tt /dev/sdh
      /dev/sdh:
      Timing cached reads: 21860 MB in 1.99 seconds = 10970.43 MB/sec
      Timing buffered disk reads: 570 MB in 3.00 seconds = 189.82 MB/sec

      root@Thor:~# dd if=/dev/zero of=/dev/sdh bs=128k count=10k
      10240+0 records in
      10240+0 records out
      1342177280 bytes (1.3 GB, 1.2 GiB) copied, 28.5249 s, 47.1 MB/s

      I'm not really sure where to look next.
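One side note on the dd test itself: without a sync option, dd can report page-cache speed rather than disk speed, and conversely a throttled cache makes it honest in the other direction. Adding conv=fdatasync forces the data out before the rate is printed, giving comparable numbers across systems. A sketch that defaults to a scratch file (point it at a raw device only if you mean to wipe it):

```shell
#!/bin/sh
# Sequential write test that flushes data to storage before reporting the rate.
# Default target is a harmless scratch file; pass a device path to test real disk speed.
TARGET=${1:-/tmp/dd-writetest}
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync
rm -f "$TARGET"
```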