je82

Members
  • Content Count

    231
  • Joined

  • Last visited

Community Reputation

12 Good

About je82

  • Rank
    Advanced Member

Converted

  • Gender
    Male

  1. Hi, I wanted to use this Pi-hole docker as my second DNS server. I already have a hardware Raspberry Pi as my main Pi-hole DNS, but having a backup is always good. I was wondering how I could change the hostname of this Pi-hole installation? I was hoping to simply SSH in and change /etc/hostname to whatever I wanted, but there is no editor included in the docker, so I'm unsure how to accomplish this. Ideas? Thanks.
     EDIT: Figured it out:
     SSH into the docker: cp /etc/hostname /etc/pihole/
     SSH into your Unraid install: nano /mnt/cache/appdata/pihole/pihole/hostname, change the name and save
     SSH back into the docker: cp /etc/pihole/hostname /etc/
     EDIT AGAIN: Won't survive a docker container restart, damn.
     EDIT one more time: Added --hostname pihole2 to the docker run command and it seems to work (see the sketch after this list)!
  2. I'm no expert, but I've read that Plex installations on 6.8.0 and higher are struggling a lot due to the incredibly bad SMB performance introduced with 6.8.0. If possible, and not too much trouble, I'd try downgrading to 6.7.2 and see if the "sluggishness" remains, because for me 6.8.0 was nearly unusable; I have many services depending on SMB, and 6.7.2 handles it just fine.
  3. Alright, thanks for the information. It would be cool to have that on the roadmap for future features... I am not fully exposed, as RDM encrypts all the data it stores and I only initiate traffic to the Unraid installation on the LAN, so I should be fine for now.
  4. Hello, so I've set up my Remote Desktop Manager (which is a wonderful Windows program, by the way, for keeping all my remote stuff in one place) to run a cmd plink script that SSHes into my Unraid installation and starts the Krusader docker container. After 5 seconds have passed, it initiates the actual VNC connection to the now-started container. When I exit the VNC connection, it sends another cmd plink command to stop the Krusader container (see the sketch after this list). This all works fine, but I have to pass the password in clear text using plink, and I use the root login, which feels bad. My question to you is: how do I set up a user that I can SSH into Unraid with that ONLY has permission to start/stop/manage the docker containers and not the rest of the Unraid installation? I would rather use a limited user with only these privileges than the root account. Any ideas? Thank you.
  5. Neither, it's 2 logic cases. Yes, totally worth it! No more noisy racks
  6. Looks like 6.8.1 just went stable. Has anyone taken the plunge and tried the SMB performance?
  7. Nice! I hear this may also resolve the slow SMB performance of 6.8.0? I am waiting for 6.8.1 to settle and for all the dockers to be updated if need be before I upgrade. Great to see a new version so soon!
  8. I think the command restarts the VNC service in the docker, but I'm not really sure. I noticed something else: every time I update this JDownloader docker, it adds a new download directory mount named "jdownloader", and it also adds a share called "jdownloader". Is there any way to stop it from doing this? I have already set up my own download paths and shares to use with it.
  9. Interesting, I will look into the root share; that may be a good way for me to deal with this. The only real reason I limited my Unraid shares to specific drives was in hopes of not having to "wake the entire array" every time I want to access data. I'm attempting to separate the data I use personally and the data I just need to keep around onto two different sets of drives, so I can access my regular data often without "waking the beast", so to say.
  10. I started my Unraid journey in October 2019, trying out some stuff before I decided to take the plunge and buy 2 licences (one for backups and one for my daily use). I work in IT, and knowing how much noise rack equipment makes at work, I had no plans whatsoever to go that route.
      My first attempt at an Unraid build used a large Nanoxia high tower case that could take at most 23 3.5" drives, but once I reached around 15 drives the temperatures started to become a problem. What I also didn't anticipate was how incredibly hot the HBA controllers get. I decided to buy a 24-bay 4U chassis to see what I could get away with by just using the case, switching out the fan wall for Noctua fans and such to reduce the noise levels. I got a 24-bay case that supports a regular ATX PSU, because I know server chassis usually go for redundant 2U PSUs and those sound like jet engines, so I wanted none of that. With everything installed in the 24-bay 4U chassis, the noise levels were almost twice as high as in the Nanoxia high tower, even after I switched the fans out and used some nifty "low noise adapters".
      Next up was the idea of a soundproofed 12U rack. Is it possible? How warm will stuff get? What can I find? I ended up taking a chance and bought a soundproofed 12U rack from a German company called "Lehmann". It wasn't cheap in any sense, but it was definitely worth it! I couldn't possibly be happier with my build.
      From top to bottom:
      10Gbps switch
      AMD Threadripper VM host server
      Intel Xeon Unraid server
      StarTech 19" rackmount KVM switch
      APC UPS with management card
      In total, 12U of 12U used! Temps: around 5°C higher than room temperature, Unraid disks averaging 27°C. Noise level? Around 23 dB; I can sleep in the same room as the rack!
  11. Has anyone checked whether 6.8.1-rc1 resolves this issue? I'm still on 6.7.2 due to the terrible SMB performance in 6.8.0.
  12. As it stands now with Unraid, say you move files over SMB from one share to another, for example:
      Share 1 (disks 1, 2, 3 included) -> Share 2 (disks 4, 5, 6 included)
      When you issue this move over SMB, Unraid starts copying the files from Share 1 to the cache drive right away, to be moved on to Share 2 at night when the mover runs. You're limited to the speed of disks 1, 2, 3 during this process; 50 GB of data will usually take a couple of minutes, as most hard drives top out around 150-180 MB/s read speed.
      What I would want is a queue system for these kinds of events, with another cron job that runs right before the regular mover and processes them. The result would be: you issue a move request from Share 1 (disks 1, 2, 3) to Share 2 (disks 4, 5, 6), and the move is nearly instant, because Unraid just writes a log of which files it will move to the cache during the night; after this queue is processed, the regular mover runs as usual. This way all movement within the array feels instant, and you're only limited in how much you can move per day by your cache size. I think it would be a great improvement to Unraid, making everything feel faster. Thoughts? Issues? (A rough sketch of the idea follows after this list.)
  13. Holy smokes, it was a faulty network cable causing me trouble, and it decided not to work every time I had my 10Gbps switch connected... What a strange stroke of luck; it works fine now. I cannot believe it just randomly refused to work whenever I had the switch connected, but now that I'm jiggling it around a bit I notice it is dropping randomly. Anyway... I can send WoL fine. I'm making sure to cut this cable in two; this is a cursed one!
  14. Interesting, but doesn't that only relate to having the actual Intel NIC wake the system up from sleep? Or is the NIC actually not capable of sending a WoL packet? I've never heard of such a thing. Shouldn't any NIC be capable of crafting such a packet, even though not all NICs support actually listening for one while the system is sleeping? (See the sketch further down for the common ways of sending one.) I'm no network expert; a bond seemed the easiest way to run multiple interfaces as one, and I don't know what br0 is. I guess I will have to play around a little more. I did add a 10Gbps switch between the server and the backup machine, and perhaps the switch is not passing the WoL packet even though it is currently configured to be just a dumb switch, nothing more. I'll have to do some more testing, thanks for the reply!
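
Regarding post 1 above: a minimal sketch of what a run command with the --hostname flag could look like. On Unraid the same flag would typically go into the container template's "Extra Parameters" field rather than a hand-typed docker run. The ports and image tag here are illustrative assumptions; the appdata path is taken from the post.

```
# Hypothetical run command for a second Pi-hole container with a fixed hostname.
# --hostname persists across container restarts because it is part of the
# container's configuration, unlike editing /etc/hostname inside the container.
docker run -d \
  --name pihole2 \
  --hostname pihole2 \
  -p 53:53/tcp -p 53:53/udp \
  -p 8080:80/tcp \
  -v /mnt/cache/appdata/pihole/pihole:/etc/pihole \
  pihole/pihole:latest
```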
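
For post 4: a rough sketch of the plink calls described there, plus a key-based variant that at least keeps the password out of the script (it still logs in as root, which is the part the post is asking how to avoid). The hostname "tower", the password, and the key path are placeholders; the container name Krusader comes from the post.

```
REM Hypothetical cmd wrapper around the setup described in the post.
REM Start the Krusader container before the VNC session (password in clear text):
plink -ssh -pw MyPassword root@tower "docker start Krusader"

REM Alternative: authenticate with a PuTTY private key instead of -pw,
REM so no password appears in the script (still the root account, though):
plink -ssh -i C:\keys\unraid.ppk root@tower "docker start Krusader"

REM After the VNC session ends, stop the container again:
plink -ssh -pw MyPassword root@tower "docker stop Krusader"
```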
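
For post 12: a very rough sketch of how the proposed move queue could work, assuming a hypothetical queue file on the flash drive and a cron job scheduled just before the regular mover. The file format, paths, and script name are all made up for illustration, and for simplicity this version moves queued files straight to the destination share instead of staging them on the cache; it is not how Unraid's mover actually works.

```
#!/bin/bash
# process_move_queue.sh - hypothetical pre-mover job for the proposed queue.
# A "move" over SMB would only append a line to this file instead of copying data:
#   echo "/mnt/user/Share1/bigfile.mkv|/mnt/user/Share2/" >> /boot/config/move_queue.txt
QUEUE=/boot/config/move_queue.txt   # made-up location on the flash drive

[ -s "$QUEUE" ] || exit 0           # nothing queued, nothing to do

while IFS='|' read -r SRC DST; do
    # Copy then delete the source; rsync keeps permissions and timestamps.
    rsync -a --remove-source-files "$SRC" "$DST" \
        && echo "$(date) moved $SRC -> $DST" >> /var/log/move_queue.log
done < "$QUEUE"

: > "$QUEUE"                        # clear the queue once it has been processed

# Example cron entry, running half an hour before a 04:00 mover schedule:
# 30 3 * * * /boot/config/process_move_queue.sh
```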
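
For post 14: yes, any machine can craft and send a WoL magic packet; it is listening for one while asleep that needs NIC/BIOS support. Two common command-line ways to send one, assuming the etherwake and wakeonlan utilities are installed; the MAC address is a placeholder and br0 is assumed to be the sending interface.

```
# etherwake sends the magic packet as a raw Ethernet frame out of a specific
# interface, so it is not affected by IP routing (needs root):
etherwake -i br0 AA:BB:CC:DD:EE:FF

# wakeonlan sends it as a UDP broadcast instead, which a dumb switch
# should simply flood to all ports:
wakeonlan AA:BB:CC:DD:EE:FF
```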