SphericalRedundancy

Everything posted by SphericalRedundancy

  1. Unfortunately no, I didn't. I kept having other issues as well, so I migrated my files off of my Unraid server and stopped using it.
  2. This still has not been fixed, and it's pretty ridiculous depending on the type of rules you want to use. Using something as simple as the rule below means you can only have 8 hosts per share without messing around in the exports file yourself: 192.168.10.5(sec=sys,rw,sync)
  3. I currently have 8 NFS shares, all configured as private NFS shares and working as expected, but I noticed today that when I attempt to add a 9th NFS share, the entry gets added to the /etc/exports file yet the new share is not actually exported, as shown by showmount or exportfs -v. I noticed that when I run exportfs -ra to re-initialize the NFS service and have it export the shares configured in /etc/exports, it shows an error message about incorrect syntax for a bad option list. This points to line 9 in the /etc/exports file, but line 9 is a completely different share that is currently working normally and exported successfully. If I remove line 9 by disabling that share in the Unraid GUI, the syntax error just moves up one line, but I don't see any issues with the content of /etc/exports, and I wouldn't expect any errors considering I haven't manually edited the file; all entries in /etc/exports were added by Unraid itself. I've attached some other images below for reference. The NFS share that is not working as expected is /mnt/user/tv.
     Unraid share settings in GUI:
     Content of /etc/exports:
     # See exports(5) for a description.
     # This file contains a list of all directories exported to other computers.
     # It is used by rpc.nfsd and rpc.mountd.
     "/mnt/user/backups" -async,no_subtree_check,fsid=102 192.168.1.10(sec=sys,rw,sync)
     "/mnt/user/comics" -async,no_subtree_check,fsid=108 192.168.1.38(sec=sys,ro,sync) 192.168.1.33(sec=sys,ro,sync) 192.168.1.17(sec=sys,rw,sync)
     "/mnt/user/downloads" -async,no_subtree_check,fsid=100 192.168.1.51(sec=sys,rw,sync) 192.168.1.52(sec=sys,rw,sync) 192.168.1.44(sec=sys,rw,sync) 192.168.1.27(sec=sys,rw,sync) 192.168.1.53(sec=sys,rw,sync)
     "/mnt/user/ebooks" -async,no_subtree_check,fsid=109 192.168.1.38(sec=sys,ro,sync)
     "/mnt/user/instructional" -async,no_subtree_check,fsid=110 192.168.1.38(sec=sys,ro,sync)
     "/mnt/user/movies" -async,no_subtree_check,fsid=107 192.168.1.47(sec=sys,ro,sync)
     "/mnt/user/other" -async,no_subtree_check,fsid=112 192.168.1.33(sec=sys,ro,sync)
     "/mnt/user/proxmox-isos" -async,no_subtree_check,fsid=105 192.168.1.14(sec=sys,rw,sync),192.168.1.15(sec=sys,rw,sync)
     "/mnt/user/tv" -async,no_subtree_check,fsid=106 192.168.1.47(sec=sys,ro,sync)
     Output of exportfs -v:
     Any help will be very much appreciated, as I don't understand what is happening. Thank you.
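     Aside: a minimal diagnostic sketch, my own and not an Unraid or nfs-utils tool, that scans /etc/exports for things exportfs could plausibly reject. The file path, the length threshold, and the "comma between host clauses" check are assumptions based on the exports(5) format, where hosts are separated by whitespace.
     #!/usr/bin/env python3
     # check_exports.py - rough sanity check of /etc/exports entries.
     import re

     EXPORTS_PATH = "/etc/exports"   # assumed location
     MAX_LINE = 512                  # arbitrary threshold for "unusually long"

     # e.g. 192.168.1.10(sec=sys,rw,sync)
     HOST_CLAUSE = re.compile(r"[^\s(]+\([^)]*\)")

     with open(EXPORTS_PATH) as fh:
         for lineno, raw in enumerate(fh, start=1):
             line = raw.rstrip("\n")
             if not line.strip() or line.lstrip().startswith("#"):
                 continue
             clauses = HOST_CLAUSE.findall(line)
             print(f"line {lineno}: {len(clauses)} host clause(s), {len(line)} chars")
             if len(line) > MAX_LINE:
                 print("  unusually long line")
             if re.search(r"\)\s*,\s*[^\s(]+\(", line):
                 print("  host clauses separated by ',' rather than whitespace")
     Running it against the file above would count the host clauses per export and flag any entry where two host(option) groups are joined by a comma instead of a space.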
  4. Is there a way to update to a version that's above 6.5.3 but isn't 6.6.5? My server had been running smoothly for months, and I avoided updating because I wanted to wait until 6.6.x was cleaned up of any of the bigger bugs, but since I updated to 6.6.5 last week my server has crashed every day; after downgrading to 6.5.3 it no longer happens. I'd like to update to something other than 6.6.5 so that maybe I can narrow down which update is affecting me.
  5. Yeah, it's on the LAN. I was using Chrome, but it's the same thing with Firefox.
  6. So I just did this and it's all working great, except that after I installed the Red Hat QXL display controller driver, the black border around my mouse cursor was removed. So when I mouse over the file explorer, or anything with a white background, I can't see my mouse at all. Anyone got any ideas?
  7. Is it normally so slow? It takes several seconds after clicking or typing for anything to happen.
  8. I guess they could also implement logic to only move to the cache if the files are headed to a different drive, and skip the cache if moving within the same disk to just a different folder (roughly the rule sketched below). For now I guess I'll have to move share B to an unassigned device, since I don't actually care about it being protected. I assume that since that is outside of the array, I'd then be able to transfer to share A and it'd use the cache drive?
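     A rough sketch of that decision rule, purely my own illustration and not how Unraid's mover or transfer path actually behaves; the disk names and helper function are hypothetical:
     # Illustration of the proposed rule, not Unraid's actual transfer logic.
     def use_cache_for_transfer(src_disk: str, dst_disk: str, dst_share_uses_cache: bool) -> bool:
         """Write through the cache only when the destination share wants it
         and the data is actually landing on a different physical disk."""
         if not dst_share_uses_cache:
             return False
         if src_disk == dst_disk:
             # Same disk, different folder: a direct move avoids the extra copy.
             return False
         return True

     # Example: source folder lives on disk3, destination share (cache=yes) targets disk5.
     print(use_cache_for_transfer("disk3", "disk5", dst_share_uses_cache=True))   # True
     print(use_cache_for_transfer("disk3", "disk3", dst_share_uses_cache=True))   # False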
  9. So I've got 8 data drives, a parity drive, and a cache drive. Let's say I have two shares: share A is set to yes to use the cache drive, while share B is set to not use it at all. I will sometimes need to transfer large (100 GB or more) folders from share B to share A. Whenever I do this the transfer speed is very slow, ranging from 5 MB/s to 10 MB/s, and as far as I can tell this is because the data is being written not only to share A but also to parity. Why doesn't the transfer from share B go to the cache drive first, since share A is set to use it?
  10. Why is the drive temperature shown as an asterisk here? Happens to the bottom two drives sometimes.
  11. Gotcha, so yeah I did have it set incorrectly. Is there a way to have unraid move the files around again so they follow the new split level rules?
  12. The folders I copied were all TV shows for a single share, and the share is set as follows. So that would be an example of how I have the tv share laid out: tv (share name) > type > a bunch of different shows per type > season folders per show. I have it set to split at level 2, which I think would only keep the season folders together? Although, based on your previous sentence and calling the share name itself the top level, maybe it should be set to split at level 3? (See the level-counting sketch below.)
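      A rough way to picture the level counting, under my own (unconfirmed) assumption that split level N means only the top N directory levels under the share may be spread across disks, with anything deeper staying together on whichever disk its level-N parent lands on; the show and season names are made up:
      from pathlib import PurePosixPath

      def allocation_unit(share_root: str, path: str, split_level: int) -> str:
          """Return the directory whose contents would stay on a single disk."""
          rel = PurePosixPath(path).relative_to(share_root)
          return str(PurePosixPath(share_root, *rel.parts[:split_level]))

      # tv (share) > type > show > season
      print(allocation_unit("/mnt/user/tv", "/mnt/user/tv/anime/ShowX/Season 01", 2))
      # -> /mnt/user/tv/anime/ShowX             (each show kept together on one disk)
      print(allocation_unit("/mnt/user/tv", "/mnt/user/tv/anime/ShowX/Season 01", 3))
      # -> /mnt/user/tv/anime/ShowX/Season 01   (only each season kept together)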
  13. So I must be doing something wrong. Why is unRAID doing this? I moved a bunch of folders onto my cache drive, totaling about 750GB (my cache is 1TB), and before the move disk 1 had about 900GB available, yet it seems like unRAID decided to move it all to one drive and I have no idea why. I'm pretty sure I have my split level set right on my shares, but I don't know at this point.
  14. When there are updates for Docker images and you run the update for all containers, if your system time is off, the docker pull/run command will fail because of a certificate error, but this will not show up in the log that is displayed: it will show 0 bytes pulled for the new image, rebuild the container using the existing image, and report the update as successful even though no update has been applied.
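      One way to double-check whether an update actually pulled anything, independent of what the GUI reports, is to compare the local image ID before and after the pull. The sketch below is my own and not how Unraid performs its check, and the image name is just an example:
      import subprocess

      def image_id(ref: str) -> str:
          """Return the local image ID for a reference, or '' if it is not present."""
          result = subprocess.run(
              ["docker", "image", "inspect", "--format", "{{.Id}}", ref],
              capture_output=True, text=True,
          )
          return result.stdout.strip() if result.returncode == 0 else ""

      def pull_really_updated(ref: str) -> bool:
          before = image_id(ref)
          subprocess.run(["docker", "pull", ref], check=False)
          after = image_id(ref)
          return after != "" and after != before

      if __name__ == "__main__":
          print(pull_really_updated("linuxserver/plex:latest"))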
  15. So I'm fairly new to unRAID and I'm in the process of transferring my current setup over to it. I'm also used to using docker compose, where you plan it all out very easily. So color me surprised when my Plex container, which was running fine, got completely wiped out after one failed re-run because I typed one of the volume paths incorrectly. Maybe I'm doing something wrong, but if not, the idea that one failed run of a container deletes all the port, variable, and volume information is completely insane to me. Don't get me wrong, I understand that unRAID is removing and re-running the 'docker run' command, so of course it removes the container, but the fact that it saves no settings in case of failure is pretty dumb, because now I have to set that container's settings up again. Of course that won't take me long for the Plex container, but a few of my containers do have a lot of stuff that needs to be mapped, and I'd hate to have to redo it all just because of one typo. So do I just go back to using docker compose, or is there a way to have this done through the GUI?
  16. That won't be a problem in my case; pfSense isn't my main router, I have a Ubiquiti EdgeRouter X for that. I only use pfSense to redirect traffic through the VPN.
  17. So a good little while ago I asked on here for some advice about hardware specs for a dedicated Plex server and got some great help with it, but for reasons I can't remember I never actually went with unRAID. I went with Windows LTSB and used Hyper-V to host several Linux VMs, which worked out really well for me at the time. Since then, though, I've expanded quite a bit and realized that I hate having to depend on proprietary software to manage my server, namely Hyper-V Manager and/or having to RDP into it to manage it. I now know that I'd much prefer to have something accessible by IP through a simple web browser or SSH.
      I currently have five VMs running on Hyper-V. One has Pi-hole loaded on it, which I'll be moving to an actual Raspberry Pi this weekend; I then have a VM for pfSense, which I use to force one specific VM's internet through a VPN. The remaining three VMs are all running plain Linux, and I run Docker on all three for various things like Sonarr, Radarr, qBittorrent, Deluge, Plex, letsencrypt, etc.
      So basically I'm just dropping by to say hi, again I guess lol, and I'd also like to know if anyone thinks I'd have a problem emulating any of the previously mentioned things within unRAID. Obviously all the Docker containers are a simple thing to set up again, but for example I do have qBittorrent on its own VM, and as I mentioned I use pfSense to route all its traffic through a VPN. I know binhex makes some containers with built-in VPN solutions, but I'd prefer to route through pfSense in case I want more than one application or machine to go through that VPN; it saves me time not having to set up multiple VPN clients. Would it be possible to set up pfSense with two vNICs and then assign one of those vNICs to a container, or would I need to set qBittorrent up in its own VM again?
      Also, I've heard that docker compose is available through the Nerd Tools plugin, but that there is also something Limetech has made called dockerman. Could anyone give/link me a good explanation of the differences? I've mainly used docker compose and really like the things it allows me to do with predefining volume locations as well as ordering startup when one container depends on another. If there's anything you think I should know, or think I might like to know/use, etc., feel free to shout about it. I've put my current server specs below if anyone is curious.
      CPU: Dual Xeon L5640 @ 2.26 GHz
      Motherboard: Gigabyte GA-7TESM LGA 1366
      Memory: 160GB ECC
      Case: Rosewill 4U RSV-L4500
      Power Supply: Seasonic Focus Plus 850W 80+ Gold
      HDD Storage: 30TB usable with one parity drive (2x 8TB, 1x 4TB, 7x 3TB)
      SSD Storage: 2x 250GB
  18. Yes, but that's only a rough figure. I wanted to hear about his personal experience with it.
  19. Awesome. If you don't mind me asking, how does your CPU handle Plex?
  20. "I like Noctua for implementations where noise is an issue, but there are plenty of high CFM fans if this will be in a closet and noise doesn't matter." So Noctua it is then. Is there a specific type I should get, or are they all relatively the same? Angled blade design vs. straight blade design.
  21. The power calculator says at least 500W, so I grabbed the same make and model as my current one at 550W; only 4 bucks more, so that works well. Have any recommendations for fans? Or does everyone just use Noctua? Yeah, I've looked at the ThinkServers a few times; not much space in those cases, it seems. I know it may sound odd saying I have a budget and then being picky, but I have a habit of future-proofing, I guess; otherwise you just end up spending more down the line. And I'm not sure what a SAS plate is; no idea if you were even talking to me or the other guy about that.
  22. Ready-made like a ThinkServer or something? N40L as in an HP MicroServer? How do you connect rig 1 to use rig 2's storage? I'm very new to this.
  23. So Plex will need to be able to stream to 2-3 people. The majority of my media at the moment is 99% x264 and AAC, so if it transcodes it's subtitles, and it's minimal for the moment, but that may change down the line. I'm trying to keep this as cheap as I can while still meeting my needs, while also kind of making it easy to expand in the future if I so desire. I've started a build, but I'd like input on a few things, namely storage. I wanna go WD Reds I think, for the slower spin rate, because this will be on 24/7. But I've heard that 3TB drives are unreliable? I'd like to start with more than 2TB of usable storage, and if 3TBs are a no-go then I'd have to spend another 100 bucks to get two 4TBs, or buy three 2TBs and start off with a 2TB parity drive. I don't know what to do there. I'm also pretty sure I want an SSD for Plex's metadata location and all that, and to also use it as the cache drive. Not sure what brand would be best; a Samsung 850 EVO 250GB I guess, but is 250GB enough? Or overkill? Is 8GB of RAM enough? Is 450W a big enough PSU for this type of build, while also taking into account future drive expansion? Possibly 6-8 drives in the end, based on what would work with this mobo and case. Should I get an extra fan or two? The case only comes with 2. With this current build, a Samsung 850 EVO 250GB, and two WD Red 4TB drives, it's over 800 bucks ($842.29 according to PCPartPicker), and that's over my budget by a little bit, but if this is where it needs to be, I could make it work.
      PCPartPicker part list / Price breakdown by merchant
      CPU: Intel Core i5-4460 3.2GHz Quad-Core Processor ($178.99 @ SuperBiiz)
      Motherboard: ASRock Z87 Extreme4 ATX LGA1150 Motherboard ($99.99 @ Newegg)
      Memory: Kingston HyperX Fury Black 8GB (1 x 8GB) DDR3-1866 Memory ($35.28 @ Newegg)
      Case: NZXT H230 (White) ATX Mid Tower Case ($68.99 @ SuperBiiz)
      Power Supply: SeaSonic 450W 80+ Gold Certified Semi-Modular ATX Power Supply ($73.99 @ SuperBiiz)
      Total: $457.24
      Prices include shipping, taxes, and discounts when available
      Generated by PCPartPicker 2016-08-18 07:59 EDT-0400
      So how's it look?