localhost

Members
  • Content Count

    16
  • Joined

  • Last visited

Community Reputation

0 Neutral

About localhost

  • Rank
    Member
  • Birthday February 18

Converted

  • Gender
    Undisclosed


  1. I've been running Pi-hole on my Pi for a while now and it's great. Since I have an Unraid server, I decided I really should be running it in a Docker container. However, I can't start it: it looks to have a conflict on port 53, but looking at netstat I don't see any open connections using that port. Does Unraid run dnsmasq or something on that port? I can't seem to find any options for a DNS server. Thanks
  2. It's all rebuilt and back to normal now; I will keep an eye on it. Thanks
  3. Thanks for the suggestion. It's a brand-new disk and cable now, so I should be good in the short term, but I'll consider that for sure.
  4. Dayum, we were on the same page just then. After making the post I went out to my shed and grabbed some SATA cables as a last resort. I've swapped the cable for parity 2 and it's now rebuilding. I never understood this, even almost a decade ago when I used to work in IT: how does a SATA cable just fail? It's obviously not the first time I've seen it, but I'm always left in disbelief. It's a low-power signal cable that hasn't been touched, and it just fails... Anyway, I'll see how it goes, and if I have any more problems with the cache I'll do the same. I refuse to believe two cables have failed simultaneously... Thanks for your time
  5. Hi UNRAID community, I've been having some problems for the last week or so that I've been unable to resolve. To try and avoid a wall of text I'll add some bullet points for things as they happened:
       • Parity 2 went offline showing 2000+ errors. I could not spin the disk up, and a reboot would make it disappear entirely. I assumed disk failure.
       • Swapped a new disk in, rebuilt successfully; all seemed resolved.
       • Tested the 'failed' disk: it passed all extended tests.
       • Transmission stopped being able to write to the cache (BTRFS SSD); errors went from I/O error to read-only error.
       • I assumed filesystem corruption and decided to switch the cache to XFS. Reformatted it, and it worked for a bit.
       • Parity 2 went offline, exactly as before.
       • Swapped the parity 2 disk again to test the new disk; after the reboot the cache had no filesystem and needed formatting.
     Now when I try to rebuild the array it always fails after a few minutes on parity 2, and the cache keeps going unreadable. I haven't pulled the cache drive for tests yet, but it passes SMART. I have also run a couple of passes of memtest just in case, which passed. Any help would be much appreciated.
  6. That's right, it's the linuxserver release. I'll follow the link now, thanks
  7. I have had a look at the container settings, and this is how it is currently configured:
  8. I'm struggling to get Transmission to write files/folders I can actually access. I have been looking around for a solution, including on this forum; the only thing I saw that seems relevant was to adjust the umask option in Transmission's settings.json file. I have done this and set it to 2 as per someone's suggestion, but this hasn't changed anything for me, so currently I have to open a terminal and use chmod to change the permissions before I can access any of the files. I don't really understand how umask translates to permissions either. Any insight on this would be much appreciated. Thanks
  9. Oh OK, I'll do that then. Thanks for the advice. I'm not too concerned about losing, say, 24 hours' worth of the appdata share; I just didn't want to have to reconfigure everything. Now if I can just get Transmission to write my downloads with permissions I can access, I'll be all green lights again. Thank you
  10. I was under the impression that the cache is not protected by parity, which is why I didn't want important files on it. Am I wrong about this?
  11. Hi all, I've been cleaning house a bit on my server this week: replaced an SSD that was in the array with an HDD and added a second parity. All went smoothly. As part of this clean-up, one thing that's been bugging me for ages is some files seemingly stuck on the cache. I installed the cache about a year ago and was a bit enthusiastic when adding it to shares: I added it to appdata. I realised later I didn't want that data on there, set the 'Use cache' option to No, then left it, assuming the mover would move it all back later. I checked today and can see there is data from two shares on the cache that I don't want there: appdata and system. I never turned the cache on for system, though. I assume using Dolphin to move the files back may break some dockers etc., so what is the proper procedure here to get these files back on the array? TIA
  12. That lasted one reboot; it's back to not joined. Unraid claims there are no login servers.
  13. OK, so that was stupid of me: after changing the domain to the correct FQDN for the server (DC1.abc.local), I'm in. This threw me, as last time I was using a Windows domain, simply abc.local would get me connected, but that was a few Unraid versions ago. Anyway, easy solution. I still can't get to my shares, but I'll give everything a reboot now.
  14. Hi all, I'm sure you've all had a few of these, but I'm struggling to connect my Unraid server to a domain. The server has previously been connected to domains: years back a Windows server, then up until now a NethServer, and now I am commissioning a Server 2016 DC. My new DC is configured for a Windows 2016 functional level, my Windows systems are connected, and dcdiag passes (for the most part, with some errors I'll look at ironing out). However, every time I try to connect the Unraid server, it looks from the log like it's finding nothing and giving up. Unraid's network settings are configured so that the DNS points to the DC. I've been going round in circles with this for longer than I'd like to admit, but I now need to get my shares back online, so any help would be greatly appreciated. Here is an example of the Unraid log resulting from a connection attempt: Please ask for any further details that could help. Thanks. PS: if anyone knows how to do a DNS flush via the Unraid terminal, that might be handy too. I'm not sure what Unraid uses to handle DNS, so I'm hesitant to just start typing commands.
  15. Yes, another domain; I already have multiple working subdomains. I have seen people using nginx with multiple subdomains, I'm just not sure if it's the same process here. I might try creating a file and replacing the default in the site-config dir. I hope nginx in this form can still do this.
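A note on the Pi-hole port 53 conflict in post 1: Docker fails to publish a port if anything on the host already holds it, and the owner can be a dnsmasq instance bound to a single interface (Unraid's libvirt VM networking, for instance, can start one on the virtual bridge), so it is easy to miss in a quick netstat scan. Running `ss -tulpn | grep ':53'` as root usually reveals the owner. As an illustration of why the container's bind fails, this sketch (hypothetical helper, not part of Unraid or Docker) tries to bind a port and reports whether something already owns it:

```python
import socket

def port_in_use(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if binding TCP host:port fails, i.e. something owns it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            # EADDRINUSE -- or EACCES for ports below 1024 when not root
            return True
        return False

if __name__ == "__main__":
    # Checking port 53 itself needs root; the same logic works on any port.
    print("port 53 in use:", port_in_use(53))
```

Note that binding the wildcard address fails even when the existing listener is tied to one specific interface IP, which matches the "nothing obvious on 53, but the container still can't start" symptom.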
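On the umask question in post 8: a umask does not grant permissions, it strips bits from the mode a program requests (typically 666 for files and 777 for directories, in octal). A umask of 2 (octal 002) therefore yields 664/775, i.e. group-writable but not world-writable, while the common 022 yields 644/755. The arithmetic can be sketched as below (pure illustration, not Transmission's actual code):

```python
def effective_mode(requested: int, umask: int) -> int:
    """Permission bits actually set: every bit in the umask is cleared."""
    return requested & ~umask

# umask 002: files 666 -> 664 (rw-rw-r--), dirs 777 -> 775 (rwxrwxr-x)
assert oct(effective_mode(0o666, 0o002)) == "0o664"
assert oct(effective_mode(0o777, 0o002)) == "0o775"
# umask 022: files 666 -> 644 (rw-r--r--), dirs 777 -> 755 (rwxr-xr-x)
assert oct(effective_mode(0o666, 0o022)) == "0o644"
```

So a umask of 2 already produces group-writable files; if they are still inaccessible, the more likely culprit is the user/group the container writes as not matching the group reading the share, which is worth checking before touching the umask further.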