Mizerka

Members
  • Content Count: 75

Everything posted by Mizerka

  1. I'm pretty sure that's the default behaviour when making changes to a share that's already deployed. It kills off connections and relies on the client to re-establish them. Looking at my logs, it's specifically run when changing SMB security settings; below is what gets logged when changing a share from public to secure:
     Dec 18 23:00:18 NekoUnRaid root: Starting Samba: /usr/sbin/nmbd -D
     Dec 18 23:00:18 NekoUnRaid root: /usr/sbin/smbd -D
     Dec 18 23:00:18 NekoUnRaid root: /usr/sbin/winbindd -D
     Dec 18 23:00:18 NekoUnRaid emhttpd: shcmd (509): smbcontrol smbd close-share 'ssd'
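     If you want to watch it happen, you can check which clients have the share open before flipping the setting and poke smbd yourself from the console; smbcontrol can also make smbd re-read its config without a full restart (commands below assume console/SSH access to the Unraid box):
     smbstatus --shares                    # list shares currently held open by clients
     smbcontrol smbd reload-config         # ask the running smbd to re-read smb.conf
     smbcontrol smbd close-share 'ssd'     # what emhttpd runs; forces clients off that share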
  2. Hmm, those look fine, and you don't need DHCPv6 for IPv6 devices to work. APIPA v4 (169.254.x.x) is a pain to use, but IPv6 fixes that and can manage itself on networks without DNS quite painlessly; Win10 prefers it by default. Anyway, I doubt that's it, but you can try disabling IPv6 on your client interface to force it onto v4, though network discovery worked fine over v6 by the looks of it. Win+R, NCPA.CPL, open your default ethernet interface and untick IPv6. Also, you don't have any firewalls/AV in place that might prevent it? Try disabling the Windows firewall (temporarily if you act
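     If you'd rather do that from an elevated PowerShell prompt than the NCPA.CPL dialog, something like this should do it (the adapter name "Ethernet" is just an example, check yours with Get-NetAdapter):
     Get-NetAdapterBinding -ComponentID ms_tcpip6                         # see which adapters have IPv6 bound
     Disable-NetAdapterBinding -Name "Ethernet" -ComponentID ms_tcpip6    # same as unticking IPv6
     Enable-NetAdapterBinding -Name "Ethernet" -ComponentID ms_tcpip6     # put it back afterwards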
  3. Can you ping it? From cmd: ping x.x.x.x; try the hostname as well: ping hostname.local
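     If ping answers but shares still don't show up, it's worth checking that the SMB port itself is reachable; from PowerShell on the client (the address below is just a placeholder for your server's IP):
     Test-NetConnection 192.168.0.200 -Port 445    # TcpTestSucceeded should come back True if SMB is reachable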
  4. Access it by IP and at the root, not any specific share, i.e. \\192.168.0.200\. If you get prompted for creds, use your Unraid user/root. If the share is public, try to access it.
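     If Explorer keeps rejecting the credentials, mapping from cmd sometimes gives a clearer error, and it also lets you clear a stale cached connection (the share name 'ssd' here is just for illustration):
     net use \\192.168.0.200\ssd /user:root    # prompts for the password, shows the actual error code on failure
     net use                                   # list current mappings
     net use \\192.168.0.200\ssd /delete       # drop a stale mapping and its cached creds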
  5. Tried changing the Unraid management port to something else? Edit the go file on the flash (cd /boot/config/ then nano go) and add -p to the default emhttp line as below; change 8008 to whatever is unique on your host:
     /usr/local/sbin/emhttp -p 8008 &
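     For reference, a minimal /boot/config/go would then look something like this (the port is just an example; it takes effect on the next reboot):
     #!/bin/bash
     # Start the Management Utility on port 8008 instead of the default 80
     /usr/local/sbin/emhttp -p 8008 &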
  6. Hey, bit of a strange one. I've been on and off upgrading and working on Unraid (in the middle of encrypting the entire array), and over the last week it crashed twice on me. I say crashed, but it's actually sitting at 80-100% CPU usage across all cores, which it never does (24 cores, only running a few dockers), and even more interestingly it's at 100% RAM. Yesterday I tried a few things like killing docker, force restarting, shutting down the array etc., but nothing worked; the webui was somewhat usable and I could console onto it as well. output of top: top - 19:23:29 up
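     If it happens again, a few things worth capturing from the console before rebooting (nothing Unraid-specific, just the standard tools on the box):
     top -o %MEM                        # sort by memory instead of CPU
     ps aux --sort=-%mem | head -20     # top 20 processes by resident memory
     free -m                            # how much is real usage vs cache/buffers
     docker stats --no-stream           # per-container CPU/RAM, if docker is still responding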
  7. My bad, didn't think to try it, given it feels compelled to refresh apps on every new view. Thanks. Yeah, and I understand that, but again, this hasn't been an issue for ages. I will go over all the mounts, but all of them are being used correctly from what I could see, since I'd see data disappearing into the .img instead of my libraries, and I've had messed-up cases in the past that would cause this.
  8. Thanks, and yeah, I get that. After adding fresh templates, no data or anything, it filled 10GB. All dockers should have their own appdata paths for config etc., and anything pulling data is using the main storage pool as well. I will go over all of them regardless; the only one I can think of would be InfluxDB, which is storing sensors for Grafana etc., though I believe it should keep its DB within appdata as well. What I meant is that I have a number of dockers in use and have tested triple that before as well. So I've got 4 pages of old apps; unless I'm blind or missed some config somewhere I can
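     For anyone chasing the same thing, docker itself can report what's actually eating the space inside the .img (run from the Unraid console):
     docker system df -v    # per-image/container/volume sizes
     docker ps -a -s        # all containers with the size of their writable layer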
  9. Next morning: so I restarted docker and it's all gone again... fml. So I deleted docker.img properly this time and will try adding through Apps and see how it goes. edit: so adding through Apps is more convenient (despite being page-limited) but not faster, running a single docker add at a time compared to 5 tabs etc. edit2: okay, so manually deleting the .img seems to have worked; after reinstall and a docker restart, everything is still there. 10 gig used after just installing what I actually use (16, typical plex/sonarr/deluge setup etc.). Restoring appdat
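     Rough outline of the manual delete, in case it helps someone (the path below is the stock docker.img location; check Settings > Docker for yours, and stop the Docker service first):
     /etc/rc.d/rc.docker stop                   # or disable Docker under Settings > Docker
     rm /mnt/user/system/docker/docker.img      # remove the full/corrupt image
     # re-enable Docker in the GUI so a fresh empty .img is created,
     # then re-add containers from Apps > Previous Apps to keep their templates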
  10. Thanks, looks like Unraid did that for me, only 500MB used when expanding the size, so I ended up moving existing data into a subdir rather than extracting from backup. After going through all the templates (seems like Unraid only handles 5 adds at once), settings and mappings were preserved, which is nice, and now I've stopped docker and I'm moving data back into the dirs. I'm still assuming that it was the 20 gig that filled up; if I didn't force delete the .img, will Unraid be happy on next boot? Don't feel like going through the restore again.
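     To confirm whether the .img itself is the thing filling up, you can check the loop-mounted image directly while the Docker service is running (the mount point below is the usual one, worth verifying on your box):
     df -h /var/lib/docker    # usage of the loop-mounted docker.img
     losetup -a               # confirm which loop device is backed by docker.img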
  11. Title says it all really. So... I suspect it was deluge or plex, not sure, but from the looks of it the .img was full (now expanded to 50GB and 500MB x3 logs). All appdata is there and accessible within the cache mirror. Anyone know how I can remap the docker containers? The profiles are still there, but I don't want to overwrite data on new container creation. I have full nightly backups, but just extracting the 25 gig tar would probably take a night. Any help appreciated.
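     For the record, re-adding a container doesn't touch existing appdata as long as the host paths in the template still point at the same directories on the cache; the equivalent docker run is roughly this (name, image and paths are just illustrative):
     # the existing /mnt/user/appdata/plex is reused as-is, not overwritten
     docker run -d --name=plex \
       -v /mnt/user/appdata/plex:/config \
       -v /mnt/user/media:/media \
       plexinc/pms-docker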
  12. Flooded the library a bit to see what happens, and it looks like it was on the 2TB pass: it filled disk1 until 2TB free, and is now going for disk 2. With that in mind, I suppose ideally you would keep smaller disks to a lower count if you care about "higher use" of high-water allocation, as in my case you won't even touch the 4TB drives until you have 18TB written to the share. I'll expand with the 10TBs and rearrange to see what happens; I would expect it to continue filling disk 2 until 2TB free and then look at the disks again to choose the best target.
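     To put numbers on that (my understanding of high-water, so treat it as a rough illustration): the free-space mark starts at half the largest disk and halves on each pass, and a disk is skipped if it's already at or below the current mark. With 3x8TB + 4x4TB:
     mark = 4TB free: each 8TB disk takes 4TB      -> 12TB written, 4TB disks untouched (they already sit at 4TB free)
     mark = 2TB free: each 8TB disk takes 2TB more -> 18TB written before the first 4TB disk receives any data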
  13. Massive dirs with thumbs? Try it over CIFS and see if you can replicate it.
  14. I see, well thanks for clarifying. Is there a recommended drive allocation when using high water? Also, from what you're saying, the high-water threshold/pass is set when it has reached a breakpoint on a single disk? So in theory, if I were to put empty drives ahead of the 8TB (the filled one), then when the 8TB reaches the 2TB high mark it'd run through the disks in order and determine a new mark? And as you said, it's not a huge deal; I'll be upgrading the WD 1TBs to 10TB tonight so I might move the drives around. So I'll break parity and move it down to data and create a new 10TB parity, and the 2nd will just go int
  15. Split level is set to any at the moment; not bothered about it, might end up setting 3 subdirs in future, but the data isn't big enough to worry about yet. As for the drives, I've moved 900GB to another disk but it didn't seem to do much; it continues filling up the first disk rather than resetting to the 4TB-free mark and filling disk 2 etc. I can't confirm, but I believe it's on the 2TB-free pass atm, so it's filling disk 1 until the 2TB-free mark and then moving down the drives; I was hoping to move data around to force it down quicker instead of filling 75% of the first disk before even touching the empty 4TB drives.
  16. Hey, a quick one hopefully. I recently added more disks to my library share, which are lower capacity than the original/current disks, i.e. I had 3x8TB and have now added 4x4TB. The share is on high-water allocation; I was wondering whether running an unbalance scatter job to rebalance data across the 7 drives would reset the high-water "pass" count, so instead of continuing on its current pass (2TB free atm) it would retry from the start (i.e. 4TB free) given the reduced disk space? I hope that makes sense.
  17. Big thanks to you and your team, overall a solid product and I'm happy to continue using it. Q1: Do you plan to develop the hypervisor functionality further, such as including common GPU hardware drivers (Nvidia, AMD or Intel) for VM passthrough, which is currently achieved through community projects? Q2: SSDs are currently experimental and not fully supported; do you plan on bringing them in, along with NVMe drives, as future data mediums change (this also ties into Q1 in terms of drivers)? Q3: If you were to start the whole project
  18. Just reviving this because it's the 1st result on Google: the binhex Sonarr template maps /data to its appdata, and /data is what binhex deluge/VPN uses as its media path. So you'll need to map a path that both can access/RW, matching in both mapping and target path, i.e. /completed/ to /mnt/user0/data/completed. For proper automation: Jackett grabs the indexers, Sonarr grabs all the files, picks the best match and throws it over to Deluge with a label set to move the download to /downloads/, which Sonarr also has mapped and so is able to pick up that file, then hardlink/copy and rename into the right pat
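     A concrete example of "match in both mapping and target" (the host path is just an example; the point is that both containers see the same host dir at the same container path):
     Deluge container:  /downloads  ->  /mnt/user/data/downloads
     Sonarr container:  /downloads  ->  /mnt/user/data/downloads
     Deluge then reports /downloads/whatever.mkv to Sonarr, and Sonarr can open that exact path because its own /downloads points at the same host directory; keep both on the same share/disk and the import can be a hardlink instead of a copy.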
  19. So I started a weekend project to migrate from FreeNAS to Unraid; all looking good so far, apart from an expected 1.6 days for the data transfer and 40 days for the parity build lmao. Anyway, moving from VM-hosted yassr2 deluge. It turns out you can just move your old configs and eggs from (Windows) %appdata%\deluge and drop them into your config path, the default being /mnt/user/appdata/deluge.. Although my plugins path is super broken, pointing to c:\..\appdata\, it still picks up the plugins dir fine, installs them correctly and enables them according to the .conf. And since im
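     Rough outline of the move, for anyone doing the same (paths assume the usual /mnt/user/appdata/deluge config mapping, and the source dir is just wherever you staged the Windows copy; stop the Deluge container first):
     cp -r /mnt/user/transfer/deluge-appdata/* /mnt/user/appdata/deluge/    # old %appdata%\deluge contents, .conf files and plugin .eggs included
     ls /mnt/user/appdata/deluge/plugins/                                   # eggs should land in the plugins subdir
     chown -R nobody:users /mnt/user/appdata/deluge                         # usual Unraid ownership so the container user can read it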