Mizerka

Members
  • Content Count

    13
  • Joined

  • Last visited

Community Reputation

1 Neutral

About Mizerka

  • Rank
    Member
  1. My bad, didn't think to try it, given it feels compelled to refresh apps on every new view. Thanks. Yeah, and I understand that, but again, this hasn't been an issue for ages. I will go over all the mounts, but all of them are being used correctly from what I could see; I'd notice data disappearing into the .img instead of my libraries, and I've had messed-up path cases in the past that caused exactly that.
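     If it helps anyone checking the same thing, here's a rough sketch of how I audit where each container is actually writing (container names and paths are whatever you have; anything not mapped to /mnt/user or /mnt/cache ends up inside docker.img):

        # list every container's host-path mappings to spot writes landing inside docker.img
        for c in $(docker ps -aq); do
          echo "== $(docker inspect -f '{{.Name}}' "$c")"
          docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' "$c"
        done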
  2. Thanks, and yeah, I get that. After adding fresh templates, with no data or anything, it filled 10GB. All dockers should have their own appdata paths for config etc., and anything pulling data is using the main storage pool as well. I will go over all of them regardless; the only one I can think of would be influxdb, which is storing sensors for grafana etc., though I believe it keeps its db within appdata as well. What I meant is that I have a number of dockers in use, and I've tested triple that before as well. So I've got 4 pages of old apps; unless I'm blind or missed some config somewhere, I can't display more than 15 items per page under the previous apps view, so I could do 5-6 at a time (alphabetical), and it also seems to run them in sequence, which is probably the intended method of install. Using multiple tabs I could add 5-6 at a time; sure, they'd time each other out and cap the WAN link etc. Using apps did "feel" faster though, either way, thanks. I'd agree, and I haven't had issues for a long time; like above, after a fresh container reinstall from templates, it filled 10GB. I will go over all of them, but I can't imagine any of them being big enough to need 10GB, and even then that'd take like an hour to download, which I'd notice on reinstall. Here's what it looks like (ignore the disks, I need to move some data around to reduce spinups), but yeah, docker is at 9.3G with fresh installs and no configs:

     root@..:~# df -h
     Filesystem      Size  Used Avail Use% Mounted on
     rootfs           48G  762M   47G   2% /
     tmpfs            32M  488K   32M   2% /run
     devtmpfs         48G     0   48G   0% /dev
     tmpfs            48G     0   48G   0% /dev/shm
     cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
     tmpfs           128M   64M   65M  50% /var/log
     /dev/sda1        29G  447M   29G   2% /boot
     /dev/loop0      8.7M  8.7M     0 100% /lib/modules
     /dev/loop1      5.9M  5.9M     0 100% /lib/firmware
     /dev/md1        3.7T  341G  3.4T  10% /mnt/disk1
     /dev/md2        3.7T  841G  2.9T  23% /mnt/disk2
     /dev/md3        3.7T  1.6T  2.2T  43% /mnt/disk3
     /dev/md4        3.7T  232G  3.5T   7% /mnt/disk4
     /dev/md5        7.3T  2.7T  4.7T  37% /mnt/disk5
     /dev/md6        7.3T  3.6T  3.7T  50% /mnt/disk6
     /dev/md7        7.3T  3.5T  3.9T  47% /mnt/disk7
     /dev/md8        7.3T  2.4T  5.0T  33% /mnt/disk8
     /dev/md9        9.1T   11G  9.1T   1% /mnt/disk9
     /dev/md10       932G  290G  642G  32% /mnt/disk10
     /dev/md11       932G  983M  931G   1% /mnt/disk11
     /dev/md12       932G  983M  931G   1% /mnt/disk12
     /dev/md13       932G  983M  931G   1% /mnt/disk13
     /dev/md14       932G  983M  931G   1% /mnt/disk14
     /dev/md15       932G  983M  931G   1% /mnt/disk15
     /dev/md16       932G  983M  931G   1% /mnt/disk16
     /dev/sdz1       466G  103G  363G  23% /mnt/cache
     shfs             60T   16T   44T  26% /mnt/user0
     shfs             60T   16T   45T  26% /mnt/user
     /dev/loop3      1.0G   17M  905M   2% /etc/libvirt
     /dev/loop2       50G  9.3G   41G  19% /var/lib/docker
     shm              64M     0   64M   0% /var/lib/docker/containers/97408ac7186b33a59b3af02cdef729f733f03899aea369a78caee2130a7c9fb4/mounts/shm
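     To see what's actually eating that 9.3G inside docker.img, something like this breaks it down (a quick sketch, run from the Unraid console with the docker service started):

        # summary of image, container and volume usage inside /var/lib/docker
        docker system df -v

        # or check the loop-mounted image directly, one directory level deep
        du -h -d1 /var/lib/docker | sort -h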
  3. Next morning: restarted docker and it's all gone again... fml. So I deleted docker.img properly this time and will try adding through apps and see how it goes. edit: so adding through apps is more convenient (despite being page limited) but not faster, since it runs a single docker add at a time compared to 5 tabs etc. edit2: okay, so manually deleting the .img seems to have worked; after reinstall and a docker restart, everything is still there. 10 gig used after just installing what I actually use (16 containers, typical plex/sonarr/deluge setup etc.). Restoring appdata now and will see what happens. Plex needs to chill with their file structure: 1.25m folders and 350k files and still going...
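     For anyone wondering how I got those counts, it's just a find over the Plex appdata tree (path is an example, adjust to wherever your Plex appdata actually lives):

        # count directories and files under the Plex appdata share
        find /mnt/user/appdata/plex -type d | wc -l
        find /mnt/user/appdata/plex -type f | wc -l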
  4. Thanks, looks like Unraid did that for me; only 500MB was used after expanding the size, so I ended up moving the existing data into a subdir rather than extracting from backup. After going through all the templates (it seems Unraid only handles 5 adds at once), settings and mappings were preserved, which is nice, and now I've stopped docker and am moving the data back into the dirs. I'm still assuming it was the 20 gig that filled up; if I didn't force delete the .img, will Unraid be happy on next boot? I don't feel like going through the restore again.
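     The shuffle itself is nothing special, roughly this with the docker service stopped (paths and names are examples, not my exact layout):

        # park the existing appdata out of the way
        mv /mnt/user/appdata /mnt/user/appdata_old
        # ...re-add the containers from their templates (creates fresh appdata),
        # stop docker again, then copy the old configs back over the top
        rsync -a /mnt/user/appdata_old/ /mnt/user/appdata/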
  5. Title says it all really. So... I suspect it was deluge or plex, not sure, but from the looks of it, although the .img was full (now expanded to 50GB, plus 3x 500MB logs), all the appdata is still there and accessible within the cache mirror. Anyone know how I can remap the docker containers? The profiles are still there, but I don't want to overwrite data on new container creation. I have full nightly backups, but just extracting a 25 gig tar would probably take a night. Any help appreciated.
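     Worst case, you don't have to unpack the whole backup; tar can pull out a single container's folder (archive name and internal path here are just examples of how my backup happens to be laid out):

        # list what's in the backup first
        tar -tzf CA_backup.tar.gz | less
        # then extract only one container's appdata, e.g. plex, into /mnt/user
        tar -xzf CA_backup.tar.gz -C /mnt/user appdata/plex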
  6. Flooded the library a bit to see what happens, and it looks like it was on the 2TB pass: it filled disk1 until 2TB free, and is now going for disk2. With that in mind, I suppose ideally you would keep smaller disks lower in the count if you care about "higher use" of high-water allocation, since in my case you won't even touch the 4TB drives until you have 18TB written to the share. I'll expand with the 10TBs and rearrange to see what happens; I would expect it to continue filling disk2 until 2TB free and then look at the disks again to choose the best target.
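     If anyone wants to watch the same thing on their own array, the current pass is easy to eyeball by listing free space per data disk (a rough sketch; high-water fills the first disk that's above the current mark, and the mark halves each pass):

        # show size and free space on every data disk so you can see which side of the
        # current high-water mark (largest disk / 2, then / 4, / 8, ...) each one sits
        df -h --output=target,size,avail /mnt/disk* | sort -V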
  7. Massive dirs with thumbnails? Try it over CIFS and see if you can replicate it.
  8. I see, well thanks for clarifying. Is there a recommended drive allocation when using high-water? Also, from what you're saying, the high-water threshold/pass is set when it has reached a breakpoint on a single disk? So in theory, if I were to put empty drives ahead of the 8TB (the filled one), then when the 8TB reaches the 2TB high-water mark it'd run through the disks in order and determine a new mark? And as you said, it's not a huge deal. I'll be upgrading the WD 1TBs to 10TB tonight, so I might move the drives around: I'll break parity and move that drive down to data, create a new 10TB parity, and the 2nd 10TB will just go into data. Planning on grabbing a 2nd shelf at some point, but until then I'll probably rock single parity, given the SAS shares are sitting empty whilst I prep the VMware move.
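     Purely to illustrate how the passes fall out (my understanding, assuming an 8TB largest disk), the marks just keep halving from half of the largest disk's size:

        # successive high-water marks: fill each disk in order down to the mark,
        # then halve the mark and start the next pass
        size_tb=8
        mark=$((size_tb / 2))
        while [ "$mark" -ge 1 ]; do
          echo "pass: fill each disk down to ${mark}TB free"
          mark=$((mark / 2))
        done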
  9. Split level is set to any at the moment, not bothered about it; I might end up setting 3 subdirs in future, but the data isn't big enough to worry about yet. As for the drives, I've moved 900GB to another disk, but it didn't seem to do much and it continues filling up the first disk rather than resetting to the 4TB-free mark and filling disk2 etc. I can't confirm, but I believe it's on the 2TB-free pass atm, so it's filling disk1 until 2TB free and then moving down the drives; I was hoping to move data around to force it down quicker instead of filling 75% of the first disk before it even touches the empty 4TB drives.
  10. Hey, so a quick one hopefully: I recently added more disks to my library share, which are smaller than the original and current disks, i.e. I had 3x8TB and have now added 4x4TB. The share uses high-water allocation. I was wondering, would running an unBALANCE scatter job to rebalance data across the 7 drives reset the high-water "pass" count, so that instead of continuing on its current pass (2TB free atm) it would retry from the start (i.e. 4TB free), given the reduced disk space? I hope that makes sense?
  11. Big thanks to you and your team, overall a solid product and I'm happy to continue using it. Q1: Do you plan to develop the hypervisor functionality further, such as including common GPU hardware drivers (Nvidia, AMD or Intel) for VM passthrough, which is currently achieved through community projects? Q2: SSDs are currently experimental and not fully supported; do you plan on bringing them, along with NVMe drives, in as future data mediums change (this also ties into Q1 in terms of drivers)? Q3: If you were to start the whole project again, where would you focus your time, or would you ditch some features/projects altogether?
  12. Just reviving this because it's the 1st result on Google: the binhex sonarr template maps /data to its appdata, while binhex deluge/vpn uses /data as its media path. So you'll need to map a path that both containers can access and read/write, matching in both the host mapping and the container path, e.g. /completed/ to /mnt/user0/data/completed. For proper automation: jackett grabs the indexers, sonarr grabs all the releases, picks the best match and throws it over to deluge with a label set to move the download to /downloads/, which sonarr also has mapped and so can pick up the file, then hardlink/copy and rename it into the right path so your media player sees it in the correct, renamed format. Then you just set the label to seed to whatever, forever if you care, or just seed to a low ratio like 0.1 and delete on the target, which removes the badly named seed while keeping the hardlink alive and well in the right place. Works well; with a few RSS feeds to auto-feed new titles in, which sonarr will add and pick up future releases from, you can just leave it to do its thing.
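     A minimal sketch of what "matching in both mapping and container path" means for the two containers (image tags and host paths are examples, and I'm leaving out the VPN caps/env settings the deluge container also needs; adjust to your own shares):

        # both containers see the same host folder at the same container path, so a
        # file deluge finishes under /data is visible to sonarr at the exact same
        # path, and hardlinks stay on one filesystem
        docker run -d --name=binhex-delugevpn \
          -v /mnt/user0/data:/data \
          -v /mnt/user/appdata/binhex-delugevpn:/config \
          binhex/arch-delugevpn

        docker run -d --name=binhex-sonarr \
          -v /mnt/user0/data:/data \
          -v /mnt/user/appdata/binhex-sonarr:/config \
          binhex/arch-sonarr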
  13. So I started a weekend project to migrate from FreeNAS to Unraid; all looking good so far, apart from an expected 1.6 days for the data transfer and 40 days for the parity build lmao. Anyway, I'm moving from a VM-hosted YaRSS2 deluge setup. It turns out you can just take your old configs and eggs from (Windows) %appdata%\deluge and drop them into your config path, the default being /mnt/user/appdata/deluge.. Although my plugins path is super broken, still pointing at c:\..\appdata\, it picks up the plugins dir fine, installs them correctly and enables them according to the .conf. And since I'm such a nice guy, here are my mega links if you want something to go off with no previous setup; my current config is just watching some nyaa RSS, so adapt as you need, it's very easy to read and change.

     Conf's
     egg's

     After enabling in the web UI (it will reset to off and report in deluged-web.log that it has no UI):

     [INFO ] 10:58:58 pluginmanager:108 'YaRSS2' plugin contains no WebUI code, ignoring WebUI enable call.
     [INFO ] 10:58:59 pluginmanager:145 Plugin has no web ui

     but the output from deluged.log:

     [INFO ] 10:44:07 rpcserver:206 Deluge Client connection made from: 127.0.0.1:34092
     [INFO ] 10:44:41 logger:37 YaRSS2.__init__:34: Appending to sys.path: '/config/plugins/YaRSS2-1.4.1-py2.7.egg/yarss2/include'
     [INFO ] 10:44:41 logger:37 YaRSS2.rssfeed_scheduler:42: Scheduled RSS Feed '1080p HorribleSubs Nyaa' with interval 5
     [INFO ] 10:44:41 logger:37 YaRSS2.rssfeed_scheduler:42: Scheduled RSS Feed '1080p BlueLobster Nyaa' with interval 5
     [INFO ] 10:44:41 logger:37 YaRSS2.core:40: Enabled YaRSS2 1.4.1
     [INFO ] 10:44:41 pluginmanagerbase:158 Plugin YaRSS2 enabled
     ..
     [INFO ] 11:03:59 logger:37 YaRSS2.rssfeed_handling:267: Update handler executed on RSS Feed '1080p BlueLobster Nyaa (nyaa.si)' (Update interval 5 min)
     [INFO ] 11:03:59 logger:37 YaRSS2.rssfeed_handling:310: Fetching subscription '1080p BlueLobster Nyaa'.
     [INFO ] 11:03:59 logger:37 YaRSS2.rssfeed_handling:80: Fetching RSS Feed: '1080p BlueLobster Nyaa' with Cookie: '{}' and User-agent: 'Deluge v1.3.15 YaRSS2 v1.4.1 Linux/4.19.41-Unraid'.
     [INFO ] 11:04:00 logger:37 YaRSS2.rssfeed_handling:332: 70 items in feed, 70 matches the filter.
     [INFO ] 11:04:00 logger:37 YaRSS2.rssfeed_handling:344: Not adding because of old timestamp: '[BlueLobster] Meow Meow Japanese History (01-96) [1080p] (Batch) (Neko Neko Nihonshi)'
     ..

     and it proceeds to use the yarss2.conf file as configured. Added a 2nd sub; it just needs a "1":{..} entry and another for the feed. Tested and it works fine; make sure they're unique and match against the right feeds, like so:

     yarss2.conf
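     For reference, the actual migration is just dropping the files into the container's config share and restarting it, roughly like this (the egg filename matches mine from the log above; the container name and the destination path are examples for the default appdata mapping):

        # copy the old configs and plugin eggs from the staged %appdata%\deluge files
        # into the container's /config path on the host
        cp yarss2.conf /mnt/user/appdata/deluge/
        cp YaRSS2-1.4.1-py2.7.egg /mnt/user/appdata/deluge/plugins/
        # restart the container so deluged re-scans the plugins dir
        docker restart deluge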