Everything posted by mrpops2ko

  1. I've searched the thread but can't find this addressed much: how do people go about mass-updating their containers? I understand the update stack button works and is great, but it's a bit cumbersome when you have to do that 30-40 times rather than pressing a single button. What solutions have people come up with to automate this? Or is this something that could be added natively to the plugin? Edit: Watchtower seems to be the solution suggested on the previous page, so I'll give that a shot.
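     For reference, a minimal sketch of the Watchtower approach as a one-shot run (image name and flags as documented by the Watchtower project; adjust to taste):

         # update every running container once, then exit
         docker run --rm \
           -v /var/run/docker.sock:/var/run/docker.sock \
           containrrr/watchtower --run-once --cleanup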
  2. Just checked it all again and it seems to work now. First I tried:

         version: "3.9"
         services:
           tautulli:
             image: lscr.io/linuxserver/tautulli:latest
             container_name: tautulli
             env_file:
               - /boot/config/plugins/compose.manager/projects/test1/.env
             environment:
               - PUID=1000
               - PGID=1000
               - TZ=${TZ}
             ports:
               - 8181:8181
             restart: unless-stopped
             networks:
               br0:
                 ipv4_address: 192.168.1.133
             volumes:
               - ${DOCKERDIR}/tautulli/config:/config
         networks:
           br0:
             external: true

     which results in:

         WARN[0000] The "TZ" variable is not set. Defaulting to a blank string.
         WARN[0000] The "DOCKERDIR" variable is not set. Defaulting to a blank string.

     Then I tried the same but with env_file: - /boot/config/plugins/compose.manager/projects/.env, which gives the same warnings. Then I tried using .env: /boot/config/plugins/compose.manager/projects/.env (the stack settings env file path), which gave:

         [+] Running 1/1
          ✔ Container tautulli  Started  0.2s

     This worked without needing to declare env_file at all. Sorry about wasting your time; it seems to have always worked. Would it be possible to have a drop-down menu or some kind of sticky location for the stack settings env file path? Or maybe some kind of config file where we can store multiple variables with an auto-enabled true/false flag?
  3. Take this one for example:

         version: "3.9"
         services:
           tautulli:
             image: lscr.io/linuxserver/tautulli:latest
             container_name: tautulli
             env_file:
               - path: /boot/config/plugins/compose.manager/projects/.env
                 service: tautulli
             environment:
               - PUID=1000
               - PGID=1000
               - TZ=${TZ}
             ports:
               - 8181:8181
             restart: unless-stopped
             networks:
               br0:
                 ipv4_address: 192.168.1.133
             volumes:
               - ${DOCKERDIR}/tautulli/config:/config
         networks:
           br0:
             external: true

     You can delete the networking, it's just using ipvlan, but you can see we are declaring the variables ${DOCKERDIR} and ${TZ}. Now define those values in your .env, let's say:

         TZ=UTC
         DOCKERDIR=/mnt/cache/appdata

     Watch as you'll be unable to run it; or rather, when you run it you get an error stating that it can't find those variables, so it's defaulting to blank ones. This is because .env is special and will both accept environment variables and interpolate them; declaring it like the above won't do that. If you wanted to get rid of the error you could run `export DOCKERDIR="/mnt/cache/appdata"` on your Docker host, but that would store it on the Unraid host, which isn't desired. Now if you simply copy the .env from /boot/config/plugins/compose.manager/projects/.env and place it at /boot/config/plugins/compose.manager/projects/tautulli/.env (so it resides alongside the docker compose file), then everything will just work, because of the things I mentioned.
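     A quick way to see the difference for yourself (just a sketch, using the paths from the post): `docker compose config` renders the file with interpolation applied, so you can check whether ${DOCKERDIR} and ${TZ} actually get substituted.

         cd /boot/config/plugins/compose.manager/projects/tautulli

         # with a .env next to the compose file, the values are filled in
         printf 'TZ=UTC\nDOCKERDIR=/mnt/cache/appdata\n' > .env
         docker compose config | grep -E 'TZ|appdata'

         # remove it and the same command prints the "variable is not set" warnings
         rm .env
         docker compose config | grep -E 'TZ|appdata'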
  4. There are effectively two options for proper global .env support. Either have a file that sits at /boot/config/plugins/compose.manager/projects/.env and symlink it into each project folder, so that for example /boot/config/plugins/compose.manager/projects/plex/.env is a symlink to /boot/config/plugins/compose.manager/projects/.env; or alternatively, on every click of docker up / docker down / stack update, take the values from /boot/config/plugins/compose.manager/projects/.env and append them to that specific project's .env file. Either would allow variable interpolation in the compose files, which is what most people want.
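     Something like this would cover the first option (a sketch only; note the Unraid flash drive is FAT-formatted, which doesn't support symlinks, hence the copy fallback):

         #!/bin/bash
         # link (or copy) a single master .env into every compose.manager project folder
         MASTER=/boot/config/plugins/compose.manager/projects/.env
         for proj in /boot/config/plugins/compose.manager/projects/*/; do
           # ln -sf fails on FAT filesystems, so fall back to a plain copy
           ln -sf "$MASTER" "${proj}.env" 2>/dev/null || cp "$MASTER" "${proj}.env"
         done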
  5. Yes, I tested it just now and it doesn't work; it's not the same. .env is special: env_file won't do it, nor will the -e flag. It has to be in the same folder as the project.
  6. Yeah, unfortunately this implementation suffers from the exact same interpolation issues I mentioned previously. The only real solution for global envs is to have one file that sits outside the project directories, with a symlink to it inside each project directory called .env; the other approaches won't work. This is because Docker attaches some special behaviour to .env specifically, which lets it pass environment variables as well as interpolate them.
  7. Thanks, but can you elaborate more on this? I think it might not be exactly what I wanted. What I wanted was to put the envToUse in /boot/config/plugins/compose.manager/projects/envToUse and have that env file subsequently copied / replicated or symlinked to all the subdirectories (i.e. /boot/config/plugins/compose.manager/projects/plex/.env). The reason is that .env has special properties that addressing it via an env_file declaration doesn't give you, like the interpolation of $DOCKERDIR in my earlier example. Your previous comment, 'Is there a Github repo for this? I've modified some of this plugin locally to allow use of a "master" .env with multiple projects and I'd like to share it back.', seemed to hint at doing exactly this. If I declare an envToUse, does it replicate across other projects?
  8. Hi, I'm not sure if you are accepting feature requests, but this would be useful for me. I've been down a rabbit hole trying to do something like a global .env file which I can use for references and interpolation. It seems that's not possible if you have docker-compose files in directories and try to reference outside of them, even by hardcoding: I tried both env_file with the full path and env_file: ../.env, and neither works. I'm wondering if we could have a global menu added which, upon boot, create, start and stop of any container, checks the values in the global .env file and places them into each container's .env file. This would then allow easy referencing in compose, so for example instead of having to write out a long /mnt/ssd/cache/appdata/ you could define that as $DOCKERDIR and just call $DOCKERDIR each time.
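     To picture the 'place them into each container's .env' step, a hypothetical sketch of the merge (everything here is an assumption about how you'd hook it in, e.g. from the User Scripts plugin at array start):

         #!/bin/bash
         # merge global KEY=VALUE pairs into each project's .env,
         # leaving keys the project already defines untouched
         GLOBAL=/boot/config/plugins/compose.manager/projects/.env
         for proj in /boot/config/plugins/compose.manager/projects/*/; do
           target="${proj}.env"
           touch "$target"
           while IFS= read -r line; do
             case "$line" in ''|\#*) continue ;; esac
             key="${line%%=*}"
             grep -q "^${key}=" "$target" || echo "$line" >> "$target"
           done < "$GLOBAL"
         done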
  9. Curious case of the parity drive never spinning down after upgrading it; there appears to be some very small amount of writing occurring and I can't work out the cause of it.
  10. I've had trouble in the past where I needed to copy/paste a MAC address, and viewing it from Settings > Network Settings is useful for that. The MAC addresses there are output in full capitals, whereas whatever I'm referencing usually wants lowercase; the output of ip link show, for example, is all lowercase.
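     In the meantime a trivial workaround (interface name is just an example):

         # grab the MAC straight from the interface, already lowercase
         ip link show br0 | awk '/ether/ {print $2}'

         # or lowercase a value copied out of the GUI
         echo '02:AB:CD:EF:12:34' | tr '[:upper:]' '[:lower:]'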
  11. This feature, desired for 9+ years, is still desired. NVMe drives have gotten so cheap now that this would be a huge power-up for our arrays.
  12. It sounds like you are experiencing the same thing I am / did. If you check some of my posts I compare the speed of SMB on Windows Server vs SMB on Unraid, and it's night and day. In addition to that, I've noticed what you have, as well as other things. For example, when we had Windows Server installed you could sit at your Windows desktop and just unzip files on the remote share as if they were stored locally on your HDD; with the Linux implementation of SMB we can't do that. To get around that particular scenario I created a bash script that SSHes into the Unraid box, works out which path I'm on, finds the file I want to unzip, executes 7z, then does a permissions cleanup and closes. I imagine you'll need to do similarly and SSH into the other Unraid box to execute the move/copy that way. Others have done it with Midnight Commander or that other Docker container with rsync. It's unfortunately the nature of the beast, and if you want that functionality you need Windows Server.
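     Roughly what that helper does, heavily simplified (host, paths and permission values are examples, not the actual script):

         #!/bin/bash
         # unzip an archive in place on the Unraid box instead of pulling it over SMB
         REMOTE=root@tower          # Unraid host (example)
         ARCHIVE="$1"               # archive path as seen on the server, e.g. /mnt/user/downloads/foo.7z
         DEST="$(dirname "$ARCHIVE")"

         ssh "$REMOTE" "cd '$DEST' && 7z x -y '$ARCHIVE' && chown -R nobody:users '$DEST' && chmod -R u=rwX,g=rwX,o=rX '$DEST'"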
  13. Hahaha, OK, well that's a show stopper. I'm thinking, though: if it's possible to execute scripts (does that work?), then we can fire off something which starts a cron job to invoke 'mover stop' after x period of time, and use the maths to figure out how long it would take to shift y amount of data (in my case 1 TB). My array averages about 120 MB/s per drive (assuming a large sequential transfer), and (1 TB) / (120 MB/s) ≈ 2.31 hours, so I just need a daily mover run of about 2 hours to shift roughly 800 GB whenever the cache fills to 85%, with the script stopping it afterwards. That should in theory allow us to do it that way, right?
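     Something along these lines, assuming the mover script on your Unraid version accepts start/stop arguments (run it on a schedule from the User Scripts plugin; the timing is just the maths above):

         #!/bin/bash
         # start the mover, let it run for ~2 hours, then stop it
         /usr/local/sbin/mover start &
         sleep $((2 * 60 * 60))     # roughly 860 GB at 120 MB/s
         /usr/local/sbin/mover stop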
  14. Thank you. I don't quite understand how to tune the settings to accomplish this. These are mine currently, but forgive my lack of understanding of the process: wouldn't this mean that once my cache hits 50% utilisation it would immediately transfer ALL files older than 15 days, and not stop transferring until it satisfies the <14 days requirement? Because that would be a show stopper if so. Or does mover simply stop the moment I drop below 50%? I tested this and had some strange observations: I hit 50% and my cache is now at 44%, so some files got transferred. I'm assuming it moves files down to the nearest neighbouring % interval?
  15. I'm assuming this isn't possible, since the options don't seem to exist, but is it possible to configure the mover to work based on atime and move around 10%? So any time my 8 TB NVMe cache hits 7 TB used, it would clear out 1 TB of the least-accessed content onto spinning rust. Can I accomplish this, and if so, how?
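     The mover itself doesn't expose an atime policy as far as I know, but purely to illustrate the selection step, something like this lists the least-recently-accessed files on a cache share (path is an example, and atime has to actually be maintained on the filesystem for it to mean anything):

         # oldest access times first: candidates to move off the cache
         find /mnt/cache/data -type f -printf '%A@ %s %p\n' | sort -n | head -n 20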
  16. There are arguments that macvlan is more efficient than ipvlan in terms of CPU utilisation (at least from my googling when I was deciding), but I think it's in the realms of 'I can save 0.1% on my grocery bill by running the London marathon', meaning you shouldn't give a shit about it. macvlan often comes with the requirement of promiscuous mode (which, from what I've read, can increase latency) and of allowing forged MAC transmits. So pick your poison really, but macvlan seems to me to be the worse of the two options compared to ipvlan.
  17. You don't need Plop anymore; you can boot directly from a USB stick attached directly to ESXi. Plop is generally used for devices which have issues booting directly from USB, but those are mostly a thing of the past now, it seems. Just remember to change the VM firmware from BIOS to EFI, then pick the stick from your boot menu.
  18. I don't know the truth of this, but on the face of it, it doesn't seem like a massive problem to solve. We already have hardware USB GUID-based validation for licensing, and we already have infrastructure in place to back up / migrate those USB sticks. It's not a huge stretch to have an option where you flag a specific NVMe or regular SSD as a 'docker / VM' device, and that device is allowed to run without the main array having to be started. It could be limited purely to valid licence key holders and subject to the same 'once yearly' migration rules, etc. None of that seems too far-fetched to me; it seems easily workable if there's enough desire to implement it.
     If I were to pull a statistic out of my ass, I'd guess around 75% of Unraid users have some form of SSD (NVMe or not), and with the way the industry / market is going you can get them super cheap now; cost is barely a barrier anymore. I've seen $20 deals on 250 GB NVMe SSDs and $25 deals on 500 GB SSDs.
     No decision will ever have the entire userbase's support, but I've seen patches / bugfixes / development time spent on niche user-experience stuff (like drivers for niche hardware), so if the vast majority of users could have this implemented in the way I described, I don't think it would bring the whole house of cards tumbling down. You might get a few butthurt users who don't have SSDs or don't want, say, bi-weekly online auth or something, but they'll soon come round when they need the functionality.
  19. I run Unraid as a VM with USB passthrough on ESXi 8. Whilst I support the idea of this feature, I don't think it's a huge leap to suggest people just use something else. Near enough everybody can run ESXi for free, the limitations aren't huge, its performance is very close to bare metal, and you get a degree of abstraction so you can further test whether a fault you're seeing is Unraid-based or not. (Or you can run Proxmox for free too.)
  20. Because it's ZFS, which means I have to incur striping. Most users who are attracted to Unraid or similar solutions do so because of the ease of expansion and the JBOD nature of things. When you get data at scale (and my scale isn't even that large, only some 80 TB or so), the idea of losing it ALL in one single failure is scary as hell. I love that it's all JBOD and independent.
     Prior to using Unraid I had been using SnapRAID for about 7 years. I'm still mulling over the idea of writing a guide on setting up SnapRAID on Unraid, because it's a significantly better solution all round for integrity: it checksums all the files and keeps records of them, and if there's a parity sync mismatch it's very easy to see which files are mismatched, so you can independently verify whether the parity or the filesystem is correct. It's also very easy to plug and play. The only major downside is that it isn't live parity; it's snapshot-based, so you generally run a script that tells the array to sync, and you can't be writing data to it during that time.
     Re: writeback mode, I agree with you, it should be disabled, and a native Unraid solution could lock that flag in place (as well as potentially exposing a bunch of these options, like sequential_cutoff, as GUI fields / drop-down menus). Due to the nature of Unraid's cache system, writeback mode is largely redundant, isn't it? I also agree it makes no sense to bcache a cache pool outside of some super-niche and probably datacentre-level considerations; nothing for us home users.
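     For anyone curious what the SnapRAID side looks like, a bare-bones sketch (paths and disk names are examples, not a recommendation for any particular Unraid layout):

         # minimal snapraid.conf
         #   parity  /mnt/parity1/snapraid.parity
         #   content /mnt/disk1/snapraid.content
         #   content /mnt/disk2/snapraid.content
         #   data d1 /mnt/disk1/
         #   data d2 /mnt/disk2/

         snapraid sync        # snapshot-style parity update; avoid writing to the array while it runs
         snapraid scrub -p 5  # verify a 5% slice against the stored checksums
         snapraid diff        # show what has changed since the last sync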
  21. What would you suggest is the best method to accomplish this on Unraid, i.e. a transparent read cache for the array? The pickings seem pretty slim, and bcache seems the best tool for the job all things considered. As it stands it is as you have said, but I don't think it would be reinventing the wheel to have a native implementation of bcache in the Unraid ecosystem for the array (and maybe even for cache devices too, but that seems a bit redundant). The offset functionality is how we maintain each individual array disk's accessibility as an independent filesystem, is it not? The parity could have an offset too; hell, if we moved this from an 8 KB to a 1 MB offset instead (so we can better support that alignment fix), is it really the end of the world if the first 1 MB of our array just isn't backed up by parity? We could have each drive simply start writing from 1 MB in, and in principle it's all interchangeable, plug-and-play?
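     To make the offset talk concrete, this is roughly what the bcache-tools side looks like (a sketch only; device names are examples, and this is not something to point at live Unraid array disks):

         # SSD becomes the cache set
         make-bcache -C /dev/nvme0n1p1

         # backing disk with the data offset pushed out to 1 MiB (2048 x 512-byte sectors)
         make-bcache -B --data-offset 2048 /dev/sdX

         # once udev has registered the devices, attach the cache set to the backing device
         CSET=$(bcache-super-show /dev/nvme0n1p1 | awk '/cset.uuid/ {print $2}')
         echo "$CSET" > /sys/block/bcache0/bcache/attach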
  22. Great to hear, but can you expand more on this? From what you've said it wouldn't be capable of caching data accessed on the array? If it isn't, can you elaborate on what use cases we could cover with it? I remember reading about using the bcache offset to simply mount with or without the bcache; wouldn't that be something Unraid could do for the array? At the parity level, Unraid could just ignore the first 8 KB of each disk (if the cache is enabled, this would be the bcache hook-in offset) and then we'd all have a transparent, interchangeable, ephemeral cache. I recall reading that @limetech played about with it in 2015; if we could get the 8 KB offset hook-in and have the parity disk ignore the first 8 KB (or assume its values, since it'll never really matter), then that should be sufficient for array integration, wouldn't it?
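     And the 'transparent, interchangeable, ephemeral' part is mostly sysfs knobs once a bcache device exists (assumes /dev/bcache0 is already set up; these are stock bcache attributes):

         cat /sys/block/bcache0/bcache/cache_mode                   # e.g. [writethrough] writeback writearound none
         echo writethrough > /sys/block/bcache0/bcache/cache_mode   # keep it a read cache

         # drop the SSD entirely; the backing device keeps working uncached
         echo 1 > /sys/block/bcache0/bcache/detach

         cat /sys/block/bcache0/bcache/stats_total/cache_hit_ratio  # rough read-cache effectiveness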
  23. I disagree with the method of communication, but this seems mostly trivial for Limetech to implement in practice. The real point where a warning should be issued is on clicking the update button, as a secondary 'are you sure'-style window. It doesn't strike me as beyond the realms of computational possibility to parse the list of plugins already installed and spit out the ones which will become invalid. At that point the user can say 'nah, I can't be bothered updating yet', because if it invalidates more plugins than the effort you plan to go to is worth, and you are relatively happy as is, why bother? The problem ultimately was that people felt blindsided, and overall it should be a simple problem to fix.
  24. Sorry if I came off as combative; that wasn't my intent. It's been my observation that sometimes discussions get wholly shut down because they're not within the extremely narrow scope of a given person's use case (see for example the whole political-ideological warfare landscape that is the Linux kernel folks' dislike of FUSE filesystems, lol; the Windows equivalent, DrivePool, is significantly better on a lot of fronts, but it just can't do hardlinks). It happens to all of us, I get it: some of us couldn't possibly imagine a scenario where something is useful, and on face value it sounds absolutely daft (not saying you did any of this, just talking about observations; I've also seen a lot of plonkers advocating for stupid stuff that, when drilled into, was stupid overall). The conversation then devolves into an exercise where (rightly or wrongly) you end up having to justify the minutiae of your deployment at great length, writing literal novels in the process... (again, I'm not saying you did any of this, I'm speaking in generalities about online discussions).
     Also, yep, that's exactly as you described, and it's the exact desired state in a tiered storage scenario. 'Basically cache every file' is the absolute bottom-dollar, god-tier, 10/10, dialled-to-11 position that I, and probably a bunch more people, want to end up in. Yep, it will significantly reduce your SSD lifespan, and it's going to do some insane level of write amplification; that's also entirely desired.
     SSDs themselves have insane levels of longevity. If we're talking meta-commentary on SSD lifespan, we're seeing many SSDs outliving their usefulness because of sheer capacity expansion; SSDs basically just don't die now, at least the good ones. They are not only rated for significant amounts of writes, but those stated values don't reflect REAL WORLD values. For example, the Samsung 840 Pro (don't buy this, by the way, for anybody reading: it has a hardware-level defect which Samsung 'fixed' in firmware, and the 'fix' caused massive write amplification) has a rated TBW of some 100-200 TB (I'm working from memory), but in torture tests where people actively tried to KILL the SSD, it survived many PETABYTES of data, more than a 10-20x increase over the rating on the box: https://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead/
     So yeah, I think a GOOD justification can be made overall for this kind of stuff. The used SSD market, for both NVMe and SATA SSDs, is a gold mine imo. The whole idea with bcache is that it's interchangeable and ephemeral and you shouldn't give a shit about it; I'd adopt a pump-and-dump mentality with it. I envisage that for Unraid, with a plugin / script, once sufficiently advanced, it could be as simple as pointing at an SSD and clicking go. From a value proposition this makes TONS of sense. Imagine someone comes to you and says 'hey {user}, for £16 the next 2 PETABYTES of reads are going to be served from a 128 GB cache which is fast as lightning; would you be interested in such a deal?' I'd bite their hand off and thank them profusely for the opportunity. Probably ask if I could rake their yard in gratitude.
  25. Oh, also re: sequential I/O, it appears you can turn the cutoff off: `echo 0 > /sys/block/bcache0/bcache/sequential_cutoff`