Everything posted by je82

  1. I am using Cloudflare. Maybe I'll start with 2FA, but it feels very bothersome since I've configured my web plugin to time out after 5 minutes; having to do 2FA every time seems a little over the top. I just don't want some script to sit and brute-force my install for 2 years straight, but considering my password is over 30 characters long with lowercase, uppercase, and special characters, it's highly unlikely anyone would get through, I guess.
  2. I found the fix for "remove" actually removing the data too: go to rtorrent.rc and put a # in front of the line that makes "remove" also do remove + data (example below). I have no idea why someone would have that in the configuration when the client already offers both actions, but then again I'm no expert. It works fine for me now, but I wouldn't trust the container with write access to my full array; just limit it to the share where you do the torrenting.
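     For reference, the kind of line to look for is an event hook on "erased" that runs a delete. A hypothetical example only, the exact line in the container's rtorrent.rc will differ; the point is to put # in front of whatever hook deletes data on event.download.erased:
         # method.set_key = event.download.erased, delete_erased, "execute=rm,-rf,--,(d.base_path)"
     With that hook commented out, "remove" only drops the torrent from the client and leaves the files on disk, and ruTorrent's own remove-with-data action is still there when you want it.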
  3. Not too sure about this crazymax thing: there is no support thread, and ruTorrent doesn't actually work great. The "remove" action in the crazymax version deletes the torrent + the data, not just the torrent. Not great, but I'll try to recompile my own version.
  4. Yeah, I have far too many applications to deal with that hassle. I'll just buy an email service like Runbox; I have been thinking of moving away from Big Brother Google spying on me anyway, so this is the time to do it.
  5. Welp, great, time to migrate away from Google; I should have done it a long time ago. More secure, yeah... so secure I cannot even access it anymore using my own credentials, which are far beyond anything you could brute-force. Thanks, Google. More of you will probably see this soon.
  6. Any recent changes to Gmail? For the past 2 days my Unraid installation has not been able to send any emails: "Jun 8 05:09:47 NAS sSMTP[12461]: SSL connection using TLS_AES_256_GCM_SHA384" "Jun 8 05:09:47 NAS sSMTP[12461]: Authorization failed (535 5.7.8 https://support.google.com/mail/?p=BadCredentials u6-20020a05651206c600b00478f40f6e74sm3503090lff.140 - gsmtp)" Logging in with the same credentials works fine from the same IP via a browser. Anyone else suddenly having this problem? I'm pretty sure it's a Gmail issue, because another application of mine also fails now. The only thing I can think of is that I have configured both Unraid and this other application to send email from itself to itself, and perhaps that is something Gmail has decided to block recently? EDIT: tested sending to another email address, same credentials issue.
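     For context, the sSMTP side of this is just a handful of lines along these lines (placeholder values, not my exact file):
         # /etc/ssmtp/ssmtp.conf
         mailhub=smtp.gmail.com:587
         UseSTARTTLS=YES
         AuthUser=myaccount@gmail.com
         AuthPass=app-password-or-account-password
         FromLineOverride=YES
     If Google has started rejecting plain account passwords for SMTP, switching AuthPass to an app password generated under the account's 2-step verification settings would be the obvious thing to try (that part is a guess on my end).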
  7. I'm using the Vaultwarden docker on Unraid and I have no problems with it, but my concern is: what extra security measures besides the password are in place? I attempted to log in to my Vaultwarden 15 times in a row and it seems it never blocked any of the attempts, which makes me think it is prone to 24/7 brute-forcing?
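     If it's of use to anyone: Vaultwarden appears to expose login rate limiting through environment variables, roughly like this (variable names quoted from memory, so double-check the project's .env.template before relying on them):
         LOGIN_RATELIMIT_SECONDS=60      # size of the window login attempts are counted in
         LOGIN_RATELIMIT_MAX_BURST=10    # attempts allowed per client within that window
     Combined with 2FA on the vault itself, a slow 24/7 brute force becomes a lot less of a worry.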
  8. How do I stop it from being included in auto updates / "check for updates"?
  9. I had issues where running rsync from another server changed folder permissions every time, making it impossible for other users who also access the shares to reach them. I wasn't sure how to resolve this any other way, as I tried setting the owner/group and permissions in the rsync command and it only worked for files, not for folders. Example: rsync -avu --chown=nobody:users --chmod=Du=rwx,Dg=rwx,Do=rwx,Fu=rw,Fg=rw,Fo=rw Anyway, it turns out the changes I made to SMB weren't the issue here: the container running qBittorrent had a changed umask setting, maybe an update did something, and that caused the permission issues. It just happened close to me messing with the SMB configuration, so I defaulted to thinking that was the cause.
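     Spelled out with hypothetical paths, the command looked roughly like this (source and destination are placeholders; --chown needs a reasonably recent rsync on both ends and enough privileges on the receiving side to actually change ownership):
         rsync -avu \
             --chown=nobody:users \
             --chmod=Du=rwx,Dg=rwx,Do=rwx,Fu=rw,Fg=rw,Fo=rw \
             /mnt/user/backup/ root@unraid:/mnt/user/backup/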
  10. Hi, is there a way to disable updates for a particular docker container? As I understand it, you can always append :versionnumber to the repository name, but what if the repo itself doesn't publish any versions? In this case I want this container to never update: repo: pltnk/icecast2 (https://github.com/pltnk/docker-icecast2). As you can see it was updated a few days ago, and this messes with my configuration because I made custom changes via the CLI that for whatever reason get wiped out every time it updates. How can I keep this repo installed but ignore any updates it may receive in the future?
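     To make the question concrete, this is what the repository field looks like with and without pinning (the digest is a placeholder; the real one can be read with docker images --digests on the server, and I'm assuming the Unraid template accepts the @sha256 form):
         pltnk/icecast2                         # follows the repo, gets flagged for updates
         pltnk/icecast2@sha256:<image-digest>   # pins one exact image and never changes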
  11. Hi, I had an issue where, whenever a new file was created in Unraid, the permissions and owner were not set to nobody:users; instead the owner could be the individual user who created it, resulting in access issues when multiple different users access files on the shares. I solved this by adding the following to the SMB conf global section:
          force create mode = 0666
          force directory mode = 0777
          force user = nobody
          force group = users
          create mask = 0666
      The problem I notice now is that programs like Sonarr and Radarr, which actually need execute permissions on files, are unable to do their jobs properly. Is there any way to tell Unraid to set 666 on all files except in one particular area/share? If the answer is no, then my second question is: would it be safe to set the SMB conf like this instead to solve the problem, or would that be a massive security risk for some reason I don't know about?
          force create mode = 0766
          force directory mode = 0777
          force user = nobody
          force group = users
          create mask = 0766
      Thanks.
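     In plain Samba terms these parameters can also be set per share rather than globally, so in principle one share could allow the execute bit while everything else stays at 0666, something like this (hypothetical share name, and I'm not sure how cleanly a per-share override fits into the config Unraid generates):
         [media]
             force create mode = 0766
             create mask = 0766
     That is essentially what my first question is asking for.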
  12. I've noticed with JDownloader2 that whenever a host uses hCaptcha, you cannot add it to your premium hosts using this docker: there is no captcha popup anywhere to verify yourself through the captcha. If you install JDownloader2 on Windows and add your account, you are redirected to a webpage where you authenticate with the hCaptcha to add the account. Is it possible to fix this for the docker?
  13. Beautiful work, this should be implemented as a feature for future releases if possible!
  14. I was going to use Vaultwarden to store my passwords, but only have it accessible via the local area network, using a VPN to connect to my network and sync passwords when I am not on the LAN. The problem is that whenever I set up Vaultwarden, it tells me it won't work without HTTPS in my browser. It seems very cumbersome to deal with internal certificates; is there a way to bypass this? I do not wish to expose Vaultwarden to the internet, as I have access to my network through the VPN anyway. Let's Encrypt only works if it can validate the certificate from their servers, which won't work if the host isn't reachable from outside. Any good solutions here? I want certificates to be more or less seamless, without having to deal with them manually each time they expire, but I also only want my Vaultwarden to be accessible from the LAN, never from the WAN.
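     For reference, two routes I'm weighing: DNS-01 validation with Let's Encrypt (it never needs to reach the host, but it does need API access to the DNS provider), or a long-lived self-signed certificate, roughly like this (hostname is made up, and the browser and the Bitwarden apps still have to be told to trust it):
         openssl req -x509 -nodes -newkey rsa:4096 -days 3650 \
             -keyout vaultwarden.lan.key -out vaultwarden.lan.crt \
             -subj "/CN=vaultwarden.lan"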
  15. Hi, thinking of self-hosting a password manager, but I have some questions. I want my passwords to be 100% self-hosted, no cloud services involved. That creates some problems: I need multiple installations of either Bitwarden, or perhaps something like Enpass, which as I understand it stores the passwords locally encrypted on each device? The reason I need dual installations is: if I shut down my Unraid server, then what? I have no passwords? Just like with DNS, I like having at least 2 on different hardware so I can shut one down while the other remains operational, and they need to sync automatically between themselves. I intend to use VPN connectivity to my network when I use devices from outside, so I don't need any reverse proxying. What is the best password manager in 2021 you would recommend for this? Is Enpass the way to go, or is Bitwarden (now for whatever reason called Vaultwarden?) the way to go? EDIT: It seems Enpass doesn't support syncing unless you use a cloud service, what a strange thing.
  16. I have 3 Unraid servers now: 1. 170 TB, 2. 250 TB, 3. 40 TB. My 170 TB one just got a little upgrade to transcode some movies.
  17. Hi, I made the mistake of buying a Gainward Ghost 3060 Ti because it was the only one in stock, and it has a lot of RGB lighting. Via the command nvidia-smi -pm 1 I've managed to get the card to sleep in P-state 8, but it still draws 19 W in this mode. Does anyone else have a 3060 Ti, or a 3000-series card with no RGB lights, sitting in P-state 8? My guess is that the RGB is drawing at least 10 W, could that be possible? I have no idea how much LEDs usually draw, but looking at other people with NVIDIA cards in the same power state, they see around 6-10 W of draw, not 19. Follow-up question: is there a way to disable the LED lights via the NVIDIA driver? My guess is no, because this is a third-party vendor card... I suppose the only way to find out how much power these LEDs actually draw is to open the card up and find the power feed to the LED lights, but that might be a gamble. EDIT: Ended up opening the card up and removing the RGB power header cable, which luckily was pretty straightforward with this card, and voila, 9 W!
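     For anyone else chasing idle power draw, the two commands involved (persistence mode keeps the driver initialized so the card can settle into a low P-state when nothing is using it; the second line just reports the state and draw every 5 seconds):
         nvidia-smi -pm 1
         nvidia-smi --query-gpu=pstate,power.draw --format=csv -l 5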
  18. Never mind, I understand now: only lower-case files will be visible, and if the share already exists there are most likely files that are not all lower case. Yeah, SMB performance really suffers in 6.8.x and onward compared to 6.7.2, which seems to be the last version whose SMB setup didn't add seconds of delay when accessing a file. EDIT: Turning off hard link support fixed my SMB speed issue in 6.8.3 and onward (default is yes; set it to no to speed up SMB directory listings). Not sure what I'd be losing with this disabled; I don't use any hard links that I know of.
  19. I just did a manual upgrade from 6.7.2 to 6.8.3 and now my dockers cannot find any updates; it doesn't matter if I use Cloudflare DNS, Google DNS or Pi-hole, it's all the same. Ideas? Never mind, found the solution here: https://forums.unraid.net/topic/108643-all-docker-containers-lists-version-“not-available”-under-update/?do=findComment&comment=993917
  20. Hello, I am wondering about the detailed information for the case-sensitive filenames setting in the share section, introduced in 6.8.3, which tells me the following: "A value of Forced lower is special: the case of all incoming client filenames, not just new filenames, will be set to lower-case. In other words, no matter what mixed case name is created on the Windows side, it will be stored and accessed in all lower-case. This ensures all Windows apps will properly find any file regardless of case, but case will not be preserved in folder listings. Note this setting should only be configured for new shares." I've bolded the part I wonder about: what will happen if I apply this to a share that already exists? Why is it only recommended for new shares and not old ones? Thanks!
  21. Hi, I prefer to run the "previous stable branch" to avoid being on the bleeding edge. I am now going to upgrade from 6.7.2 to 6.8.3, but via the GUI I can only upgrade to either 6.9.2 or 6.9.1. I read somewhere that all I have to do is simply overwrite the bz* files sitting in the root folder of the flash drive and then reboot, is this correct?
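     In shell terms, what I think the manual upgrade amounts to (paths are assumptions on my part: /boot is the flash drive as mounted on the running server, and the 6.8.3 release zip extracted somewhere like /tmp/unraid-6.8.3; taking a flash backup first seems wise):
         mkdir -p /boot/previous
         cp /boot/bz* /boot/previous/      # keep the old bz* images in case I need to roll back
         cp /tmp/unraid-6.8.3/bz* /boot/
         reboot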
  22. The multi-pool support is cool for sure; separating the cache from the dockers is a good idea, we all know how bad it is if you accidentally fill the cache on Unraid. Thanks for the tip!
  23. Oh really? Oh well, I've been running my Unraid for over a year straight, so I can only imagine how worn down my SSDs are; I might as well run them into the ground. Would the writes decrease if I remade the cache volume as XFS encrypted rather than btrfs? What would be the downside of not having btrfs?
  24. Hi, I've been lurking the forums and saw there was a big issue with 6.8.x causing btrfs cache volumes to "constantly loop writes" and eventually kill the SSD. Is this still a thing, or was it ever solved in 6.8.x? My Unraid install has two cache SSDs mirrored with btrfs, will this be an issue? Is it recommended to convert to XFS encrypted rather than running btrfs on 6.8.3? Back when 6.7.2 was the latest version the recommended cache filesystem was btrfs, which is why I have it; I have no real idea what the upsides/downsides of it really are. The rest of my array is XFS encrypted. Thanks.