fagostini

Everything posted by fagostini

  1. Curious if this will allow for user input while the script is running. I really wanted to add something like this:
     ask_user() {
         read -p "Do you want to continue with the script? (y/n): " choice
         case "$choice" in
             y|Y ) echo "Continuing with the script...";;
             n|N ) echo "Exiting script..."; exit;;
             * ) echo "Invalid input. Exiting script..."; exit;;
         esac
     }
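     For what it's worth, here is a minimal sketch of wiring that into a script (hypothetical script body; note that read -p only prompts when the script is run from an interactive shell, e.g. over SSH, not when it runs unattended in the background):
     #!/bin/bash
     # minimal sketch: prompt before doing any work; needs an interactive shell to read input
     ask_user() {
         read -p "Do you want to continue with the script? (y/n): " choice
         case "$choice" in
             y|Y ) echo "Continuing with the script...";;
             n|N ) echo "Exiting script..."; exit;;
             * ) echo "Invalid input. Exiting script..."; exit;;
         esac
     }
     ask_user
     echo "...the rest of the script only runs after a y/Y answer"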
  2. Can anyone help me with what plugin I need to convert .ts files to .mkv? Currently getting some error that it can't convert to .toLowerString ....
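     If all that's needed is a container change, a plain ffmpeg remux (sketch below, assuming the streams inside the .ts are codecs mkv accepts, which is usually the case for broadcast recordings) does it without re-encoding:
     # copy every stream from the transport stream into a Matroska container, no re-encode
     ffmpeg -i input.ts -map 0 -c copy output.mkv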
  3. Hey, looking for some help with SWAG: nothing is getting banned at all. I have tried to trip a ban by attempting a bad login 10 times in a row... maxretry is set to 2. I thought it might be that I needed to add a .local filter for bitwardenrs and a path to the log, but even after doing this I'm still getting nothing.
     This is my bitwarden.local inside filter.d:
     [Definition]
     failregex = ^\s*\[ERROR\]\s+Username or password is incorrect. Try again.(?:, 2FA invalid)?\. <HOST>$
     And this is what I added inside jail.local:
     [bitwarden]
     enabled = true
     filter = bitwarden
     logpath = /config/log/containers/bitwarden.log
     maxretry = 2
     I have attached my fail2ban log. I changed the log level to DEBUG and then HEAVYDEBUG and still can't see why it isn't picking up the failed attempts. Any help would be much appreciated.
     fail2ban.log
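     One thing that might help narrow it down, assuming SWAG keeps its fail2ban config under /config/fail2ban/ (the filter path below is that assumption), is running fail2ban's own filter tester against the live log from inside the container:
     # from a shell inside the SWAG container: does the failregex match any lines in the log?
     fail2ban-regex /config/log/containers/bitwarden.log /config/fail2ban/filter.d/bitwarden.local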
  4. I forwarded 443 to a docker container that is on br0, not bridged. Edit: I did forward port 1194 for OpenVPN to the server's IP for the docker container.
  5. Hello, I have been noticing some weird things in my syslog recently and I am worried I might have been hacked.
     Mar 22 05:24:16 Gargantua smbd[78925]: [2021/03/22 05:24:16.388691, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 05:24:16 Gargantua smbd[78925]: reply_trans: invalid trans parameters
     Mar 22 06:12:01 Gargantua smbd[94749]: [2021/03/22 06:12:01.206465, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 06:12:01 Gargantua smbd[94749]: reply_trans: invalid trans parameters
     Mar 22 07:57:02 Gargantua smbd[130452]: [2021/03/22 07:57:02.480366, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 07:57:02 Gargantua smbd[130452]: reply_trans: invalid trans parameters
     Mar 22 08:00:37 Gargantua smbd[862]: [2021/03/22 08:00:37.197682, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 08:00:37 Gargantua smbd[862]: reply_trans: invalid trans parameters
     Mar 22 09:55:46 Gargantua vsftpd[39340]: connect from 192.241.229.40 (192.241.229.40)
     Mar 22 10:47:08 Gargantua rpcbind[56376]: connect from 147.203.255.20 to dump()
     Mar 22 11:39:25 Gargantua vsftpd[73482]: connect from 104.206.128.14 (104.206.128.14)
     Mar 22 11:45:55 Gargantua vsftpd[75646]: connect from 104.206.128.34 (104.206.128.34)
     Mar 22 12:24:44 Gargantua rpcbind[88532]: connect from 178.79.177.180 to dump()
     Mar 22 12:40:33 Gargantua smbd[93735]: [2021/03/22 12:40:33.433770, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 12:40:33 Gargantua smbd[93735]: reply_trans: invalid trans parameters
     Mar 22 13:44:59 Gargantua rpcbind[115137]: connect from 192.241.222.139 to dump()
     Mar 22 14:06:32 Gargantua smbd[122228]: [2021/03/22 14:06:32.453344, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 14:06:32 Gargantua smbd[122228]: reply_trans: invalid trans parameters
     Mar 22 16:24:30 Gargantua smbd[37268]: [2021/03/22 16:24:30.999049, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 16:24:30 Gargantua smbd[37268]: reply_trans: invalid trans parameters
     Mar 22 16:26:16 Gargantua smbd[37856]: [2021/03/22 16:26:16.483063, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 16:26:16 Gargantua smbd[37856]: reply_trans: invalid trans parameters
     Mar 22 19:21:44 Gargantua smbd[96285]: [2021/03/22 19:21:44.027448, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 19:21:44 Gargantua smbd[96285]: reply_trans: invalid trans parameters
     Mar 22 19:34:58 Gargantua smbd[100588]: [2021/03/22 19:34:58.929134, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 19:34:58 Gargantua smbd[100588]: reply_trans: invalid trans parameters
     Mar 22 19:50:20 Gargantua smbd[105914]: [2021/03/22 19:50:20.334158, 0] ../../source3/smbd/process.c:341(read_packet_remainder)
     Mar 22 19:50:20 Gargantua smbd[105914]: read_fd_with_timeout failed for client 23.90.145.51 read error = NT_STATUS_END_OF_FILE.
     Mar 22 20:07:32 Gargantua kernel: svc: svc_tcp_read_marker nfsd RPC fragment too large: 1195725856
     Mar 22 21:38:25 Gargantua smbd[11326]: [2021/03/22 21:38:25.309615, 0] ../../source3/smbd/ipc.c:843(reply_trans)
     Mar 22 21:38:25 Gargantua smbd[11326]: reply_trans: invalid trans parameters
     Edit: I have changed the root password in the web UI.
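     One way to gauge exposure, assuming shell access on the server (ss may need to be swapped for netstat on older userlands), is to list what is actually listening and then compare that against the router's port-forward rules, since those connects come from external addresses:
     # list listening TCP/UDP sockets and the owning process
     ss -tulpn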
  6. So does it move the data back to the array when the system powers off?
  7. It shows all the files in /mnt/users/appdata and /mnt/cache/appdata.
  8. Okay, so it took all night, but mover has finally filled the cache with 200 GB of the appdata, domains, and system files. It all looks mirrored onto the array as well; I'm guessing this is the setup that I want?
  9. Will mover not just move from the array to the cache?
  10. I don't think I need to start over. I got the VM running in the beginning, and I am already transferring everything out of the appdata folder; outside of the Plex media nothing else should be an issue. Originally Plex was a docker and I thought I had to store all of its files together in the appdata section. Major noob here. I do want to say thanks for explaining all this to me; I feel like an idiot for not figuring that out when I started.
  11. The dockers are less of a concern for me right now, so if I move all the excess data from appdata to a new share and change the share settings for the main ones to Prefer, the VM should stay after a reboot at least, right? Originally I had the Plex docker and it saved its data to appdata, so I changed the cache setting to Yes so the cache wasn't full all the time; everything worked fine for months that way, even after reboots. I changed the CPU pinning for the VMs and when the server came back up they disappeared. Thanks for all the info; when I got Unraid it was difficult to find best practices online and now it is biting me. So you think migrating the bulk storage of Plex to a new share and changing the cache settings won't really solve the issues?
  12. I think I did make a big oof. I'm going to make a new share to transfer all the VM data into and change the paths, move the other shares to cache-prefer, and set the mass storage to cache-yes for faster write times. And this, you are saying, should help the VMs/dockers reappear?
  13. OK, so I'm a little curious why I would want my appdata to all be on the cache. My cache is only 500 GB and my appdata folder has several TB of data in it; I only wanted to use the cache to dump files onto the machine and then move them into the array periodically. If I switch to Prefer it will fill my cache all the time.
  14. OK, so I just made a new VM and pointed it to the old VM's vdisk and that seems to work fine, but I don't know how to do that for the docker containers.
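      On the docker side, a quick check worth doing first, assuming a standard Unraid install (this is the usual flash-drive path), is whether the templates of the previously added containers are still saved; if they are, they can normally be re-added from the Docker tab's Add Container dropdown:
      # user templates saved by Unraid's Docker manager live on the flash drive
      ls -l /boot/config/plugins/dockerMan/templates-user/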
  15. Ok, here it is: gargantua-diagnostics-20200224-1621.zip
  16. I recently rebooted Unraid 6.8.2 and when it came back up the VMs and dockers are no longer showing up. I have checked the domains share and I still have the vdisk.img for both VMs, but under the VMS tab there is nothing... Any help would be great.
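      A quick sanity check, assuming the stock image locations (both normally sit in the system share; adjust if the Docker/VM settings pages point elsewhere), is whether docker.img and libvirt.img still exist:
      # default service-image locations on a stock Unraid install
      ls -lh /mnt/user/system/docker/docker.img /mnt/user/system/libvirt/libvirt.img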