strike


  1. In the syslog. If you enable notifications and have e-mail notification set up, you also get a list of all the files by mail. And the files aren't necessarily "bad"; they could have just been updated, which changed the hash. So it's a good idea to exclude files that get updated a lot by various apps, like .nfo files.
  2. No, you enter the username and password you set in the auth file, where "username" is your username and "password" is your password.
  3. I didn't quote you, my reply was for @rbh00723
  4. Edit the auth file located in your delugevpn appdata dir. Add the user/pass on a new line in this format: username:password:10. Then restart the container and you should be able to connect.
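A minimal sketch of that edit, assuming a local stand-in path for the auth file (on unraid it would be the auth file inside your delugevpn appdata dir):

```shell
# Append a Deluge auth entry (level 10 = full access) if it isn't there yet.
# NOTE: "./auth" is a placeholder path for this demo -- point it at the
# auth file in your delugevpn appdata dir on a real setup.
AUTH_FILE="./auth"
USERNAME="myuser"
PASSWORD="mypass"

# Only add the line if this username isn't already present
if ! grep -q "^${USERNAME}:" "$AUTH_FILE" 2>/dev/null; then
    echo "${USERNAME}:${PASSWORD}:10" >> "$AUTH_FILE"
fi
```

After saving, restart the container so the daemon rereads the auth file.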
  5. On second thought, I'm not really sure if I need it. All my media files (which is kinda what I care about anyway) on the main server I set to read-only once a week. All other SUPER important files I have backed up in various clouds (in addition to my backup server), which use versioning so I can restore. And all the files that get transferred to the backup server get put in a read-only state at the end of the backup. Worst-case scenario, if a virus of any kind makes it to one or both of my servers and tries to wreak havoc on my files, I lose maybe one week of media files. All other files I can restore from the cloud. So I think I'm pretty safe. I may look at versioning/snapshotting anyway, just because I want to learn and it could be convenient to have snapshots if the worst case happens. But do I NEED it? No, I don't think so.
  6. Having an updated OS, regularly running a virus scan, frequent backup intervals (at least once a week) and being cautious about clicking links in e-mails from untrusted senders go a long way towards staying virus free. But things do happen, especially if you have kids clicking around on everything. So say you run a backup once a week; then you have a small window of 7 days in which you can potentially get infected. If you pick up a virus in those 7 days, it gets to the backup server on the next sync, where it's locked by being set to a read-only state so it can't do any harm to your backup. So by this point, you have run a virus scan on your main machine and confirmed that you have a virus. Now what?

To know whether it's safe to restore the backup and make the file writable again, or whether it's infected, you need to implement checksumming. I don't know how tech-savvy you are, but every file gets a checksum, which is a string of numbers and letters that is unique to that file. If the file gets corrupted in any way it gets a new checksum. So an infected file would have a different checksum than the original. That's how you check if the file is indeed the original or if it has changed or been corrupted in any way.

Now how do you implement this? You could use the BTRFS file system, which has automatic checksumming. If you're not going to use BTRFS, you could try the file integrity plugin, which you can set up to run on a schedule so you'll get notified if a file has been corrupted. Or you could use some other checksumming tool; I know there is one for Windows which many around here use, but I can't remember the name — you'll find it if you search. I did use it myself some time ago.

But what if you've confirmed that both files are infected and can't restore the original file? Well, then you're screwed if you do not use versioning, it's as simple as that. I'm really strict about my files and how I use the internet, so I haven't needed versioning, yet...
I do plan to implement it in the near future though.
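To make the checksum idea concrete, here's a minimal sketch using sha256sum (the file names are made up for the demo):

```shell
# Record a checksum for a file, then detect any later change to it.
echo "original content" > demo.txt
sha256sum demo.txt > demo.sha256      # save the file's fingerprint

# Verify: passes while the file is unchanged
sha256sum -c --quiet demo.sha256 && echo "file intact"

# Simulate corruption/infection by modifying the file
echo "malicious change" >> demo.txt

# Verify again: the checksum no longer matches
sha256sum -c --quiet demo.sha256 || echo "file has changed"
```

The file integrity plugin and BTRFS do essentially this, just automatically and for every file.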
  7. Yeah, now that you mention it I vaguely remember reading about it.
  8. Yeah, if you need versioning you need to look into other things, maybe combining it with rsync, like snapshotting all the disks using the BTRFS file system as already mentioned. I don't really know what a virus can do though if it gets to your backup server. I mean, it only turns on for backup, and it can't do anything with your files since they are read-only. Even the file(s) which are infected get put in a read-only state once backed up. I guess it depends on what the virus does. Of course, if you make your files writable again, the virus can make any changes it wants.
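The read-only step at the end of a backup run can be as simple as stripping write permission from what was just synced; a sketch with placeholder paths standing in for the backup share:

```shell
# Simulate a freshly synced backup dir, then lock it read-only.
mkdir -p backup/media
echo "movie data" > backup/media/movie.mkv

# Strip write permission recursively; a non-root process (e.g. malware
# that made it onto the backup server) can no longer modify these files.
chmod -R a-w backup/media
```

`chmod -R u+w backup/media` reverses it when you legitimately need to refresh or restore files.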
  9. Yes, if you do it my way you have to have the SSH port open on both servers. You need it open on the remote server to run rsync over SSH, and you need it open on your local server to download the keyfile to your remote server. SCP stands for Secure Copy and uses the same port as SSH does. There is an example in my fetch_key script, which I posted earlier, on how to fetch the key via SCP.

My backup server doesn't have IPMI, so I don't know how to answer that one. But you can't do it via SSH on a server which is powered down, since the SSH service isn't running yet. So you need to SSH into another remote machine (a pi?) and run the IPMI command from there. Or maybe there's a possibility to open a port and communicate with the IPMI service directly on the remote server? I don't know, since I haven't researched it and I can't try it.

On my setup, I have a small NUC on the remote site, also running unraid, which is always on. It has the wake on lan plugin installed among other things, which includes etherwake, and I use the etherwake command to send a magic packet to wake up the backup server. So I SSH into the NUC, which wakes up the backup server, then I SSH into the backup server and run the rest of the backup script. But you can do the same thing with a pi or any other machine on the same network, as long as it has SSH, etherwake and is always on. In any case, I would recommend having a small "always on" machine on the remote site running a VPN (if you don't set up a VPN on the remote router, that is). You need to be able to log in to the unraid webui securely if you want to.

Edit: A quick google search reveals that IPMI uses UDP port 623, so maybe you can open that to run the IPMI command directly? And of course, if you can do that then you don't need a pi, as you can power up the server directly and have a VPN server running as a docker container on unraid. But as I said, I don't use IPMI so I don't know if it's possible.
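The wake-up sequence described above, as a hypothetical script sketch — the hostnames, MAC address, boot delay, and script path are all placeholders for your own values:

```shell
# Write the sketch to a file; nothing here is run against a real server.
cat > wake_backup.sh <<'EOF'
#!/bin/bash
# 1. SSH into the always-on machine (NUC/pi) and send the magic packet
ssh root@nuc.remote.lan "etherwake -i br0 AA:BB:CC:DD:EE:FF"

# 2. Give the backup server time to boot
sleep 120

# 3. SSH into the backup server and kick off the rest of the backup
ssh root@backup.remote.lan "/boot/custom/run_backup.sh"
EOF
chmod +x wake_backup.sh
```

The same shape works with IPMI instead of etherwake: replace step 1 with the power-on command for your board's BMC.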
  10. Yeah, but if you continue to read you will see there's a workaround to get decode to work also. There are literally pages of discussion on this, but I think you need an nvidia gpu. And since you have a P2000, why not use that and pass it through to a plex container?
  11. lol, funny spelling mistake! I must be dead tired. I hope you bring enough of that encrypting toilet paper! Time to go to bed... 🤣 Yes. Since you already have to open up a port for SSH, I would fetch the file via SCP instead, which uses the SSH port.
  12. I totally agree with this! Talking about folder structure, there is a huge "design flaw" using rsync with user shares on the first sync. Rsync will create the entire folder structure of a share on the first disk it chooses before it starts transferring the files. This results in all files going to the first disk, causing it to totally fill up and ultimately fail with out-of-space errors. This is only a problem when rsyncing user shares, I think, and also only a problem on the first sync. Or I should say, only a problem when syncing to an empty server. I prefer using user shares, but I knew about this issue before I started transferring the files to the backup server, so I didn't use rsync for the first transfer. I just mounted the server via UD and transferred all the files via Midnight Commander, locally. But I have been using rsync with user shares ever since, and it works great, just not for the first big transfer to an empty server.
  13. I don't know how recently, but it has changed. There is a patch for plex which takes care of this. I don't use plex but I have seen the discussions about it. Plex will eventually support this natively; Emby and Jellyfin already do. So a VM just for this is such a waste of resources. Edit: There's a lot of discussion on the plex patch here: https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/ AND there are also docker containers for ombi and tautulli. About the plex patch, it might only work on nvidia now that I think about it, but I'm not sure since I've not read all the info that's out there.
  14. I'm just guessing, but installing/configuring a bunch of plugins could very well be the reason for the high reads/writes, since all plugins and all configuration changes are stored on the flash drive. I would say if this behavior continues then something is wrong, but if not, just go ahead and finish setting up your server. Welcome aboard, btw.
  15. Yeah, mine too, really. I think I know how to set it up, but I don't know how to analyze the data. But I've not seen any leakage on delugevpn with privoxy enabled when testing on various leak test sites, so I'm choosing to trust it.