Everything posted by strike

  1. In the syslog. If you enable notifications and have e-mail notification set up, you also get a list of all the files by mail. And the files aren't necessarily "bad"; they could have just been updated, which changes the hash. So it's a good idea to exclude files that get updated a lot by various apps, like .nfo files.
  2. No, you enter the username and password you set in the auth file, where "username" is your username and "password" is your password.
  3. I didn't quote you, my reply was for @rbh00723
  4. Edit the auth file located in your delugevpn appdata dir. Add the user/pass on a new line in this format: username:password:10. Restart the container and you should be able to connect.
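     A rough sketch of what that edit could look like from the command line, assuming the default binhex-delugevpn appdata path and container name (both are assumptions, adjust to your setup and use your own user/pass):
     # append a new user to deluge's auth file (path and container name are assumptions)
     echo "myuser:mypassword:10" >> /mnt/user/appdata/binhex-delugevpn/auth
     # restart the container so deluge picks up the change
     docker restart binhex-delugevpn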
  5. On second thought, I'm not really sure if I need it. All my media files (which are kinda what I care about anyway) on the main server I set to read-only once a week. All other SUPER important files I have backed up in various clouds (in addition to my backup server), which use versioning so I can restore. And all the files that get transferred to the backup server get put in a read-only state at the end of the backup. Worst case scenario, if a virus of any kind makes it to one or both of my servers and tries to wreak havoc on my files, I lose maybe one week of media files. All other files I can restore from the cloud. So I think I'm pretty safe. I may look at versioning/snapshotting anyway just because I want to learn and it could be convenient to have snapshots if the worst case happens. But do I NEED it? No, I don't think so.
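     For reference, a minimal sketch of what that weekly read-only step could look like, run from cron or the User Scripts plugin. The share path is just a placeholder, and it only targets files so new stuff can still be added to the folders:
     #!/bin/bash
     # strip write permission from existing files under the media share,
     # leaving the directories writable so new files can still land there
     find /mnt/user/Media -type f -exec chmod a-w {} +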
  6. Having an updated OS, running a virus scan regularly, frequent backup intervals (at least once a week) and being cautious about clicking links in e-mails from untrusted senders go a long way toward staying virus-free. But things do happen, especially if you have kids clicking around and about on everything. So say you run backup once a week, then you have a small window of 7 days in which you can potentially be infected, and if in those 7 days you get a virus which then in turn gets to the backup server on the next sync, the virus is locked by setting it in a read-only state so it can't do any harm to your backup.
     So by this point, you have done a virus scan on your main machine and confirmed that you have a virus. Now what? To know whether it's safe to restore the backup and make the file writable again, or if it's infected, you need to implement checksumming. I don't know how tech-savvy you are, but every file gets a checksum, which is a string of numbers and letters unique to that file. If the file gets corrupted in any way it gets a new checksum, so an infected file would have a different checksum than the original. That's how you check if the file is indeed the original or if it has changed or been corrupted in any way.
     Now how do you implement this? You could use the BTRFS file system, which has automatic checksumming. If you're not going to use BTRFS you could try the file integrity plugin, which you can set up to run on a schedule so you'll get notified if a file has been corrupted. Or you could use some other checksumming tool, I know there is one for windows which many around here use but I can't remember the name, you'll find it if you search. I did use it myself some time ago.
     But what if you've confirmed that both files are infected and you can't restore the original file? Well, then you're screwed if you do not use versioning, it's as simple as that. I'm really a nazi about my files and how I use the internet so I haven't needed versioning, yet... I do plan to implement it in the near future though.
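     If you don't want BTRFS or the plugin, you can roll your own with sha256sum. A minimal sketch of the idea (the share and the path for the checksum list are placeholders):
     #!/bin/bash
     # build a checksum list for every file in a share
     mkdir -p /boot/checksums
     cd /mnt/user/Documents || exit 1
     find . -type f -exec sha256sum {} + > /boot/checksums/documents.sha256
     # ...later, verify the files against the stored list and print only the
     # ones whose checksum no longer matches (changed, corrupted or infected)
     sha256sum -c /boot/checksums/documents.sha256 2>/dev/null | grep -v ': OK$'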
  7. Yeah, now that you mention it I vaguely remember reading about it.
  8. Yeah, if you need versioning you need to look into other things. Maybe combine it with rsync, like snapshotting all the disks using the BTRFS file system as already mentioned. I don't really know what a virus can do though if it gets to your backup server. I mean, it only turns on for backup, and it can't do anything with your files since they are read-only. Even the file(s) which are infected get put in a read-only state once backed up. I guess it depends on what the virus does. Well, if you make your files writable again the virus can make any changes it wants.
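     Just to illustrate the snapshot idea, assuming the backup disk is formatted BTRFS (the mount point and naming scheme are placeholders):
     #!/bin/bash
     # take a read-only snapshot of a BTRFS data disk after a backup run,
     # so older versions of the files stay restorable even if the live
     # copies get messed with later
     snapdir=/mnt/disk1/.snapshots
     mkdir -p "$snapdir"
     btrfs subvolume snapshot -r /mnt/disk1 "$snapdir/backup-$(date +%Y-%m-%d)"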
  9. Yes, if you do it my way you have to have the SSH port open on both servers. You need it open on the remote server to run rsync over SSH, and you need it open on your local server so the remote server can download the keyfile. SCP stands for Secure Copy and uses the same port as SSH does. There is an example in my fetch_key script, which I posted earlier, on how to fetch the key via SCP.
     My backup server doesn't have IPMI, so I don't know how to answer that one. But you can't do it via SSH on a server which is powered down, since the SSH service isn't running yet. So you need to SSH into another remote machine (pi?) and do the IPMI command from there. Or maybe there's a possibility to open a port to be able to communicate with the IPMI service directly on the remote server? I don't know, since I haven't researched it and I can't try it.
     On my setup, I have a small NUC on the remote site, also running unraid, which is always on. It has the wake on lan plugin installed among other things, which includes etherwake, and I use the etherwake command to send a magic packet to wake up the backup server. So I SSH into the NUC, which wakes up the backup server, then I SSH into the backup server and run the rest of the backup script. But you can do the same thing with a pi or any other machine on the same network, as long as it has SSH, etherwake and is always on. In any case, I would recommend having a small "always on" machine on the remote site running a VPN (if you don't set up a VPN on the remote router, that is). You need to be able to log in to the unraid webui securely if you want to.
     Edit: A quick google search reveals that IPMI uses UDP port 623, maybe you can open that to run the IPMI command directly? And of course, if you can do that then you don't need a pi, as you can power up the server directly and have a VPN server running as a docker container on unraid. But as I said, I don't use IPMI so I don't know if it's possible.
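     Roughly what my NUC/etherwake wake-up sequence looks like in script form. Hostnames, the MAC address, the port and the boot delay are all placeholders for your own setup:
     #!/bin/bash
     # 1. ask the always-on box (NUC/pi) on the remote LAN to send a
     #    magic packet to the backup server's NIC
     ssh -p 65331 root@remote-nuc "etherwake -i br0 AA:BB:CC:DD:EE:FF"
     # 2. give the backup server time to boot and unlock/start the array
     sleep 300
     # 3. then ssh into the backup server and kick off the backup itself
     ssh -p 65331 root@remote-backupserver "/boot/custom/bin/backup_script"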
  10. Yeah, but if you continue to read you will see there's a workaround to get decode working also. There are literally pages of discussion on this, but I think you need an nvidia gpu. And since you have a P2000, why not use that and pass it through to a plex container?
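     For reference, this is roughly what the GPU bits translate to as a plain docker run, assuming the linuxserver Unraid Nvidia build is installed. On unraid you'd normally put this in the container template instead (Extra Parameters and variables), and the GPU UUID, paths and image are placeholders:
     docker run -d --name=plex \
       --runtime=nvidia \
       -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
       -e NVIDIA_DRIVER_CAPABILITIES=all \
       --net=host \
       -v /mnt/user/appdata/plex:/config \
       -v /mnt/user/Media:/media \
       linuxserver/plex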
  11. lol, funny spelling mistake! I must be dead tired. I hope you bring enough of that encrypting toilet paper! Time to go to bed.. 🤣 Yes. Since you already have to open up a port for SSH, I would fetch the file via SCP instead, which uses the SSH port.
  12. I totally agree with this! Talking about folder structure, there is a huge "design flaw" using rsync with user shares on the first sync. Rsync will create the entire folder structure of a share on the first disk it chooses before it starts transferring the files. This results in all files going to the first disk, causing it to totally fill up and ultimately fail with out-of-space errors. This is only a problem when rsyncing user shares I think, and also only a problem on the first sync. Or I should say, it's only a problem when syncing to an empty server. I prefer using user shares, but I knew about this issue before I started transferring the files to the backup server, so I didn't use rsync for the first transfer. I just mounted the server via UD and transferred all the files via Midnight Commander, locally. But I have been using rsync with user shares ever since and it works great, just not for the first big transfer to an empty server.
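     For the ongoing syncs the command can be as simple as something like this (host, port and share names are placeholders, and the trailing slashes matter):
     #!/bin/bash
     # incremental one-way sync of a user share over ssh, after the
     # initial transfer has already been done by other means
     rsync -avh --stats -e "ssh -p 65331" /mnt/user/Media/ root@remote-backupserver:/mnt/user/Media/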
  13. I don't know how recently, but it has changed. There is a patch for plex which takes care of this. I don't use plex but I have seen the discussions about it. Plex will eventually support this natively; Emby and jellyfin already do. So a VM just for this is such a waste of resources. Edit: There's a lot of discussion on the plex patch here: https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/ AND there are also docker containers for ombi and tautulli. About the plex patch, it might only work on nvidia now that I think about it, but I'm not sure since I've not read all the info that's out there.
  14. I'm just guessing, but installing/configuring a bunch of plugins could very well be the reason for the high reads/writes, since all plugins and all configuration changes are stored on the flash drive. I would say if this behavior continues then something is wrong, but if not, just go ahead and finish setting up your server. Welcome aboard btw
  15. Yeah, mine too really. I know how to set it up, I think, but I don't know how to analyze the data. But I've not seen any leakage on delugevpn with privoxy enabled when testing on various leak test sites, so I'm choosing to trust it.
  16. I think you'll need to do some "wiresharking" to figure that out.
  17. I think the answer to this is yes, correct me if I'm wrong. If it works it works, if not, then oh well. I think this is the case for all the ..VPN containers. I have the delugevpn container running on macvlan, but I think I'm one of the lucky few because almost nobody else is able to get it to work.
  18. I'm not sure I'm following.. So you're looking to do a two-way sync? As in, everything you do on server A gets mirrored to server B and vice versa? If so, then that's a good question.. But I don't think the entire backup will fail, I think it just skips the file. I don't really know though. I only do a one-way sync, so any files I delete locally after they're synced will need to be manually deleted on the remote server if I wish to delete them. I'm sure there are other ways, but I want it this way. And I don't really do much deleting anyway. I think most people around here only do a one-way sync, so if you're looking to do two-way, you might have some googling to do. I guess you can maybe use the --delete flag in some way, but I also think you need to run rsync twice to be able to do a two-way sync. I'm not sure though.
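     Just to illustrate the "run rsync twice" idea (host, port and paths are placeholders): two passes with --update push new and newer files both ways, but they do NOT propagate deletions, and adding --delete in both directions here would be a good way to lose files, so treat this strictly as a sketch:
     # pass 1: local -> remote, skip files that are newer on the remote
     rsync -avu -e "ssh -p 65331" /mnt/user/Documents/ root@remote-backupserver:/mnt/user/Documents/
     # pass 2: remote -> local, skip files that are newer locally
     rsync -avu -e "ssh -p 65331" root@remote-backupserver:/mnt/user/Documents/ /mnt/user/Documents/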
  19. So here is the thread I was talking about: https://forums.unraid.net/topic/61973-encryption-and-auto-start/ So basically you make 3 scripts: one to fetch the keyfile from your local server, one to remove the keyfile once the array is started, and one to call the first two scripts at the right time. The first two scripts need to be stored on the remote server's flash drive, and the third script has to be put at the beginning of the go file, also located on the flash drive.
     All of this requires that you have SSH keys set up and have forwarded a port for SSH on your local router, which you'll need to do anyway if you're going to use rsync over SSH. Be sure not to use the standard SSH port, as your server will be hit by a boatload of connection attempts all the time. There will still be some attempts even if you change the port, but not nearly as many, and they won't get in anyway if you disable password login and only allow keys. You can use the ssh plugin to configure the port and disable password login.
     Here are the scripts I'm using.
     fetch_key script:
     #!/bin/bash
     if [[ ! -e /root/keyfile ]]; then
       scp -P 65331 [email protected]:/path/to/keyfile /root/keyfile
     fi
     Change the port number in the above scp command according to your config. And the rest of the line obviously.
     delete_key script:
     #!/bin/bash
     rm -f /root/keyfile
     And the relevant part of my go file:
     #!/bin/bash
     # auto unlock array
     mkdir -p /usr/local/emhttp/webGui/event/starting
     mkdir -p /usr/local/emhttp/webGui/event/started
     mkdir -p /usr/local/emhttp/webGui/event/stopped
     cp -f /boot/custom/bin/fetch_key /usr/local/emhttp/webGui/event/starting
     cp -f /boot/custom/bin/delete_key /usr/local/emhttp/webGui/event/started
     cp -f /boot/custom/bin/fetch_key /usr/local/emhttp/webGui/event/stopped
     # Start the Management Utility
     /usr/local/sbin/emhttp &
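     And just for completeness, the key setup itself is a one-time thing along these lines, run from the remote (backup) server. User, host and port are placeholders, and note that on a stock unraid box /root/.ssh doesn't survive a reboot on its own, which is one of the things the ssh plugin takes care of:
     # generate a key pair without a passphrase (it has to work unattended)
     mkdir -p /root/.ssh
     ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""
     # append the public key to the local server's authorized_keys
     cat /root/.ssh/id_ed25519.pub | ssh -p 65331 root@local-server "mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys"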
  20. Rephrasing a bit: yes, you do need an incoming port, but since you're using PIA you don't have to worry about that, this container does it for you. All you need to do is the 2 things I mentioned: 1. make sure you have strict port forwarding enabled, and 2. connect to a supported endpoint. But you said you confirmed that you got an incoming port in the logs, so that should all be ok. The reason an open incoming port is important is that if you don't have one, nobody can connect to you and thus you can't upload anything. But your issue seems to be something else since you do have an open incoming port. Have you checked your private tracker? What does it say? The site may have an indication of whether you're "connectable" or not. How do you add your torrents to deluge?
  21. I haven't read all the posts in this thread, but I think I saw something about this server being at a remote location? So assuming nobody at the remote location has your root password to the server, then yes, encrypting the drives can be as secure as encrypting the files themselves. You can make a key file to unlock the encrypted drives and save that key file on your local server. Then, before unraid starts, you can make a connection from your remote server to your local server (or a local pi or something that's always on) and download the key file which unlocks the disks. Once the disks are unlocked the key file sits in RAM, but you can delete it on array start/stop if you want. So IF someone were to know your root password, they still can't get to the key file. All this can be scripted. There is a thread that discusses this, but I don't have it handy right now. I can find it for you later though.
  22. This is your issue. This container doesn't officially support custom networks, you need to switch to bridge or host. Which also means you have to switch any other containers that connect to deluge to bridge or host. I'm one of the few who have custom networks working in this container, but I don't know why it works for me and not for most people. The only advice I have is to try to set the "Privileged" flag to ON in the container template and then try again; if that doesn't work you'll have to switch to bridge/host.
  23. You do need an open incoming port, but if you're using PIA this container does the port forwarding for you automatically. All you have to do is enable strict port forwarding in the container template and connect to a supported endpoint. Have you confirmed that you get an incoming port? Check the log. Also, if you're on a private tracker, make sure to disable peer exchange and DHT in the deluge settings.