
maxse (Members, 605 posts)

Everything posted by maxse

  1. Just unplugged EVERYTHING. I only have the motherboard, HSF, processor, and RAM. The computer stays powered on but the monitor is blank; no BIOS screen. Did I fry the motherboard somehow?
  2. Fan is plugged in. I didn't think of plugging the Intel cooler back in; will do that right now. I also took out the RAM and left only one stick of the two. Everything powers on, but there's still no video or BIOS screen, and the PC doesn't shut down. The only way I can get it to shut down is by holding the power button for 5 seconds; pressing it once doesn't turn the computer off. Not sure if this helps with troubleshooting. Damn it, I plugged in the Intel one with the same result. The USB flash drive doesn't even light up. It's connected internally via an adapter to the mobo header. It's like it's not getting power.
  3. Guys, helppp pleaseee. So everything was working fine on my ASRock mobo with an i5-8400. I decided to get a Noctua HSF because the Intel one was quite loud. After putting everything together, it powers on and then shuts off! Then powers on again, stays on, then shuts off! So I connected a monitor to the mobo VGA port and I'm not getting any signal. I proceeded to unplug the LSI card... same... unplugged the P2000... same. I basically have nothing connected and get the same results: powers on, then quickly shuts off, then powers on a little longer, then shuts off, then powers on again, etc... all while the monitor is just blank and not receiving a signal. I'm pulling my hair out and don't know what to do. Can someone help me out? What could be wrong?
  4. Damn it, just when I got everything running butter-smooth, haha. Looks like I'll have to upgrade tomorrow, so in case something goes wrong it won't be the middle of the night. BTW, any reason why someone would install anything other than Catalina, which is the latest OS? I need something super stable, as I will only be using this VM to run Arq Backup for my backups. Nothing else. Is one version more stable than the others?
  5. Definitely not previous apps, but I AM running 6.6.7. It's been working so smoothly I don't want to upgrade, especially with the SQL bug that was introduced, although I understand it was corrected in the final release. Is it not compatible with version 6.6.7?
  6. Is it just me, or is the Docker container not showing up in CA anymore? I even enabled searching GitHub, and it's still not showing up! Anyone?
  7. GOT IT WORKING! Thanks so much spaceinvaderone! You are THE man! It was a different port that I needed to open (part of the iptables commands): 1197, not 1198. I did accidentally open the PIA port by copying and pasting your commands before realizing I needed a different port, but I don't know how to close the PIA one that I opened. Do I need to worry about this? Or does it get closed on reboot, since I saved the new iptables after correcting it? @SpaceInvaderOne Also, I am still getting a DNS leak when tested on the website, and those last 2 lines of code for the conf file were already part of the Mullvad conf file. Should I be concerned about this? I don't know why it's still showing that my DNS is leaking. *EDIT* BTW, in order to save this exact VM and its settings, I can just use the CA backup plugin I've been using and select a destination for libvirt.img, correct? This is amazing, thank you so much!
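For anyone else wondering about closing the extra port: a minimal sketch, assuming the stale rule was added with `iptables -A` on the INPUT chain (the chain, protocol, and rule position here are assumptions, not taken from the guide's actual commands):

```shell
# List INPUT rules with their positions to spot the stale PIA entry.
iptables -L INPUT --line-numbers -n

# Either delete it by position (assuming the old 1198 rule shows as entry 5)...
iptables -D INPUT 5

# ...or delete it by repeating the exact rule spec that was used to add it:
iptables -D INPUT -p udp --dport 1198 -j ACCEPT
```

An unsaved rule disappears on reboot anyway, so the question of whether it persists comes down to whether it was included when the rules were last saved.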
  8. Struggled all day and just can't get it to work with Mullvad VPN. It seems like the Mullvad VPN config files are missing one of the files that came with the PIA OpenVPN setup. I just can't get this to work now. @SpaceInvaderOne or anyone else, any advice on how to get this working?
  9. Hey guys, I have been using this and it's been working perfectly, but I recently switched from PIA to Mullvad and I cannot get it to work. I updated the username and password and moved the new Mullvad config files to the new directory, but I just can't get this working. Can someone help me out, please?
  10. Seems like the Veeam free version does not allow S3 backup, so I will have to go with the CloudBerry docker and just live with the non-obfuscated filenames and hope they add it in the future. The Windows version of CloudBerry is $300, so I will go with the docker available here. *EDIT* Ahh, I just realized: it's about 70 TB, let's say. CloudBerry allows for incremental backups after running the full one, but it's recommended not to let the chain get too long. So after, say, 30 days, I would need to create another "full" backup?? It would be crazy to keep having to transfer everything all over again every 30 days, especially as the data continues to grow! Am I missing something? Why is this so difficult? lol. Sucks that rsync won't do any encryption! *EDIT 2* Hmm, I wonder if it's possible to use rclone like people do when they upload to Gdrive, but instead of Gdrive, point it at my own minio S3 instance. I think that would be pretty much exactly what I want/need, basically creating my own cloud, and rclone supports encryption and filename obfuscation, which is what people usually use for their Gdrive backups. Anyone know if it could work for what I need? I would use the copy command, not sync. The only issue is if there's a ransomware infection on the main server, how would I know which are the good versions of the files to restore?
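The rclone-over-minio idea from *EDIT 2* would come down to an rclone config along these lines: an S3 remote pointing at the minio instance, wrapped in a crypt remote for encryption and filename obfuscation. A sketch only; the remote names, bucket, endpoint, and credentials below are placeholders, not real values:

```
# ~/.config/rclone/rclone.conf (hypothetical entries)

[minio]
type = s3
provider = Minio
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = http://remote-unraid:9000

[minio-crypt]
type = crypt
remote = minio:backups
filename_encryption = standard
password = GENERATED_BY_RCLONE_CONFIG
```

A backup run would then be something like `rclone copy /mnt/user/Share minio-crypt:Share`. Note that `copy` never deletes on the target, so ransomware-encrypted files would be uploaded *alongside* good ones only if their names change; files modified in place would overwrite the remote copy, which is why people usually also keep versioning or snapshots on the backup side.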
  11. Hey guys, so it's been about a year since my last post, when I was figuring out a backup strategy for an offsite unraid server. I originally decided, after much research, to go with rsync and SSH to remotely turn the box on/off, but now that I am ready to implement it, I want to encrypt and obfuscate the filenames, which rsync won't do. So the new plan is to just keep the remote box always on. I will set up minio as others have done on unraid, and have OpenVPN running on an RPi (already set the VPN up; it works well). Now here is my issue: I would like to use Veeam, and it only runs on Windows (I looked at CloudBerry for Linux; however, while it encrypts the files, it does not obfuscate the filenames). So... I am thinking that on the main unraid box I would always have a Windows 10 VM running with Veeam on it, and somehow pass through (not sure if I'm using the correct terms, as I have zero experience with VMs) the unraid shares to the Windows 10 VM; Veeam running on the Windows 10 VM would then just back up those shares (which correspond to the unraid shares) to the minio S3 instance running on the remote unraid. What do you guys think of this? Would the Windows VM be stable enough to run continuously in the background without any issues? This will be its sole purpose. The main unraid box has an i5-8400 processor with 16 GB of RAM. Thanks so much!
  12. Wow, time flies! Believe it or not guys, I am still working on this!!! I am still in the process of converting files to x265, it's taking almost half a year LOL, but I should be done in a month or so. In the meantime I've been playing around with a Raspberry Pi and was able to set up DuckDNS on it successfully, so I should be ready to start this process soon. So, back to this, just to make sure again: I will be copying share-to-share and not disk-to-disk, since the disks on the remote backup are not all the same size as the source. There's no way I want to have to monitor and exclude disks on the remote server; this needs to be a set-it-and-forget-it type of thing once it's running... Do you mean to set the share to "Automatically split any directory as required"? I don't have an option that says "...file as required." So that would be fine then? Also, I thought that to get around this I could copy just one folder of the share to the backup server, and then I would be able to run rsync and it would just copy over the new files, automatically skipping what's already on the target, so I won't have the issue stated by @strike? I thought that's all I had to do: just make sure that each share has at least one file in it, and then just rsync the rest and it won't create this problem?
  13. Hey guys, I wanted to get some input from you. I am looking at upgrading my 2013 MacBook Air to one of the new ones. What's crazy is how expensive the drive space is, and it's not upgradable! I am looking to get 16 GB of RAM, and with a 512 GB SSD it's around $1800, which is just nuts to me. I have an awesome unraid server. I was wondering if there's any way I could save money by just getting the 256 GB SSD version and utilizing my unraid server space. Is there an easy way to map the "Documents" folder on the MacBook to point to a location on the unraid server? I could then get a nice SSD and set up a share that uses only the SSD, so reads/writes would be quick. The other issue is that if I take my MacBook out of the house, I would like to still be able to seamlessly access that "Documents" folder that actually resides on the unraid server at home. Any thoughts on this?? If there's a way to do this, I think I would be able to save a ton of $$, and I'm sure it would help others too. Do you guys see anything wrong with this scenario? Is it a feasible, reasonable solution?
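The "map a folder to the server" part can be done on the macOS side with a plain SMB mount. A sketch under assumed names (server `tower`, share `documents`, user `maxse` are all placeholders; away from home this only works over a VPN back to the LAN):

```shell
# Create a local mount point and mount the unraid SMB share onto it.
mkdir -p ~/unraid-docs
mount_smbfs //maxse@tower/documents ~/unraid-docs

# Work out of ~/unraid-docs; everything actually lives on the server.
# Unmount when done (or before leaving the network):
umount ~/unraid-docs
```

The catch is exactly the one raised in the post: an SMB mount is only as available as the network path to it, so offline use would need a sync tool rather than a straight mount.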
  14. (Solved) Thanks so much. I was able to get around 100 MB/s after using the Unassigned Devices plugin and then running it through Krusader. I had originally mounted the SMB share directly from within Krusader, which caused the slowdown. Thank you!
  15. You guys are good. Yes, I meant /mnt/user; I couldn't remember if it was abbreviated or not in unraid. Thanks again guys, I love this place and would not have been able to set unraid up and troubleshoot it without you guys here. Really appreciate it.
  16. Right on the money, Squid! Sucks that I had to reconfigure, but the appdata folder was actually pointing to the cache drive, while the backup docker was pointed to mnt/usr, even though the appdata folder is set to use "cache only." I must have missed this in my original configs. Thanks so much!
  17. I have used watch nvidia-smi and looked under GPU-Util on the right side of the box. Right below that is a percentage. Is that something else?
  18. Right, I know it's working. I just don't get how it spikes to 56% when I first run the stream or fast-forward, while Plex fills its buffer, I guess. If it goes that high with one stream, how will I be able to use multiple streams? I thought one stream was supposed to barely affect the P2000. Is this normal?