CrazyMaurice

Members • 4 posts

  1. This could have been the problem. I'm also wondering if there's a permissions error, as it actually won't let me go into that directory anyway. Hopefully the plugin has it all sorted, though!
  2. Thank you so much for pointing me to this! I believe this solves the problem. I wasn't fully aware entries needed to end up in /etc/cron.d/root. I've created a custom user script to run on the custom cron schedule. To check this, should I just have a look at the /etc/cron.d/root file (see the verification sketch after these posts)? Cheers
  3. Hey all, I'm trying to automate a backup from one server to another via SSH. I've got this working (mostly) pretty nicely when I just paste my rsync command into the terminal:

         rsync -avz --delete -e "ssh -i /root/.ssh/TODD-rsync-key" [email protected]:/mnt/user/SoundDesign/ /mnt/user/SoundDesign/

     However, when I create a cron job for the same command, it isn't running:

         01 00 * * * rsync -avz --delete -e "ssh -i /root/.ssh/TODD-rsync-key" [email protected]:/mnt/user/SoundDesign/ /mnt/user/SoundDesign/

     It's in a file called backupSoundDesign.cron in /boot/config/plugins/cronjobs. The file was created from the terminal. I ran update_cron and also restarted, etc., but I'm still not seeing new files in the destination when I check the next day. Any thoughts? I must be missing something simple, I imagine (a debugging sketch follows after these posts). Cheers Harry
  4. Hi all! Apologies if this is in the wrong section. It appears I'm a little out of my depth, although it's all a good learning experience.

     I've built a couple of medium-size file servers (one for physical backup) for the small business I work at, as we drastically needed to expand our storage capacity and speed beyond regular consumer/enthusiast NAS enclosures. I thought I'd have a crack at it myself with Unraid!

     Here's my problem. The main server uses 2x 10GbE SFP+ cards (bonded) connected to our new Netgear 28-port 10GbE Ethernet+SFP+ managed switch (XS728T). (I wanted to see if I could bond all 4 ports to eliminate any bottleneck on that side.)

     Specs are roughly as follows:
       • 4-core CPU
       • 32GB RAM (probably overkill)
       • Samsung 970 Evo 1TB cache
       • 4x Samsung 860 Evo 1TB (for our 'faster' share)
       • 6x 8TB Seagate HDDs for storage & parity

     It all appears to be working OK; however, when I'm connected to the switch on my workstation, I'm only getting roughly 100MB/s transfer speed from my own NVMe drive to the Unraid NVMe cache. That figure is wrong and leads me to believe I'm essentially transferring from my PC to the 1GbE router and then back to the Unraid server, capped at 1GbE. There's probably something I'm missing, but the method I'm using is slightly different from the guides I've seen online. Is there any way of getting whatever goes through the switch to go straight to the server via 10GbE? This is where my knowledge ends, as I'm mainly just a PC-building enthusiast with a bit of budget (a raw-throughput test sketch follows after these posts).

     *edit: I managed to get a much better speed, as one of the NICs was in a much slower PCIe slot, which was limiting it. I did a few tests copying from my NVMe C:\ drive to the Unraid server, and it reached speeds of about 600MB/s max, which is much better. The issue I'm having now is that whenever I copy a large file it starts off very slow (around 2-3MB/s), then steadily works its way up to the 500-600MB/s mark over about 5 or 10 seconds. Is there any reason for this? Thanks in advance, Harry
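
On the question in post 2: a minimal verification sketch, assuming a stock Unraid setup where update_cron assembles the *.cron fragments from the flash drive into /etc/cron.d/root (a single file, not a directory):

    # Rebuild /etc/cron.d/root from the .cron fragments, then confirm
    # the custom entry actually landed in it.
    update_cron
    cat /etc/cron.d/root

If the entry shows up in that file, cron should fire it on the next matching schedule.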
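
On the cron job in post 3: cron runs with a minimal PATH and no interactive shell, so absolute paths and logged output make failures visible. A sketch rather than the poster's exact entry: REMOTE stands in for the obfuscated user@host above, and the rsync path and log path are assumptions:

    # Same schedule as the original entry, but with an absolute rsync path
    # and stdout/stderr appended to a log so a failing run leaves a trace.
    01 00 * * * /usr/bin/rsync -avz --delete -e "ssh -i /root/.ssh/TODD-rsync-key" REMOTE:/mnt/user/SoundDesign/ /mnt/user/SoundDesign/ >> /var/log/backupSoundDesign.log 2>&1

Checking the log (or running the rsync line by hand as root) after the scheduled time narrows the problem to either the schedule or the command's environment.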
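
On the speed question in post 4: a raw-throughput test takes the disks out of the picture and shows which link the traffic is actually using. A minimal iperf3 sketch, assuming iperf3 is installed on both machines and SERVER_IP is a placeholder for the Unraid server's address:

    # On the Unraid server: start a listener.
    iperf3 -s

    # On the workstation: send traffic to the server for 10 seconds.
    # Roughly 9.4 Gbit/s means the 10GbE path is in use; roughly
    # 940 Mbit/s means the traffic is still crossing a 1GbE hop.
    iperf3 -c SERVER_IP -t 10

The slow-start-then-ramp pattern on large copies may be a separate effect (client-side caching, for instance); the test above at least isolates the network link from the disks.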